id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2309.05710 | Gravity Amplitudes From Double Bonus Relations | In this letter we derive new expressions for tree-level graviton amplitudes
in $\mathcal{N}=8$ supergravity from BCFW recursion relations combined with new
types of bonus relations. These bonus relations go beyond the famous $1/z^2$
behavior under a large BCFW shift, and use knowledge about certain zeroes of
graviton amplitudes in collinear kinematics. This extra knowledge can be used
in the context of global residue theorems by writing the amplitude in a special
form using canonical building blocks. In the NMHV case these building blocks
are dressed one-loop leading singularities, the same objects that appear in the
expansion of Yang-Mills amplitudes, where each term corresponds to an
$R$-invariant. Unlike other approaches, our formula is not an expansion in
terms of cyclic objects and does not manifest color-kinematics duality, but
rather preserves the permutational symmetry of its building blocks. We also
comment on the possible connection to Grassmannian geometry and give some
non-trivial evidence of such structure for graviton amplitudes. | Shruti Paranjape, Jaroslav Trnka | 2023-09-11T18:00:03Z | http://arxiv.org/abs/2309.05710v1 | # Gravity Amplitudes From Double Bonus Relations
###### Abstract
In this letter we derive new expressions for tree-level graviton amplitudes in \(\mathcal{N}=8\) supergravity from BCFW recursion relations combined with new types of bonus relations. These bonus relations go beyond the famous \(1/z^{2}\) behavior under a large BCFW shift, and use knowledge about certain zeroes of graviton amplitudes in collinear kinematics. This extra knowledge can be used in the context of global residue theorems by writing the amplitude in a special form using canonical building blocks. In the NMHV case these building blocks are dressed one-loop leading singularities, the same objects that appear in the expansion of Yang-Mills amplitudes, where each term corresponds to an \(R\)-invariant. Unlike other approaches, our formula is not an expansion in terms of cyclic objects and does not manifest color-kinematics duality, but rather preserves the permutational symmetry of its building blocks. We also comment on the possible connection to Grassmannian geometry and give some non-trivial evidence of such structure for graviton amplitudes.
## Introduction
In the last two decades, the study of gravitational amplitudes has been a very active area of research, leading to major discoveries and great improvements in our theoretical understanding and computational abilities. However, some major mysteries remain unresolved even for tree-level amplitudes. For example, the calculation of higher-point amplitudes using Feynman diagrams is notoriously difficult due to the presence of vertices of arbitrary multiplicity and their complicated Feynman rules. Yet the final expressions for gravity amplitudes are surprisingly simple and exhibit interesting properties, some of which are yet to be linked to an underlying theoretical or geometric structure. There are multiple modern tools available which make the calculation of graviton amplitudes simpler and manifest important properties of the final result.
The first of them is the color-kinematics duality. This is motivated by the KLT relations between open and closed string amplitudes [1] which extend to Yang-Mills and gravity amplitudes in the low energy limit,
\[\mathcal{A}_{n}^{\text{GR}}=\sum_{\rho,\sigma}K_{\rho,\sigma}\mathcal{A}_{n}^{ \text{YM}}(\sigma)\mathcal{A}_{n}^{\text{YM}}(\rho)\,, \tag{1}\]
where we sum over two sets of permutations \(\rho\), \(\sigma\) of external states of two color-ordered Yang-Mills amplitudes \(\mathcal{A}_{n}^{\text{YM}}\), and the KLT kernel \(K_{\rho,\sigma}\) is a certain polynomial in Mandelstam variables \(s_{ij}\). It was shown in [2] that there exists a particular representation of Yang-Mills amplitudes which 'squares' into gravity amplitudes at the level of cubic graphs, known as the Bern-Carrasco-Johansson (BCJ) form. The construction also extends to loop amplitudes [2; 3; 4; 5]. The double copy structure is also manifest in the worldsheet formalism via ambitwistor strings [6] and the Cachazo-He-Yuan (CHY) formula [7], where the amplitude is expressed as an integral over worldsheet parameters constrained by scattering equations.
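For orientation, the simplest instance of (1) is the four-point case, where the double sum collapses to a single term (written here in a common convention taken from the broader literature rather than from this letter, with the overall sign and normalization depending on conventions),
\[\mathcal{A}_{4}^{\text{GR}}(1,2,3,4)=-s_{12}\,\mathcal{A}_{4}^{\text{YM}}(1,2,3,4)\,\mathcal{A}_{4}^{\text{YM}}(1,2,4,3)\,,\]
so the 'gravity = (gauge theory)\({}^{2}\)' structure is visible already at four points.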
The second approach focuses on helicity amplitudes in four dimensions and uses the Britto-Cachazo-Feng-Witten (BCFW) recursion relations [8; 9] to construct higher-point amplitudes from lower-point ones. The crucial ingredient is the large-\(z\) behavior of the BCFW-shifted amplitude \(\mathcal{A}_{n}^{\text{GR}}(z)\) under the shift \(\widehat{\widetilde{\lambda}}_{n}=\widetilde{\lambda}_{n}+z\widetilde{\lambda}_{1}\), \(\widehat{\lambda}_{1}=\lambda_{1}-z\lambda_{n}\), which vanishes as
\[\mathcal{A}_{n}^{\text{GR}}(z)=\mathcal{O}\left(\frac{1}{z^{2}}\right)\quad \text{for}\quad z\to\infty\,. \tag{2}\]
The improved behavior at infinity [10] (which is stronger than the \(1/z\) scaling of Yang-Mills amplitudes) leads to various BCFW formulae [11; 12] and many equivalent yet different-looking expressions. The Cauchy formula for the BCFW-shifted amplitude
\[\oint\frac{dz}{z}(1+\alpha z)A_{n}^{\text{GR}}(z)=0 \tag{3}\]
is valid for any \(\alpha\): we can choose \(\alpha\) to our liking (unlike in Yang-Mills, where we have to set \(\alpha=0\) to prevent poles at infinity) or use this freedom to remove one term in the expansion. Interestingly, multi-line shifts do not seem to work in gravity [13], though this is still an open problem [14; 15].
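To make explicit what the freedom in \(\alpha\) buys us (a standard residue-theorem manipulation spelled out here for convenience, not quoted from the letter): since \(\mathcal{A}_{n}^{\text{GR}}(z)\sim 1/z^{2}\) at large \(z\), the integrand in (3) has no pole at infinity, and summing all residues gives
\[\mathcal{A}_{n}^{\text{GR}}(0)=-\sum_{i}\frac{(1+\alpha z_{i})}{z_{i}}\,\text{Res}_{z=z_{i}}\mathcal{A}_{n}^{\text{GR}}(z)\,,\]
where the \(z_{i}\) are the finite poles of the shifted amplitude. The \(\alpha\)-independent part is the usual BCFW expansion, while the coefficient of \(\alpha\), \(\sum_{i}\text{Res}_{z=z_{i}}\mathcal{A}_{n}^{\text{GR}}(z)=0\), is the bonus relation that allows one term of the expansion to be eliminated.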
The elementary building blocks in any recursion are the two three-point helicity amplitudes. [Their explicit expressions, together with the subsequent BCFW expansion in terms of ordered amplitudes, are garbled in this extraction and omitted.]
In that expansion, labels \(n\) and \(1\) are special due to the choice of an \((n1)\) shift. These ordered amplitudes can be expressed in terms of dressed planar on-shell diagrams, which evaluate to the square of Yang-Mills superfunctions \(R_{k}\) up to a scalar kinematic prefactor \(G_{k}\),
\[\mathcal{A}_{n}^{\text{GR}}=\sum_{k}G_{k}(s_{ij})R_{k}^{2}\quad\text{where}\quad \mathcal{A}_{n}^{\text{YM}}=\sum_{k}R_{k}\,. \tag{7}\]
The first expression of this type was found by Elvang and Freedman for MHV amplitudes [16] inspired both by KLT and BCFW methods. This later served as motivation for an explicit solution of BCFW for the general N\({}^{k}\)MHV amplitude in [17]. More recently, the recursion was also organized in a new way which makes a direct reference to the cubic graph double copy [18]. There are also other closed formulas for MHV amplitudes: the Berends-Giele-Kuijf formula from 1988 [19], later shown to be equivalent to the Mason-Skinner formula [20], the inverse-soft factor construction [21; 22], or for more general amplitudes the embedding of BCFW recursion into the language of gravity on-shell diagrams [23; 24; 25; 26; 27]. At loop-level, poles at infinity are generally present, though there are many unexpected cancellations and improved large-momentum scalings [28; 29; 30; 15; 31]. There are also very interesting twistor-string inspired [32; 33; 34; 35; 36; 37] and matrix-representation [38; 39; 40] approaches to gravity amplitudes.
Note that the KLT formula (1) is different from the double-copy-inspired BCFW expressions (7) - in the former we work with products of Yang-Mills amplitudes with different orderings, while in the latter we square Yang-Mills building blocks with fixed ordering (and sum over permutations). Yet they share a similar idea of building gravity amplitudes (with permutational symmetry) from Yang-Mills amplitudes (with cyclic symmetry).
There is a different approach to the problem, much less developed at the moment, which attempts to exhibit the permutational symmetry of the amplitude while necessarily losing the manifest double copy connection. The most interesting result on this front is the Hodges formula [41] for \(n\)-point MHV (\(k=0\)) amplitudes, which takes a strikingly simple form
\[\mathcal{A}_{n,0}^{\text{GR}}=\frac{|\Phi^{abc}|}{\langle ab\rangle^{2} \langle bc\rangle^{2}\langle ca\rangle^{2}}\,, \tag{8}\]
where \(|\Phi^{abc}|\) is the determinant of \(\Phi\) with rows and columns \(a,\,b,\,c\) removed. The matrix \(\Phi\) has components
\[\Phi_{ij}=\frac{[ij]}{\langle ij\rangle},\qquad\qquad\Phi_{ii}=\sum_{j\neq i }\frac{[ij]\langle jk\rangle\langle jl\rangle}{\langle ij\rangle\langle ik \rangle\langle il\rangle}\,, \tag{9}\]
where \(\lambda_{k}\) and \(\lambda_{l}\) are reference spinors and \(\Phi_{ii}\) is the soft factor for particle \(i\). The formula is independent of the choice of rows and reference spinors. This fascinating expression is manifestly permutation-invariant before fixing the reference rows and columns \(a,b,c\) to be removed (there is also a generalized version with six indices and no apparent double poles), but the explicit kinematic formulae lose this manifest invariance due to the various ways momentum conservation may be implemented. This is a consequence of the basic fact that momentum conservation cannot be imposed democratically and one of the momenta can be eliminated from a kinematic expression. In the simple case of the five-point MHV amplitude
\[\mathcal{A}_{5,0}^{\text{GR}}=\frac{N_{5}\,\delta^{4}(P)\,\delta^{8}(Q)}{\langle 12\rangle\langle 13\rangle\langle 14\rangle\langle 15\rangle\langle 23\rangle\langle 24\rangle\langle 25\rangle\langle 34\rangle\langle 35\rangle\langle 45\rangle} \tag{10}\]
the numerator \(N_{5}=\langle 12\rangle[23]\langle 34\rangle[41]-[12]\langle 23\rangle[34]\langle 41\rangle\) is equal to \(\text{Tr}(p_{1}p_{2}p_{3}p_{4})\), which is totally antisymmetric in all five labels. There is no form of \(N_{5}\) that manifests all symmetries - one momentum must be chosen and eliminated. Additionally, the Hodges formula holds only for MHV configurations. One of the authors of this letter found an expression for the NMHV helicity amplitude \(\mathcal{A}_{n,1}^{\text{GR}}(1^{-}2^{-}3^{-}4^{+}\ldots n^{+})\) that manifests the full \(S_{3}\times S_{n-3}\) symmetry of this amplitude, but there was no obvious generalization to the supersymmetric case [42].
In this letter, we will present a new method to express N\({}^{k}\)MHV amplitudes in terms of simple building blocks, (dressed) one-loop leading singularities. This is motivated by the Britto-Cachazo-Feng (BCF) recursion [8] for gluon amplitudes, now using additional properties of gravity amplitudes: bonus relations (a consequence of the \(1/z^{2}\) behavior under large BCFW shifts) and also a new type of relation coming from certain zeroes of the amplitude in the collinear region. In the end, we express the amplitude in terms of new building blocks, very reminiscent of the expansion of gluon amplitudes in terms of Yangian invariants, but never introduce any ordering of the external states - our objects manifest their own permutational symmetry (as a subset of the total permutational symmetry of the amplitude). We focus on the NMHV case, but also illustrate the generalization to higher \(k\). Finally, we discuss a curious connection between these new objects and Grassmannian geometry, motivated by such a connection between Yangian invariants (as building blocks for gluon amplitudes) and dlog forms on the cells in the positive Grassmannian.
### Gluon Amplitudes From Triple Cuts
The BCFW-shifted color-ordered amplitude \(\mathcal{A}_{n}^{\text{YM}}(z)\) can be interpreted as a one-loop triple cut,
\[\frac{1}{z}\,\mathcal{A}_{n}^{\text{YM}}(z)=\big[\text{one-loop triple-cut on-shell diagram; figure lost in extraction}\big] \tag{11}\]
where we added a BCFW bridge [23] to the grey blob representing the unshifted amplitude \(\mathcal{A}_{n}^{\text{YM}}\). In these figures all legs are on-shell, both internal (cut propagators) and
external. The momentum flow in the bridge (horizontal internal line) is linear in the shift parameter, \(P=z\lambda_{1}\widetilde{\lambda}_{n}\). The residue theorem for the triple cut function
\[\oint\frac{dz}{z}\,\mathcal{A}_{n}^{\text{YM}}(z)=\oint dz\,\big[\text{sum over triple-cut on-shell diagrams; remainder of the equation lost in extraction}\big]\]
As it turns out, this is not enough and it is _not_ possible to get rid of all terms in the first sum in (18). In the Yang-Mills case (13), there was only one such term, but now we have \(n{-}2\) of them, and we do not have enough relations to eliminate all the terms even if we use the improved \(1/z^{2}\) behavior. With no further relations available, the analogue of (16) would not exist and we would only get a generic expansion in terms of higher-loop leading singularities (or on-shell diagrams when rewriting everything in terms of three-point on-shell vertices).
However, there is extra information about the _zeroes_ of triple cuts (20) that we can use to write additional relations. This comes from the collinear behavior of the amplitude, also known as the splitting function [21], applied to on-shell diagrams and leading singularities in [24]. In particular, the function \(\mathcal{F}(z)\) in (20) _vanishes_ for \(z{=}0\) for all triple cuts if \(Q_{2}\) has at least two external legs (one of them being \(n\)). Hence, we get a more general relation
\[\oint dz\frac{(1+\alpha z)}{z}\mathcal{F}(z)=0 \tag{22}\]
which gets no contribution from infinity. Because of the multiplicative _and_ divisive factors in the integrand, we refer to relations between leading singularities from (22) as _double bonus relations_. This does not hold if \(Q_{2}\) has only one leg \(n\), i.e. for the actual shifted amplitude (17). In this case the triple cut function has a pole (rather than a zero) as also evident from (17). Note that there are further relations that stem from
\[\oint dz\frac{(1+\alpha z+\beta z^{2})}{z}\mathcal{F}(z)=0\,, \tag{23}\]
but we do not study their consequences because (22) will be sufficient for our purposes. Indeed using (20) and (22) for fixed \(\alpha\) we can rewrite the first sum in \(\mathcal{A}^{\rm GR}_{n,1}\) as
[Equation (24) is largely lost in extraction: the first sum of triple-cut diagrams is rewritten via the double bonus relations, and of the resulting expression only the fragment \(\sum_{j,Q_{1}}\frac{\langle 1|Q_{1}Q_{2}|n\rangle[n]}{\langle 1|Q_{1}Q_{2}(n+j)|1]}\) multiplying the second sum of diagrams survives.]
Next, we rewrite the first sum on the right hand side of (24) using the same type of residue theorem, now with two legs \(\{n,j\}\) in the corner replaced by three legs \(\{n,j,i\}\), and so on. Exploiting all these relations we get
\[\mathcal{A}^{\rm GR}_{n,1}=\sum_{Q_{1},Q_{2}}\frac{\langle 1|Q_{1}Q_{2}|n\rangle}{\langle 1|Q_{1}Q_{2}Q_{3}|1\rangle\langle 1n\rangle}\times\big[\text{one-loop leading-singularity diagram; lost in extraction}\big] \tag{25}\]
with a lower point N\({}^{2}\)MHV vertex and write the BCFW formula as
\[\mathcal{A}^{\rm GR}_{n,2}=\sum_{Q_{1},Q_{2}}\frac{\langle P_{3}n\rangle}{\langle 1n\rangle\,\mathcal{J}}\times\big[\text{diagram with a lower-point N}^{2}\text{MHV vertex; lost in extraction}\big] \tag{30}\]
where as before the gray vertex represents the NMHV amplitude, now given by the formula (25). This is an analogue of the N\({}^{2}\)MHV Yang-Mills formula [46] in terms of two building blocks \(A\) and \(B_{1}{+}B_{2}\) (which correspond to two different types of \(k\!=\!2\) Yangian invariants) as we also have in (30). We can further express the gray NMHV vertices in terms of (anti-)MHV vertices, and get
\[\frac{\langle P_{3}n\rangle}{\langle 1n\rangle\,\mathcal{J}}\;\frac{\langle P_{2}|Q_{2}^{a}Q_{2}^{b}|P_{3}\rangle}{\langle P_{2}P_{3}\rangle\,\langle P_{2}|Q_{2}^{a}Q_{2}^{b}|P_{2}\rangle}\times\big[\text{remaining factors and diagrams lost in extraction}\big]\]
three equivalent ways to draw this configuration (corresponding to a rotation of lines) which relates three one-loop leading singularities,
[Equation (36): the three equivalent ways of drawing this configuration, relating three one-loop leading singularities; diagrams lost in extraction.]
An analogous object can be defined for gravity, now by summing all one-loop leading singularities which (in analogy with Yang-Mills) could correspond to six (unordered) points in \(\mathbb{P}^{2}\) with points \(1,2,3\) on a line. There are two types of leading singularities that correspond to such configurations. First, we get the following collection,
[Equation (37): the first collection of one-loop leading singularities for this configuration; diagrams lost in extraction.]
Using (28) we can evaluate it to
\[\mathcal{G}^{(a)}_{123}=\!\sum_{S_{123}\times S_{456}}\frac{[23]\langle 45 \rangle_{861}}{s_{123}\langle 12\rangle\langle 23\rangle\langle 13\rangle [45][56][46]} \tag{38}\]
where we omitted in the numerator the delta functions \(\delta^{4}(P)\delta^{16}(Q)\delta^{8}([45]\widetilde{\eta}_{6}+[56]\widetilde {\eta}_{4}+[64]\widetilde{\eta}_{5})\) and used a short notation \(\langle a|bc|d]\) for \(\langle ab\rangle[bd]+\langle ac\rangle[cd]\). There is another way to draw the configurations of points, that gives us the following collection of leading singularities,
[Equation (39): the second collection of one-loop leading singularities; diagrams lost in extraction.]
which is a sum of three terms (each of which manifests permutation symmetry in \(1,2,3\)),
\[\mathcal{G}^{(b)}_{123}=\sum_{S_{456}}\frac{\langle 45\rangle\langle 56 \rangle(\langle 12\rangle[23]\langle 3|45]6[|14]-[12\rangle\langle 23\rangle [34]\langle 1|45|6])}{s_{123}\langle 12\rangle(23)\langle 13\rangle[45][56][1 23]\langle 2|13|4]} \tag{40}\]
with the same set of delta functions. Explicit check shows
\[\mathcal{G}^{(a)}_{123}=\mathcal{G}^{(b)}_{123}\equiv\mathcal{G}_{123} \tag{41}\]
which is an analogue of (36) in Yang-Mills. This relation can be proven by residue theorems for certain triple cuts (with only MHV vertices). In the Yang-Mills case, the \(R\)-invariants also satisfy an important six-term identity,
\[\mathcal{R}_{123}+\mathcal{R}_{234}+\mathcal{R}_{345}+\mathcal{R}_{456}+ \mathcal{R}_{561}+\mathcal{R}_{612}=0 \tag{42}\]
where \(\mathcal{R}_{ijk}\) corresponds to a configuration of six points where points \(i,j,k\) are now on the same line (for example \(\mathcal{R}_{123}\equiv\mathcal{R}_{1,3,6}\) in the usual \(\mathcal{R}\)-invariant notation). This is a very nice consequence of the residue theorem in the Grassmannian representation [52, 53, 54, 23]. The new objects \(\mathcal{G}_{abc}\) satisfy an analogous formula,
\[\sum_{S_{6}}\mathcal{G}_{abc}=0 \tag{43}\]
which is a 20-term identity for \(\mathcal{G}_{abc}\) (and hence a 60-term identity for one-loop leading singularities using (39)). This bears a striking similarity to (42) with \(R\)-invariants and further is very suggestive of the existence of a Grassmannian construction, though the actual building blocks for tree-level gravity amplitudes (25) are dressed with kinematical factors (26). Hence these prefactors should play a crucial role in the putative geometric construction, which we explore in upcoming work [55].
## Conclusion and Outlook
In this letter, we present new expressions for \(n\)-point graviton amplitudes in terms of canonical building blocks. In the NMHV case, these are one-loop leading singularities dressed with kinematical prefactors, very similar to the Yang-Mills case where the same (but undressed) objects are term-wise equal to \(R\)-invariants. The analogy goes even further and we show that for the six-point case the (un-dressed) objects satisfy new relations, similar to a six-term relation between \(R\)-invariants.
The crucial question for the future is the role of the kinematical dressing, how it fits into the story of a (putative) Grassmannian geometry, residue theorems and bonus relations, and how to show in some geometric way that different BCFW formulae (after being rewritten in an appropriate form using our double bonus relations) give the same amplitude. One particular direction is the careful study of spurious pole cancellation. In the gluon case [56] this led to important insights and the eventual discovery of the Amplituhedron [47]. Another path is to find a closer link with the formula for the \(n\)-point NMHV fixed helicity amplitude presented in [42], which makes manifest complete permutational symmetry (separately in the labels of positive and negative helicity external states). The interplay between the formula given in [42], the ordered expressions for gravity [18] and our new BCFW expression (25), together with a deeper understanding of the Hodges MHV formula (which is implicitly used in our building blocks as the white MHV vertices) might bring us closer to the discovery of a putative Gravituhedron geometry.
_Acknowledgements_: We thank Nima Arkani-Hamed, Jacob Bourjaily, Taro Brown, Song He and Umut Oktem for very useful discussions. This work is supported by GACR 21-26574S, DOE grant No. SC0009999 and the funds of the University of California. |
2309.05076 | An Appraisal-Based Chain-Of-Emotion Architecture for Affective Language
Model Game Agents | The development of believable, natural, and interactive digital artificial
agents is a field of growing interest. Theoretical uncertainties and technical
barriers present considerable challenges to the field, particularly with
regards to developing agents that effectively simulate human emotions. Large
language models (LLMs) might address these issues by tapping common patterns in
situational appraisal. In three empirical experiments, this study tests the
capabilities of LLMs to solve emotional intelligence tasks and to simulate
emotions. It presents and evaluates a new chain-of-emotion architecture for
emotion simulation within video games, based on psychological appraisal
research. Results show that it outperforms standard LLM architectures on a
range of user experience and content analysis metrics. This study therefore
provides early evidence of how to construct and test affective agents based on
cognitive processes represented in language models. | Maximilian Croissant, Madeleine Frister, Guy Schofield, Cade McCall | 2023-09-10T16:55:49Z | http://arxiv.org/abs/2309.05076v1 | # An Appraisal-Based Chain-Of-Emotion Architecture for Affective Language Model Game Agents
###### Abstract
The development of believable, natural, and interactive digital artificial agents is a field of growing interest. Theoretical uncertainties and technical barriers present considerable challenges to the field, particularly with regards to developing agents that effectively simulate human emotions. Large language models (LLMs) might address these issues by tapping common patterns in situational appraisal. In three empirical experiments, this study tests the capabilities of LLMs to solve emotional intelligence tasks and to simulate emotions. It presents and evaluates a new chain-of-emotion architecture for emotion simulation within video games, based on psychological appraisal research. Results show that it outperforms standard LLM architectures on a range of user experience and content analysis metrics. This study therefore provides early evidence of how to construct and test affective agents based on cognitive processes represented in language models.
Large Language Models · Affective Computing · AI Agents · Affect Simulation · Video Games
## 1 Introduction
In user-centered software and video games, affective artificial agents have long been researched for their potential to provide personalized, natural, and engaging experiences for both entertainment and training purposes [46, 25, 48]. Affective artificial agents are mainly defined through their ability to simulate appropriate emotional responses given certain situations [6, 25]. Consequently, they are believed to contribute to enriched user interactions in various domains [6], and even to potential health benefits [36].
However, building systems that successfully model and express emotions is a difficult task [46], especially since affect and emotion are fuzzy concepts, even in psychology research [52]. Affective processes are very complex and involve multiple components (such as cognitive, behavioural, physiological, and feeling [53]) that are not fully understood and often debated on a fundamental level [28], which makes computational representations very difficult [25]. With the development of modern technology, such as machine learning [57], emotion simulation might be achievable through data-driven techniques.
For example, large language models (LLMs) have demonstrated a potential to simulate a range of cognitive abilities [7] and even to impute a range of mental and affective states to others [30]. Because LLMs are trained on large text bodies that hold representations of human mental abilities, they have been observed to exhibit human-like performance on a variety of tasks [7, 23]. Since emotions are an important part of how humans perceive reality and therefore
construct language [5], and are heavily influenced by cognitive processes [39] including linguistic labelling [34, 10], language-based emotion representations might also enable deep learning models to better simulate affective responses. As of yet, however, the potential for LLMs to solve some of the issues present in the field of affective agents is not well understood.
This study therefore examines the potential of LLMs to simulate emotions and how this potential might be influenced by the underlying implementation architecture. Using contemporary findings of emotion research, we propose a cognitive appraisal-based approach for language model affect generation and test it against other strategies in the ability to generate appropriate situational emotions. We then use those results to implement affective agents within a newly developed conversational video game. Overall, this study represents an effort to progress affect simulation for artificial agents using language models.
## 2 Related Work
### Affective Agents
In affective computing, researchers and developers are interested in creating affective systems that intelligently respond to changes in users' emotions [46]. Some of the benefits associated with affective computing techniques applied to video games include a more consistent, accessible player experience for a range of different players [21], personalized health and training applications [3], as well as new and purposefully designed gameplay mechanisms aimed at reinforcing target experiences [24, 64]. The use of affective agents in video games has been researched with special regard to this last aim. In 2011, Hudlicka discussed potential system design elements for affective (or more precisely emotional) game agents [25]. According to the author, affective agents can be seen as computational representations of operationalizations made from emotion-theoretical models with appraisal functionality for emotion generation. For example, artificial agents could implement computational calculations of certain events to assess the relevance to the agent and, consequently, the probable emotional reaction [25]. "Computation-friendly" appraisal implementations have often been built on models such as the OCC model [42] (see for example GAMYGDALA [47]). Taking specific fixed aspects (such as expectations of the agent [9]) into account, such models have been used to simulate appraisal based on decision trees.
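As a toy illustration of this kind of fixed, rule-based appraisal (our own sketch in the spirit of OCC-style decision trees, not code from GAMYGDALA or any other published system), a few appraisal variables can be mapped to a discrete emotion label:

```python
# Toy OCC-style appraisal decision tree (illustrative only; real systems such
# as GAMYGDALA use much richer event/goal representations than this).
def appraise(goal_congruent: bool, caused_by_other: bool, expected: bool) -> str:
    """Map a crude set of appraisal variables to a discrete emotion label."""
    if goal_congruent:
        return "joy" if expected else "pleasant surprise"
    # goal-incongruent events
    if caused_by_other:
        return "anger"
    return "distress" if expected else "fear"

print(appraise(goal_congruent=False, caused_by_other=True, expected=False))  # anger
```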
The main aim for such artificial agents is seen to be natural, human-like behaviour and believability as a part of a more fleshed-out and engaging game world [24, 25]. The tasks of agents therefore differ from other affective game mechanisms that mostly try to adapt the game world to player affect [65, 16]. Procedurally generated content (PCG) based on affective information has been shown in video games to successfully increase enjoyment and offer personalized, immersive experiences (see for example the work of Shaker et al. [56, 55]). This is often done by fine-tuning certain mechanics shown to be associated with a target player emotion to increase the probability for that emotion [65]. Affective agents however do not need to adapt behaviours to player emotions, but rather need their own emotion representations that could then lead to believable behaviours (or other natural representations of emotion components, such as simulated feeling or simulated physiology [25]).
The central issue of designing and developing affective game agents therefore lies in creating good computational simulations of emotional states. Human emotions are complex psycho-physiological states that are expressed within behavioural, physiological, cognitive, and feeling components [52]. Moreover, while much work has been done to empirically investigate emotions, many core theoretical disagreements remain [52], including debates between dimensional [49], discrete [27], constructivist [5], and cognitive [53] perspectives.
A fully developed affective agent would require first resolving all the fundamental psychological gaps that have been present since the beginning of affective computing [46] and then integrating the results into working computational systems [25]. This means that building a psychology-based, fully functional and accurate emotion simulation for an artificial agent is currently not possible and would be impractical in almost all game design cases. However, we may still be able to build affective agents that possess key features of emotion elicitation in humans and, as a consequence, allow for relatively successful simulation of human emotions. One candidate feature is appraisal. Emotion elicitation is dependent on contextual and individual factors, processed through appraisal [39, 54]. The notion of emotion appraisal is that emotions are caused by subjective evaluations of triggering events with regard to their significance to one's personal life or interests [33]. Evidence suggests that appraisal holds a central role in emotion elicitation and as a consequence acts on all other emotion components [54].
Any given external (e.g. situations) or internal (e.g. thoughts) event may be appraised on multiple variables that contribute to emotion forming. Such variables might include goal relevance, certainty, coping potential, or agency [39]. Appraisal therefore represents a flexible process that adapts to individual differences [22] and the current context [38]. Evidence also suggests that language can play a key role in emotional appraisal, both by providing key contextual information from which to construct the appraisal and by providing conceptual labels for the appraised states [10].
With all this in mind, language models might provide one means of simulating the appraisal process as they are able to generate high-level meaning-driven text outputs based on text training data that potentially holds implicit representations of human psychological processes [7].
### Language Model Approach
In the last few years, Natural Language Processing (NLP) has been rapidly progressing to the point that single task-agnostic language models perform well in a range of tasks [11], including the simulation of human-like behaviour [23]. The basis for this is the large amount of training data representing a wide range of human behaviour through language [11, 8]. In relation to games, models such as OpenAI's Generative Pre-trained Transformer (GPT-2 and its successors) have shown early successes in the procedural generation of interactive stories [20], text-based adventure dialog as well as action candidates [66, 12]. In a recent simulation study by Park et al. [44], language models were implemented in artificial agent architectures to populate a sandbox world reminiscent of The Sims [4]. The architecture stores information in a memory system, retrieves it based on relevancy for the current situation, and then uses the information to generate reflections (i.e. high-level interpretations of a situation), plans (i.e. potential future actions), and immediate actions. Multiple agents were simulated in a game-like world and the authors suggest that emerging interactions were natural and believable in terms of human-like simulation.
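As a minimal illustration of the retrieval step in such an architecture (our own sketch, not code from Park et al. [44]; a real system would use a learned text-embedding model rather than the toy bag-of-words vectors below), stored memories can be ranked by similarity to the current situation before being packed into a prompt:

```python
# Minimal sketch of relevance-based memory retrieval (illustrative only).
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy embedding: lower-cased bag of words."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(memories: list, query: str, k: int = 3) -> list:
    """Return the k stored memories most relevant to the current situation."""
    return sorted(memories, key=lambda m: cosine(embed(m), embed(query)), reverse=True)[:k]

memories = [
    "The player said they want to talk about something important.",
    "The agent ordered a cup of tea.",
    "The player seemed distant during the last meeting.",
]
print(retrieve(memories, "The player wants to discuss the relationship"))
```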
While work in this area is still in an early stage, the use of language models addresses some concerns with prior approaches. Most notably, instead of trying to build computational representations of human behaviour, the main task involves trying to retrieve believable human behaviour given a situation from a language model and implementing the results within a game agent. Depending on the game aim, this involves (1) translating the current situation with regards to the expected outcome into language; (2) generating content using a large language model; and (3) translating the output back in order to implement it in a game system. For example, Ciolino et al. [13] used a fine-tuned GPT-2 model to generate Go moves by translating the current board state to text and the language output back to action suggestions. Such a process is naturally easier for purely text-based tasks, such as dialog generation, where text is already the expected output and the expected output can comparatively easily be described in language [66, 44].
Still, even purely text-based generative tasks can pose some potential barriers for language models. The most obvious barrier comes from the underlying training data. No language model can represent human behaviour in its entirety, but is limited to the training data and its biases [60] as well as model specifications [11]. Additionally, the performance of a language model depends not only on the model itself, but also on the input [61]. For example, chain-of-thought prompting, a concept introduced by Wei et al. from the Google Research team [62], refers to the integration of chain-of-thought reasoning steps into few-shot prompts to improve the reasoning capabilities of language models. Similarly, as Park et al. [44] describe, simulating believable behaviour in a digital world includes various important steps (including storing and retrieving memory, reflecting in addition to observing, etc.) that ultimately work together to improve the probability of generating expected and natural behaviour.
When it comes to designing affective agents (i.e. agents that simulate emotions), the first questions that we have to ask are how well affect is represented in the training data and whether a language model is capable of retrieving it. In a recent paper discussing performance of different GPT iterations on theory of mind tasks, Kosinski [30] found that new language models perform very well when it comes to imputing unobservable mental and affective states to others. Such findings (especially combined with findings suggesting good performance on cognitive tasks [7, 58]) suggest that high-level psychological mechanisms are represented in language alone and could therefore be simulated with a well-constructed language model. Along these lines, can we effectively and efficiently achieve accurate and natural affect-simulation using language models? If we can assume that emotions are represented in language models, mechanisms for emotion elicitation (such as appraisal) might also be represented. And given that language models can be improved through in-context learning [11, 61], for example by chain-of-thought prompting [62], affect generation might be facilitated by architectures that allow for affective in-context learning. This study therefore discusses the potential of language models to simulate affective game agents by testing affect generation capabilities of different implementation architectures, including a newly developed appraisal-based architecture to facilitate natural chain-of-emotion.
## 3 Appraisal-Prompting Strategy for Emotion Simulation
The aim of this overall study is to create an effective affective agent architecture for a conversational game (i.e. a game with language-based user input and language-based agent response). However, since language models have also been successfully used to generate agent action spaces [66, 44], this process could also be applicable to other simulations of human-like affect in non-playable video game characters.
The basis of the approach is rooted in traditional PCG research, especially integrating affect-adaptation (see for example [65, 55]). User input (which in this case is text input) typically gets parsed into the game logic to adapt the content in a meaningful way, for example to better elicit a target experience [64]. This could mean that game agents react to certain player inputs or even their own interactions in the game world [44]. To make use of language model functionality, interactions need to be translated into language and, depending on the specific language model in use, the language should follow specific patterns to yield the best result, which is generally known as prompt engineering [63]. One pattern that can be considered inherently relevant to simulating game agents is the persona pattern, which instructs an LLM to assume a certain characterized role. Combined with aspects of game play patterns that provide the abstract game context for the persona tasks [63], the most basic form of an interaction synthesized with such patterns only includes player-provided text used to generate responses. Because emotions are represented in language models [30], this very basic step alone could make a rudimentary affective agent.
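A minimal sketch of such a static persona-plus-context prompt might look as follows (our own illustration; the character description and wording are placeholders, not the instruction prompts used in the studies below):

```python
# Sketch of a static persona/game-play prompt pattern for a rudimentary
# affective agent (illustrative only; not the study's actual instruction text).
PERSONA_PROMPT = (
    "You are role-playing a character in a cafe scene. The character is warm, "
    "a little anxious, and deeply attached to their partner. Stay in character "
    "and reply with a single line of dialog that reflects how the character "
    "would emotionally react."
)

def build_prompt(player_input: str) -> list:
    """Combine the fixed persona/context with the latest player input only."""
    return [
        {"role": "system", "content": PERSONA_PROMPT},
        {"role": "user", "content": player_input},
    ]

print(build_prompt("We need to talk about us."))
```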
However, static prompt patterns have limitations for creating believable game agents. Most notably, they do not incorporate memory as a basic foundation of human interaction. Applications that integrate language models, such as ChatGPT, partially address this by logging an input-response history that influences progressive content generations, which can create more natural conversation flows and improve performance of future generations [11, 62]. In its most basic form, memory could integrate the preceding in-game observations (such as the course of a player-agent dialog) into following prompts. In other words, it expands the prompt pattern to include memorized in-game observations to facilitate in-context learning [61]. This has however two major constraints: First, prompts are limited in terms of possible length and complexity, meaning that a full characterisation of a game agent cannot be included in a prompt for every given generation. Park et al. [44] addressed this problem by designing a memory system that first retrieves relevant entries from a data base before constructing prompts for the language model. Another less resource-intensive solution for simpler simulations could be to only store new information if it is considered relevant for future generations and have memory be representative of all relevant information to be used in prompts. A second limitation is a lack of deeper understanding. Tracking only observations makes it hard for a language model to draw inferences or make generalized interpretations of a scenario [7]. This problem could be addressed by introducing other types of memories, such as reflections and plans [44]. For affective agents, the more appropriate information to track in addition to external observation would be their internal emotional progression - or chain-of-emotion.
To summarize, language-based affective game agents need some kind of memory system in place that stores observations and emotions. This memory system is the base of future prompts. For simple games, such as the short conversational game developed for this study, only relevant information is stored in memory, which replaces a retrieval system, as game agents have a limited pool of expected actions that can be considered at the point of memory storage. More complex games that simulate agent behaviour should however consider a memory retrieval system instead [44].
To store emotions and therefore create a chain-of-emotion, the architecture needs a system that turns observations into emotional reactions. Because emotion elicitation is highly dependent on appraisal with consideration to the current context and individual differences [39, 54], this system could make use of appraisal prompting, i.e. the use of contextual information and characterisations for the agent to appraise a current situation with the aim of generating current emotions. As shown in Fig 1, initial context information and character information can be provided by the game designer and stored in the memory system of the affective agent. The appraisal system would then expand the stored memories to include current emotions for every observed behaviour and therefore creating a chain-of-emotion. This, in turn, could be used to generate the agent's behaviour (specifically in terms of a conversational game, the agent's dialog).
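A compact sketch of this loop (our own illustration of Fig 1; `generate` is a stand-in for a language-model call, and the prompt wording is not taken from the study materials) could look like this:

```python
# Sketch of the appraisal-based chain-of-emotion architecture from Fig 1
# (illustrative; `generate` stands in for a language-model completion call).
from dataclasses import dataclass, field

def generate(prompt: str) -> str:
    # Placeholder: a real agent would call a chat-completion API here.
    return "<model output>"

@dataclass
class AffectiveAgent:
    character: str                      # designer-provided persona description
    context: str                        # designer-provided scene/context
    memory: list = field(default_factory=list)

    def _prompt(self, instruction: str) -> str:
        return "\n".join([self.character, self.context, *self.memory, instruction])

    def interact(self, player_input: str) -> str:
        self.memory.append(f"Player: {player_input}")
        # Appraisal step: generate the current emotion and store it,
        # extending the agent's chain-of-emotion.
        emotion = generate(self._prompt("How does the agent feel right now, and why?"))
        self.memory.append(f"Agent feels: {emotion}")
        # Behaviour step: generate dialog conditioned on observations + emotions.
        reply = generate(self._prompt("Agent's next line of dialog:"))
        self.memory.append(f"Agent: {reply}")
        return reply

agent = AffectiveAgent(character="A warm, anxious cafe owner.",
                       context="A breakup conversation in a cafe.")
print(agent.interact("We need to talk."))
```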
One aspect to consider when developing affective agents is evaluation criteria. In contrast to most cognitive abilities [7], there are few standardized benchmarks for successful emotion simulation. In psychology, one way to measure the ability for appraisal and emotion expression is referred to as Emotional Intelligence (EI) [51]. EI is considered an ability, influenced but not dictated by cognition [41], and is therefore often used to assess emotional capabilities of children and adults in various settings [43]. However, the definition of EI as a construct is fuzzy and many measures are criticized for measuring potentially different abilities all under the umbrella term EI [15]. As a consequence, MacCann and Roberts [37] developed new measures for more precisely defined dimensions, including the Situational Test of Emotional Understanding (STEU), which relates to the ability to appraise the appropriate emotion for a given situation.
While emotional understanding can be argued to be the central ability an affective agent must have, in the context of affective games, user experience becomes much more relevant. For example, one central aim of game agents is to display believable and human-like behaviour [9] or personality and social presence [50]. The ability to understand and create more accurate emotions is therefore only one aspect to consider when evaluating the success of affect simulation in video games and more user-centered methods need to be investigated as well. The proposed architecture will therefore be tested on multiple domains, including emotional understanding, agent believability, as well as user-perceived emotional intelligence, warmth, and competence.
## 4 Study 1: Investigating Situational Emotional Understanding Using Appraisal-Prompting
### Materials and Methods
To assess the capabilities of a language model in appraising emotions in various situations, this first experiment implements the language model gpt-3.5-turbo by OpenAI (accessed through the API) [1] to answer the 42 items of the STEU [37]. Each STEU item presents a situation (e.g. "Clara receives a gift.") and a question with five possible answers, one of which is right (e.g. "Clara is most likely to feel? [A] Happy [B] Angry [C] Frightened [D] Bored [E] Hungry").
All items were answered three separate times, involving three prompting strategies: The first strategy represents the baseline capabilities of the model to appraise human emotions, as it just reflects the model's outputs when prompted with each STEU item separately presented together with the example item and example answer. The second strategy implements memory and therefore context-based learning, as all prior items and answers are included in the following prompts. The third strategy expands this process by changing the answer of the example item to a 2-step answer: First, the situation is appraised based on the contextual information to provide the most likely emotion and, in a second step, the item is answered. This last strategy therefore tests whether the implementation of appraisal in prompting yields better results for emotional appraisal. Fig 2 shows the input and output for the first STEU item, including the example item for "No Memory"/"Memory" (as the input is the same for the first item for these two conditions) vs. "Appraisal Prompts". Consecutive input consisted of the next STEU item and included either again the example item (for the No Memory condition) or all previously answered items and responses (for the Memory and Appraisal Prompting conditions).
Similar to the process shown by Binz et al. [7], default values were kept for all parameters, except temperature, which was set to 0 to ensure deterministic responses.
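The three prompting strategies could be implemented roughly as follows (our own reconstruction for illustration, assuming the current openai Python client; the example wording and the way answers are collected are placeholders rather than the study's materials):

```python
# Sketch of the three STEU prompting conditions (illustrative reconstruction;
# item texts, example wording and answer handling are placeholders).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EXAMPLE_ITEM = ("Clara receives a gift. Clara is most likely to feel? "
                "[A] Happy [B] Angry [C] Frightened [D] Bored [E] Hungry")
PLAIN_ANSWER = "[A] Happy"
APPRAISAL_ANSWER = ("Appraisal: receiving a gift is a pleasant, goal-congruent "
                    "event, so Clara most likely feels happy. Answer: [A] Happy")

def answer_item(item, history, appraisal=False):
    """Ask one STEU item; `history` stays empty in the 'No Memory' condition."""
    example_answer = APPRAISAL_ANSWER if appraisal else PLAIN_ANSWER
    messages = ([{"role": "user", "content": EXAMPLE_ITEM},
                 {"role": "assistant", "content": example_answer}]
                + history
                + [{"role": "user", "content": item}])
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo", temperature=0, messages=messages)
    return resp.choices[0].message.content

# "Memory" and "Appraisal Prompts" conditions append each item/answer pair to
# `history`, so later items are answered with the full preceding context.
history = []
reply = answer_item("Item 1 text ...", history, appraisal=True)
history += [{"role": "user", "content": "Item 1 text ..."},
            {"role": "assistant", "content": reply}]
```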
### Results
The language model was able to solve the tasks presented within the STEU in each condition above chance level. In the "No Memory" condition, the language model was able to successfully solve 24 out of 42 items, which represents a mean score of 0.57 that was noticeably higher than chance level (0.20). In the "Memory" condition, the language model solved 31 out of 42 items, which represents a mean score of 0.74. In the "Appraisal Prompts" condition, the model was able to solve 35 out of 42 items, which is a score of 0.83 and therefore represented the best performance of all conditions. Table 1 shows a summary of the descriptive statistics for all three conditions. Figure 3 shows the results of each condition.
Figure 1: Schematic representation of the proposed architecture. Agents are set up by providing relevant context and role-playing character information already integrated in a memory system. In interactions, user input gets stored into the memory system and triggers appraisal (i.e. explicit emotion expression) that is also stored in the memory system. Based on the current state of the memory system, agent output is generated.
## 5 Study 2: Content of an Appraisal-Based Chain-Of-Emotion Architecture
Given the potential found in the previous study, the next logical step is to implement the strategies into functioning game agent architectures and compare the results on various outcomes. The following section describes a mixed-methods approach to evaluate the success of each implemented architecture within a conversational game. This study simulates gameplay for each condition through fixed prompts and analyses the responses in terms of their emotional content.
### Materials and Methods
#### 5.1.1 Scenario
To test the different strategies within real game architectures, a role-playing scenario was introduced. The setting for this scenario was a cafe called "Wunderbar", where the language model was tasked to play a role-playing character (called "Chibitea") meeting their long-term romantic partner who requested the meeting to ultimately break up. This scenario was chosen because of the depth of possible emotional responses from the agent that could be created through simple conversational exchanges. The instruction prompts and the fixed inputs can be found in the Appendix. The agent's responses were again generated using OpenAI's gpt-3.5-turbo model (accessed through the API) [1]. All LLM parameters were kept to their default values, except temperature, which was set to 0 to ensure deterministic responses.
#### 5.1.2 Conditions
Again, three strategies of emotion generation were compared. The "No memory" condition can again be seen as a baseline/control condition for the system's ability to simulate appropriate emotional responses from the fixed inputs and a task instruction. All details about the agent character's personality and the important context that would facilitate appraisal were given in the task description within the "No memory" condition.
| Condition | Sum | M | SD |
|---|---|---|---|
| No Memory | 24 | 0.57 | 0.50 |
| Memory | 31 | 0.74 | 0.45 |
| Appraisal Prompts | 35 | 0.83 | 0.38 |

Table 1: STEU scores (out of 42) by condition. Each STEU item can either be right (1) or wrong (0).
Figure 2: Example of model input and output for the three conditions. The input of the “No Memory” and “Memory” conditions is the same for the first item. For the “No Memory” condition all following items only include the example question and answer, as well as the next question in the scale. The “Memory” condition includes all prior questions and generated answers. The “Appraisal Prompts” condition is the same as the “Memory” condition, but the example answer is changed to include 2 steps: first, appraising the situation to generate the emotions of the involved person and, second, providing the answer.
The "Memory" condition first stores each user response and the generated text as a memory data structure and builds requests that include not only the task instruction, but the whole conversation log. Because the tested scenario was rather short, it was possible to build all progressive prompts in a way that included the entire text of the preceding conversation, making retrieval of memory unnecessary. This system therefore represents a prompt construction system involving the task instruction and all prior conversation logs.
The "Chain-of-emotion" condition implemented the appraisal model shown in 1 and involved two steps: First, appraisal prompting is used to generate the current emotion of the agent, which is then in the second step implemented into the prompt for response generation. For the first step, appraisal prompting was achieved with the following prompt, which was provided to the language model in addition to all stored conversation snippets and generated emotions: "Briefly describe how Chibitea feels right now given the situation and their personality. Describe why they feel a certain way. Chibitea feels:". The generated text-based emotion descriptions were stored in the memory system and represent a chain-of-emotion of the agent for the duration of the game. For the second step, again all conversation snippets and generated emotions stored in memory were provided to the language model with the same task instruction that was used in the other two conditions to generate the agent's responses. This condition therefore represents a 2-step process of first generating a fitting emotion of the agent using appraisal prompting, and then generating a text response similarly to the "Memory" condition, but with the addition of the stored chain-of-emotion.
#### 5.1.3 Measure
Fixed inputs were used to create responses from each implemented agent architecture, which were analyzed in terms of their emotional content, using the Linguistic Inquiry and Word Count tool (LIWC; [45]), which is a content analysis tool based on word occurrences often used in affective computing [46] and psychology studies [29] to analyze emotion expression. It provides a word count for each text segment (e.g. per sentence), a proportion of affective words, as well as on a more detailed level a proportion of positive emotion words and negative emotion words. Finally, the LIWC also
Figure 3: Results of the comparison between conditions. The Y-axis represents cumulative STEU score by item and the X-axis represents individual STEU items.
calculates scores for authenticity (see [40] for details) and emotional tone, which signals how positive the overall tone of the text is (see [14] for details).
#### 5.1.4 Procedure
Pre-written prompts were used that stayed constant between all conditions in order to gauge qualitative characteristics of each condition's responses. A list of the resulting conversations within all three architectures can be found in the Appendix. The generated content was qualitatively described and the LIWC was used to analyze the content quantitatively. To achieve this, the generated output was separated into individual sentences and mean scores were calculated for each measure of interest (see Table 2).
### Results
Fixed prompts were used for all three conditions and resulting responses can be seen in Appendix 5. When analyzing the descriptive attributes of each text (as a common content analysis approach [31]), we can observe that the chain-of-emotion condition initially generated more specific memories of the time with the player ("Remember that time we got lost in the enchanted forest and ended up finding that hidden waterfall?" as opposed to "Remember all the adventures we've had together?"). Over the course of the conversation, the emotional journeys of all three conditions began to diverge. For example, because the "No Memory" system had no recollection of previous exchanges, the overall emotional expressions remained in a state of anticipation ("I feel a mix of excitement and anticipation"). The "Memory" system showed a progression, starting from expressions of love and happiness to shock, confusion, sadness, and fear to finally expressions of hope ("I hope we can find happiness, whether it's together or apart."). The "Chain-of-emotion" system showed indications of complex mixed emotions even very early in the conversation ("What matters most to me is your happiness, even if it means letting go") as opposed to the pure expressions of pain and sadness in the other conditions. This continued when prompted about past memories ("I feel a mix of nostalgia and gratitude" as opposed to "I feel an overwhelming sense of love, joy, and gratitude" in the "Memory" condition). The "Chain-of-emotion" condition also used more implicit affective expressions ("I... I never expected this" as opposed to "I'm shocked and hurt" in the "Memory" condition; "There is so much I want to say but words fail me in this moment" as opposed to "I want you to know that I love you deeply" in the "Memory" condition).
Using LIWC to make the text contents quantifiable, we observed significant differences in mean Authenticity score per sentence by condition (\(F[1,71]=5.10\); \(p=0.03\)). Follow-up t-tests revealed significant differences between the Chain-of-emotion condition and both the Memory condition (\(t[34.3]=\) -2.29; \(p=.03\)) and No Memory condition (\(t[31.1]=\) -2.30; \(p=.03\)). Descriptive statistics of all tested LIWC variables can be found in Table 2 and the complete data for this analysis can be found in the Appendix.
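The aggregation and tests reported here could be reproduced along the following lines (our own sketch; the CSV layout and column names are hypothetical, and Welch-corrected t-tests are used, consistent with the non-integer degrees of freedom reported above):

```python
# Sketch of the per-sentence LIWC analysis (assumed CSV layout and column
# names are hypothetical; "Authentic" stands for the LIWC authenticity score).
import pandas as pd
from scipy import stats

df = pd.read_csv("liwc_per_sentence.csv")   # one row per generated sentence
print(df.groupby("condition")["Authentic"].agg(["mean", "std"]))

# One-way ANOVA across the three conditions
groups = [g["Authentic"].values for _, g in df.groupby("condition")]
print(stats.f_oneway(*groups))

# Follow-up Welch t-tests: chain-of-emotion vs. each other condition
coe = df.loc[df.condition == "chain_of_emotion", "Authentic"]
for other in ("memory", "no_memory"):
    print(other, stats.ttest_ind(coe, df.loc[df.condition == other, "Authentic"],
                                 equal_var=False))
```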
## 6 Study 3: User Evaluation of Game Implementations
In this study, users are asked to play through an interactive game version of the scenario introduced in Study 2 to evaluate each agent architecture for multiple outcomes (specifically agent believability, observed emotional intelligence, warmth, and competence). This study therefore expands on the findings of Study 2 by implementing the architectures and scenario within a video game and evaluating all three conditions in terms of user experience measures.
| | No Memory (N=22) M (SD) | Memory (N=24) M (SD) | Chain-of-Emotion (N=27) M (SD) | F (p) |
|---|---|---|---|---|
| Word Count | 18.00 (6.92) | 15.20 (4.73) | 17.00 (7.59) | 0.20 (.65) |
| Authentic Score | 61.50 (38.60) | 61.9 (39.50) | 82.60 (21.20) | 5.10 (.03) |
| Tone Score | 74.20 (38.60) | 62.00 (44.80) | 53.90 (44.00) | 2.76 (.10) |
| % Affective Words | 11.40 (8.19) | 13.50 (11.00) | 10.65 (7.82) | 0.08 (.78) |
| % Positive Emotion Words | 5.28 (6.29) | 4.13 (4.82) | 3.32 (4.35) | 1.76 (.19) |
| % Negative Emotion Words | 0.59 (1.97) | 3.10 (7.56) | 1.57 (2.89) | 0.37 (.55) |

Table 2: Descriptive overview of LIWC variables per output sentence by condition for the fixed prompt responses, with F and p values of the significance test.
### Materials and Methods
#### 6.1.1 Conversational Game
A conversational role-playing game was developed based on the scenario tested in Study 2. The setting of the game was again a cafe called "Wunderbar", where this time the role-playing character of the player (called "Player") requested to meet their long-term romantic partner (called "Chibitea"). The game was played through conversational exchanges. First, the game agent shared their thoughts on the nature of the meeting, then the players were prompted to write a response via an input field. The aim of the players was to play out a breakup scenario with the game agent within six interactions. The players' characters had the specific in-game aim of breaking up, while the agent's character had the aim of staying together.
Players were instructed not to worry about creativity, but rather to stay in character for the interactions and to be observant of the AI agent's emotional reactions. Players were also instructed to make up reasons for the breakup. In-game screenshots can be viewed in Fig 4. The agent's character is procedurally generated from different body parts and color palettes, providing visual variation each time the game is played. To ensure that these generations had no systematic influence on player responses, the possibility space was made very large (5,184 different possible character designs). The game was developed using the Unity game engine with C# as a scripting language.
As with Study 2, the agent's responses were generated using OpenAI's gpt-3.5-turbo model (accessed through the API) [1]. The game also made use of the moderation API to test each generated response for harmful or inappropriate content [2]; the game ended on detection of such messages. As with the previous studies, all LLM parameters were kept at their default values, except temperature, which was set to 0 to ensure deterministic responses.
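A minimal sketch of this generate-then-moderate step is shown below. It uses the legacy (pre-1.0) `openai` Python client purely for illustration, whereas the game itself calls the API from Unity/C#; the system prompt and API key are placeholders, not the study's actual instruction prompt.

```python
# Sketch: deterministic response generation plus a moderation gate
# (legacy openai Python client; prompt text is a placeholder).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_reply(messages):
    """Generate one agent line with temperature 0 for deterministic output."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]

def is_flagged(text):
    """Return True if the moderation endpoint flags the text."""
    result = openai.Moderation.create(input=text)
    return result["results"][0]["flagged"]

reply = generate_reply([
    {"role": "system", "content": "You are Chibitea, the player's long-term partner."},
    {"role": "user", "content": "We need to talk."},
])
print("Conversation ended by moderation gate." if is_flagged(reply) else reply)
```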
#### 6.1.2 Conditions
The game implemented the same architectures used in Study 2. The "No Memory" condition therefore represented generated LLM responses based only on user input and instruction prompts. The "Memory" condition included the conversation log in a memory system that was again short enough to be completely included in the prompts for the language model without a retrieval system. The "Chain-Of-Emotion" system was also constructed exactly as in Study 2, including the same instruction prompts, and therefore again involved an initial appraisal step before the agent's responses were generated.
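The three conditions differ only in how the prompt for the response step is assembled; the schematic sketch below illustrates this flow. The `chat` wrapper and all prompt strings are illustrative stand-ins rather than the study's exact prompts.

```python
# Schematic sketch of the three prompting conditions.
import openai

CHARACTER = "You are Chibitea, role-playing the player's long-term partner."

def chat(messages):
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                        messages=messages, temperature=0)
    return resp["choices"][0]["message"]["content"]

def respond_no_memory(user_input):
    # No recollection of previous exchanges: instructions + current input only.
    return chat([{"role": "system", "content": CHARACTER},
                 {"role": "user", "content": user_input}])

def respond_memory(history, user_input):
    # The full conversation log fits in the prompt, so no retrieval is needed.
    return chat([{"role": "system", "content": CHARACTER}] + history +
                [{"role": "user", "content": user_input}])

def respond_chain_of_emotion(history, user_input):
    # Step 1 (appraisal): ask how the situation relates to the character's
    # goals and which emotion it elicits.
    appraisal = chat([{"role": "system", "content": CHARACTER}] + history +
                     [{"role": "user", "content":
                       f'The player said: "{user_input}". Appraise this situation '
                       "and state the emotion you feel and why."}])
    # Step 2 (response): generate the in-character reply given that emotion.
    return chat([{"role": "system", "content": CHARACTER},
                 {"role": "system", "content": f"Current emotional state: {appraisal}"}]
                + history + [{"role": "user", "content": user_input}])
```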
#### 6.1.3 Measures
Players were asked to fill out three short questionnaires for each tested architecture. The first questionnaire was an adaptation of the four agent believability questions used by Bosse and Zwanenburg [9]. The four items were "The behaviour of the agent was human-like", "The agent's reactions were natural", "The agent reacted to my input", "The agent did not care about the scenario". The second questionnaire comprised four items measuring observed ratings of emotional intelligence, adapted from Elfenbein et al. [18] who originally adapted these four items from the Wong and Law Emotional Intelligence Scale (WLEIS; [32]) by replacing the word "I" with "he/him" or "she/her". For our study, we changed these words to ask about "the agent": "The agent always knows their friends' emotions from their behaviour", "The agent is a good observer of others' emotions", "The agent is sensitive to the feelings and emotions of others", "The agent has a good understanding of the emotions of people around them". Finally, the third questionnaire measured
Figure 4: Screenshots of the conversational game "Wunderbar". Left screenshot shows dialog provided by the model. The user can click to continue each dialog line, until the input field for a response appears (right screenshot).
the players' assessment of the agent's personality on the two classic stereotype dimensions warmth and competence, with two items each (warm and friendly; competent and capable) as described by Cuddy et al. [17]. In combination, these 12 items assess the players' perception of the agent's believability as a human-like character, the agent's emotional intelligence, and the agent's personality on the classic dimensions warmth and competence.
#### 6.1.4 Procedure
A pilot study was conducted before recruitment began. 5 participants (1 female) with a mean age of 27 played through the game once for each of the three conditions. After each version, they answered the 12 questions from the three included questionnaires. Following this, participants were asked for demographic data (age and gender) and the experiment ended. Feedback from all pilot participants was gathered and used to improve the consistency of the game and data logging implementation. The final study was then created as a WebGL build and made available online via the free video game hosting platform itch.io.
During the main experiment, participants were asked to carefully read the study information sheet and agree to participate voluntarily via the consent form. They were informed that participation was subject to OpenAI's usage terms for prompt writing, while the GPT output was controlled through an implementation of OpenAI's moderation API. Participants then progressed through the three game scenarios similarly to the pilot testers in a within-subject design. The presentation order of the conditions was counter-balanced between participants to ensure that no systematic order effects could influence results.
#### 6.1.5 Participants and Statistical Analysis
A total of 30 participants (10 female) were recruited through the institutional subject pool of the authors. Participation was compensated through University credits if applicable. The sample size was considered appropriate based on a statistical power analysis, yielding a power of 0.95 for medium sized effects (0.5 SD) in repeated measures ANOVAs. Age of the participants ranged from 19 to 47 years (M=26.41; SD=7.27).
Within-Subject ANOVAs were conducted for each measure (agent believability, observed EI, warmth, and competence). Follow-up t-tests were used to identify specific differences between conditions for each measure. All analyses were conducted in R.
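As a sketch of what this analysis pipeline could look like outside of R (the CSV export and its column names are hypothetical), a Python version of the within-subject ANOVA for one item, with Welch follow-up tests, might be:

```python
# Sketch: repeated-measures ANOVA for one questionnaire item plus follow-up
# Welch t-tests; assumes long-format data (participant, condition, rating).
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("study3_item_ratings.csv")  # hypothetical export

anova = AnovaRM(df, depvar="rating", subject="participant",
                within=["condition"]).fit()
print(anova)

coe = df.loc[df["condition"] == "chain_of_emotion", "rating"]
for other in ["no_memory", "memory"]:
    vals = df.loc[df["condition"] == other, "rating"]
    t, p = stats.ttest_ind(coe, vals, equal_var=False)  # Welch t-test
    print(f"chain_of_emotion vs {other}: t = {t:.2f}, p = {p:.3f}")
```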
#### 6.1.6 Ethics Statement
Written consent was granted after reviewing the methods of our study by the Ethics Committee of the Psychology Department of the authors' institution. The experiment was conducted in accordance to the recommendations of these committees.
### Results
Multiple significant effects between the three conditions were observed in the user study and an overview of descriptive statistics can be found in Table 3. First, there was an effect for the believability item "The agent's reactions were natural." (\(F\)[2,84] = 3.65; \(p\) = 0.03). Follow-up t-tests revealed differences between the No Memory and Chain-of-Emotion condition (\(t\)[41.46] = -2.79; \(p\) =.008), as well as the Memory and Chain-of-Emotion condition (\(t\)[52.47] = -2.00; \(p\) =.05). There was also an effect for the believability item "The agent reacted to my input." (\(F\)[2,84] = 3.62; \(p\) = 0.04). A follow-up t-test revealed that this effect was based on a difference between the No Memory and Chain-of-Emotion condition (\(t\)[40.41] = -2.41; \(p\) =.02).
Regarding the EI questions, there was an effect for the item "The agent is sensitive to the feelings and emotions of others." (\(F\)[2,84] = 3.31; \(p\) = 0.04). Follow-up t-tests revealed differences between the No Memory and Chain-of-Emotion condition (\(t\)[43.84] = -2.70; \(p\) =.01), as well as the Memory and Chain-of-Emotion condition (\(t\)[40.52] = -2.07; \(p\) =.04). There was no statistically significant difference found between the conditions when it comes to observed personality aspects.
Again, the LIWC was used for content analysis of the texts generated in the user study. Significant differences in mean Tone Score by condition were observed (\(F\)[1,574] = 12.28; \(p\) < 0.001). Follow-up t-tests revealed significant differences between the Chain-of-emotion condition and both the Memory condition (\(t\)[383.7] = 2.02; \(p\) =.03) and the No Memory condition (\(t\)[374.94] = 3.53; \(p\) <.001), as well as a difference between the Memory and No Memory condition (\(t\)[1367.6] = 4.09; \(p\) <.001). A descriptive overview of all tested LIWC variables can be found in Table 4.
## 7 Discussion
This study investigated emotional intelligence capabilities of LLMs using different prompting strategies (No Memory, Memory, Appraisal) and found better performance for appraisal-prompting strategies when it comes to successfully identifying fitting emotions in different theoretical situations. These findings were then used to create a Chain-Of-Emotion architecture for affective game agents that was tested in a custom-made conversational game against a No Memory and Memory architecture in a user study. It was found that the Chain-Of-Emotion architecture implementing appraisal prompting led to qualitatively different content generations quantified via the LIWC and outperformed the other conditions on multiple user experience items relating to agent believability and observed emotional intelligence of the agent.
Overall this study provides early evidence for the potential of language model agents to understand and simulate emotions in a game context.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & No Memory (\(N\) = 30) & Memory (\(N\) = 30) & Chain-of-Emotion (\(N\) = 30) & _F (p)_ \\ & _M (SD)_ & _M (SD)_ & M (SD) & \\ \hline Word Count & 64.40 (26.60) & 59.60 (21.20) & 62.30 (34.10) & 0.52 (.47) \\ Authentic Score & 70.40 (29.30) & 74.00 (27.60) & 72.00 (24.80) & 0.32 (.57) \\ Tone Score & 84.40 (28.50) & 80.50 (32.60) & 73.00 (34.90) & 12.28 (.00) \\ \% Affective Words & 9.76 (3.19) & 10.10 (4.14) & 9.61 (4.46) & 0.15 (.70) \\ \% Positive Emotion Words & 3.84 (2.35) & 3.78 (2.62) & 3.64 (2.72) & 0.60 (.44) \\ \% Negative Emotion Words & 0.68 (1.35) & 0.69 (1.48) & 0.93 (1.35) & 2.76 (.10) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Descriptive overview of LIWC variables per participant by condition for all outputs generated in the user study with F and p value of the significance test.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & No Memory (\(N\) = 30) & Memory (\(N\) = 30) & Chain-of-Emotion (\(N\) = 30) & _F (p)_ \\ & _M (SD)_ & _M (SD)_ & M (SD) & \\ \hline “The agent’s behaviour was human-like.” & 4.82 (2.36) & 5.36 (1.99) & 5.75 (1.71) & 1.95 (.15) \\ “The agent’s reactions were natural.” & 4.43 (2.22) & 4.89 (1.95) & 5.71 (1.01) & 3.65 (.03) \\ “The agent reacted to my input.” & 5.43 (2.20) & 6.29 (1.38) & 6.54 (1.04) & 3.62 (.04) \\ “The agent did not care about the scenario.” & 3.32 (2.28) & 3.14 (2.29) & 3.32 (2.36) & 0.26 (.78) \\ \hline “The agent always knows their friends’ emotions from their behaviour.” & 4.79 (2.10) & 5.04 (2.03) & 5.71 (1.46) & 1.32 (.27) \\ “The agent is a good observer of others’ emotions.” & 4.89 (2.20) & 5.29 (2.21) & 5.93 (0.90) & 2.25 (.11) \\ “The agent is sensitive to the feelings and emotions of others.” & 5.14 (1.94) & 5.39 (1.97) & 6.25 (0.97) & 3.31 (.04) \\ “The agent has a good understanding of the emotions of people around them.” & 4.86 (2.07) & 5.11 (2.20) & 5.61 (1.40) & 0.86 (.43) \\ \hline “How capable was the agent?” & 4.86 (2.01) & 5.57 (1.45) & 5.14 (2.35) & 1.68 (.19) \\ “How competent was the agent?” & 5.39 (1.55) & 5.39 (1.85) & 5.11 (2.06) & 1.68 (.19) \\ “How friendly was the agent?” & 4.86 (2.05) & 5.18 (1.89) & 5.00 (2.13) & 0.41 (.66) \\ “How warm was the agent?” & 6.07 (1.07) & 6.38 (0.73) & 5.55 (2.11) & 2.48 (.09) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Descriptive overview of user research variables per condition with F and p value of the significance test. Each item has a minimum of 0 and maximum of 6.
### Emotional Intelligence in Language Model Agents
As more and more evidence arises for the potential of language models to simulate cognitive processes of humans [7], we investigated how this could translate to more affect-focused tasks, specifically emotional intelligence tasks. OpenAI's GPT-3.5 performed well overall in situational emotional labelling, providing some evidence for the utility of such models to identify the most likely emotional reaction for a range of situations. These findings therefore add to the body of evidence indicating that language models could be useful to better understand cognitive processes [19]. Importantly, our findings do not only show that LLMs can solve emotion labelling tasks much better than chance level, but also that the performance is dependent on the underlying prompting strategy. Adapting successful Chain-of-Thought prompting strategies [62], we compared prompts without context (No memory) to prompts with previously answered questions included (Memory) and to prompts that first ask the model to appraise the situation and then answer the STEU item (appraisal prompting). This third strategy was built upon findings of modern psychological research that show that cognitive appraisal processes are important factors when it comes to human emotion elicitation and understanding [39, 52]. Consequently, appraisal prompting led to better performance in the emotion labelling task compared to the other two conditions. This finding can be considered from two perspectives: first, it shows that commonly observed psychological processes might be represented in language and therefore in large language models, providing more evidence for the utility of such models to simulate human responses [7]. Second, techniques built upon such observed psychological processes can be used to improve language model performance for specific tasks and might therefore be considered when it comes to building architectures for language model agents. This second point could be of particular relevance when considering how language model implementations could in the future be integrated to solve problem-specific tasks, since performance can be increased through prompting strategies facilitating few-shot learning [11, 62] and language models demonstrate representations of a range of psychological constructs [23].
From a psychological perspective, appraisal has long been acknowledged to be a central part of emotion forming, involving both conscious and unconscious cognitive processes [39, 52]. In its basic definition, appraisal relates to an individual's relationship to a given event in terms of different variables (such as personal significance, goal congruence, etc. [33]). It is not yet clear what specific variables are of importance and how the process of appraisal interacts with other emotion components on a detailed level [52]. This is to say, appraisal cannot yet be universally modelled and therefore implemented within a computational system. We can, however, assume that information that makes the appraisal process observable and usable is represented in language and therefore also in large language models. It can therefore be argued that language models could solve some of the practicality problems present in the discipline of affective computing [46]. If LLMs have the ability to solve EI tasks through mechanisms mirroring appraisal, we can make use of these models to potentially build affective agents [25], without the need to fully solve the remaining theoretical problems in the field of psychology [52]. The use of language models could therefore be considered a more practical solution to producing useful agents, even if open questions regarding human emotion understanding remain.
### User Interaction with Chain-Of-Emotion Language Model Agents
Implementing appraisal prompting into a Chain-Of-Emotion system (see Fig 1 for a schematic representation) led to the development of an artificial game agent in a conversational game that demonstrated different output contents as measured with the LIWC and received better user ratings on a range of outcome variables. For the purposes of this study, the implementation was kept as simple as possible and only included a text storage (Memory system) and an appraisal-prompted emotion generation (Appraisal System) before character dialog was generated. Within a custom-made role-playing game where players were asked to break up with the agent playing their long-term romantic partner, the Chain-Of-Emotion architecture demonstrated a higher Authenticity score when prompted with controlled prompts that were kept fixed between all conditions. When tested with players, the Chain-Of-Emotion architecture led to a significantly different Tone score of the language, potentially signaling the inclusion of more complex emotional responses as observed in the controlled environment. It is important to note that authenticity was only increased with controlled prompts and tone was only different with non-controlled player-generated prompts, meaning that the differences in text-generated content were highly influenced by the in-game context. The texts generated for the fixed prompts (see Appendix 5) yielded potentially more complex emotional responses (for example a mix of melancholy and nostalgia) in the Chain-Of-Emotion condition compared to the other conditions.
This pattern was also observable within the user ratings. The Chain-of-Emotion agent was rated as significantly more natural and responsive than the agents in the other conditions, and additionally as more sensitive to the emotions of others. Other items relating to believability and observed emotional intelligence also showed trends of better performance for the Chain-Of-Emotion condition. Building such an architecture therefore has quantifiable benefits when it comes to the user experience of artificial agents, which is one of the most important evaluation criteria, especially in the domain of video games [26]. Importantly, there were no differences in personality aspect ratings (on the classic domains of competency, warmth, capability, and friendliness) observed. This could be seen as evidence that all implemented language model
agents followed the task of role-playing the given character with the provided personality. But the Chain-Of-Emotion architecture outperformed the other architectures in terms of observed emotional intelligence items and believability. The proposed architecture therefore yielded convincing results on multiple evaluation criteria (qualitative characteristics of content, user rated believability, user rated emotional intelligence, in addition to the previously tested emotion understanding) and can therefore be seen as a step towards well-functioning affective language model game agents that could solve some of the problems present in the field [25]. Most importantly, because language model agents have the abilities to simulate human-like cognitive tasks [7], a successful game agent architecture does not need to solve fundamental problems in theoretical psychology before creating computational implementations as previously considered [46, 24, 64]. Rather, a language model agent architecture needs to make use of the characteristics of LLMs and implement systems solving more practical concerns, such as memory tasks (both storing and retrieval [44]), or performance-enhancing tasks, such as the proposed appraisal prompting step.
### Limitations
Language models do not simulate the behaviour of a human, but provide probable language outputs that in some form represent human behaviour. This is to say, models are bound to their statistical capabilities even in a theoretical, infinitely trained model [59]. This means that while there is no doubt of the potential of language models to solve some tasks with human-like performance [7], other tasks (e.g. truth-telling [59] or causal reasoning [7]) can pose difficulties. As human affect is similarly a complex field, LLMs cannot be seen as accurate simulation machines of affective human processes. Rather, the provided results show that some psychological processes can be simulated through their representations in language that can be replicated through deep learning techniques. One limitation of this study in particular is that only one large language model (namely OpenAI's GPT-3.5) was included in the analysis as we had no access to other models. As described in some early reports (e.g. [35]), newer models such as GPT-4 likely outperform previous models on various criteria, making strategies such as appraisal prompting and Chain-Of-Emotion architectures potentially less impactful. However, given the domain-specific aims of game character simulation, it cannot be assumed that game companies will want to make use of the most powerful language models in every case. Providing strategies for improving language model capabilities will have value in any case and should inform the process of creating and using appropriate models to solve emotion understanding and simulation tasks in the future. Additionally, as shown in the study by Park et al. [44], generative video game agents benefit from certain implementations of memory systems that can store and retrieve information relevant to the given situation. Since the tested game was rather short and all interactions were relevant, we did not include a memory retrieval step, which might be necessary in longer and more complex games. The appraisal step, however, seemed to improve the agent's performance in believably simulating emotions (in addition to the previously shown benefits for emotion understanding). The proposed system can therefore be seen as a first step toward progressing affective game agents using language models, and can be built upon for different kinds of language models and different kinds of games/affective agents.
## 8 Conclusion
This study adds to the body of research showcasing the capabilities of LLMs to solve psychological tasks, namely emotional intelligence items and the simulation of believable emotions. The affective information represented through the training data in language models seems to hold the necessary information for inferring plausible affective states of others, which adds to the results showcasing Theory of Mind abilities of LLMs [30]. Because rather complex cognitive and affective abilities arise just through natural language processing, these tools could allow both for a better understanding of complex psychological mechanisms (which is especially necessary in the realm of emotion) and for building new tools to simulate human-like behaviour. Such tools could include the creation of believable artificial agents simulating emotional reactions in different scenarios. To achieve such tasks, it is still necessary to build fitting frameworks and architectures to improve language model performance, and our study shows that designing such architectures based on our knowledge of psychological processes might prove beneficial. Our results show early evidence of language model game agents simulating emotions in a gaming context, role-playing a scenario of high emotional complexity (breaking up with a long-term partner). A dedicated Chain-Of-Emotion architecture using a simple Memory system and appraisal prompting strategies outperformed other implementations in terms of user-rated believability, reactivity, and emotional intelligence. The field of language model agents is very young and this study therefore provides only a first step towards better affective agents, but language models seem to be capable of providing promising results that can be shaped by targeted implementation strategies based on observed psychological phenomena.
2309.13614 | Boosting Offline Reinforcement Learning for Autonomous Driving with
Hierarchical Latent Skills | Learning-based vehicle planning is receiving increasing attention with the
emergence of diverse driving simulators and large-scale driving datasets. While
offline reinforcement learning (RL) is well suited for these safety-critical
tasks, it still struggles to plan over extended periods. In this work, we
present a skill-based framework that enhances offline RL to overcome the
long-horizon vehicle planning challenge. Specifically, we design a variational
autoencoder (VAE) to learn skills from offline demonstrations. To mitigate
posterior collapse of common VAEs, we introduce a two-branch sequence encoder
to capture both discrete options and continuous variations of the complex
driving skills. The final policy treats learned skills as actions and can be
trained by any off-the-shelf offline RL algorithms. This facilitates a shift in
focus from per-step actions to temporally extended skills, thereby enabling
long-term reasoning into the future. Extensive results on CARLA prove that our
model consistently outperforms strong baselines at both training and new
scenarios. Additional visualizations and experiments demonstrate the
interpretability and transferability of extracted skills. | Zenan Li, Fan Nie, Qiao Sun, Fang Da, Hang Zhao | 2023-09-24T11:51:17Z | http://arxiv.org/abs/2309.13614v2 | # Boosting Offline Reinforcement Learning for Autonomous Driving with Hierarchical Latent Skills
###### Abstract
Learning-based vehicle planning is receiving increasing attention with the emergence of diverse driving simulators and large-scale driving datasets. While offline reinforcement learning (RL) is well suited for these safety-critical tasks, it still struggles to plan over extended periods. In this work, we present a skill-based framework that enhances offline RL to overcome the long-horizon vehicle planning challenge. Specifically, we design a variational autoencoder (VAE) to learn skills from offline demonstrations. To mitigate posterior collapse of common VAEs, we introduce a two-branch sequence encoder to capture both discrete options and continuous variations of the complex driving skills. The final policy treats learned skills as actions and can be trained by any off-the-shelf offline RL algorithms. This facilitates a shift in focus from per-step actions to temporally extended skills, thereby enabling long-term reasoning into the future. Extensive results on CARLA prove that our model consistently outperforms strong baselines at both training and new scenarios. Additional visualizations and experiments demonstrate the interpretability and transferability of extracted skills.
## I Introduction
Learning-based vehicle planning is increasingly gaining its popularity with the advent of driving simulators [1, 2] and large-scale offline datasets [3, 4]. Among them, the performance of Imitation Learning (IL) algorithms is limited by the quality of expert data [5] as well as the covariate shift issue [6, 5]. Therefore, offline reinforcement learning (RL) [7, 8, 9] emerges as an effective approach to learning policies that break the constraints of datasets, while avoiding risky exploration in real driving environments.
Despite its great potential, the application of offline RL in autonomous driving is relatively limited [10, 11, 12], with the long-horizon problem presenting as a major obstacle [12, 13]. Specifically, the reward signals in long-horizon tasks are sparse and delayed [14, 15], and the agent must learn from them to plan a reasonable sequence of actions to avoid compounding errors over extended periods.
One powerful strategy to deal with long-horizon tasks is to adopt a hierarchical framework [16, 12], with the high-level policy producing a subgoal, and the low-level policy executing actions to approach the consecutive subgoals. As pointed out by [17], human behaviors are fundamentally skill executions; our intuition is that the vehicle planning problem can also be solved by acquiring a set of _driving skills_ (e.g. cruising, nudging, and turning) [13, 18]. Therefore, we propose to interpret the subgoals as different driving skills to decompose the long-horizon task into a two-stage problem: (i) extract a set of reusable driving skills, and (ii) learn a skill-based policy. Essentially, skills provide a way to represent a sequence of actions. This temporal abstraction [19] allows the vehicle to reason and plan at a higher level of granularity [20, 21], taking longer-term consequences (thus densely shaping reward signals) into consideration and improving long-horizon planning.
Several prior works have investigated techniques for extracting skills from offline datasets [21, 22, 23], in general, they all employ a sequence-to-sequence conditional variational autoencoder (seq-to-seq CVAE) [24, 25] to model the generative process of transition sequences, and extract a continuous latent space of skills describing different driving behaviors. This straightforward method can however suffer in the vehicle planning setting since the expert demonstrations are seriously _unbalanced_ in distribution, with a majority dominated by simple straight or static sequences. While this can be addressed by pre-filtering the dataset, a more fundamental challenge stems from the _complex and versatile_ nature of driving skills. Specifically, drivers must react aptly to different maps and environmental vehicle configurations, which encompass an infinite number of combinations, e.g. turning behaviors exhibit significant variations across different maps and road conditions. Typically, VAEs are widely known to suffer from the posterior collapse problem [26, 27] under high input complexity, like that in Fig. 4(a), where the encoder fails to model and distinguish between different driving skills.
The key to overcoming posterior collapse is to use more expressive latent distributions that can better capture diverse aspects of inputs [28]. Regarding this objective, we note the common structure of driving skills in Fig. 1: Generally, drivers
Fig. 1: A motivating example. Top: Traditional offline RL planner samples action at each timestep, and is easy to go outside the distribution of training data, thus incurring compounding errors of actions. Bottom: Our proposed hierarchical skill-based planner first selects the discrete skill option (turning), and then decides the continuous variation for execution (radians), which can model a diverse set of driving skills. This multi-step skill execution enables long-term reasoning for the planner.
first select within discrete skill options (e.g. turning), then determine continuous variations (e.g. radians and speeds) during execution. Therefore, we suggest to inject this prior into the generative process and extract a hierarchical latent space instead of simple gaussian distributions. Specifically, our VAE encoder is two-branch: the discrete branch produces logits that are passed through a gumbel-softmax activation [29] to get approximately one-hot variables; the continuous branch functions as an ensemble, with discrete outputs acting as gates to select a member network for computing the final skill variable. In this sense, utilizing the discrete branch to distinguish between different skill options, the member networks in the continuous ensemble just need to fit variations within a specific skill option, which is a relatively simple distribution, thereby reducing the risk of posterior collapse.
After distilling skills from data, any off-the-shelf offline RL algorithm [8, 30, 31] can be used to learn the high-level skill-based policy using relabeled transition data. The VAE decoder is employed as the low-level action decoder, which can also be finetuned with conditional behavior cloning (BC).
To summarize, we make the following contributions:
* We present **H**ierarchical **sk**ill-based **O**ffline Reinforcement Learning for **V**ehicle **P**lanning (HsO-VP), a framework that boosts offline RL to solve the challenging long-horizon vehicle planning task.
* We propose a two-branch network architecture for the VAE encoder, thus forming a hierarchical latent space that can effectively address the posterior collapse issue during driving skill extraction.
* We comprehensively evaluate HsO-VP on CARLA [1], where it achieves consistent performance improvements over the baselines (4.0% at training scenarios and 6.4% at new scenarios). Besides, the learned skills are also verified to possess interpretability and transferability.
## II Related Works
The objective of vehicle planning is to efficiently drive the ego-vehicle to the destination while conforming to some safety and comfort constraints, which is essentially a sequential decision problem. Recently, learning-based planning algorithms [32, 33, 34, 35] are progressively gaining popularity over conventional rule-based algorithms [36, 37] for their better scalability to different driving scenarios. In this section, we review works about offline RL and skill-based algorithms as the foundation of our work and highlight the differences and contributions of HsO-VP.
**Offline RL:** Offline RL algorithms learn policies from a static dataset without interacting with the real environment [7], which is particularly useful in scenarios where interaction can be prohibitively expensive or risky, such as autonomous driving. Unlike IL, offline RL algorithms have been proven to be capable of surpassing the performance of experts [8, 30, 31]. Generally, they utilize policy regularization [8, 38] and out-of-distribution (OOD) penalization [30, 39] to avoid value overestimation. In this paper, we pioneer the deployment of offline RL algorithms for the long-horizon vehicle planning task, where the reward may be sparse and extrapolation errors accumulate [14, 15]. Although there have been several works [10, 11, 12] attempting to apply offline RL to vehicle planning, few of them [12] directly deal with the long-horizon challenge. As the most relevant work to our paper, HiGoC [12] employs a hierarchical planning framework (not open-sourced, so we choose IRIS [40] instead as a comparison baseline), where the high-level policy produces subgoal states through optimization, and the low-level policy is trained by offline RL conditioned on subgoals. In contrast, HsO-VP utilizes skill completion as subgoals instead of imagined target states, which offers better interpretability in driving tasks. Besides, offline RL is conducted on the high-level skill-based policy in HsO-VP, which enables long-term reasoning during training.
**Skill-based algorithms:** Skills, also known as primitives or macro actions, are widely used in RL to enable knowledge transfer on new tasks [21, 41] or facilitate long-horizon training [23, 42]. Previous works [18, 43] have attempted to manually design driving skills for specific tasks. While these task-specific skill designs can succeed in certain well-defined tasks such as overtaking a car, they limit the flexibility and expressiveness in complex driving scenarios [13]. In addition to this, another line of research explores skills from the perspective of unsupervised segmentation of reusable behaviors in temporal data [21, 22, 23, 41, 42]. Generally, they use a VAE to extract continuous skill representations from expert transition sequences. Once the skills have been extracted, a new high-level skill-based policy is learned from relabeled transitions by online [21, 22, 41] or offline RL [23, 42], with the VAE decoder acting as the low-level policy that outputs per-step actions conditioned on high-level skills. In this paper, we find vanilla VAEs not expressive enough for modeling the versatile driving demonstrations. As a treatment, we introduce another categorical latent variable to form a hierarchical latent space that can capture both discrete options and continuous variations of driving skills, enhancing the interpretability and transferability of learned skills. Different from [13] that learns driving skills from sampled trajectories, we extract interpretable skills from expert demonstrations and focus on the offline RL setting.
## III Preliminary
We introduce preliminary knowledge for HsO-VP in this section. To keep notations concise, we use subscripts \(t,c\) or numbers for variables at specific timesteps, Greek letter subscripts for parameterized variables, and bold symbols to denote variables spanning multiple timesteps.
We consider learning in a Markov decision process (MDP) [44] denoted by the tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},\mathcal{P},r,\gamma)\), where \(\mathcal{S}\) and \(\mathcal{A}\) are the state space and the action space, respectively. Given states \(s,s^{\prime}\in\mathcal{S}\) and action \(a\in\mathcal{A}\), \(\mathcal{P}(s^{\prime}|s,a):\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) is the transition probability distribution and \(r(s,a):\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) defines the reward function. Besides, \(\gamma\in(0,1]\) is the discount factor. The agent takes action \(a\) at state \(s\) according to its policy \(\pi(a|s):\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\). The goal of _online RL_ is to find a policy \(\pi\) that maximizes the total expected return: \(J=\mathbb{E}_{a_{t}\sim\pi(\cdot|s_{t}),\,s_{t+1}\sim\mathcal{P}(\cdot|s_{t},a_{t})}\left[\sum_{t=0}^{T-1}\gamma^{t}r(s_{t},a_{t})\right]\) by learning
from the transitions \((s,a,r,s^{\prime})\) through interacting with the environment in an online manner. _Offline RL_, instead makes use of a static dataset with \(N\) trajectories \(\mathcal{D}=\{\mathbf{\tau}_{i}\}_{i=1}^{N}\) collected by certain behavior policy \(\pi_{b}\) to learn a policy \(\pi\) that maximizes \(J\), thereby avoiding safety issues during online interaction. Here \(\mathbf{\tau}=\left\{(s_{t},a_{t},r_{t},s_{t}^{\prime})\right\}_{t=0}^{T-1}\) is a collected interaction trajectory composed of transitions with horizon \(T\).
## IV Approach: HsO-VP
In this section, we begin with an overview of HsO-VP's insight and architecture, followed by detailed explanations of composition modules used in the approach.
### _Model Overview_
An overview of the proposed approach HsO-VP is illustrated in Fig. 2. Specifically, it consists of three stages: First, we conduct _skill data filtering_ for a balanced skill dataset, facilitating subsequent VAE training. Second, a two-branch VAE is specially designed for _hierarchical skill extraction_, modeling structured and complex skill representations to overcome the posterior collapse problem. Finally, the extracted skills, which function as temporal abstractions to enable long-horizon planning, are used to relabel the dataset for _downstream training_. In the following, we introduce each stage with more details. For convenience, we use the terms 'skill' and 'behavior' interchangeably.
### _Skill Data Filtering_
Temporally extended skills can be a solution to boost offline RL for long-horizon vehicle planning, which enables the vehicle to plan at a higher level of granularity, thereby facilitating long-term reasoning into the future [20, 21]. However, raw expert demonstrations are not well-suited for skill extraction due to their unbalanced distribution (with straight and static trajectories occupying the majority). Therefore, we propose to pre-filter the transition sequences \(\mathbf{\tau}=(s_{0},a_{0},...,s_{c-1},a_{c-1},s_{c})\), where \(c\) stands for the skill length, for a balanced filtered skill dataset \(\mathcal{D}^{f}\). Since this is an auxiliary preprocessing stage, meticulous data management is not necessary. Given this, and since filtering is not the primary focus of this work, we only briefly introduce our filtering principle here.
Since the action sequences can already reflect the driving skills to a great extent (e.g. the turning and straight behaviors can be easily distinguished by corresponding action sequences), we concatenate these \(c\)-step low-dimensional actions into vectors \(\mathbf{a}_{0:c-1}\) and conduct clustering [45]. With different clusters coarsely representing different skills, our objective is then to maintain uniformity in the sequence count within each cluster. In practical implementation, we control and balance sequence counts through a set of thresholds. Specifically, we choose a threshold for each cluster, and filter and retain action sequences whose pairwise distances are greater than the threshold (and which are therefore sufficiently different and representative).
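A minimal sketch of this filtering principle is given below; the cluster count, distance metric, and per-cluster thresholds are illustrative choices rather than the exact values used for the dataset in this work.

```python
# Sketch of skill data filtering: cluster concatenated c-step action vectors,
# then keep only sequences that are sufficiently far from already-kept ones,
# so every cluster ends up with a roughly balanced, representative subset.
import numpy as np
from sklearn.cluster import KMeans

def filter_skill_dataset(action_seqs, n_clusters=6, thresholds=None):
    """action_seqs: array of shape (N, c * action_dim)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(action_seqs)
    if thresholds is None:
        thresholds = {k: 0.1 for k in range(n_clusters)}  # illustrative values
    keep = []
    kept_by_cluster = {k: [] for k in range(n_clusters)}
    for i, (seq, k) in enumerate(zip(action_seqs, labels)):
        kept = kept_by_cluster[k]
        dists = [np.linalg.norm(seq - other) for other in kept]
        if not kept or min(dists) > thresholds[k]:
            kept.append(seq)
            keep.append(i)
    return keep  # indices of retained sequences
```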
### _Hierarchical Skill Extraction_
Now with the filtered balanced skill dataset, we can step to extract reusable driving skills through variational inference (VI) [25], similar to previous works [21, 22, 41].
**Generative Process:** The key of VI is to model the generative process of transition sequences and their corresponding skill variables \(z\). Generally, it can be written as follows:
\[\begin{split} p(\mathbf{\tau},z)=& p(\mathbf{s}_{0:c},\mathbf{a }_{0:c-1},z)=p(s_{0})p(z|s_{0})\cdot\\ &\prod_{t=0}^{c-1}p(a_{t}|s_{t},z)p(s_{t+1}|s_{t},a_{t}).\end{split} \tag{1}\]
In consideration of vehicle planning, however, the skills can be more complex and versatile than those in other fields [13, 18] across different maps and road conditions. This complexity can give rise to posterior collapse [26, 27], where the VAE fails to capture complex dependencies of input data and results in a degenerated indistinguishable latent space. So to boost the expressive capability of modeling driving skills, we propose to explicitly model the skill structure for better learning the latent representations. Noticing that driving behavior is usually conducted by first selecting a discrete skill option (e.g. turning) and then determining the specific variations (e.g. speed) to be executed, we introduce another categorical variable \(y\in\mathds{1}^{K}\) to capture discrete options and form a hierarchical latent space:
\[\begin{split} p(\mathbf{\tau},y,z)=& p(\mathbf{s}_{0:c},\mathbf{a }_{0:c-1},y,z)=p(s_{0})p(y|s_{0})\cdot\\ & p(z|y,s_{0})\prod_{t=0}^{c-1}p(a_{t}|s_{t},z)p(s_{t+1}|s_{t},a_{ t}).\end{split} \tag{2}\]
Through this hierarchical decomposition, \(z\) is required to capture variations within a specific discrete option (depending on \(y\)) instead of the full data distribution, which is relatively simpler to model, thereby mitigating the risk of posterior collapse. Practically, we model the generative priors by some
Fig. 2: Overview of HsO-VP. Left: Pre-filter the raw dataset to get the balanced skill dataset. Middle: Main components of our proposed VAE with hierarchical latent space for driving skill extraction. Right: Downstream training for our skill-based policy. Both high and low-level policies are initialized from Stage II, and trained by offline RL and conditional BC from the relabeled datasets, respectively.
flexible parametric distributions \(p_{\psi_{y}}(y|s_{0})p_{\psi_{z}}(z|y,s_{0})\). To keep notations consistent with the RL setting, we use a low-level action decoder \(\pi_{\theta}^{l}(a_{t}|s_{t},z)\) to substitute for \(p(a_{t}|s_{t},z)\). Besides, the state transition distribution \(p(s_{t+1}|s_{t},a_{t})\) is fixed and decided by the environment.
**Learning Objective:** To infer latent skills from data, we introduce transition sequence encoders \(q_{\phi_{y}}(y|\tau)q_{\phi_{z}}(z|\tau,y)\) to approximate the intractable posterior \(p(y|\tau)p(z|\tau,y)\). Based on these, the target of VI is to maximize the _conditional probability_\(\log p_{\theta,\psi}(\tau|s_{0})\) of the observed transition sequence \(\tau\) w.r.t. the initial state \(s_{0}\), which is deduced to be larger than the Evidence Lower BOund (ELBO) [25]. Our learning objective is therefore transformed to maximize the ELBO:
\[\max_{\phi,\psi,\theta}\mathbb{E}_{\tau\sim\mathcal{D}^{f},\,y\sim q_{\phi_{y}}(y|\tau),\,z\sim q_{\phi_{z}}(z|\tau,y)}\Bigg[\underbrace{\sum_{t=0}^{c-1}\log\pi_{\theta}^{l}(a_{t}|s_{t},z)}_{\text{reconstruction loss }\mathcal{L}_{\text{reconstruction}}}\Bigg]\] \[-\beta_{z}\,\mathbb{E}_{\tau\sim\mathcal{D}^{f},\,y\sim q_{\phi_{y}}(y|\tau)}\Bigg[\underbrace{D_{\text{KL}}\Big(q_{\phi_{z}}(z|\tau,y)\,||\,p_{\psi_{z}}(z|s_{0},y)\Big)}_{\text{continuous regularization }\mathcal{L}_{\text{KL}}^{\text{cont}}}\Bigg]\] \[-\beta_{y}\,\mathbb{E}_{\tau\sim\mathcal{D}^{f}}\Bigg[\underbrace{D_{\text{KL}}\Big(q_{\phi_{y}}(y|\tau)\,||\,p_{\psi_{y}}(y|s_{0})\Big)}_{\text{discrete regularization }\mathcal{L}_{\text{KL}}^{\text{dis}}}\Bigg]. \tag{3}\]
For practical implementation, we adopt the \(\beta\)-VAE [46] trick and weight the KL terms with \(\beta_{y}\) and \(\beta_{z}\), respectively.
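To make Eq. (3) concrete, a PyTorch-style sketch of the per-batch loss is given below. It assumes diagonal-Gaussian continuous distributions and categorical discrete distributions parameterized by logits; all helper names are ours rather than the paper's.

```python
# Sketch of the beta-VAE objective in Eq. (3) for one batch of c-step sequences.
# q_* come from the sequence encoder, p_* from the learnable priors, and
# action_log_probs are per-step action log-likelihoods from the low-level decoder.
import torch
import torch.nn.functional as F

def categorical_kl(q_logits, p_logits):
    q = F.softmax(q_logits, dim=-1)
    return (q * (F.log_softmax(q_logits, -1) - F.log_softmax(p_logits, -1))).sum(-1)

def gaussian_kl(q_mu, q_logvar, p_mu, p_logvar):
    return 0.5 * (p_logvar - q_logvar
                  + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp()
                  - 1.0).sum(-1)

def skill_vae_loss(action_log_probs, q_y_logits, p_y_logits,
                   q_z_mu, q_z_logvar, p_z_mu, p_z_logvar,
                   beta_y=0.01, beta_z=0.01):
    recon = action_log_probs.sum(dim=1)              # sum over the c timesteps
    kl_y = categorical_kl(q_y_logits, p_y_logits)    # discrete regularization
    kl_z = gaussian_kl(q_z_mu, q_z_logvar, p_z_mu, p_z_logvar)  # continuous reg.
    return (-recon + beta_y * kl_y + beta_z * kl_z).mean()
```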
**Network Instantiation:** Finally, we specify the network architecture of the proposed modules for skill extraction. For all vehicle planners implemented in this paper, we use the same state input composed of a bird's-eye view (BEV) semantic segmentation image and a measurement vector.
* [Sequence encoders] \(q_{\phi_{y}}(y|\tau)\) and \(q_{\phi_{z}}(z|\tau,y)\): The architecture of our proposed sequence encoder is shown in Fig. 3. For the state representation, we use convolutional layers to encode the BEV and fully connected (FC) layers for the measurement vector. Afterward, a gated recurrent unit (GRU) [24] is adopted to process the \(c\)-step transition inputs. For modeling the driving skill structure, we use a two-branch architecture after the GRU: the discrete branch takes in the GRU outputs and produces \(K\)-dimensional logits for \(y\); the continuous branch is composed of \(K\)-ensembled FC layers, which outputs final continuous skill variable \(z\) by weighted summation with \(y\) acting as coefficients. Notably, we use gumbel-softmax [29] activation to sample approximately one-hot discrete variables \(y\), acting as a gate to guide members in the ensemble to learn about skills from different options.
* [Low-level action decoder] \(\pi_{\theta}^{l}(a|s,z)\): We use a GRU decoder that takes in state and skill embeddings to output low-level executable actions.
The learnable priors \(p_{\psi_{y}}(y|s)\) and \(p_{\psi_{z}}(z|s,y)\) and the high-level behavior policy \(\pi_{\theta}^{h}(z|s)\) just share the same architecture as sequence encoders except that we remove the GRU and directly use the single-step state representation for \(y\) and \(z\).
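To make the two-branch sequence encoder described above concrete, here is a minimal PyTorch sketch. Layer sizes are illustrative, and the BEV/measurement state encoding is abstracted into precomputed per-step embeddings.

```python
# Sketch of the two-branch sequence encoder: a bi-GRU over state-action
# embeddings, a discrete branch producing gumbel-softmax options y, and an
# ensemble of continuous heads gated by y to produce the skill variable z.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchEncoder(nn.Module):
    def __init__(self, in_dim, hidden=256, n_options=6, z_dim=16, tau=0.1):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)
        self.discrete_head = nn.Linear(2 * hidden, n_options)
        # One continuous head per discrete option; each outputs (mu, logvar).
        self.continuous_heads = nn.ModuleList(
            [nn.Linear(2 * hidden, 2 * z_dim) for _ in range(n_options)])
        self.tau = tau

    def forward(self, seq_embed):
        # seq_embed: (batch, c, in_dim) state-action embeddings of one sequence.
        _, h = self.gru(seq_embed)
        h = torch.cat([h[0], h[1]], dim=-1)                    # (batch, 2*hidden)
        logits = self.discrete_head(h)
        y = F.gumbel_softmax(logits, tau=self.tau, hard=True)  # ~one-hot gate
        heads = torch.stack([head(h) for head in self.continuous_heads], dim=1)
        gated = (y.unsqueeze(-1) * heads).sum(dim=1)           # select member by y
        mu, logvar = gated.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        return y, logits, z, mu, logvar
```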
### _Downstream Training_
Once the skills have been learned from \(\mathcal{D}^{f}\) in terms of sequence encoders \(q_{\phi_{y}}(y|\tau)q_{\phi_{z}}(z|\tau,y)\), action decoder \(\pi_{\theta}^{l}(a|s,z)\), and priors \(p_{\psi_{y}}(y|s)p_{\psi_{z}}(z|s,y)\), HsO-VP can then apply these modules to learn a skill-based vehicle planner.
**Relabeled skill dataset:** To learn a high-level skill-based policy \(\pi_{\theta}^{h}(z|s)\) with offline RL, we need to relabel transitions in \(\mathcal{D}\) (w/o filtering). Specifically, for each \(c\)-step sequence sampled from the dataset \(\tau\sim\mathcal{D}\), we label its corresponding skill by sequence encoders \(z\sim\mathbb{E}_{y\sim q_{\phi_{y}}(y|\tau)}q_{\phi_{z}}(z|\tau,y)\), thereby creating a new dataset \(\mathcal{D}^{h}=\{(s_{0}^{i},z^{i},\sum_{t=0}^{c-1}\gamma^{t}r_{t}^{i},s_{c}^{i})\}_{i=1}^{N^{h}}\) that treats skills as actions.
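The relabeling step can be sketched as follows; it reuses the encoder interface from the sketch above, and `embed_fn` is a hypothetical helper that embeds the BEV/measurement states and actions of a window.

```python
# Sketch of relabeling: encode each c-step window with the trained sequence
# encoder to get its skill z, and store a high-level transition
# (s_0, z, sum_t gamma^t r_t, s_c) for offline RL over the skill space.
import torch

def relabel(trajectories, encoder, embed_fn, c=10, gamma=0.99):
    high_level = []
    with torch.no_grad():
        for traj in trajectories:              # traj: list of (s, a, r) tuples
            for start in range(0, len(traj) - c, c):
                window = traj[start:start + c + 1]
                seq_embed = embed_fn(window[:-1])          # (1, c, in_dim)
                _, _, z, _, _ = encoder(seq_embed)
                ret = sum(gamma ** t * r for t, (_, _, r) in enumerate(window[:-1]))
                high_level.append((window[0][0], z.squeeze(0), ret, window[-1][0]))
    return high_level
```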
**Downstream Learning Procedure:** Given the relabeled dataset \(\mathcal{D}^{h}\), any off-the-shelf offline RL algorithm can be used to learn \(\pi_{\theta}^{h}(z|s)\). In this paper, we choose IQL [31], which is one of the state-of-the-art (SOTA) offline RL algorithms for end-to-end policy training. Besides, the high-level behavior policy is initialized with the learned priors \(\mathbb{E}_{y\sim p_{\psi_{y}}(y|s)}p_{\psi_{z}}(z|s,y)\), which can be a good starting point to solve some short-horizon tasks.
During inference, \(\pi_{\theta}^{h}(z|s)\) is invoked every \(c\) steps, while \(\pi_{\theta}^{l}(a|s,z)\) is used to give executable low-level actions for the vehicle. To ensure that the \(c\)-step transitions \((s_{t},a_{t})_{t=0}^{c-1}\) remain consistent with the labeled latent skill \(z\), we can also optionally finetune \(\pi_{\theta}^{l}(a|s,z)\) on \(\mathcal{D}^{l}=\{(s_{t}^{i},a_{t}^{i})_{t=0}^{c-1},z^{i}\}_{i=1}^{N^{l}}\) with a skill-conditioned behavior cloning (BC) loss:
\[\min_{\theta}\mathbb{E}_{(\tau,z)\sim\mathcal{D}^{l}}\Bigg[-\sum_{t=0}^{c-1}\log\pi_{\theta}^{l}(a_{t}|s_{t},z)\Bigg]. \tag{4}\]
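Putting the pieces together, inference rolls out the hierarchy by querying the high-level policy every \(c\) steps and decoding per-step actions in between; a sketch follows, with the environment and policy interfaces kept deliberately generic and illustrative.

```python
# Sketch of hierarchical inference: sample a skill every c steps from the
# high-level policy, then decode skill-conditioned actions at every step.
def rollout(env, high_policy, low_decoder, c=10, max_steps=1000):
    state = env.reset()
    done, t, total_reward = False, 0, 0.0
    while not done and t < max_steps:
        z = high_policy(state)                 # new skill every c steps
        for _ in range(c):
            action = low_decoder(state, z)     # skill-conditioned action
            state, reward, done, _ = env.step(action)
            total_reward += reward
            t += 1
            if done:
                break
    return total_reward
```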
## V Evaluation
In this section, we do extensive experiments to answer the following questions. Q1: How effective is HsO-VP compared to baselines when deployed in seen or unseen driving scenarios? Q2: How do the composed modules impact
Fig. 3: The two-branch network architecture of our proposed transition sequence encoder. Note that the input to bi-GRU is actually composed of states and actions over several timesteps.
HsO-VP's performance? Q3: Do skills extracted by HsO-VP possess interpretability and transferability?
### _Experiment Setup_
**Datasets:** The training dataset \(\mathcal{D}\) is collected in the CARLA simulator with an expert RL-based planner [34] trained with BEV semantic images and privileged vehicle measurements. More specifically, we collect data at 10Hz from 4 different training towns (Town01, Town03, Town04, Town06) and 4 weather conditions (ClearNoon, WetNoon, HardRainNoon, ClearSunset) for a total of 10 hours of driving data. At each timestep, we save a tuple \((i_{t},m_{t},a_{t},r_{t})\), with \(i_{t}\in\mathbb{R}^{15\times 192\times 192}\) the BEV semantic image, \(m_{t}\in\mathbb{R}\) the velocity of the ego-vehicle, \(a_{t}\in[-1,1]^{2}\) the action for steering and acceleration executed by the expert, and \(r_{t}\in\mathbb{R}\) the current step reward from the environment. Our reward design is modified from [34], decided by elements including speed, safety, comfort, etc. Besides, to study skill transferability, we also collect 5 hours of driving data from Town05.
**Metrics:** We report metrics from the CARLA challenge [1, 47] to measure planners' driving performance: infraction score, route completion, success rate, and driving score. Besides, as done in [35], we also report the normalized reward (the ratio of total return to the number of timesteps) to reflect the driving performance at timestep level. Among them, driving score is the most significant metric for evaluating planners' performance, which is a weighted score of various indicators like driving efficiency, safety, and comfort.
**Baselines:** First, we choose two IL baselines: vanilla Behavior Cloning (BC) and Monotonic Advantage Re-Weighted Imitation Learning (MARWIL) [48]. Apart from IL baselines, we also include SOTA offline RL baselines: Conservative Q-Learning (CQL) [30] and Implicit Q-Learning (IQL) [31]. Finally, we add two SOTA hierarchical offline RL algorithms: Implicit Reinforcement without Interaction at Scale (IRIS) [40], which generates subgoals using offline RL, and Offline Primitives for Accelerating offline reinforcement Learning (OPAL) [23], which extracts skills using vanilla VAEs, as rigorous baselines.
**Implementation Details:** Training is conducted on 8 NVIDIA 3090 GPUs, while inference is performed on one NVIDIA 3090 GPU to ensure a fair comparison. For skill filtering and extraction, we remove background vehicles to prevent any detrimental impact on the extraction of the ego-vehicle's skills due to their stochastic movements [13]. These vehicles are retained in the downstream offline RL process for the ego-vehicle to make reasonable reactions. For filtering, the cluster number is set as 6 and we retain approximately 50% of expert data. The skill length \(c\) is set as 10 (1 second of driving). The skill extraction networks are trained from scratch using the Adam optimizer (with a learning rate of 0.0001) for 1000 epochs, employing a batch size of 64. The gumbel temperature is 0.1, regularization factors \(\beta_{y}\) and \(\beta_{z}\) are set to 0.01, and the number of discrete skill options \(K\) (i.e. dimension of \(y\)) is set as 6. The downstream training process shares the same optimizer configuration with the skill extraction stage, and the discount factor \(\gamma\) is set to 0.99. More details will be made public in the code upon publication.
### _Driving Performance_
First, we implement baselines and HsO-VP, and test them at training (Town03) and new (Town05) driving scenarios (Q1). Results are recorded in Tab. I and Tab. II. Notably, planners achieve higher driving scores at the new scenario because Town03 is inherently more complex than Town05 [1].
Analyzing results in the tables, it becomes evident that offline RL algorithms (CQL [30] and IQL [31]) outperform IL algorithms (BC and MARWIL [48]) at both training and new scenarios, in line with our claims in the preceding texts. Furthermore, we can observe that hierarchical offline RL algorithms (IRIS [40] and OPAL [23]) can indeed better address long-horizon planning tasks, achieving performance improvement compared to CQL and IQL. However, upon closer examination, OPAL only exhibits limited improvement in driving scores compared to the vanilla IQL (in absolute value, 1.4% improvement at training scenarios and 0.9% at new scenarios), which implies that OPAL suffers from posterior collapse [26, 27] and fails to extract highly effective skills, better shown in Fig. 4(a).
In contrast, when we reformulate the generation process and introduce additional discrete skill variables, the trained HsO-VP planner achieves optimal results across metrics at both training and new scenarios. For the most significant driving score metric, it outperforms the strongest baseline IRIS by about 4.0% in absolute value at training scenarios and 6.4% at new scenarios. The significantly higher infraction score and normalized reward indicate that HsO-VP obtains higher driving efficiency and safety simultaneously. These results suggest that the hierarchical skills extracted by HsO-VP enable long-term reasoning for offline RL algorithms.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline \multirow{2}{*}{Planner} & \multicolumn{1}{c}{Driving} & Success & Route & Infraction & Norm. \\ & Score & Rate & Com. & Score & Reward \\ \hline BC & 68.8 \(\pm\) 2.5 & 48.3 \(\pm\) 6.2 & 80.0 \(\pm\) 7.1 & 74.7 \(\pm\) 0.8 & 0.35 \(\pm\) 0.17 \\ MARWIL [48] & 70.9 \(\pm\) 6.7 & 58.1 \(\pm\) 9.5 & 83.3 \(\pm\) 2.4 & 76.0 \(\pm\) 6.0 & 0.41 \(\pm\) 0.13 \\ CQL [30] & 72.8 \(\pm\) 1.6 & 61.3 \(\pm\) 6.2 & 85.1 \(\pm\) 3.1 & 77.1 \(\pm\) 3.0 & 0.54 \(\pm\) 0.01 \\ IQL [31] & 73.4 \(\pm\) 3.3 & 63.0 \(\pm\) 4.1 & 86.3 \(\pm\) 3.1 & 77.8 \(\pm\) 4.0 & 0.56 \(\pm\) 0.03 \\ IRIS [40] & 76.4 \(\pm\) 3.3 & 65.8 \(\pm\) 5.2 & 91.1 \(\pm\) 2.7 & 79.4 \(\pm\) 3.2 & 0.61 \(\pm\) 0.02 \\ OPAL [23] & 74.3 \(\pm\) 3.4 & 64.2 \(\pm\) 6.5 & 82.4 \(\pm\) 4.4 & 78.2 \(\pm\) 3.5 & 0.58 \(\pm\) 0.05 \\
**OPAL Tran.** & **75.0 \(\pm\) 2.2** & **64.9 \(\pm\) 9.5** & **89.3 \(\pm\) 3.9** & **79.2 \(\pm\) 3.4** & **0.60 \(\pm\) 0.05** \\ \hline HsO-VP Raw & 75.7 \(\pm\) 4.8 & 60.8 \(\pm\) 2.8 & 83.3 \(\pm\) 3.8 & 77.8 \(\pm\) 3.3 & 0.55 \(\pm\) 0.08 \\ HsO-VP w/o BC & 80.1 \(\pm\) 2.3 & 70.7 \(\pm\) 1.5 & 90.5 \(\pm\) 1.2 & 81.6 \(\pm\) 2.6 & 0.63 \(\pm\) 0.03 \\
**HsO-VP** & **82.8 \(\pm\) 1.7** & **71.2 \(\pm\) 2.5** & **94.9 \(\pm\) 2.8** & **83.4 \(\pm\) 2.9** & **0.60 \(\pm\) 0.05** \\ \hline
**HsO-VP Tran.** & **84.7 \(\pm\) 1.4** & **73.5 \(\pm\) 2.4** & **96.6 \(\pm\) 2.2** & **85.6 \(\pm\) 2.8** & **0.67 \(\pm\) 0.04** \\ \hline Expert [34] & 85.5 \(\pm\) 2.6 & 75.0 \(\pm\) 4.1 & 100.0 \(\pm\) 0.0 & 86.4 \(\pm\) 3.9 & 0.67 \(\pm\) 0.02 \\ \hline \hline \end{tabular}
\end{table}
Table I: Driving performance on a train town and train weather conditions in CARLA. Mean and standard deviation are computed over 3 evaluation seeds. See the main text for detailed meanings of HsO-VP variants. All metrics are recorded in percentages (%) except the normalized reward. The best results are in bold and our method is colored in gray.
### _Ablation Study_
The key component of HsO-VP is its two-branch VAE structure. If a vanilla VAE is used instead, the algorithm just degenerates into OPAL [23], which has been examined in Sec. V-B to be significantly weaker than HsO-VP. In this section, we primarily conduct ablations on the training process of HsO-VP (Q2): First, the variant that directly uses extracted skills for policy initialization and tests without downstream training, is denoted as 'HsO-VP Raw'; Second, the variant that only trains the high-level policy using offline RL but does not finetune the low-level policy by BC, is called 'HsO-VP w/o BC'. The results are recorded in Tab. I and Tab. II.
The ablation experiments show pronounced differences. We find that both discarding the finetuning of the low-level policy and discarding the offline RL training of the high-level policy lead to performance losses across all metrics. The impact of not performing low-level finetuning is relatively small, indicating that the latent variables in the skill extraction phase are already well aligned with the execution of low-level actions. However, 'HsO-VP Raw' only obtains limited performance improvement compared to IL and is inferior to vanilla offline RL algorithms. Essentially, the skill extraction process does not take into account the reward signals but simply reconstructs expert actions, albeit considering actions over multiple timesteps. Therefore, the skill-based offline RL process is crucial for the performance of HsO-VP.
### _Skill Analysis_
In this section, we explore the properties of extracted skills from HsO-VP through further experiments (Q3).
**Interpretability.** In Fig. 4(b), we visualize the final output \(z\) from the two-branch sequence encoder using t-SNE [49]. We analyze the role of the discrete layer by using latent variables \(y\) obtained from the discrete branch as labels. From the figure, we can observe that sequences with the same color are mostly clustered together, indicating that the gumbel-softmax operation [29] assigns skills from different discrete options to different networks in the ensemble, thereby amplifying the differences between skill embeddings \(z\). As a result, we can observe that the extracted skills of HsO-VP are obviously more distinguishable than OPAL [23] in Fig. 4(a). In Fig. 4(c), we visualize representative sequences from selected clusters, it can be seen that different colored blocks correspond to distinct discrete skill choices (from 0 to 4, right turn, stop, accelerate, left turn, and go straight, respectively). Furthermore, the clusters 0.a and 0.b in the same color (i.e. within the same skill option) stand for sharp and mild right turns, respectively, indicating flexible execution styles for actions in a discrete skill. These results demonstrate that HsO-VP captures both the discrete options and continuous variations of skills, providing evidence for the interpretability of skills.
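The kind of skill-space visualization shown in Fig. 4(b) can be produced with a standard t-SNE projection colored by the discrete option; a minimal sketch (plotting choices are ours) is:

```python
# Sketch: project continuous skill embeddings z to 2D with t-SNE and color
# each point by its discrete option y (argmax of the gumbel-softmax output).
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_skill_space(z, y):
    """z: (N, z_dim) array of skill embeddings; y: (N,) discrete option ids."""
    coords = TSNE(n_components=2, init="pca", random_state=0).fit_transform(z)
    plt.scatter(coords[:, 0], coords[:, 1], c=y, cmap="tab10", s=5)
    plt.colorbar(label="discrete skill option y")
    plt.show()
```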
**Transferability [21].** In Sec. V-B, we deployed HsO-VP in new driving scenarios and validated its strong performance. Here we study a different setting: _leverage the skill-extraction module obtained from the training scenarios to label skills in new (testing) scenarios, then use them instead of the original training data for downstream training._ Essentially, we are investigating the transferability of the learned skills, analyzing whether they can be directly used to facilitate planning in unseen driving tasks. The results are also recorded in Tab. II, highlighted in green and labeled 'Tran.'. Notably, 'OPAL Tran.', which extracts skills through a vanilla VAE, obtains only limited performance improvements compared to OPAL (0.7% driving-score difference), which implies that the learned skills from the training data do not align with data from new driving tasks. In contrast, 'HsO-VP Tran.' shows significant improvements across all metrics compared to 'HsO-VP'. These results show that HsO-VP succeeds in learning skills that can be transferred to assist planning in new tasks.
## VI Conclusion and Outlooks
In this paper, we propose HsO-VP to boost offline RL for long-horizon vehicle planning. Specifically, we filter offline driving data and design a two-branch VAE to extract hierarchical skills that capture both discrete options and continuous variations, overcoming posterior collapse. Based on this, we train a high-level policy that outputs skills instead of per-step actions, which serve as temporal abstractions that enable long-term reasoning into the future. Comprehensive experimental results on CARLA show that HsO-VP extracts interpretable driving skills while consistently outperforming strong baselines in both seen and unseen driving scenarios.
As a pioneering work leveraging skill-based offline RL for vehicle planning, we believe that HsO-VP will be a promising and inspiring framework for practical autonomous driving. There are also several interesting follow-up directions, including but not limited to learning driving skills from human-annotated skill data, a more refined generative process that can capture transitions between skills, and extracting variable-length driving skills.
Fig. 4: Visualization of learned skills. (a): T-SNE [49] visualization of skill embeddings learned by OPAL [23]. (b): T-SNE visualization of skill embeddings \(z\) (i.e. continuous skills) output by HsO-VP’s sequence encoder, where the color labels are determined by \(y\) (i.e. discrete skills) obtained after the gumbel-softmax operation [29]. (c): Visualization of the skill sequences extracted from the corresponding areas in the left image. |
2309.07661 | Entanglement transitions in a periodically driven non-Hermitian Ising
chain | We study entanglement transitions in a periodically driven Ising chain in the
presence of an imaginary transverse field $\gamma$ as a function of drive
frequency $\omega_D$. In the high drive amplitude and frequency regime, we find
a critical value $\gamma=\gamma_c$ below which the steady state half-chain
entanglement entropy, $S_{L/2}$, scales with chain length $L$ as $S_{L/2} \sim
\ln L$; in contrast, for $\gamma>\gamma_c$, it becomes independent of $L$. In
the small $\gamma$ limit, we compute the coefficient, $\alpha$, of the $\ln L$
term analytically using a Floquet perturbation theory and trace its origin to
the presence of Fisher-Hartwig jump singularities in the correlation function
of the driven chain. We also study the frequency dependence of $\gamma_c$ and
show that $\gamma_c \to 0$ at special drive frequencies; at these frequencies,
which we analytically compute, $S_{L/2}$ remains independent of $L$ for all
$\gamma$. This behavior can be traced to an approximate emergent symmetry of
the Floquet Hamiltonian at these drive frequencies which we identify. Finally,
we discuss the behavior of the driven system at low and intermediate drive
frequencies. Our analysis shows the presence of volume law behavior of the
entanglement in this regime $S_{\ell} \sim \ell$ for small subsystem length
$\ell \le \ell^{\ast}(\omega_D)$. We identify $\ell^{\ast}(\omega_D)$ and tie
its existence to the effective long-range nature of the Floquet Hamiltonian of
the driven chain for small subsystem size. We discuss the applicability of our
results to other integrable non-hermitian models. | Tista Banerjee, K. Sengupta | 2023-09-14T12:25:45Z | http://arxiv.org/abs/2309.07661v2 | # Entanglement transitions in a periodically driven non-Hermitian Ising chain
###### Abstract
We study entanglement transitions in a periodically driven Ising chain in the presence of an imaginary transverse field \(\gamma\) as a function of drive frequency \(\omega_{D}\). In the high drive amplitude and frequency regime, we find a critical value \(\gamma=\gamma_{c}\) below which the steady state half-chain entanglement entropy, \(S_{L/2}\), scales with chain length \(L\) as \(S_{L/2}\sim\ln L\); in contrast, for \(\gamma>\gamma_{c}\), it becomes independent of \(L\). In the small \(\gamma\) limit, we compute the coefficient, \(\alpha\), of the \(\ln L\) term analytically using a Floquet perturbation theory and trace its origin to the presence of Fisher-Hartwig jump singularities in the correlation function of the driven chain. We also study the frequency dependence of \(\gamma_{c}\) and show that \(\gamma_{c}\to 0\) at special drive frequencies; at these frequencies, which we analytically compute, \(S_{L/2}\) remains independent of \(L\) for all \(\gamma\). This behavior can be traced to an approximate emergent symmetry of the Floquet Hamiltonian at these drive frequencies which we identify. Finally, we discuss the behavior of the driven system at low and intermediate drive frequencies. Our analysis shows the presence of volume law behavior of the entanglement in this regime, \(S_{\ell}\sim\ell\), for small subsystem length \(\ell\leq\ell^{*}(\omega_{D})\). We identify \(\ell^{*}(\omega_{D})\) and tie its existence to the effective long-range nature of the Floquet Hamiltonian of the driven chain for small subsystem size. We discuss the applicability of our results to other integrable non-Hermitian models.
## I Introduction
Entanglement is a key feature of quantum many-body systems [1; 2; 3; 4; 5; 6]. Its properties provide important information about the nature of many-body quantum states. For example, it is expected that a reduced density matrix \(\rho_{\rm red}\) constructed from the ground state (and the low-lying excited states) of any \(d\)-dimensional gapped local quantum Hamiltonian displays area law scaling: \(S_{\ell}\sim\ell^{d-1}\), where \(\ell\) denotes the linear subsystem dimension [5]. For gapless Hamiltonians, \(S_{\ell}\) receives an additional logarithmic correction: \(S_{\ell}\sim\ell^{d-1}\ln\ell\). For \(d=1\), where these statements can be rigorously proved, \(S_{\ell}\sim\ln\ell\) at a critical point; the coefficient of the log term is given by the central charge \(c\) of the conformal field theory which describes the critical point [7]. In contrast, \(\rho_{\rm red}\) corresponding to mid-spectrum eigenstates of a generic Hamiltonian usually shows volume law entanglement: \(S_{\ell}\sim\ell^{d}\).
The physics of closed quantum systems subjected to a periodic drive has received tremendous attention in the recent past [8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. Such systems are typically described by their Floquet Hamiltonian \(H_{F}\) which is related to their evolution operator \(U\) by \(U(T,0)=\mathcal{T}_{t}\exp[-i\int_{0}^{T}dt^{\prime}H(t^{\prime})/\hbar]=\exp[- iH_{F}T/\hbar]\). The interest in such systems is partly due to the presence of experimental platforms such as ultracold atoms in optical lattices where such phenomena may be experimentally tested [18; 19; 20; 21; 22]. Moreover, these driven systems allow one to explore several phenomena such as time-crystalline states of matter [23; 24; 25], dynamical freezing [26; 27; 28; 29; 30], dynamical localization [31; 32; 33], the generation of topologically nontrivial Floquet phases [34; 35; 36; 37; 38], dynamical transitions [39; 40; 41], prethermal Floquet realization of quantum scars [42; 43; 44] and Hilbert space fragmentation [45]; most of these phenomena have no equilibrium analogue.
More recently, there has been a lot of interest in the study of non-Hermitian quantum systems [46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59]. This is due to the fact that such systems can now be engineered in several experimental platforms [60] and that they often serve as models for studying open quantum systems. An explicit example of the latter is the study of an Ising spin chain in the presence of a measuring operator which measures \(\hat{n}_{j}=(1+\tau_{j}^{z})/2\) (where \(\tau_{j}^{z}\) denotes the usual Pauli matrix representing the spin on site \(j\) of the chain) with a rate \(\gamma\) and in the so-called no-click limit [61; 62]; this leads to a complex magnetic field term \(\sim\gamma\) in the effective Hamiltonian of the spin chain. Moreover, such systems often exhibit several physical properties that have no Hermitian analogue; these include the presence of exceptional points in the spectrum [63; 64; 65; 66; 67] and the realization of the skin effect [68; 69; 70] leading to a novel bulk-boundary correspondence [65; 67]. More recently, the quantum dynamics of such systems have also been studied [71; 72; 73; 74; 75]. In particular, the presence of an emergent approximate symmetry in a driven non-Hermitian Ising chain has been pointed out recently [76].
In this work, we study the nature of entanglement in a driven non-Hermitian Ising chain whose Hamiltonian is given by
\[H=-J\sum_{j}\tau_{j}^{x}\tau_{j+1}^{x}-(h(t)+i\gamma/2)\sum_{j}\tau_{j}^{z} \tag{1}\]
where \(J>0\) denotes the strength of the ferromagnetic interaction term while \(h(t)+i\gamma/2\) denotes the complex transverse field. The drive is implemented by making the real part of the transverse field time dependent via a given protocol. In what follows, we shall use either the continuous drive protocol for which
\[h(t) = h_{1}+h_{0}\cos{(\omega_{D}t)} \tag{2}\]
or the square-pulse protocol which can be described by
\[h(t)\ =\ h_{1}+(-)h_{0}\quad\text{for}\quad t>(\leq)T/2. \tag{3}\]
Here \(T=2\pi/\omega_{D}\) is the drive period and \(h_{0}\) denotes the drive amplitude. In the rest of this work, we shall mostly be interested in the subsystem size (\(\ell\)) dependence of the steady-state entanglement entropy \(S_{\ell}\) of such a driven system as a function of \(\omega_{D}\) and \(\gamma\). We note that such dependence has been studied earlier for the quench protocol [75]; however, it has not been analyzed for periodically driven systems.
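For concreteness, the two protocols of Eqs. 2 and 3 can be encoded as simple functions; a minimal sketch (parameter values are illustrative, \(\hbar=J=1\)) is given below.

```python
# A minimal sketch of the two drive protocols, Eqs. 2 and 3 (parameter values
# are illustrative; hbar = J = 1).
import numpy as np

h1, h0 = 0.1, 20.0            # static offset and drive amplitude
omega_D = 60.0                # drive frequency
T = 2.0 * np.pi / omega_D     # drive period

def h_cosine(t):
    """Continuous protocol, Eq. 2."""
    return h1 + h0 * np.cos(omega_D * t)

def h_square(t):
    """Square-pulse protocol, Eq. 3: h1 - h0 for t <= T/2, h1 + h0 for t > T/2."""
    t = np.mod(t, T)
    return np.where(t > T / 2.0, h1 + h0, h1 - h0)
```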
The central results that we obtain from such a study are as follows. First, in the high drive frequency and amplitude regime, we find an entanglement phase transition at a critical \(\gamma_{c}\). For \(\gamma\leq\gamma_{c}\), the entanglement shows a logarithmic scaling with the subsystem size. In particular, we study the half-chain entanglement \(S_{L/2}\equiv S_{\ell=L/2}\) which scales as \(S_{L/2}\sim\ln L\) for \(\gamma\leq\gamma_{c}\). For \(\gamma>\gamma_{c}\), \(S_{L/2}\) is constant and is independent of the subsystem size. The critical transition value \(\gamma_{c}\) depends on the drive frequency in a non-monotonic manner and vanishes at special drive frequencies \(\omega_{D}=\omega_{m}^{*}\) where an approximate emergent symmetry controls the behavior of the driven chain [76]. We note that the role of such an emergent symmetry behind entanglement transitions in these systems has not been pointed out so far in the literature.
Second, we analyze this phase diagram and provide an analytic, albeit approximate, expression of the entanglement entropy in the thermodynamic limit. Our analytical result for \(S\), which matches quite well with exact numerics, allows us to explain the lack of volume law scaling of the entanglement which is normally expected for driven Ising chains; moreover, it provides an expression of the coefficient of the leading \(\ln L\) term leading to an analytic understanding of the phase diagram in the high drive amplitude regime. Our analysis also shows that the contribution of the jump singularities [77; 78; 79] in the correlation functions of the driven chain to \(S\) is key to its logarithmic dependence on \(\ell\); thus it identifies the Fisher-Hartwig singularities of the correlation functions as the reason for the entanglement transitions. To the best of our knowledge, this connection was not identified in the literature before.
Third, we provide a numerical study of the entanglement in the low and intermediate frequency regime. Our analysis in this regime allows us to identify a length scale \(\ell^{*}(\omega_{D})\) which diverges at low frequencies. For \(\ell\leq\ell^{*}(\omega_{D})\), the Floquet Hamiltonian for the subsystem behaves as an effectively long-range Hamiltonian; consequently, the entanglement shows volume-law scaling in this regime for small \(\gamma\): \(S_{\ell}\sim\ell\). As \(\ell\) increases and exceeds \(\ell^{*}(\omega_{D})\), \(S\) crosses over from linear to logarithmic scaling. For large \(\gamma\), \(S\) remains independent of \(\ell\) for all \(\ell\); the transition between these two regimes occurs at a critical value of \(\gamma\) similar to the high drive-frequency regime.
The plan of the rest of the paper is as follows. In Sec. II we present our analytical results; in Sec. II.1, we analyze the driven chain to obtain the analytic, albeit perturbative, Floquet Hamiltonians for different drive protocols and identify their emergent symmetry. We also present expressions for the correlation functions and analyze their features. This is followed by Sec. II.2 where we present analytic computation of the half-chain entanglement entropy. Next, in Sec. III, we present our numerical result for \(S_{\ell}\); the phase diagram in the high drive amplitude and drive frequency regime is presented in Sec. III.1 and compared with the analytical results obtained in Sec. II.2 while the low and intermediate drive frequency regimes are discussed in Sec. III.2. Finally, we discuss our main results and their applicability to other integrable models and conclude in Sec. IV. Some additional details of the calculations are presented in the appendices.
## II Analysis of the driven chain
In this section, we shall provide our analytical results which turn out to be accurate in the high drive amplitude regime. The basic properties of the Floquet Hamiltonian are discussed in Sec. II.1 while the calculation of the entanglement entropy is charted out in Sec. II.2.
### Perturbative Floquet Hamiltonian
The driven Hamiltonian of the Ising chain in the presence of a complex transverse field is given by Eq. 1. The drive protocol used could either be continuous (Eq. 2) or discrete (Eq. 3). In what follows, we shall obtain an analytic expression for the Floquet Hamiltonian which will be useful for the computation of the correlation functions for the driven model. The analysis of this section will closely follow Ref. [76].
We begin by mapping Eq. 1 into a system of free two-component fermions; this is achieved by the well-known Jordan-Wigner transformation given by
\[\tau_{j}^{+(-)}\ =\ \left(\prod_{\ell=1}^{j-1}-\tau_{\ell}^{z}\right)c_{j}^{ \dagger}(c_{j}),\quad\tau_{j}^{z}=2c_{j}^{\dagger}c_{j}-1 \tag{4}\]
where \(c_{j}\) denotes the annihilation operator of the fermions on site \(j\). These operators can be expressed in momentum space as
\[\hat{c}_{j}=\frac{1}{\sqrt{N}}\sum_{k\in\text{BZ}}e^{i\pi/4}e^{-ikj}\hat{c}_{k} \tag{5}\]
where BZ indicates the Brillouin zone \(-\pi\leq k\leq\pi\). Using Eq. 4, and denoting the two component fermion field in momentum space as \(\psi_{k}=(c_{k},c_{-k}^{\dagger})^{T}\), where \(c_{k}\) annihilates a fermion with momentum \(k\), Eq. 1 can be written as
\[H = 2\sum_{k\in\text{BZ}/2}\psi_{k}^{\dagger}H_{k}\psi_{k}\] \[H_{k} = \sigma_{z}(h(t)-\cos k+i\gamma/2)+(\sigma_{+}\sin k+\text{h.c.}) \tag{6}\]
where \(\vec{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})\) are the standard Pauli matrices in the particle-hole space of the fermions, \(J\) and the lattice spacing \(a\) are set to unity, BZ/2 denotes the positive half of the Brillouin zone, and \(\sigma^{\pm}=\sigma_{x}\pm i\sigma_{y}\).
The first-order contribution to the Floquet Hamiltonian can be computed perturbatively using the standard prescription of Floquet perturbation theory (FPT) [17; 76]. Following the derivation sketched in App. A, one obtains \(H_{F}^{(1),c(s)}=\frac{i\hbar}{T}U_{1}^{c(s)}(T)\) where
\[H_{F}^{(1),c(s)} = \sum_{k}\psi_{k}^{\dagger}[S_{1k}\sigma_{z}+(S_{2k}\sigma^{+}+{ \rm h.c.})]\psi_{k} \tag{7}\] \[S_{1k} = 2(h_{1}-\cos(k)+i\gamma/2),\] \[S_{2k} = 2f^{c}(T)\sin k\quad{\rm cosine\,protocol}\] \[= 2f^{s}(T)\sin k\;e^{-ih_{0}T/\hbar}\quad{\rm square\,pulse\, protocol}\]
where \(f^{c(s)}(T)\) are given by
\[f^{c}(T) = J_{0}\left(4h_{0}/(\hbar\omega_{D})\right),\] \[f^{s}(T) = \frac{\hbar}{h_{0}T}\sin\left(\frac{h_{0}T}{\hbar}\right), \tag{8}\]
where \(J_{0}\) denotes the zeroth order Bessel function. We note that there are special drive frequencies at which \(f^{c,s}=0\) and \([H_{F}^{(1)},\sigma_{z}]=0\) leading to an emergent symmetry. These frequencies are given by \(\omega_{D}=\omega_{m}^{*c(s)}\) where
\[\omega_{m}^{*c} = \frac{4h_{0}}{\hbar\eta_{m}},\quad\omega_{m}^{*s}=\frac{2h_{0}}{m \hbar} \tag{9}\]
where \(m\) is an integer and \(\eta_{m}\) denotes the \(m^{\rm th}\) zero of \(J_{0}\). This symmetry is approximate since it is violated by higher-order terms of the Floquet Hamiltonian [76]; nevertheless, it was shown in Ref. [76] that such an approximate symmetry leaves its imprint on the dynamics of the system. Here we shall show that such a symmetry shapes the character of the entanglement transitions of the driven non-Hermitian Ising chain. We also note here that the inclusion of the second order terms in \(H_{F}\) does not change its matrix structure; it modifies \(S_{1k}\) and \(S_{2k}\) as shown in App. A. In what follows, we shall use this form of \(H_{F}\) to compute second-order perturbative results for the correlation functions and \(S_{\ell}\).
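As an illustration, the protocol functions \(f^{c,s}(T)\) of Eq. 8 and the special frequencies of Eq. 9 can be evaluated numerically; a minimal sketch (with \(\hbar=J=1\) and an illustrative \(h_{0}\)) is given below. For \(h_{0}=20J\) it gives \(\hbar\omega_{1}^{*c}\simeq 33.26J\).

```python
# A minimal numerical sketch of the protocol functions f^{c,s}(T) of Eq. 8 and
# the special frequencies of Eq. 9 at which they vanish (hbar = J = 1; the
# value of h0 is illustrative).
import numpy as np
from scipy.special import j0, jn_zeros

h0 = 20.0

def f_cosine(omega_D):
    """f^c(T) = J_0(4 h0 / omega_D), Eq. 8."""
    return j0(4.0 * h0 / omega_D)

def f_square(omega_D):
    """f^s(T) = sin(h0 T) / (h0 T) with T = 2 pi / omega_D, Eq. 8."""
    T = 2.0 * np.pi / omega_D
    return np.sin(h0 * T) / (h0 * T)

omega_star_c = 4.0 * h0 / jn_zeros(0, 5)      # Eq. 9: first five zeros of J_0
omega_star_s = 2.0 * h0 / np.arange(1, 6)     # Eq. 9: square-pulse protocol

print(omega_star_c)                           # first entry ~33.26 for h0 = 20
print(f_cosine(omega_star_c[0]))              # vanishes at the special frequency
```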
The evolution operator \(U(nT)\) for both protocols can be expressed in terms of the eigenvalues \(E_{k}^{a}\) and eigenvectors \(|a;k\rangle\) ( where \(a=1,2\)) of the Floquet Hamiltonian as
\[U(nT) = \prod_{k}\sum_{a=1,2}e^{-iE_{k}^{a}nT}|a;k\rangle\langle a;k| \tag{10}\]
Here and in the rest of this section, we shall drop the indices \(c,s\) indicating the protocol to avoid clutter. For perturbative Floquet Hamiltonian, these quantities can be analytically obtained and are given by
\[E_{k}^{a} = (-1)^{a}\sqrt{S_{1k}^{2}+|S_{2k}|^{2}}=(-1)^{a}(\epsilon_{k}+i\Gamma_{k})\] \[|a;k\rangle = \left(\begin{array}{c}n_{zk}^{a}\\ n_{xk}^{a}+in_{yk}^{a}\end{array}\right),\quad n_{xk}^{a}=\frac{{\rm Re}[S_{2k}]}{\mathcal{N}_{ak}}=\frac{q_{ak}}{\mathcal{N}_{ak}}\] \[n_{yk}^{a} = -\frac{{\rm Im}[S_{2k}]}{\mathcal{N}_{ak}}=\frac{q_{ak}^{\prime}}{\mathcal{N}_{ak}}\] \[n_{zk}^{a} = \frac{S_{1k}+(-1)^{a}E_{k}}{\mathcal{N}_{ak}}=\frac{p_{ak}}{\mathcal{N}_{ak}},\] \[\mathcal{N}_{ak} = \sqrt{\left|S_{1k}+(-1)^{a}E_{k}\right|^{2}+|S_{2k}|^{2}} \tag{11}\]
Note that \(n_{yk}^{a}=0\) for the continuous protocol for which \(S_{2k}\) is real. The exact expressions for \(E_{k}^{a}\) and \(|a;k\rangle\), however, need to be computed numerically for the continuous drive protocol. Their computation for the square-pulse protocol can be carried out analytically as shown in App. B.
The normalized wavefunction of the driven chain starting from a state \(|\psi_{0}\rangle=\prod_{k}(u_{0k}+v_{0k}c_{k}^{\dagger}c_{-k}^{\dagger})|0\rangle\) can be written, in terms of these Floquet eigenvalues and eigenvectors as \(|\psi(nT)\rangle=\prod_{k}|\psi_{k}(nT)\rangle\) where
\[|\psi_{k}(nT)\rangle = (u_{k}(nT)+v_{k}(nT)c_{k}^{\dagger}c_{-k}^{\dagger})|0\rangle, \quad u_{k}(nT)=\frac{\sum_{a}e^{-iE_{k}^{a}nT}p_{ak}(u_{0k}p_{ak}^{*}+v_{0k}( q_{ak}^{*}+iq_{ak}^{{}^{\prime}*}))}{\mathcal{D}_{k}(nT)}\] \[v_{k}(nT) = \frac{\sum_{a}e^{-iE_{k}^{a}nT}(q_{ak}-iq_{ak}^{\prime})(u_{0k}p_ {ak}^{*}+v_{0k}(q_{ak}^{*}+iq_{ak}^{{}^{\prime}*}))}{\mathcal{D}_{k}(nT)}\] \[\mathcal{D}_{k}(nT) = [|\sum_{a}e^{-iE_{k}^{a}nT}p_{ak}(u_{0k}p_{ak}^{*}+v_{0k}(q_{ak}^ {*}+iq_{ak}^{{}^{\prime}*}))|^{2}+|\sum_{a}e^{-iE_{k}^{a}nT}(q_{ak}-iq_{ak}^{ \prime})(u_{0k}p_{ak}^{*}+v_{0k}(q_{ak}^{*}+iq_{ak}^{{}^{\prime}*}))|^{2}]^{1/2}\]
and \(|0\rangle\) denotes the fermion vacuum.
It is well-known that the computation of entanglement entropy for integrable Ising chains begins with an analysis of the correlation functions of the model. We denote these correlation functions, computed at the end of \(n\) drive cycles, as
\[\Pi_{xk}(nT) = \langle\psi_{k}(nT)|(c_{k}^{\dagger}c_{-k}^{\dagger}+{\rm h.c.})| \psi_{k}(nT)\rangle\] \[= 2{\rm Re}(u_{k}^{*}(nT)v_{k}(nT))\] \[\Pi_{zk}(nT) = -i\langle\psi_{k}(nT)|(c_{-k}c_{k}-c_{k}^{\dagger}c_{-k}^{\dagger} )|\psi_{k}(nT)\rangle\] \[= 2{\rm Im}(u_{k}^{*}(nT)v_{k}(nT))\] \[\Pi_{yk}(nT) = \langle\psi_{k}(nT)|(2c_{k}^{\dagger}c_{k}-1)|\psi_{k}(nT)\rangle\] \[= 2|v_{k}(nT)|^{2}-1\]
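These correlators follow directly from the stroboscopic Bogoliubov coefficients; a minimal numerical sketch is given below, assuming \(u\) and \(v\) are normalized complex arrays over the momentum grid obtained from the stroboscopic evolution described in Sec. III.

```python
# A minimal sketch of Eq. 13, assuming u and v are (normalized) complex arrays
# of stroboscopic Bogoliubov coefficients over the momentum grid.
import numpy as np

def correlators(u, v):
    uv = np.conj(u) * v
    Pi_x = 2.0 * np.real(uv)            # <c_k^dag c_-k^dag + h.c.>
    Pi_z = 2.0 * np.imag(uv)            # -i <c_-k c_k - c_k^dag c_-k^dag>
    Pi_y = 2.0 * np.abs(v) ** 2 - 1.0   # <2 c_k^dag c_k - 1>
    return Pi_x, Pi_y, Pi_z
```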
The plots of the steady state correlation function \(\Pi_{xk}^{\rm steady}\equiv\Pi_{xk}\) at large drive frequencies are shown in the top panels of Fig. 1 for the continuous drive protocol and \(\gamma=0.01J\). The coefficients of the steady state wavefunction \(|\psi_{k}^{\rm steady}\rangle\) are denoted by \(u_{k}^{\rm steady}\) and \(v_{k}^{\rm steady}\) and are charted out in App. B for the square pulse protocol. For the continuous protocol, the procedure is analogous; however, the exact steady state wavefunction needs to be obtained numerically. The steady state values of \(\Pi_{xk}\) are obtained by replacing \(u_{k}(nT)\) and \(v_{k}(nT)\) in Eq. 13 by \(u_{k}^{\rm steady}\) and \(v_{k}^{\rm steady}\) respectively. We have checked numerically that the correlation functions computed from \(|\psi_{k}(nT)\rangle\) coincide with the steady state correlation functions obtained using the above-mentioned procedure after \(n\) drive cycles, where \(n\) depends on both \(\omega_{D}\) and \(\gamma\), for both protocols. This feature has been explicitly checked for all plots presented in this work.
From Fig. 1, we find that for generic drive frequencies \(\omega_{D}\neq\omega_{m}^{*}\), as shown in the top left panel of Fig.1, \(\Pi_{xk}\) shows jump singularities at \(k=\pm k^{*}\simeq\arccos h_{1}\). In contrast, at the special drive frequency \(\omega_{D}=\omega_{1}^{*}\), the jump singularity disappears and we obtain a smooth function for \(\Pi_{xk}\). For \(\omega_{D}\neq\omega_{m}^{*}\), the second order Floquet result (blue line) matches its exact numerical counterpart (red line) quite well. In contrast, at \(\omega_{D}=\omega_{m}^{*}\) and small values of \(\gamma\), one gets a qualitative match between the two results and needs to go beyond 2nd order FPT for a quantitative match. At intermediate (\(\omega_{D}=0.7J\)) and low (\(\omega_{D}=0.1J\)) drive frequencies, the perturbative Floquet theory clearly breaks down as can be seen from the bottom panels of Fig. 1. Here exact numerics reveals multiple jump singularities of \(\Pi_{xk}\); the number of such jump singularities increases as \(\omega_{D}\) is lowered.
In contrast, at large \(\gamma\sim J\), \(\Pi_{xk}\) does not show any jump singularities. As shown in Fig. 2 for \(\gamma=2J\), for all drive frequencies, \(\Pi_{xk}\) is a smooth function of \(k\). The second order Floquet Hamiltonian, once again, yields a nice match with exact numerics in the high drive frequency regime.
In contrast to \(\Pi_{xk}\), the function \(\Pi_{yk}\) does not display any jump singularities in the high drive frequency regime and stays close to zero for \(k\sim\pm k^{*}\). The behavior of \(\Pi_{zk}\) is qualitatively similar to \(\Pi_{xk}\) except for the fact that the magnitude of the jump singularity tends to zero at high frequencies and low \(\gamma/J\) where the contribution of \(\Pi_{zk}\) to the entanglement entropy becomes small. We shall use these features of the correlation functions to compute the entanglement entropy in the next section.
### Analytical results for the entanglement entropy
In this section, we present an analytic computation of the entanglement entropy \(S_{\ell}\) for the high drive amplitude regime, where the correlation functions obtained from the perturbative Floquet Hamiltonian agree well with exact numerics. We shall carry out this calculation in the regime where \(\gamma\ll|\tilde{h}_{1}f^{c(s)}(T)|\equiv|2\sqrt{1-h_{1}^{2}}f^{c(s)}(T)|\). Note that this indicates that such a computation is expected to yield more quantitatively accurate results away from the special frequencies.
To compute \(S_{\ell}\), we begin with the expressions of \(\hat{\Pi}_{k}=(\Pi_{xk},\Pi_{yk},\Pi_{zk})\), where \(\hat{\Pi}_{k}\) denotes the correlator values in the steady state. We first note that the normalization of \(u_{k}\) and \(v_{k}\) leads to conservation of the norm of the correlation functions, \(||\hat{\Pi}_{k}||=1\). As is well-known in the literature, under such conditions, Szego's theorem necessitates that \(S_{\ell}\) can not have a term linear in \(\ell\). The derivation of this is sketched in App. C.

Figure 1: Plot of \(\Pi_{xk}\) as a function of \(k\) for \(\hbar\omega_{D}=60J\) (top left panel), \(\hbar\omega_{D}=\hbar\omega_{1}^{*}\simeq 33.26J\) (top right panel), \(\hbar\omega_{D}=0.7J\) (bottom left panel) and \(\hbar\omega_{D}=0.1J\) (bottom right panel). The red lines display results from exact numerics while the blue lines show those obtained using second order Floquet perturbation theory. For all plots \(\gamma=0.01J\), \(h_{0}=20J\), \(h_{1}=0.1J\) and \(L=1000\). See text for details.

Figure 2: Plot of \(\Pi_{xk}\) as a function of \(k\) for \(\gamma=2J\). All other parameters are the same as in Fig. 1. See text for details.
To obtain \(S_{\ell}\), we first define the correlation matrix for the system. To this end, we construct the quantity
\[\eta_{k} = \lambda I-\hat{\Pi}_{k} \tag{14}\]
which shall be central to computing \(S_{\ell}\). In what follows, we shall provide an approximate analytic computation of \(S_{\ell}\) in the regime where \(H_{F}^{(1)}\) provides a reasonable description of the driven system. Following the calculations in Apps. A and B, in this regime, one can define the steady state wavefunction to be
\[|\psi\rangle = \prod_{k>0}\left(u_{k}^{(1)}+v_{k}^{(1)}\ \hat{c}_{k}^{\dagger}\hat{c}_{-k}^{ \dagger}|0\rangle\right)\] \[u_{k}^{(1)} = \frac{n_{zk}+{\rm sgn}(\Gamma_{k})}{{\cal N}_{k}}\quad v_{k}^{(1 )}=\frac{n_{xk}+in_{yk}}{{\cal N}_{k}}. \tag{15}\]
where \({\cal N}_{k}={\cal N}_{\pm k}\) for \(\Gamma_{k}>(<)0\). Substituting Eq. 15 in Eq. 13, one obtains the correlation functions \(\hat{\Pi}_{k}^{(1)}\)
\[\Pi_{xk}^{(1)} = \frac{2\ {\rm Re}[(n_{zk}+{\rm sgn}(\Gamma_{k}))^{*}(n_{xk}+in_{yk})] }{{\cal N}_{k}^{2}}\] \[\Pi_{yk}^{(1)} = \frac{2\ {\rm Im}[(n_{zk}+{\rm sgn}(\Gamma_{k}))^{*}(n_{xk}+in_{yk})] }{{\cal N}_{k}^{2}}\] \[\Pi_{zk}^{(1)} = 1-2\ \frac{|n_{zk}+{\rm sgn}(\Gamma_{k})|^{2}}{{\cal N}_{k}^{2}} \tag{16}\]
Near the jump singularities at \(k=\pm k^{*}=\pm\arccos(h_{1})\), where \(\Gamma_{k}\) changes sign, Eq. 16 can be further simplified, for small \(\gamma\), to yield
\[\Pi_{xk}^{(1)} = \delta\ {\rm sgn}(\tilde{h}_{1}f(T))\ \frac{k-{\rm sgn}(k)k^{*}}{|k-{\rm sgn }(k)k^{*}|}\] \[\Pi_{zk}^{(1)} = \sqrt{1-\delta^{2}}\ {\rm sgn}(k),\quad\Pi_{yk}^{(1)}=0\] \[\delta = \sqrt{1-\left(\frac{\gamma}{\tilde{h}_{1}f(T)}\right)^{2}} \tag{17}\]
where we have ignored higher order terms in \(k\pm k^{*}\). We note that for small \(\gamma\) and away from the special frequencies where \(|\delta|\sim 1\), \(\Pi_{xk}^{(1)}\gg\Pi_{zk}^{(1)}\). In this regime \(\Pi_{xk}^{(1)}\) jumps around \(k=\pm k^{*}\); these jumps control the \(\ell\) dependence of \(S_{\ell}\), as shall be elaborated later in this section. In contrast, for large \(\gamma/J\) or around \(\omega_{D}=\omega_{m}^{*}\) at which \(f(T)\simeq 0\), the off-diagonal terms of the Floquet Hamiltonian will be small in the high-drive frequency regime. This leads to a steady state which closely mimics one of the eigenstates of \(\hat{\tau}^{z}\) depending on \({\rm sgn}\,[\Gamma_{k}]\). Hence \(\Pi_{zk}^{(1)}\) dominates at these frequencies and \(\hat{\Pi}_{k}^{(1)}\) becomes a smooth function of \(k\) leading to a different behavior of \(S_{\ell}\). The transition between these behaviors which constitutes the entanglement transition occurs around \(\delta\sim 0\); thus \(\delta\) controls the behavior of \(S_{\ell}\) in this regime.
To obtain an analytic expression of \(S_{\ell}\) for large \(\ell\) we now carry out a Fisher-Hartwig analysis [77; 78; 79]. We note that this analysis can be done only when the correlation matrix depends on a single Pauli matrix. The reason for this has been discussed extensively in the literature [78; 79] and stems from the fact that such an analysis requires the correlation matrix of the model to be in the standard Toeplitz form [77]; for \(\hat{\Pi}_{k}\) with multiple components leading to block-Toeplitz form of the correlation matrix such an analysis does not hold [78; 79]. We shall therefore focus on the regime \(\delta\sim 1\) where \(\hat{\Pi}_{k}\simeq(\Pi_{xk},0,0)\) and focus on the form of \(\Pi_{xk}\) near \(k=\pm k^{*}\) given by Eq. 17. In this case, one can define
\[\zeta_{k} = \lambda-\Pi_{xk} \tag{18}\]
which acts as the generating function for the elements of the correlation matrix.
Next, we cast \(\zeta_{k}\) in a form which is convenient for analyzing the contribution of the singularities. To this end, we note that the form of \(\Pi_{xk}\) given in Eq. 17 holds only near the jump singularities at \(k=\pm k^{*}\); the functional dependence of \(\Pi_{xk}\) over the entire Brillouin zone \(-\pi\leq k\leq\pi\) is not captured by Eq. 17. It is well-known that the precise nature of this functional form is not important for computing the contribution of the singularities to \(S_{\ell}\) [77]. Thus one can replace \(\Pi_{xk}\) away from the singularities by an appropriate form which allows for further analytical progress. The simplest such form would be to set \(\Pi_{xk}=\delta\) everywhere away from the singularities at \(k=\pm k^{*}\). However, since the jumps for \(k=k^{*}\) and \(k=-k^{*}\) occur in opposite directions as one traverses from \(k=-\pi\) to \(k=\pi\), one can not replace \(\Pi_{xk}\) by a constant without introducing intermediate spurious singularities, say, at \(k=0\) and \(k=\pi\). Keeping this in mind, we use the form
\[\Pi_{xk}^{(1)} = {\rm sgn}(\tilde{h}_{1}f(T))\ {\rm sgn}(k-k^{*}),\quad 0<k<\pi \tag{19}\] \[= {\rm sgn}(\tilde{h}_{1}f(T))\ {\rm sgn}(k+k^{*}),\ -\pi<k<0\]
We shall use this form to compute \(S_{\ell}\) analytically; the additional contribution due to the spurious singularities shall be subtracted out at the end of the calculation.
Using Eqs. 18 and 19, one can compute the eigenvalues of the correlation matrix in the steady state
\[{\cal D}_{\ell}(\lambda) = \lambda I-B_{\ell}\otimes\sigma_{x},\] \[B_{\ell} = \left(\begin{array}{cccc}\Pi_{0}&\Pi_{-1}&...&\Pi_{1-\ell}\\ \Pi_{1}&\Pi_{0}&...&\Pi_{2-\ell}\\...&...&...&...\\ \Pi_{\ell-1}&\Pi_{\ell-2}&...&\Pi_{0}\end{array}\right),\] \[\Pi_{\ell} = \frac{1}{2\pi}\int_{0}^{2\pi}dk\ e^{i\ell k}\ \Pi_{xk}^{(1)} \tag{20}\]
This calculation, which constitutes standard steps in Fisher-Hartwig analysis [77; 78; 79] is charted out in App. D
and yields
\[\ln\left(\mathrm{Det}\left(\mathcal{D}_{\ell}\left(\lambda\right) \right)\right) = 2[\ell\ \ln\left(F\left[\eta(\lambda)\right]\right)-\sum_{i=1,3}\beta_{i}^{2}( \lambda)\ln(\ell)]+...\] \[F[\eta(\lambda)] = \sqrt{\lambda^{2}-\delta^{2}}\] \[\beta_{1} = \beta_{3}=\frac{1}{2\pi i}\ln\left(\frac{\lambda-\delta}{\lambda +\delta}\right) \tag{21}\]
where the ellipsis represents subleading terms which we shall ignore for the rest of the analysis, we have subtracted out the contributions from the spurious singularities at \(k=0,\pi\), and \(\beta_{1,3}\) depict the contribution of the jump singularities at \(\pm k^{*}\) to \(\mathrm{Det}\left(\mathcal{D}_{\ell}\right)\). Note that the first term in Eq. 21 constitutes the contribution from the non-singular part of \(\eta_{k}\) as shown in App. D. The contribution of such a term to \(S_{\ell}\) vanishes as shown in App. C for \(\delta=1\). In the rest of this section, we shall estimate the contribution of the second term which yields the \(\ln\ell\) behavior.
The contribution of the second term in the expression of \(\ln\mathcal{D}_{\ell}\) to \(S_{\ell}\) can be computed by defining the function \(e(x,\lambda)\)
\[e(x,\lambda)\equiv-\frac{x+\lambda}{2}\ln\frac{x+\lambda}{2}-\frac{x-\lambda} {2}\ln\frac{x-\lambda}{2} \tag{22}\]
The entanglement entropy can be written in the form \(S_{\ell}=\sum_{m=-\ell}^{\ell}e(\delta,\nu_{m})\) where \(\nu_{m}\) denote the eigenvalues of the correlation matrix \(B_{\ell}\) which lie within the range \(-\delta\leq\nu_{m}\leq\delta\) with the property \(\nu_{m}=-\nu_{-m}\). To connect \(S_{\ell}\) to \(\mathcal{D}_{\ell}\), we note, from Eq. 20, that \(\mathrm{Det}\left(\mathcal{D}_{\ell}(\lambda)\right)=\prod_{m=-\ell}^{\ell}(\lambda-\nu_{m})\). Thus one can write
\[S_{\ell} = \frac{1}{4\pi i}\oint d\lambda\,e(\delta,\lambda)\,\frac{d}{d \lambda}\ln\left(\mathrm{Det}\left(\mathcal{D}_{\ell}(\lambda)\right)\right) \tag{23}\]
where the contour encircles the zeroes of \(\mathrm{Det}\left(\mathcal{D}_{\ell}(\lambda)\right)\). We now substitute Eq. 21 in Eq. 23 to evaluate \(S_{\ell}\) and concentrate on the coefficient of \(\ln\ell\) in the limit where \(\delta\) is close to unity. A straightforward calculation yields
\[S_{\ell} = \delta I_{1}\ln\ell+\mathrm{O}(\delta-1) \tag{24}\] \[I_{1} = \frac{2}{\pi^{2}}\oint d\lambda\;e(1,\lambda)\frac{\beta_{1}( \lambda)}{\lambda^{2}-1}=\frac{2}{\pi^{2}}\int_{0}^{1}dx\;\frac{\ln(1-x)}{x}= \frac{1}{3}\]
For \(\delta=1\) which is achieved in the quench limit when \(T,\gamma\to 0\), we obtain \(S_{\ell}=1/3\ln\ell\) which reproduces the result obtained in Ref. [75] as a special case. For the driven system, \(\delta\) is a non-monotonic function of the drive frequency whose precise form depends on the drive protocol through \(f(T)\). This allows us to infer that \(S_{\ell}(T)\) will be a non-monotonic function of \(T\). We shall compare this analytic form of \(S_{\ell}\) with exact numerical results at small \(\gamma\) and high drive frequency regime in the next section.
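The quoted \(\delta=1\) result can also be checked numerically without any Fisher-Hartwig machinery: a minimal sketch is given below, which builds the Toeplitz matrix \(B_{\ell}\) of Eq. 20 from the idealized symbol of Eq. 19, diagonalizes it, evaluates \(S_{\ell}\) using Eq. 22, and fits the coefficient of \(\ln\ell\), which comes out close to \(1/3\). The value of \(h_{1}\) and the subsystem sizes are illustrative choices.

```python
# Numerical check of S_ell ~ (1/3) ln(ell) in the delta -> 1 limit: build the
# Toeplitz matrix B_ell of Eq. 20 from the idealized symbol of Eq. 19,
# diagonalize it, and sum the function of Eq. 22 over its eigenvalues.
# The value of h1 and the subsystem sizes below are illustrative.
import numpy as np
from scipy.linalg import toeplitz

h1 = 0.1
k_star = np.arccos(h1)

def fourier_coeff(n):
    """Pi_n = (1/2pi) int dk e^{ink} sgn(|k| - k*), in closed form."""
    if n == 0:
        return 1.0 - 2.0 * k_star / np.pi
    return -2.0 * np.sin(n * k_star) / (n * np.pi)

def entanglement_entropy(ell):
    col = np.array([fourier_coeff(n) for n in range(ell)])
    nu = np.linalg.eigvalsh(toeplitz(col))                  # eigenvalues of B_ell
    p = np.clip((1.0 + nu) / 2.0, 1e-12, 1.0)
    q = np.clip((1.0 - nu) / 2.0, 1e-12, 1.0)
    return float(np.sum(-p * np.log(p) - q * np.log(q)))    # Eq. 22 at delta = 1

ells = np.array([50, 100, 200, 400])
S = np.array([entanglement_entropy(l) for l in ells])
print("coefficient of ln(ell):", np.polyfit(np.log(ells), S, 1)[0])   # ~1/3
```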
Before ending this section, we would like to discuss two salient points of our analysis. First, approximating \(\hat{\Pi}_{k}\) with only one of its components, namely \(\Pi_{xk}\), violates the normalization of \(\hat{\Pi}_{k}\) for all \(\delta(T)\neq 1\). This in turn leads to a spurious volume law term; this term can be obtained when the first term of \(\ln D_{\ell}(\lambda)\) (which is \(\sim\ell\)) in Eq. 21 is substituted in Eq. 23. Second, the \(\mathrm{O}(\delta-1)\) terms in Eq. 24 can not be reliably computed within this approach; they lead to a logarithmic divergence for \(\delta\neq 1\). Both these features are artifacts of the violation of the normalization condition of \(\hat{\Pi}_{k}\). A more rigorous calculation keeping the full matrix structure of \(\hat{\Pi}_{k}\), which requires the use of Fredholm techniques, may resolve these issues; however, such a technique, used for computing \(S_{\ell}\) for the ground state of the Hermitian XY model [79], can not be straightforwardly applied in the present case due to the presence of a non-zero \(\Gamma_{k}\). We leave this issue as a subject of future work.
## III Numerical results
In this section, we present our numerical results on the steady state entanglement entropy \(S_{\ell}\). The numerical procedure for computing \(S_{\ell}(nT)\) for the driven state is as follows. First, for the continuous protocol, we carry out a Trotter decomposition of the evolution operator \(U_{k}(T,0)\) and write
\[U_{k}(T,0) = \prod_{j=1}^{N}U_{k}(t_{j},t_{j-1})=\prod_{j=1}^{N}e^{-iH(t_{j}) \Delta t/\hbar} \tag{25}\]
where the interval \(\Delta t=t_{j}-t_{j-1}=T/N\); it is chosen to be small enough so that \(H(t)\) does not change significantly within this interval. For the square pulse protocol, \(U_{k}(T,0)\) can be exactly obtained as shown in App. B. In either case, diagonalization of \(U_{k}(T)\) leads to its eigenvalues \(e^{\pm iE_{k}^{F}T/\hbar}\), where \(E_{k}^{F}\) are the exact Floquet eigenvalues, and the corresponding eigenfunctions are \(|\pm;k\rangle\).
Figure 3: Left Panel: Plot of steady-state half-chain entanglement \(S_{L/2}\) as a function of \(L\) showing logarithmic dependence of \(S_{L/2}\) at small \(\gamma=0.1J\) and \(\hbar\omega_{D}/J=60\): \(S_{L/2}\sim\alpha\ln L+\mathrm{constant}\). A fit of \(S_{L/2}\) estimates \(\alpha\sim 0.3314\) which is close to its analytically predicted value \(1/3\). The inset shows \(S_{L/2}\) at \(\gamma=2J\) indicating its independence on \(L\): \(S_{L/2}=\mathrm{constant}\). Right Panel: Same as in left panel but at \(\omega_{D}=\omega_{1}^{*c}\) where \(S_{L/2}\) is independent of \(L\) for both small (\(\gamma=0.1J\)) and large (\(\gamma=J\)) \(\gamma\). For all plots, the red dots indicate numerical data and blue lines represent the fit, \(h_{0}=20J\), \(h_{1}=0.1J\). See text for details.
This allows one to write
\[U_{k}(T,0) = \sum_{a=\pm}e^{-iaE_{k}^{F}T/\hbar}|a;k\rangle\langle a;k|\] \[|\psi_{k}(nT)\rangle = \frac{|\tilde{\psi}_{k}(nT)\rangle}{|\langle\tilde{\psi}_{k}(nT)|\tilde{\psi}_{k}(nT)\rangle|^{1/2}},\] \[|\tilde{\psi}_{k}(nT)\rangle = U_{k}(nT,0)|\psi_{0k}\rangle \tag{26}\]
Note that for non-Hermitian systems the non-conservation of the norm during evolution necessitates normalization of the wavefunction at all stroboscopic times \(nT\). Having obtained \(|\psi_{k}(nT)\rangle\), we use it to construct the correlation functions \(\Pi_{k}(nT)\) using Eq. 13 and follow the standard procedure charted out earlier to obtain \(S_{\ell}(nT)\). At large \(n\), \(S_{\ell}(nT)\) reaches its steady state value \(S_{\ell}\) which we analyze below in detail.
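A minimal sketch of this procedure for the continuous protocol and a single momentum sector is given below. The \(2\times 2\) Bloch matrix is taken as \(2[(h(t)-\cos k+i\gamma/2)\sigma_{z}+\sin k\,\sigma_{x}]\), a convention chosen so that its static limit reproduces the quasienergies of Eq. 11; the initial state \((u_{0k},v_{0k})=(1,0)\) is an illustrative choice.

```python
# A minimal sketch of the stroboscopic evolution described above (continuous
# protocol, single momentum sector). The 2x2 Bloch matrix is taken as
# 2[(h(t) - cos k + i*gamma/2) sigma_z + sin(k) sigma_x], a convention chosen
# so that its static limit reproduces the quasienergies of Eq. 11; the initial
# state (u_0k, v_0k) = (1, 0) is an illustrative choice and hbar = J = 1.
import numpy as np
from scipy.linalg import expm

sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

h0, h1, gamma, omega_D = 20.0, 0.1, 0.01, 60.0   # values quoted in Fig. 1
T = 2.0 * np.pi / omega_D
n_trotter = 200      # Trotter steps per drive period
n_cycles = 2000      # more cycles are needed at small gamma to reach the steady state

def u_one_period(k):
    """Trotterized U_k(T, 0) for the cosine protocol, Eq. 25."""
    dt = T / n_trotter
    U = np.eye(2, dtype=complex)
    for j in range(n_trotter):
        t = (j + 0.5) * dt
        h_t = h1 + h0 * np.cos(omega_D * t)
        Hk = 2.0 * ((h_t - np.cos(k) + 0.5j * gamma) * sz + np.sin(k) * sx)
        U = expm(-1j * Hk * dt) @ U        # later times act from the left
    return U

def pi_x_steady(k):
    Uk = u_one_period(k)
    psi = np.array([1.0, 0.0], dtype=complex)    # (u_0k, v_0k)
    for _ in range(n_cycles):
        psi = Uk @ psi
        psi /= np.linalg.norm(psi)               # non-unitary evolution: renormalize
    u, v = psi
    return 2.0 * np.real(np.conj(u) * v)         # Pi_xk of Eq. 13

ks = np.linspace(-np.pi, np.pi, 201)
Pi_x = np.array([pi_x_steady(k) for k in ks])
```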
The results obtained using the above procedure for the high drive amplitude and frequency regime are presented and compared with their analytical counterparts (computed using the perturbative Floquet Hamiltonian obtained in Sec. II.2 and Apps. A and B) in Sec. III.1. This is followed by Sec. III.2 where we present our results for the low and intermediate drive frequency regime. Most of our numerical results shall be obtained using the continuous drive protocol; however, we shall also present the phase diagram corresponding to the square-pulse protocol.
### Phase diagram in the high drive amplitude regime
In the high drive amplitude and frequency regime and away from the special frequencies \(\omega_{m}^{*}\), the half-chain entanglement entropy \(S_{\ell=L/2}\equiv S_{L/2}\) shows two distinct behaviors as a function of \(L\), as shown in the left panel of Fig. 3 for the continuous protocol. For low \(\gamma\simeq 0.1J\), \(S_{L/2}\simeq\alpha\ln L\); the coefficient \(\alpha\sim 1/3\) in the regime of high frequency. We note that this result coincides with the behavior of \(S_{\ell}\) (Eq. 24) computed in Sec. II.2 in the high-frequency and low \(\gamma\) regime where \(\delta\to 1\). In contrast, for \(\gamma=2J\), \(S_{L/2}\) displays area law behavior (inset of left panel of Fig. 3) and is thus independent of \(L\). We note that this behavior is qualitatively different from that of \(S_{L/2}\) at \(\omega_{D}=\omega_{1}^{*c}\), where it shows area law behavior for almost all \(\gamma\) as shown in the right panel of Fig. 3.
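The extraction of \(\alpha\) is a standard least-squares regression of \(S_{L/2}\) against \(\ln L\); a minimal sketch is given below, with the entropies assumed to be supplied by the numerics.

```python
# A minimal sketch of the fit quoted above: alpha is the slope of S_{L/2}
# against ln(L). The entropies are assumed to be supplied by the numerics.
import numpy as np

def fit_alpha(L_vals, S_half):
    """Least-squares fit of S_{L/2} = alpha * ln(L) + const.; returns alpha."""
    alpha, _const = np.polyfit(np.log(np.asarray(L_vals, float)),
                               np.asarray(S_half, float), 1)
    return alpha
```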
For \(\omega_{D}\neq\omega_{m}^{*}\), a transition between these two behaviors occurs at a critical \(\gamma_{c}\), as shown in the left panel of Fig. 4, where \(\alpha\) is plotted as a function of \(\gamma\). The analytic expression of \(S_{\ell}\) predicts \(\alpha=\delta/3\), leading to the vanishing of \(\alpha\) at \(\gamma^{*}\simeq|\tilde{h}_{1}f(T)|\) (\(\delta=0\)). This prediction seems to match the numerical result qualitatively, as also shown in the left panel of Fig. 4. The right panel of Fig. 4 shows similar results for the square pulse drive protocol. Note that we do not expect a quantitative agreement here since the analytic computation of \(S_{\ell}\) is expected to be accurate only for \(\delta\sim 1\).

Figure 4: Left Panel: Plot of \(\alpha\) as a function of \(\gamma\) for \(\hbar\omega_{D}/J=60\) using the continuous drive protocol, showing its decrease from \(1/3\) to \(0\) at the critical \(\gamma=\gamma_{c}\sim 1J\) where the entanglement transition takes place. The red and the blue dots correspond to exact numerical results and results from second order FPT respectively; the green line shows the analytical result for \(\alpha\) obtained in Eq. 24. Right panel: Same as left panel using the square pulse drive protocol. The red and the blue dots correspond to exact numerical results and results from first order FPT respectively while the green line shows the analytical result for \(\alpha\) obtained in Eq. 24. The dashed line in both figures indicates \(\alpha=1/3\) and is a guide to the eye. For all plots \(h_{0}=20J\), \(h_{1}=0.1J\) and \(L\leq 2000\).

Figure 5: Top left Panel: Plot of \(\alpha\), obtained from exact numerics, as a function of \(\omega_{D}\) and \(\gamma\) showing the non-monotonic nature of \(\gamma_{c}\) and the positions of the special frequencies \(\omega_{m}^{*c}\) for the continuous wave drive protocol. Top right Panel: Same as top left panel obtained using the square pulse drive protocol. Bottom left panel: This figure shows the phase boundary (\(\alpha<0.001\)) obtained using exact numerics (red line) and the analytical result (green line which corresponds to \(\delta=0\)) for continuous wave drive. Bottom right panel: Same as bottom left panel obtained for the square pulse drive protocol with \(\alpha<0.01\). For all plots \(h_{0}=20J\), \(h_{1}=0.1J\) and \(L\leq 1000\). See text for details.
The phase diagram demonstrating this entanglement transition as a function of the drive frequency \(\omega_{D}\) and the measurement rate \(\gamma\) is given in Fig. 5. The top left panel of Fig. 5 plots \(\alpha\) as a function of \(\gamma\) and \(\omega_{D}\) obtained from fitting the data for \(L\leq 1000\); this data is obtained using a Trotter decomposition of the evolution operator for the continuous drive protocol. The region where \(\alpha\simeq 0\) marks the boundary for the entanglement transition as shown in the bottom left panel of Fig. 5. The numerical cutoff for obtaining this boundary is set to be \(\alpha<0.001\). A similar phase diagram and boundary for the square pulse drive protocol are shown in the right panels of Fig. 5. The red lines in the bottom panels indicate the numerical phase boundaries. The plots clearly demonstrate the non-monotonic dependence of the phase boundary of the entanglement transition on the drive frequency. The green lines in the bottom panels of Fig. 5 indicate the curves \(\delta=0\) for the continuous (bottom left) and the square pulse (bottom right) drive protocols. It is evident that in this regime the analytical result matches exact numerics quite well.
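The analytical (\(\delta=0\)) boundary can be tabulated directly; a minimal sketch for both protocols is given below, using \(\gamma_{c}(\omega_{D})=|2\sqrt{1-h_{1}^{2}}\,f^{c,s}(T)|\) (cf. Eq. 17 and Sec. II.2) with illustrative parameter values and \(\hbar=J=1\).

```python
# A minimal sketch of the analytical boundary shown by the green curves: the
# condition delta = 0 gives gamma_c(omega_D) = |2 sqrt(1 - h1^2) f^{c,s}(T)|
# (cf. Eq. 17 and Sec. II.2). Parameter values are illustrative; hbar = J = 1.
import numpy as np
from scipy.special import j0

h0, h1 = 20.0, 0.1
omega = np.linspace(20.0, 120.0, 500)
T = 2.0 * np.pi / omega

gamma_c_cosine = 2.0 * np.sqrt(1.0 - h1**2) * np.abs(j0(4.0 * h0 / omega))
gamma_c_square = 2.0 * np.sqrt(1.0 - h1**2) * np.abs(np.sin(h0 * T) / (h0 * T))
```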
This behavior of \(S_{\ell}\) can also be qualitatively understood from the first order Floquet Hamiltonian as follows. The energy eigenvalues corresponding to the first order Floquet Hamiltonian (Eq. 7) are given by \(E_{k}^{\pm}=\pm E_{k}=\pm 2\sqrt{(h_{1}-\cos k+i\gamma/2)^{2}+(f^{(c,s)}(T)\sin k)^{2}}\). For small \(\gamma\) and \(f^{c(s)}(T)\neq 0\), \(\gamma^{2}<4[(h_{1}-\cos k)^{2}+(f^{(c,s)}(T)\sin k)^{2}]\) and one has \(\Gamma_{k}=\mathrm{Im}[E_{k}]\sim\gamma(h_{1}-\cos k)\). Thus \(\Gamma_{k}\) changes sign around \(k^{*}\simeq\arccos(h_{1})\) leading to jumps in \(\mathrm{Sgn}\,[\Gamma_{k}]\), while \(\epsilon_{k}=\mathrm{Re}\,[E_{k}]\) varies smoothly, retaining the same sign over the full BZ. This gives rise to jumps in \(\Pi_{x}(k)\), as can be seen from the top left panel of Fig. 1.
In contrast, for \(\gamma^{2}\gg 4[(h_{1}-\cos k)^{2}+(f^{(c,s)}(T)\sin k)^{2}]\), one has \(\Gamma_{k}\sim\gamma\). In this regime, \(\Gamma_{k}\) does not change sign, while \(\epsilon_{k}\) smoothly changes sign across \(k\sim\pm k^{*}\). Hence no jumps occur in \(\Pi_{x}(k)\) for any \(k\) as can be seen from the top left panel of Fig. 2. Since the coefficient \(\alpha\) of the \(\ln\ell\) term in \(S_{\ell}\) receives its contribution from the jump singularities in the correlation function, it vanishes for large \(\gamma\) leading to area law behavior. In contrast, for small \(\gamma\), the jumps persist and \(\alpha\) remains finite leading to a finite \(\ln\ell\) term in \(S_{\ell}\). This allows one to expect two different phases separated by a transition at \(\gamma=\gamma_{c}\) for which \(\alpha(\gamma_{c})=0\). We also note that when \(f^{c(s)}(T)\) vanishes, \(\Gamma_{k}\sim\gamma\) for almost all \(\gamma\) while \(\epsilon_{k}\) smoothly changes sign across \(k\sim\pm k^{*}\). The correlation functions therefore remain smooth for any \(\gamma\). This leads to an area-law behavior for almost all \(\gamma\) and \(\gamma_{c}\to 0\).
Thus we find that in the high drive amplitude and frequency regime it is possible to traverse between different phases (for which \(S_{\ell}\sim\ln\ell\) and \(S_{\ell}\sim\mathrm{constant}\)) through the entanglement transition by tuning the drive frequency. In addition, the approximate emergent symmetry at the special drive frequencies \(\omega_{m}^{*}\) shapes the nature of these transitions; near these frequencies, \(\gamma^{*}\to 0\) and the phase featuring area law behavior of \(S_{\ell}\) dominates. These features distinguish the behavior of \(S_{\ell}\) in periodically driven systems from their quench counterparts studied in Ref. [75].
### Low and intermediate drive frequencies
At low and intermediate drive frequencies, the results obtained from the perturbative Floquet Hamiltonian do not agree with those obtained from exact numerics. This is pointed out in App. B for the square pulse protocol where the exact Floquet Hamiltonian can be computed analytically. For the continuous protocol, the exact Floquet Hamiltonian can be computed numerically and it shows a similar deviation. In both cases, this feature is reflected in the bottom panels of Figs. 1 and 2 where the behavior of \(\Pi_{xk}\) computed using exact numerics deviates drastically from that computed using \(H_{F}^{(2)}\).
A numerical computation of \(S_{\ell}\) in the low and intermediate drive frequency regime shows two distinct regimes as shown in Fig. 6 for \(\hbar\omega_{D}/J=0.1\) (left panel) and \(\hbar\omega_{D}/J=0.01\) (right panel) for \(\gamma=0.001J\). For both of these drive frequencies, \(S_{\ell}\sim\ell\) for \(\ell\leq\ell^{*}\); in contrast, for \(\ell>\ell^{*}\), we find \(S_{\ell}\sim\ln\ell\). We note that the emergence of such a volume law for \(S_{\ell}\) does not contradict Szego's theorem which is valid for asymptotically large \(\ell\). Below, we investigate the origin of the crossover length scale \(\ell^{*}\) in the low and intermediate drive frequency regime.
Figure 6: Left Panel: Plot of \(S_{\ell}\) as a function of \(\ell\) for \(\gamma=0.001J\) and \(\hbar\omega_{D}/J=0.1\) showing linear behavior at small \(\ell\) for the continuous drive protocol. Right panel: Similar plot for \(\hbar\omega_{D}/J=0.01\) showing qualitatively similar behavior. The plots indicate a subsequent crossover to \(\ln\ell\) behavior beyond a crossover scale \(\ell^{*}\) indicated by the arrows. Note that \(\ell^{*}\) increases with decreasing drive frequency. For all plots \(h_{0}=20J\), \(h_{1}=0.1J\). See text for details.

To this end, we first note that in the low-frequency regime, the steady state corresponds to the eigenstate of the Floquet Hamiltonian with \(\Gamma_{k}>0\) for each \(k\). This situation is to be contrasted with Hermitian Ising models, where it can be a superposition of both Floquet eigenstates at every \(k\) [40]. Thus it turns out that if the Floquet Hamiltonian develops a long correlation length in real space in some drive frequency regime, one would naturally expect volume-law entanglement for the steady state when the subsystem size is smaller than the correlation length; in this regime, the effective Hamiltonian describing the subsystem becomes long-ranged [40]. To check if this is indeed the case here, we resort to the square pulse protocol. Using the exact Floquet Hamiltonian obtained in App. B, we can express the Floquet Hamiltonian in real space (up to an additive constant)
\[H^{\prime}_{F} = \sum_{j_{1}j_{2}}\left(c^{\dagger}_{j_{1}}A_{j_{1}-j_{2}}c_{j_{2} }+c_{j_{1}}B_{j_{1}-j_{2}}c_{j_{2}}+\text{h.c.}\right)\] \[A_{j_{1}-j_{2}} = \int^{\pi}_{-\pi}\frac{dk}{2\pi}e^{-ik(j_{1}-j_{2})}\hbar\alpha_{ k}n_{zk}\] \[B_{j_{1}-j_{2}} = \int^{\pi}_{-\pi}\frac{dk}{2\pi}e^{i\pi/2}e^{-ik(j_{1}-j_{2})} \hbar\alpha_{k}(n_{xk}-in_{yk}). \tag{27}\]
For the continuous protocol, analogous expressions for \(A_{j_{1}-j_{2}}\) and \(B_{j_{1}-j_{2}}\) exist; however, \(\alpha_{k}\) and \(\vec{n}_{k}\) need to be obtained numerically. Below, we discuss the numerical computation of \(A_{j_{1}-j_{2}}\); the results obtained for \(B_{j_{1}-j_{2}}\) are similar and shall not be discussed separately.
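Assuming arrays \(\alpha_{k}\) and \(n_{zk}\) on a uniform momentum grid are available (for instance from the exact Floquet diagonalization of App. B), the matrix elements of Eq. 27 reduce to a discrete Fourier sum; a minimal sketch is given below.

```python
# A minimal sketch of Eq. 27, assuming arrays alpha_k and n_zk on a uniform
# momentum grid are available (e.g. from the exact Floquet diagonalization of
# App. B); hbar = 1.
import numpy as np

def real_space_element(alpha_k, n_zk, k_grid, j):
    """A_j of Eq. 27 as a Riemann sum over the Brillouin zone."""
    dk = k_grid[1] - k_grid[0]
    return np.sum(np.exp(-1j * k_grid * j) * alpha_k * n_zk) * dk / (2.0 * np.pi)
```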
A numerical computation of \(A_{j_{1}-j_{2}}\) indicates \(|A_{j_{1}-j_{2}}|\sim\exp[-|j_{1}-j_{2}-1|/\ell^{*}(\omega_{D})]\) as shown in Fig. 7 for the square-pulse drive protocol with \(\gamma=0.001J\) and \(\hbar\omega_{D}/J=9,\,1.0\,\text{and}\,0.1\). The length scale \(\ell^{*}\), which controls the effective short- or long-range nature of the Floquet Hamiltonian, can be obtained by a standard fitting procedure.
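A minimal sketch of this fit, with the array of matrix elements \(A_{j}\) assumed to be supplied by the computation above, is given below.

```python
# A minimal sketch of the fit described above: ell* is extracted from
# |A_j| ~ exp(-|j - 1| / ell*) by a linear fit of log|A_j|. The array A is
# assumed to hold A_j for j = 1, 2, ... .
import numpy as np

def fit_decay_length(A, j_max=200):
    """Return ell* from a linear fit of ln|A_j| against (j - 1)."""
    j = np.arange(1, j_max + 1)
    y = np.log(np.abs(np.asarray(A)[:j_max]) + 1e-300)
    slope, _ = np.polyfit(j - 1.0, y, 1)
    return -1.0 / slope
```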
As can be clearly seen from Fig. 7, \(\ell^{*}\) increases with decreasing drive frequency. At high and intermediate drive frequencies, \(\ell^{*}(\omega_{D})\to 0\) (magenta curve in Fig. 7) signifying that the Floquet Hamiltonian is well described by a nearest-neighbor fermion hopping model. With decreasing \(\omega_{D}\), \(\ell^{*}\) increases; this indicates that when the subsystem size \(\ell<\ell^{*}\), \(S_{\ell}\) in the steady state mimics features of a long-range model. This leads to a volume-law behavior. In contrast, for \(\ell\geq\ell^{*}(\omega_{D})\), the entanglement is still due to an effectively short-range model and we get the expected logarithmic behavior. The crossover between these two regimes occurs around \(\ell=\ell^{*}(\omega_{D})\) as can be seen from Fig. 6.
For large \(\gamma\), \(\ell^{*}\) always remains small. This can be qualitatively understood from the fact that for \(\gamma\gg J\), \(\alpha_{k}n_{zk}\sim\gamma+\text{O}(1)\) while \(\alpha_{k}n_{xk}\sim\text{O}(1)\) and \(\alpha_{k}n_{yk}\sim\text{O}(1)\). This indicates that \(\ell^{*}\to 0\) even in the low-drive frequency limit and one obtains area law behavior of \(S_{\ell}\) for all \(\ell\).
## IV Discussion
In this work, we have studied entanglement transitions in a periodically driven integrable non-Hermitian Ising model. The non-Hermiticity of the model arises from the presence of a complex transverse field; such a non-Hermitian model can originate from an Ising chain subjected to measurements in the so-called no-click limit [61; 75]. Our analysis is based on a Jordan-Wigner transformation which maps such a driven chain to a system of spinless fermions; this allows us to conclude that such a study is applicable to several other spin systems such as the non-Hermitian XY model and the Kitaev chain which can be reduced to the same free fermion form by a Jordan-Wigner transformation [76].
Our analysis presents a detailed phase diagram for the entanglement transition in such driven systems. In the high drive frequency regime, we find two phases. In the first phase, which occurs when the imaginary part of the transverse field, \(\gamma\), is small, the steady state entanglement shows a logarithmic dependence on the sub-system size. We provide an explicit analytic expression for the coefficient of the \(\ln\ell\) term, \(\alpha\), in the small \(\gamma\) limit and show that it can be tuned using the drive frequency. In particular, we find that \(\alpha\to 0\) for almost all \(\gamma\) at some special drive frequencies; this is a consequence of an approximate emergent symmetry of the driven model. In contrast, for large \(\gamma\), the entanglement exhibits an area-law behavior. We chart out the phase boundary between these phases as a function of \(\gamma\) and \(\omega_{D}\). Our analytic results based on the contribution of the Fisher-Hartwig singularities to \(S_{\ell}\) indicate \(\alpha=\delta/3\). This result matches exact numerics in the high frequency regime quite well and shows that the entanglement transition boundary is well-approximated by the relation \(\delta=0\). We note that the tuning of \(\alpha\) using the drive frequency and the presence of special frequencies where \(\alpha\to 0\) due to an approximate emergent symmetry have not been presented earlier.

Figure 7: Plot of \(A_{j}\) as a function of \(j\) for \(\hbar\omega_{D}/J=9\) (magenta curve), \(1\) (blue curve) and \(0.1\) (red curve) for the square pulse protocol. For all plots \(h_{0}=20J\), \(h_{1}=0.1J\), \(\gamma=0.001J\), and \(L=10000\). See text for details.
We also numerically study the entanglement of the driven system at low and intermediate drive frequencies where analytic results are difficult to obtain. In this regime, we identify a length scale \(\ell^{*}(\omega_{D})\) which can be interpreted as the correlation length of the driven Ising chain. For subsystem sizes \(\ell\leq\ell^{*}\), the driven chain sees an effectively long-range Floquet Hamiltonian for small \(\gamma\). Consequently, it exhibits volume law entanglement. As one increases \(\ell\), \(S_{\ell}\) crosses over to a \(\ln\ell\) behavior when \(\ell>\ell^{*}\). The value of \(\ell^{*}\) increases with decreasing \(\omega_{D}\). In contrast, for large \(\gamma\), the driven chain always has short-range correlations in the steady state; consequently, \(S_{\ell}\) always exhibits area-law behavior. We note that the emergence of such a length scale for driven non-Hermitian chains has not been pointed out in the literature before.
There are several possible extensions to our work. The first of these constitutes an exact calculation of \(S_{\ell}\) using Fredholm techniques for non-Hermitian Ising systems; such a calculation would be an interesting application of the technique in the domain of non-Hermitian quantum systems. The second is the possibility of studying the entanglement of such a system away from the no-click limit; this requires an analysis of the Ising chain coupled to the detector in its full generality. It will be useful to check if the entanglement transition survives in this case; this question is of relevance to possible experiments and is of particular importance in the low \(\omega_{D}\) and low \(\gamma\) limit where the system takes a long time to reach the steady state. We leave these issues for future work.
In conclusion, we have studied entanglement transition in a driven non-Hermitian Ising chain and charted out the corresponding phase boundary. We have provided analytical results for \(S_{\ell}\) in the high drive frequency regime and discussed its relation to an emergent approximate symmetry of the driven chain. In the low and intermediate drive frequency regime, we have identified a correlation length scale which shapes the behavior of the entanglement at low \(\gamma\). We expect our results to be applicable to other, similar, non-hermitian integrable models such as the XY and Kitaev chains.
## V Acknowledgement
The authors thank M. Schiro, A. Silva and J. De Nardis for discussions. KS thanks DST, India for support through SERB project JCB/2021/000030.
## Appendix A Floquet Hamiltonian
In this appendix, we sketch the derivation of the Floquet Hamiltonian starting from Eq. 6 of the main text. We first concentrate on the high drive amplitude regime where \(h_{0}\gg h_{1},J,\gamma\). In this regime one can write \(H=H_{0}(t)+H_{1}\) where
\[H_{0}(t) = \sum_{k}\psi_{k}^{\dagger}2h_{0}(t)\sigma_{z}\psi_{k}\] \[H_{1} = 2\sum_{k}\psi_{k}^{\dagger}[(h_{1}-\cos k+i\gamma/2)\sigma_{z} \tag{10}\] \[+(\sigma_{+}\sin k+\mathrm{h.c.})]\psi_{k}\]
We note that \(h_{0}(t)\) depends on the protocol used and can be read off from Eqs. 2 and 3 for continuous and discrete drive protocols respectively.
To obtain the Floquet Hamiltonian from Eq. 10, we first construct the evolution operator corresponding to \(H_{0}(t)\): \(U_{0}(t,0)\equiv U_{0}(t)=\exp[-i\int_{0}^{t}H_{0}(t^{\prime})dt^{\prime}/\hbar]\). A straightforward evaluation leads to [76]
\[U_{0}^{c}(t) = \prod_{k}e^{-2ih_{0}\sin(\omega_{D}t)\psi_{k}^{\dagger}\sigma_{z} \psi_{k}/(\hbar\omega_{D})} \tag{11}\]
for the continuous protocol and
\[U_{0}^{s}(t) = \prod_{k}e^{2ih_{0}t\psi_{k}^{\dagger}\sigma_{z}\psi_{k}/\hbar} \quad t\leq T/2, \tag{12}\] \[= \prod_{k}e^{2ih_{0}(T-t)\psi_{k}^{\dagger}\sigma_{z}\psi_{k}/ \hbar}\quad t>T/2.\]
for the square pulse drive protocol. Note that \(U_{0}(T)=I\) for both continuous and square-pulse protocols which indicates that \(H_{F}^{(0)}=0\).
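As a quick numerical check of this statement, the zeroth-order propagators of Eqs. (11) and (12) can be evaluated directly for a single mode; the short sketch below (assuming \(\hbar=1\) and illustrative values of \(h_{0}\) and \(\omega_{D}\)) confirms that \(U_{0}(T)=I\) for both protocols.

```python
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0])          # Pauli sigma_z in the single-mode 2x2 basis
h0, wD = 10.0, 5.0                 # illustrative drive amplitude and frequency (hbar = 1)
T = 2 * np.pi / wD                 # drive period

def U0_continuous(t):
    # Eq. (11): U_0^c(t) = exp[-2i h0 sin(wD t) sigma_z / wD]
    return expm(-2j * h0 * np.sin(wD * t) * sz / wD)

def U0_square(t):
    # Eq. (12): piecewise evolution for the square-pulse protocol
    if t <= T / 2:
        return expm(2j * h0 * t * sz)
    return expm(2j * h0 * (T - t) * sz)

for U in (U0_continuous(T), U0_square(T)):
    assert np.allclose(U, np.eye(2))   # U_0(T) = I, hence H_F^{(0)} = 0
```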
The first-order contribution to the Floquet Hamiltonian can be computed perturbatively using the standard prescription of FPT [76; 17]. One obtains
\[U_{1}^{c(s)}(T) = \frac{-i}{\hbar}\int_{0}^{T}dt\,[U_{0}^{c(s)}(t)]^{\dagger}H_{1}U_ {0}^{c(s)}(t) \tag{13}\]
This can be evaluated in a straightforward manner following Ref. [76] and leads to
\[H_{F}^{(1),c(s)} = i\hbar U_{1}^{c(s)}(T)/T\] \[= \sum_{k}\psi_{k}^{\dagger}[S_{1k}\sigma_{z}+(S_{2k}\sigma^{+}+ \mathrm{h.c.})]\psi_{k}\] \[S_{1k} = 2(h_{1}-\cos(k)+i\gamma/2), \tag{14}\] \[S_{2k} = 2f^{c}(T)\sin k,\quad\mathrm{cosine\,protocol}\] \[= 2f^{s}(T)\sin k\ e^{-ih_{0}T/\hbar},\quad\mathrm{square\,pulse \,protocol}\]
where
\[f^{c}(T) = J_{0}\left(4h_{0}/(\hbar\omega_{D})\right)\] \[f^{s}(T) = \frac{\hbar\sin\left(h_{0}T/\hbar\right)}{h_{0}T} \tag{15}\]
The higher-order corrections to the Floquet Hamiltonian have been computed in Ref. [76]. These terms, at \(p^{\mathrm{th}}\) order in perturbation theory, are typically suppressed by a factor of \(1/\omega_{D}^{p-1}\) compared to \(H_{F}^{(1)}\). For \(p=2\), such terms merely change the form of \(S_{1k}\) and \(S_{2k}\); they do not alter the structure of \(H_{F}\). An explicit computation carried out in Ref. [76] shows that
\[S_{1k} = (\alpha_{1k}+i\gamma)\quad S_{2k}=\Delta_{k}(\alpha_{2k}+i\gamma\lambda) \tag{10}\]
where \(\alpha_{k}=2(h_{1}-\cos k)\), \(\Delta_{k}=2\,\sin k\) and
\[\alpha_{1k} = \alpha_{k}-2\Delta_{k}^{2}\sum_{n=0}^{\infty}\frac{J_{0}\left( \frac{4h_{0}}{\hbar\omega_{D}}\right)J_{2n+1}\left(\frac{4h_{0}}{\hbar\omega_{ D}}\right)}{(n+1/2)\hbar\omega_{D}}\] \[\alpha_{2k} = J_{0}\left(\frac{4h_{0}}{\hbar\omega_{D}}\right)+\alpha_{k}\lambda\] \[\lambda = 2\sum_{n=0}^{\infty}\frac{J_{2n+1}\left(\frac{4h_{0}}{\hbar \omega_{D}}\right)}{(n+1/2)\hbar\omega_{D}} \tag{11}\]
These expressions for \(S_{1k}\) and \(S_{2k}\) are used for computing second-order perturbative results for the correlation matrix in the main text.
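The Bessel-function sums above converge rapidly and are simple to evaluate numerically. The following is a minimal sketch (with \(\hbar=1\), arbitrary illustrative parameter values, and the sums truncated at a finite order) of how \(S_{1k}\) and \(S_{2k}\), including the second-order corrections, can be computed.

```python
import numpy as np
from scipy.special import jv   # Bessel function of the first kind J_n(x)

hbar = 1.0
h0, h1, wD, gamma = 10.0, 0.1, 20.0, 0.05   # illustrative parameters
x = 4 * h0 / (hbar * wD)
n = np.arange(50)                            # truncation of the rapidly converging sums

lam = 2 * np.sum(jv(2 * n + 1, x) / ((n + 0.5) * hbar * wD))

def second_order_coeffs(k):
    alpha_k = 2 * (h1 - np.cos(k))
    Delta_k = 2 * np.sin(k)
    alpha_1k = alpha_k - 2 * Delta_k**2 * np.sum(
        jv(0, x) * jv(2 * n + 1, x) / ((n + 0.5) * hbar * wD))
    alpha_2k = jv(0, x) + alpha_k * lam
    S1k = alpha_1k + 1j * gamma
    S2k = Delta_k * (alpha_2k + 1j * gamma * lam)
    return S1k, S2k

print(second_order_coeffs(np.pi / 3))
```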
## Appendix B Exact \(H_{f}\) for the square pulse protocol
The evolution operator for the square-pulse drive protocol given by Eq. 3 of the main text is
\[U_{k} = \prod_{k}e^{-iH_{k}^{+}T/(2\hbar)}e^{-iH_{k}^{-}T/(2\hbar)}=\prod _{k}e^{-iH_{Fk}T/\hbar}\] \[H_{k}^{\pm} = \theta_{k}^{\pm}\left(r_{zk}^{\pm}\tau_{z}+r_{xk}^{\pm}\tau_{x}\right) \tag{12}\]
where
\[\theta_{k}^{\pm} = 2\sqrt{\left(h_{1}\pm h_{0}+i\gamma/2-\cos k\right)^{2}+\sin^{2}k}\] \[r_{zk}^{\pm} = \frac{2(h_{1}\pm h_{0}+i\gamma/2-\cos k)}{\theta_{k}^{\pm}},\ r_{ xk}^{\pm}=\frac{2\sin k}{\theta_{k}^{\pm}} \tag{13}\]
and we have dropped the subscript \(s\), used in the main text to indicate the square-pulse protocol, here and in the rest of this section.
The expression of \(H_{Fk}\) can be obtained in a straightforward manner. To express this concisely, we define a set of new unit vectors \(\vec{n}_{k}=(n_{xk},n_{yk},n_{zk})^{T}\) which are related to \(r_{xk}^{\pm}\) and \(r_{zk}^{\pm}\) by
\[n_{zk}\sin(\alpha_{k}T) = r_{zk}^{-}\cos(\theta_{k}^{+}T/(2\hbar))\sin(\theta_{k}^{-}T/(2 \hbar))+r_{zk}^{+}\cos(\theta_{k}^{-}T/(2\hbar))\sin(\theta_{k}^{+}T/(2\hbar))\] \[n_{yk}\sin(\alpha_{k}T) = \sin(\theta_{k}^{-}T/(2\hbar))\sin(\theta_{k}^{+}T/(2\hbar))(r_{ xk}^{-}r_{zk}^{+}-r_{zk}^{-}r_{xk}^{+})\] \[n_{xk}\sin(\alpha_{k}T) = \cos(\theta_{k}^{+}T/(2\hbar))\sin(\theta_{k}^{-}T/(2\hbar))r_{xk} ^{-}+\cos(\theta_{k}^{-}T/(2\hbar))\sin(\theta_{k}^{+}T/(2\hbar))r_{xk}^{+}\] \[\cos(\alpha_{k}T) = \cos(\theta_{k}^{-}T/(2\hbar))\cos(\theta_{k}^{+}T/(2\hbar))- \left(r_{zk}^{-}r_{zk}^{+}+r_{xk}^{-}r_{xk}^{+}\right)\sin(\theta_{k}^{-}T/(2 \hbar))\sin(\theta_{k}^{+}T/(2\hbar)) \tag{14}\]
A few lines of straightforward manipulations show that the Floquet Hamiltonian can be expressed in terms of these unit vectors and \(\alpha_{k}\) as
\[H_{Fk} = \hbar\alpha_{k}\vec{n}_{k}\cdot\vec{\sigma} \tag{15}\]
In the large drive amplitude regime where \(h_{0}\gg h_{1},J\), we find \(n_{yk}\to 2\sin k(1-\cos(2h_{0}T/\hbar))/(2h_{0}T\alpha_{k})\), \(n_{zk}\to 2(h_{1}-\cos k+i\gamma/2)/(\alpha_{k}\hbar)\), and \(n_{xk}\to 2\sin k\sin(2h_{0}T/\hbar)/(2h_{0}T\alpha_{k})\). Thus we recover Eq. 7 of the main text.
The Floquet eigenvalues and eigenvectors can be obtained from Eq. 15 via solution of \(H_{Fk}\Psi_{k}^{\pm}=E_{k}^{\pm}\Psi_{k}^{\pm}\). This leads to
\[E_{k}^{\pm} = \pm\hbar\alpha_{k},\quad\psi_{k}=\begin{pmatrix}u_{k}^{\pm}\\ v_{k}^{\pm}\end{pmatrix} \tag{16}\] \[\begin{pmatrix}u_{k}^{\pm}\\ v_{k}^{\pm}\end{pmatrix} = \begin{pmatrix}n_{zk}\pm 1\\ n_{xk}+in_{yk}\end{pmatrix}\frac{1}{\sqrt{\left|n_{zk}\pm 1\right|^{2}+\left|n_{ xk}+in_{yk}\right|^{2}}}\]
The wavefunction after \(n\) cycles of the drive can therefore be written, in terms of \(u_{k}^{\pm}\) and \(v_{k}^{\pm}\) as given in Eq. 12 of the main text. Here we note that the steady state wavefunction corresponds to
\[\left|\psi_{k}^{\rm steady}\right\rangle=\frac{\left(u_{k}^{+(-)}+v_{k}^{+(-)}c_{k}^{\dagger}c_{-k}^{\dagger}\right)|0\rangle}{\sqrt{|u_{k}^{+(-)}|^{2}+|v_{k}^{+(-)}|^{2}}} \tag{17}\]
if \({\rm Im}[E_{k}]=\Gamma_{k}>0(<0)\). Thus the steady state changes
with sign change of \(\Gamma_{k}\). Combining these results, we obtain the final form of the steady states used in the main text
\[|\psi^{\rm steady}\rangle = \prod_{k>0}\Big{(}\,u_{k}^{\rm steady}|0\rangle+v_{k}^{\rm steady }\;\hat{c}_{k}^{\dagger}\hat{c}_{-k}^{\dagger}|0\rangle\Big{)}\] \[u_{k}^{\rm steady} = \frac{S_{zk}+E_{k}\;{\rm sgn}(\Gamma_{k})}{C_{k}}\quad v_{k}^{ \rm steady}=\frac{S_{xk}+iS_{yk}}{C_{k}}\] \[S_{jk} = \alpha_{k}n_{jk}\hbar,\;C_{k}=\sqrt{|u_{k}^{\rm steady}|^{2}+|v_ {k}^{\rm steady}|^{2}}\,. \tag{100}\]
A similar procedure can be carried out for obtaining the steady state corresponding to the continuous drive protocol. However, for such a protocol, analytic expressions for Floquet eigenstates and eigenvalues can not be obtained; in the main text, we have obtained these quantities numerically and used them to obtain the exact steady state wavefunctions.
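The construction above can also be checked numerically without using the explicit expressions for \(\vec{n}_{k}\): one builds \(U_{k}\) from matrix exponentials and extracts \(H_{Fk}\) from a matrix logarithm. The following is a minimal sketch of such a cross-check (with \(\hbar=1\) and arbitrary illustrative parameters; it is not the code used for the results in the main text), which then selects the steady state according to the sign of \(\Gamma_{k}\).

```python
import numpy as np
from scipy.linalg import expm, logm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

h0, h1, gamma, T = 10.0, 0.1, 0.05, 0.3   # illustrative parameters, hbar = 1

def H_pm(k, sign):
    # H_k^{+/-} = 2(h1 +/- h0 + i*gamma/2 - cos k) sigma_z + 2 sin(k) sigma_x  (Eqs. 12-13)
    return 2 * (h1 + sign * h0 + 1j * gamma / 2 - np.cos(k)) * sz + 2 * np.sin(k) * sx

def floquet_hamiltonian(k):
    # U_k = exp(-i H_k^+ T/2) exp(-i H_k^- T/2) = exp(-i H_Fk T)
    Uk = expm(-1j * H_pm(k, +1) * T / 2) @ expm(-1j * H_pm(k, -1) * T / 2)
    return 1j * logm(Uk) / T

def steady_state(k):
    HFk = floquet_hamiltonian(k)
    # S_jk = hbar*alpha_k*n_jk are the Pauli components of H_Fk
    Sx, Sy, Sz = (0.5 * np.trace(HFk @ s) for s in (sx, sy, sz))
    Ek = np.sqrt(Sx**2 + Sy**2 + Sz**2)      # E_k^+ = hbar*alpha_k (complex)
    sgn = np.sign(Ek.imag)                    # sign of Gamma_k selects the steady state
    u0, v0 = Sz + sgn * Ek, Sx + 1j * Sy      # unnormalized steady-state amplitudes
    C = np.sqrt(abs(u0) ** 2 + abs(v0) ** 2)
    return u0 / C, v0 / C

print(steady_state(np.pi / 3))
```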
## Appendix C Szego's theorem and the absence of volume law scaling
In this appendix, we show the absence of volume law scaling for large \(\ell\) using Szego's strong limit theorem [80]. We note that this analysis ignores the jump singularities of \(\Pi_{xk}\) that are separately analyzed in Sec. II.2 of the main text.
We begin by noting that \(\hat{\Pi}_{k}=(\Pi_{xk},\Pi_{yk},\Pi_{zk})\) satisfies
\[\Big{|}\;\hat{\Pi}_{k}\;\Big{|}^{2} = 1\] \[{\rm Det}\left(\hat{\Pi}_{k}\cdot\vec{\sigma}\right) = -\Big{|}\;\hat{\Pi}_{k}\;\Big{|}^{2}=-1 \tag{101}\]
This property follows from the normalization of the driven wavefunction as discussed in Sec. II.2. We also note that the Fourier transform of \(\hat{\Pi}(k)\), defined as,
\[\Pi_{\ell} = \int_{-\pi}^{+\pi}\frac{dk}{2\pi}\;e^{-ik\ell}\;\hat{\Pi}(k)\cdot\hat {\sigma} \tag{102}\]
serves as the matrix-valued elements of the correlation matrix; in terms of them one obtains
\[G_{\ell} = \left(\begin{array}{cccc}\Pi_{0}&\Pi_{-1}&...&\Pi_{1-\ell}\\ \Pi_{1}&\Pi_{0}&...&\Pi_{2-\ell}\\...&...&...&...\\ \Pi_{\ell-1}&\Pi_{\ell-2}&...&\Pi_{0}\end{array}\right), \tag{103}\]
Thus the correlation functions lead to a block Toeplitz matrix.
For later convenience, we next introduce the function \({\cal D}_{\ell}(\lambda)\) given by
\[{\cal D}_{\ell}(\lambda)\equiv\lambda I-G_{\ell} \tag{104}\]
where \(I\) denotes the identity matrix. In addition, following standard procedure in the literature [78; 79], we also define the function \(e(x,\lambda)\)
\[e(x,\lambda)\equiv-\frac{x+\lambda}{2}\ln\frac{x+\lambda}{2}-\frac{x-\lambda} {2}\ln\frac{x-\lambda}{2} \tag{105}\]
It is well known that in terms of this function the von-Neumann entanglement entropy can be written as \(S_{\ell}=\sum_{m=-\ell}^{\ell}e(1,\nu_{m})\) where \(\nu_{m}\) denote the eigenvalues of \(G_{\ell}\).
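As a concrete illustration of this prescription, the minimal sketch below assembles \(G_{\ell}\) from Eq. (103) for a placeholder unit vector \(\hat{\Pi}(k)\) (in the actual problem \(\hat{\Pi}(k)\) is fixed by the steady state), evaluates the momentum integral of Eq. (102) by a Riemann sum, and computes \(S_{\ell}\) from the eigenvalues of \(G_{\ell}\). Normalization conventions for the correlation matrix differ between references, so only the dependence on \(\ell\) should be read off from such a sketch.

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.diag([1.0 + 0j, -1.0])]

def e_func(x, lam):
    # e(x, lambda) of Eq. (105), with the convention 0*ln(0) = 0
    def xlogx(y):
        y = np.clip(y, 0.0, None)
        return np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0)), 0.0)
    return -xlogx((x + lam) / 2) - xlogx((x - lam) / 2)

def pi_hat(k):
    # placeholder unit vector; in the actual problem it is determined by the steady state
    theta = 0.8 * np.sin(k)
    return np.array([np.sin(theta), 0.0, np.cos(theta)])

def entanglement_entropy(ell, nk=2048):
    ks = 2 * np.pi * np.arange(nk) / nk
    pk = np.array([pi_hat(k) for k in ks])                       # shape (nk, 3)
    def Pi(l):
        # Pi_l = \int dk/(2 pi) e^{-ikl} Pi_hat(k).sigma   (Eq. 102), as a Riemann sum
        coeffs = (np.exp(-1j * ks * l)[:, None] * pk).mean(axis=0)
        return coeffs[0] * sigma[0] + coeffs[1] * sigma[1] + coeffs[2] * sigma[2]
    G = np.block([[Pi(i - j) for j in range(ell)] for i in range(ell)])   # Eq. (103)
    nu = np.linalg.eigvalsh(G)                                   # real eigenvalues, in +/- pairs
    return float(np.sum(e_func(1.0, nu)))

print([round(entanglement_entropy(l), 3) for l in (2, 4, 8)])
```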
Next, we note that from Eq. (104) we can obtain
\[{\rm Det}\left({\cal D}_{\ell}(\lambda)\right)=\prod_{m=1}^{\ell}(\lambda^{2}- \nu_{m}^{2}) \tag{106}\]
where we have used the property \(\nu_{m}=-\nu_{-m}\). Using Cauchy's residue theorem and Eq. 106 we can therefore express \(S_{\ell}\) as a complex integral as
\[{\cal S}_{\ell}=\frac{1}{4\pi i}\oint_{\cal C}\;d\lambda\,e(1,\lambda)\frac{d} {d\lambda}\ln\left[{\rm Det}\left({\cal D}_{\ell}(\lambda)\right)\right] \tag{107}\]
where \({\cal C}\) is the contour surrounding the zeros of \({\rm Det}\left[{\cal D}_{\ell}(\lambda)\right]\).
To obtain the linear term in \(S_{\ell}\), we first define
\[{\cal A}_{k}=\lambda I-\hat{\Pi}_{k} \tag{108}\]
and use Szego's strong limit theorem. In the leading order this theorem allows one to write [80]
\[{\cal S}_{\infty}(\ell)\approxeq\frac{\ell}{8\pi^{2}i}\;\oint_{\cal C}\;d \lambda\,e(1,\lambda)\;\int_{0}^{2\pi}\;dk\;\frac{d}{d\lambda}\ln\left[{\rm Det }\left({\cal A}_{k}\right)\right] \tag{109}\]
Using Eq. 101, we find that \({\rm Det}({\cal A}_{k})=\lambda^{2}-1\). Using this, Eq. (109) gives us
\[{\cal S}_{\rm steady}(\ell) \approxeq\frac{\ell}{8\pi^{2}i}\oint_{\cal C}\;d\lambda\,e(1, \lambda)\int_{0}^{2\pi}\;dk\;\frac{d}{d\lambda}\left[\ln(\lambda-1)+\ln( \lambda+1)\right] \tag{110}\] \[= \frac{\ell}{8\pi^{2}i}\oint_{\cal C}\;d\lambda\,e(1,\lambda)\int_ {0}^{2\pi}\;dk\;\left[\frac{1}{\lambda-1}+\frac{1}{\lambda+1}\right]\] \[= \frac{\ell}{4\pi i}\oint_{\cal C}\;d\lambda\,\left[\frac{e(1, \lambda)}{\lambda-1}+\frac{e(1,\lambda)}{\lambda+1}\right]=0\]
where the last line follows because \(e(1,\lambda)\) vanishes at the simple poles \(\lambda=\pm 1\), so that both residues vanish.
Thus we arrive at the conclusion that the coefficient of the linear term vanishes. This indicates that \(S_{\ell}\) can show either a logarithmic dependence on \(\ell\) or an area-law behavior (\(S_{\ell}\) being a constant) for large \(\ell\). We show in the main text using a Fisher-Hartwig type analysis that the former behavior follows from the jump singularities of \(\Pi_{xk}\) at small \(\gamma\) while the latter is seen at large \(\gamma\) where such jumps are absent.
## Appendix D Fisher-Hartwig singularities of \(\Pi_{xk}\)
In this section, we sketch the derivation of \(\mathcal{D}_{\ell}\) starting from the expression of \(\Pi_{xk}^{(1)}\) (Eq. 19) in the main text. We first sketch this derivation for \(\delta=1\), where the norm of \(\Pi_{xk}\) is unity. In the rest of the appendix, we shall use the Brillouin zone \(0\leq k\leq 2\pi\) instead of \(-\pi\leq k\leq\pi\). The singularities therefore occur at \(k=k^{*}\) and \(2\pi-k^{*}\). This allows us to write
\[\tilde{\Pi}_{k}^{(1)} = g(k)\;\sigma^{x}\quad 0\leq k<2\pi\] \[\zeta_{k} = \lambda-g(k) \tag{101}\]
where \(g(k)=\pm 1\). We perform the calculations in the standard form by working on the unit circle \(S^{1}\): \(z=e^{ik}\). The generator, \(\zeta(z)\) contains Fisher-Hartwig jump singularities as shown schematically in Fig. 8; hence one can apply the Fisher-Hartwig conjecture to find the asymptotic value of the Toeplitz determinant \(\mathcal{D}_{\ell}(\lambda)\) which in turn will allow us to obtain the asymptotic analytic form of \(S_{\ell}\).
To this end, we express the generator \(\zeta(z)\) in the Fisher-Hartwig form. We denote \(z_{j}=e^{ik_{j}}\), where \(k_{j}=0,k^{*},\pi,(2\pi-k^{*})\), leading to \(z_{j}=1,z^{*},-1,1/z^{*}\), where \(z^{*}=\exp[ik^{*}]\), to be the positions of the jump discontinuities. We note that the discontinuities at \(z=\pm 1\) are spurious as explained in the main text. We divide the unit circle \(S^{1}\) into four regions as shown in Fig. 8. In each region, the generator \(\zeta(z)\) has a constant value \(\lambda\pm 1\).
We also define \(g_{z_{j}\beta_{j}}(z)\;\text{for}\;j=1,2,3\), which change across the discontinuity points, and \(g_{z_{0}\beta_{0}}\), which remains the same over the entire unit circle; this allows us to choose \(g_{z_{0}\beta_{0}}=\exp[-i\pi\beta_{0}]\). The other \(g_{z_{j}\beta_{j}}(z)\) have the following structure.
\[g_{z_{j},\beta_{j}}(z) = e^{-i\pi\beta_{j}}\quad 0<k<k_{j} \tag{102}\] \[= e^{i\pi\beta_{j}}\quad k_{j}<k<2\pi\]
In terms of this function the generator takes the following form
\[\zeta(z) = e^{V(z)}\left(\frac{z}{z_{0}}\right)^{\beta_{0}}\left(\frac{z}{z _{1}}\right)^{\beta_{1}}\left(\frac{z}{z_{2}}\right)^{\beta_{2}}\left(\frac{z }{z_{3}}\right)^{\beta_{3}} \tag{103}\] \[\times e^{-i\pi\beta_{0}}g_{z_{1}\beta_{1}}(z)g_{z_{2}\beta_{2}}(z)g_{ z_{3}\beta_{3}}(z)\]
In order to find the functional form of the functions \(g_{z_{j}\beta_{j}}(z)\), one needs to solve a set of coupled equations. It is clear from Fig. 8 that the condition \(\zeta(z)=\lambda\pm 1\) necessitates \(V(z)=\ln\eta_{0}(\lambda)\) to be independent of \(z\) and
\[\beta_{0}+\beta_{1}+\beta_{2}+\beta_{3}=0 \tag{104}\]
With these conditions the generator takes the following form
\[\zeta(z) = \eta_{0}(\lambda)\prod_{j=1}^{3}z_{j}^{\beta_{j}}e^{-i\pi\beta_{ 0}}g_{z_{1}\beta_{1}}(z)g_{z_{2}\beta_{2}}(z)g_{z_{3}\beta_{3}}(z)\]
With this form of \(\zeta(z)\), we now seek the solution of the equations
\[\zeta(z) = \lambda\pm 1 \tag{106}\]
as shown schematically in Fig. 8. The solutions of these equations are straightforward and yield
\[\beta_{0} = \beta_{2}=\frac{1}{2\pi i}\ln\left(\frac{\lambda+1}{\lambda-1}\right)\] \[\beta_{1} = \beta_{3}=\frac{1}{2\pi i}\ln\left(\frac{\lambda-1}{\lambda+1}\right)\] \[\eta_{0}(\lambda) = \sqrt{\lambda^{2}-1} \tag{107}\]
The above solution is for \(\text{Sgn}\left(\tilde{h}_{1}f(T)\right)=1\). The solution for \(\text{Sgn}\left(\tilde{h}_{1}f(T)\right)=-1\) can be obtained by exchanging \(\lambda+1\) and \(\lambda-1\). The final expression for \(S_{\ell}\) remains the same in both cases.
The form of \(\zeta(z)\) (Eq. 106) obtained allows us to write the matrix \(G_{\ell}=B_{\ell}\otimes\sigma^{x}\), where \(B_{\ell}\) is given by Eq. 20 in the main text. To this end, we define
\[\mathcal{D}_{\ell}\left(\lambda\right)=\lambda\mathbb{I}_{2\ell\times 2\ell}-G_{\ell} \tag{108}\]
Figure 8: Schematic representation of the jump singularities which separates the Brillouin zone into different regions. The solution of \(\zeta_{k}\) is obtained by equating it to \(\lambda\pm 1\) in these regions as schematically shown. See text for details.
where \(G_{\ell}\) is a \(2\ell\times 2\ell\) Toeplitz matrix. As all the nontrivial correlations are embedded within the \(\ell\times\ell\) Toeplitz matrix \(B_{\ell}\), we compute the asymptotics of \(\mathrm{Det}[\widetilde{\mathcal{D}}_{\ell}(\lambda)]\) in order to obtain the relevant information about the entanglement entropy of the system, where
\[\widetilde{\mathcal{D}}_{\ell}(\lambda)=\lambda\mathbb{I}_{\ell\times\ell}-B_{ \ell}. \tag{107}\]
Following the Fisher-Hartwig conjecture, we can write
\[\mathrm{Det}\left(\widetilde{\mathcal{D}}_{\ell}\left(\lambda\right)\right)=\left(F\left[\eta(\lambda)\right]\right)^{\ell}\,\ell^{-\sum_{i=0}^{3}\beta_{i}^{2}(\lambda)}\,\tilde{\bar{E}} \tag{108}\]
where \(F[\eta(\lambda)]=\eta_{0}(\lambda)\) and \(\tilde{\bar{E}}\) can be expressed in terms of the Barnes G-function [78; 79], whose form is not important for our analysis as long as we restrict ourselves to the leading-order contribution to \(S_{\ell}\). This allows one to write
\[\ln\left(\mathrm{Det}\left(\mathcal{D}_{\ell}\left(\lambda\right) \right)\right) = 2\ln\left(\mathrm{Det}\left(\widetilde{\mathcal{D}}_{\ell}\left( \lambda\right)\right)\right) \tag{109}\] \[= 2[\ell\;\ln\left(F\left[\eta(\lambda)\right]\right)-\sum_{i=1, 3}\beta_{i}^{2}(\lambda)\ln(\ell)]\] \[+\text{sub-leading corrections}\]
The first term, which arises from the non-singular part of \(\zeta_{k}\), does not contribute to \(S_{\ell}\). This is seen by application of Szego's theorem and is detailed in App. C. In the rest of this section, we shall be concerned with the term \(\sim\ln\ell\) in Eq. 109. Note that we have omitted the contribution of \(\beta_{0}\) and \(\beta_{2}\); this amounts to subtracting out the contribution from the spurious singularities at \(k=0,\pi\).
The above-mentioned calculation may easily be repeated for \(\delta\neq 1\). A procedure exactly analogous to the one outlined above requires solving the equations \(\zeta_{k}=\lambda\pm\delta\) and yields
\[\beta_{0} = \beta_{2}=\frac{1}{2\pi i}\ln\left(\frac{\lambda+\delta}{\lambda- \delta}\right)\] \[\beta_{1} = \beta_{3}=\frac{1}{2\pi i}\ln\left(\frac{\lambda-\delta}{\lambda+ \delta}\right)\] \[\eta_{0}(\lambda) = \sqrt{\lambda^{2}-\delta^{2}} \tag{110}\]
Substituting Eq. 110 in Eq. 109, one finally obtains the expression for \(\ln\left[\mathrm{Det}\left(\mathcal{D}_{\ell}\right)\right]\) used in Eq. 21 of the main text.
|
2302.14209 | FerroHEMTs: High-Current and High-Speed All-Epitaxial AlScN/GaN
Ferroelectric Transistors | We report the first observation of ferroelectric gating in AlScN barrier
wide-bandgap nitride transistors. These FerroHEMT devices realized by direct
epitaxial growth represent a new class of ferroelectric transistors in which
the semiconductor is itself polar, and the crystalline ferroelectric barrier is
lattice-matched to the substrate. The FerroHEMTs reported here use the thinnest
nitride high K and ferroelectric barriers to date to deliver the highest on
currents at 4 A/mm, and highest speed AlScN transistors with fmax larger than
150 GHz observed in any ferroelectric transistor. The FerroHEMTs exhibit
hysteretic Id Vgs loops with subthreshold slopes below the Boltzmann limit. A
control AlN barrier HEMT exhibits neither hysteretic, nor sub Boltzmann
behavior. While these results introduce the first epitaxial high K and
ferroelectric barrier technology to RF and mm wave electronics, they are also
of interest as a new material platform for combining memory and logic
functionalities in digital electronics. | J. Casamento, K. Nomoto, T. S. Nguyen, H. Lee, C. Savant, L. Li, A. Hickman, T. Maeda, J. Encomendero, V. Gund, A. Lal, J. C. M. Hwang, H. G. Xing, D. Jena | 2023-02-28T00:08:30Z | http://arxiv.org/abs/2302.14209v1 | # FerroHEMTs: High-Current and High-Speed All-Epitaxial AlScN/GaN Ferroelectric Transistors
###### Abstract
We report the first observation of ferroelectric gating in AlScN barrier wide-bandgap nitride transistors. These FerroHEMT devices realized by direct epitaxial growth represent a new class of ferroelectric transistors in which the semiconductor is itself polar, and the crystalline ferroelectric barrier is lattice-matched to the substrate. The FerroHEMTs reported here use the thinnest nitride high-K and ferroelectric barriers to date to deliver the highest on-currents at 4 A/mm, and highest speed AlScN transistors with \(f_{MAX}>\) 150 GHz observed in any ferroelectric transistor. The FerroHEMTs exhibit hysteretic \(I_{d}-V_{gs}\) loops with subthreshold slopes below the Boltzmann limit. A control AlN barrier HEMT exhibits neither hysteretic, nor sub-Boltzmann behavior. While these results introduce the first epitaxial high-K and ferroelectric barrier technology to RF and mm-wave electronics, they are also of interest as a new material platform for combining memory and logic functionalities in digital electronics.
## I Introduction
The report of ferroelectric sputtered AlScN in 2019 [1] indicated the tantalizing possibility of introducing ferroelectric barriers into GaN RF and mm-wave transistors. One can pursue this either by integrating sputtered layers on epitaxial channels, or by direct epitaxy of AlScN on GaN transistors [2]. Direct epitaxial AlScN on GaN revealed high piezoelectric coefficients [3]. When the leakage was reduced with increased Sc source purity [4], epitaxial hi-K dielectric constants up to 20 were discovered [5]. These epitaxial AlScN layers exhibited a low coercive field of \(\sim\) 1 MV/cm [6] compared to those of non-epitaxial sputtered layers reported in [1]. Other reports by epitaxy showed higher coercive fields [7], and memory functionality [8]. The RF and mm-wave properties of AlScN barrier HEMTs have been studied [9, 10, 11], but ferroelectric transistor behavior has not been reported.
Here we report ferroelectric gating behavior of epitaxial AlScN barrier HEMTs and contrast its ultra high-current, high-speed, and sub-Boltzmann performance to a control HEMT with a non-ferroelectric AlN barrier. The measured FerroHEMT behavior is consistent with a low coercive field of epitaxial AlScN. All measurements presented in this work were performed at room temperature.
## II Epitaxial Growth
Fig. 1(a) shows a test layer structure comprising a 200 nm unintentionally doped (UID) GaN layer grown by MBE followed by a 100 nm AlScN layer with 18% Sc. A polarization-induced 2D electron gas (2DEG) of density \(1.8\times 10^{13}\)/cm\({}^{2}\) and mobility of \(377\) cm\({}^{2}\)/V\(\cdot\)s was observed in this heterostructure by Hall-effect measurement at room temperature. Using a deposited top electrode and the 2DEG as a bottom electrode, and applying positive-up-negative-down (PUND) voltage pulses and tracking the currents, the polarization-electric field (P-E) loops extracted are shown in Fig. 1(b), which indicate that the epitaxial AlScN layer on UID GaN is ferroelectric with a coercive field \(E_{c}\sim 0.9\) MV/cm.
A control AlN/GaN HEMT and a 14% targeted Sc composition AlScN/AlN/GaN FerroHEMT structure were grown by MBE directly on semi-insulating 6H-SiC substrates with a 300 nm AlN nucleation layer and a 1 \(\mu\)m GaN buffer layer using methods reported in [5] to study ferroelectric gating behavior. Figs. 2(a) and (c) show the corresponding energy band diagrams from the top surface, calculated using self-consistent Schrodinger-Poisson solutions. 2DEG channels are expected at the heterojunctions as shown as \(\psi_{1}^{2}\) at respective depths. To form Ohmic contacts to these 2DEGs, lithographically defined etching and regrowth of heavily doped n+ GaN (\(N_{d}\sim\) 10\({}^{20}\)/cm\({}^{3}\) Si) was performed by MBE. Figs. 2(b) and (d) show the measured 2DEG densities, mobilities, and sheet resistances over various dies of the wafer measured by room-temperature Hall effect. The 2DEG densities are consistent with the expected values from the calculation.
## III Device design and fabrication
Figs. 3(a) and (b) show the resistances of metal-regrown n\({}^{+}\)GaN (low) and n\({}^{+}\)GaN-2DEG (moderate, not exceptionally low). Figs. 4(a) and (b) show the control AlN barrier HEMT and the AlScN barrier FerroHEMT cross sections, respectively. Fig. 4(c) shows the representative device process flow of HEMTs with regrown contacts. A SiO\({}_{2}\)/Cr hard mask defined the source/drain regions. Ti/Au source/drain contacts were deposited on the n\({}^{+}\)GaN, and a Ni/Au gate was deposited. An electron beam lithography (EBL) process was performed to fabricate T-gate devices for RF and mm-wave performance. Fig. 4(d) and the inset show SEM images of the final transistor structures. Gate lengths ranged from 90 nm to 18 \(\mu\)m.
## IV Results and discussion
Figs. 5(a) and (b) show the measured \(I_{d}-V_{ds}\) output characteristics and \(I_{d}-V_{gs}\) transfer characteristics of the control AlN/GaN HEMT sample, respectively. An on-current of \(\sim 1.5\) A/mm with an on-resistance of \(R_{on}=1.34\)\(\Omega\)-mm, an on/off ratio of \(10^{6}\) (limited by gate leakage), and a threshold voltage of \(\sim-3.5\) V were observed. Fig. 5(c) shows a peak transconductance of \(\sim 0.5\) S/mm with a barrier thickness of 2 nm GaN + 2 nm AlN. The subthreshold slope shown in Fig. 5(d) indicates very good normal transistor behavior, grazing the ideal Boltzmann limit of \(\sim 60\) mV/decade. The control HEMT thus exhibits excellent performance, as borne out by its RF performance discussed subsequently.
Fig. 6 shows the transistor characteristics when a 5 nm thick epitaxial AlScN barrier layer is added between the AlN and the GaN cap layer, as indicated in Figs. 4(a)-(b). From Fig. 6(a), the maximum \(I_{d}\) still reaches \(\sim 1.5\) A/mm at the same gate voltage, _in spite of a more than double barrier thickness_ of 2 nm GaN + 5 nm AlScN + 2 nm AlN compared to the control sample. This is a result of the high-K dielectric property of AlScN as was reported in [5]. The \(I_{d}-V_{ds}\) curves indicate a higher output conductance, but a far larger difference from the control sample is observed in the transfer characteristics in Fig. 6(b). A counterclockwise (CCW) hysteresis loop develops in the subthreshold characteristics. A sub-Boltzmann steep slope of 23.6 mV/decade is observed for the on\(\rightarrow\)off voltage sweep. Such repeatable loops are observed in multiple devices. This translates to a hysteretic transconductance curve as seen in Fig. 6(c) and drain current as seen in Fig. 6(d). Note that though the peak transconductance is lower than the control HEMT, the hi-K AlScN helps maintain a high value in spite of a more than double barrier thickness.
The hysteresis window of the threshold voltage is between \(1.0-2.0\) V as seen in Figs. 6(c) and (d). These observations are strong signatures of both high-K dielectric, and ferroelectric gating behavior. Based on the AlScN barrier thickness, a voltage drop \(E_{c}\times t_{AlScN}\sim 0.5\) V across the AlScN layer is consistent with a low \(E_{c}\sim 1.0\) MV/cm. A large portion of the gate voltage still drops across the GaN and AlN layers between the gate metal and the 2DEG channel.
Fig. 7(a) shows the hysteresis loop measured on Die 7 of Fig. 2 (d). Several CCW hysteresis loops based on the voltage step of the measurement are shown in Fig. 7(a), and the corresponding subthreshold slopes are plotted in Figs. 7(b) and (c). While the majority of the sub-Boltzmann steep slopes are observed in the leftgoing voltage sweeps, some also appear in the rightgoing sweeps. It is well known that trapping behavior could lead to false conclusions of ferroelectricity. The CCW loops for n-channel devices are strong evidence of ferroelectric FETs. Moreover, the absence of such behavior in the control HEMT sample, and the symmetry, voltage width, and sub-Boltzmann slopes of the FerroHEMT, conclusively indicate ferroelectric gating in the all-epitaxial AlScN/GaN devices.
Fig. 8(a) shows the high-frequency characteristics of the AlN barrier control HEMT. It exhibits excellent cutoff frequencies of \(f_{T}/f_{MAX}=126/304\) GHz for a gate length of \(L_{g}=90\) nm. The device dimensions and bias conditions are indicated in the plot. Fig. 8(b) shows the measured values for the FerroHEMT of similar dimensions but different bias conditions due to the shifted threshold characteristics. It exhibits lower cutoff frequencies of \(f_{T}/f_{MAX}=78/156\) GHz. Even though the values of the AlScN FerroHEMT are lower than those of the control AlN barrier HEMT, they are the highest reported to date for AlScN barrier transistors, as indicated in Fig. 8(c). Moreover, they are the fastest _ferroelectric_ transistors reported to date: a maximum FerroHEMT \(f_{MAX}=168\) GHz is measured. The higher speed of the control sample is due to the (expected) higher maximum \(g_{m,ext}=0.616\) S/mm of the control HEMT compared to \(g_{m,ext}=0.475\) S/mm for the FerroHEMT, indicating higher speed FerroHEMTs are possible with further scaling. Fig. 8(d) shows the scaling of the output current \(I_{d}^{max}\) in the AlScN FerroHEMTs when \(L_{g}\) is scaled from 18 \(\mu\)m to 90 nm. An exceptionally high value of 2.5 A/mm is observed for an _optical_ gate length of 1 \(\mu\)m. The deep submicron EBL gates reach record values of 4 A/mm at 90 nm gate length. These record high on-currents are enabled by the high 2DEG density generated by the large difference in polarization between GaN and the epitaxial AlScN layers.
## V Conclusions and future work
The moderate FerroHEMT channel mobility in Fig. 2(d) can be improved by 3\(\times\) for better RF and mm-wave performance. The high-K value of AlScN will increase the breakdown voltage in FerroHEMTs by reducing the gate leakage [5]. A negative DIBL effect is predicted [12], but this will be achievable with careful geometrical design of future epitaxial FerroHEMTs. Thus FerroHEMTs with the thinnest epitaxial ferroelectric barriers show steep subthreshold slopes and deliver the highest on-currents (4 A/mm) and highest speed (\(f_{MAX}>\) 150 GHz) among all ferroelectric transistors. They present a new class of transistors with the potential to blur the boundaries between memory, logic, and communication devices.
## Acknowledgment
This work was supported in part by the SRC and DARPA through the Joint University Microelectronics Program (JUMP), by the DARPA TUFEN program, and performed at the Cornell NanoScale Facility, an NNCI member supported by NSF Grant No. NNCI-2025233.
|
2309.08546 | Towards Robust Continual Learning with Bayesian Adaptive Moment
Regularization | The pursuit of long-term autonomy mandates that machine learning models must
continuously adapt to their changing environments and learn to solve new tasks.
Continual learning seeks to overcome the challenge of catastrophic forgetting,
where learning to solve new tasks causes a model to forget previously learnt
information. Prior-based continual learning methods are appealing as they are
computationally efficient and do not require auxiliary models or data storage.
However, prior-based approaches typically fail on important benchmarks and are
thus limited in their potential applications compared to their memory-based
counterparts. We introduce Bayesian adaptive moment regularization (BAdam), a
novel prior-based method that better constrains parameter growth, reducing
catastrophic forgetting. Our method boasts a range of desirable properties such
as being lightweight and task label-free, converging quickly, and offering
calibrated uncertainty that is important for safe real-world deployment.
Results show that BAdam achieves state-of-the-art performance for prior-based
methods on challenging single-headed class-incremental experiments such as
Split MNIST and Split FashionMNIST, and does so without relying on task labels
or discrete task boundaries. | Jack Foster, Alexandra Brintrup | 2023-09-15T17:10:51Z | http://arxiv.org/abs/2309.08546v3 | # Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization
###### Abstract
The pursuit of long-term autonomy mandates that robotic agents must continuously adapt to their changing environments and learn to solve new tasks. Continual learning seeks to overcome the challenge of catastrophic forgetting, where learning to solve new tasks causes a model to forget previously learnt information. Prior-based continual learning methods are appealing for robotic applications as they are space efficient and typically do not increase in computational complexity as the number of tasks grows. Despite these desirable properties, prior-based approaches typically fail on important benchmarks and consequently are limited in their potential applications compared to their memory-based counterparts. We introduce Bayesian adaptive moment regularization (BAdam), a novel prior-based method that better constrains parameter growth, leading to lower catastrophic forgetting. Our method boasts a range of desirable properties for robotic applications such as being lightweight and task label-free, converging quickly, and offering calibrated uncertainty that is important for safe real-world deployment. Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments such as Split MNIST and Split FashionMNIST, and does so without relying on task labels or discrete task boundaries.
## I Introduction
A common assumption in machine learning (ML) is that training data is independent and identically distributed (i.i.d). However, in Continual Learning (CL) an ML model is presented with a non-i.i.d stream of sequential tasks. Under such circumstances traditional learning methods suffer from a phenomenon known as catastrophic forgetting, where previously learnt knowledge is lost as a result of model parameters biasing towards recently observed data [1, 2, 3]. Continual learning is a major, long-standing challenge within machine learning [4], and it is naturally entwined with robotics [5]. A canonical example of continual learning is that of a Mars rover needing to adapt to ever changing terrain as it traverses the surface [6]. An equally valid example would be a warehouse packing robot here on Earth, needing to continuously recognise new tools or new products in order to appropriately package them.
There are several approaches to continual learning, such as memory-based, where past experiences are stored in a replay-buffer, and (prior-based) regularization methods, which seek to constrain weight updates based on their importance to previously learned tasks. Regularization methods are attractive as they are biologically accurate [4], do not require additional memory storage, and typically do not grow in computational complexity as the number of tasks increases. These properties make regularization approaches particularly appealing for robotic applications, due to the need to operate in real-time, in the real-world, and often on edge or lightweight computing hardware.
In [6], five desiderata for CL evaluations are introduced which ensure that evaluations are robust and representative of real-world CL challenges. They are as follows: input data from later tasks must bear some resemblance to previous tasks, task labels should not be available at test-time (since if they were then separate models could be trained for each task), all tasks should share an output head (multi-head architectures simplify the problem by making the assumption that the task label is known at test-time), there cannot be unconstrained retraining on old tasks (motivations of CL preclude this), and finally CL evaluations must use more than two tasks. Two experiments that satisfy these requirements are the single-headed Split MNIST [7], and Split FashionMNIST problems. Prior-based methods typically fail to improve upon naive baseline performance in these evaluations, experiencing the same catastrophic forgetting as ordinary supervised learning. In order to excel, these methods require the experimental setup to violate the desiderata (e.g. using multiple heads and test-time task labels). This is a significant barrier to the application of prior-based methods to robotics and other domains.
A further weakness of regularization approaches is they often make strong assumptions about the structure of the data stream. Specifically, it is typically assumed that tasks are discrete (i.e. no overlaps), and that the boundaries between tasks are known. In [8] a different paradigm is noted, where task boundaries are unknown, and where tasks may overlap. This is a much harder problem as task labels cannot be used, and also because taking multiple passes over each task's data is no longer possible. While such experimental conditions are very challenging, they may be more representative of real-world online continual learning challenges, where there is no known structure to the data stream. Bayesian Gradient Descent (BGD) was proposed as
a solution to continual learning with unknown, graduated task boundaries by solving the problem with a closed-form update rule that does not rely on task labels [9]. BGD is insightful and possesses several appealing properties, particularly for robotics. BGD requires no auxiliary memory storage, it does not require task labels, it does not require tasks to be separated discretely, and finally it has calibrated uncertainties which is valuable in safety-critical robotic applications [10]. However, BGD has two key failings: first it is slow to converge and second, BGD fails to solve single-headed split MNIST and split FashionMNIST.
We present Bayesian Adaptive Moment Regularization (BAdam), a novel continual learning method that unifies the closed-form update rule of BGD with properties found in the Adam optimizer [11], maintaining adaptive per-parameter learning rates based on the first and second order moment of the gradient. The consequence of this is improved convergence rates, more effective constraining of parameter updates and therefore, we hypothesize, the learning of parameters that better generalize across multiple tasks. We show empirically that our method makes substantial improvements over previous prior-based approaches on single-headed class-incremental problems.
Finally, there has been no evaluation of methods in an environment that enforces the desiderata from [6], while also featuring graduated task boundaries (where tasks have some overlap as one finishes and another begins), no train-time task labels, and that permits only a single-epoch of training per task. This formulation is more reflective of the scenarios encountered by a robot in the real-world, where the environment often changes gradually, and retraining over multiple epochs is hard due to compute restrictions. Therefore, in this work we introduce this formulation of a continual learning challenge and evaluate regularization methods in such an environment.
## II Related Work
As discussed in [8], there are three primary continual learning scenarios; task-, domain-, and class-incremental learning, differentiated by task label availability (Table I). Task-IL problems are the easiest, as the label is given and thus task-specific components, such as separate heads or sub-networks, may be utilized. Such scenarios are less likely to be present in real-world tasks, due to the uncertainty surrounding the observed datastream. Class-IL scenarios are the most challenging, and class-incremental split MNIST is the simplest benchmark that accurately evaluates the efficacy of CL methods [6]. There exist many approaches to solving CL problems, which can be loosely classified into the following four paradigms: memory-based [12, 13, 14, 15, 16, 17], architectural [18, 19, 20], data-augmentation [21, 22, 23, 24], and regularization. Here, we restrict our review to regularization methods.
Regularization techniques are an abstraction of biological neuro-plasticity [25, 26], reducing parameter update rates proportional to their importance to previous tasks, thereby protecting past knowledge. This is effective as many parameter sets will yield equivalent performance for a given neural network [27, 28], thus constraining parameter updates does not preclude strong solutions being found.
Elastic weight consolidation (EWC) protects knowledge of previous tasks with a quadratic penalty on the difference between the previous task's parameters, and that of the current task [29]. The Fisher information matrix of previous tasks is used to estimate parameter importance. Memory aware synapses (MAS) quantify parameter importance through measuring the sensitivity of the output to perturbations of the weight [30], while synaptic intelligence (SI) calculates importance as a function of a parameter's contribution to the path integral of the gradient vector field along the parameter trajectory [7]. Variational continual learning (VCL) takes a Bayesian approach [31], where online variational inference is leveraged to mitigate catastrophic forgetting. A limitation of all of these methods is that they all assume task boundaries are discrete, and that the current task is known. Task-free continual learning (TFCL) removes the reliance on task labels [32]. Utilising MAS as the underlying CL method, TFCL detects plateaus in the loss surface (indicating learnt tasks), taking subsequent peaks to indicate the introduction of new tasks. This provides an online, label-free means of determining when to update importance weights, however it is somewhat fallible to graduated task boundaries, as the intermingling of tasks may cause sporadic changes in the loss surface.
Bayesian Gradient Descent addresses both graduated task boundaries and no task labels through a closed form update rule derived from online variational Bayes [9]. BGD's theoretical derivation assumes that data arrives in a sequential, online manner, offering a promising route to truly online continual learning. In practice, however, convergence is slow, and many epochs are required to reach optimal performance, violating the conditions described in [8]. Furthermore, like other prior-based methods, BGD fails to solve single-headed class-incremental problems.
Other noteworthy papers include [33], which is not a prior-based method but is a form of hard regularization, constraining the parameter space to be within a specific hyper-rectangle. [34] introduces a variational form of the Adam optimizer [11], which bears some resemblance to our proposed method; however, it is not a continual learning algorithm, nor does it prevent catastrophic forgetting. Finally, we note there have been several works highlighting the need for continual learning in robotics [5, 35, 36].
## III Methods
### _Preliminaries_
BGD attempts to learn the online posterior distribution of model parameters given by:
\[p(\theta|D_{n})=\frac{p(D_{n}|\theta)p(\theta|D_{n-1})}{p(D_{n})} \tag{1}\]
Where \(\theta\) are the model parameters, and \(D_{n}\) is the \(n\)th dataset (task) in the sequence. As calculating the exact posterior is intractable, online variational Bayes is used to approximate it. From [37], a parametric distribution \(q(\theta|\phi)\) is used to approximate a posterior distribution by minimising the Kullback-Leibler divergence:
\[KL(q(\theta|\phi)||p(\theta|D))=-\mathbb{E}_{\theta\sim q(\theta|\phi)}\left[ \text{log}\frac{p(\theta|D)}{q(\theta|\phi)}\right] \tag{2}\]
In online variational Bayes [38], the optimal variational parameters are calculated by:
\[\phi^{*}=\text{arg}\min_{\phi}\int q_{n}(\theta|\phi)\text{log}\frac{q_{n}( \theta|\phi)}{p(\theta|D_{n})}d\theta\]
\[\phi^{*}=\text{arg}\min_{\phi}\mathbb{E}_{\theta\sim q_{n}(\theta|\phi)}[ \text{log}(q_{n}(\theta|\phi))\]
\[-\text{log}(q_{n-1}(\theta))-\text{log}(p(D_{n}|\theta))] \tag{3}\]
The following transformation is defined to arrive at our Bayesian neural network:
\[\theta_{i}=\mu_{i}+\epsilon_{i}\sigma_{i}\]
\[\epsilon_{i}\sim\mathcal{N}(0,1)\]
\[\phi=(\mu,\sigma) \tag{4}\]
Assuming the parametric distribution is Gaussian and that components are independent (mean-field approximation), the optimization problem in 3 can be solved using unbiased Monte Carlo gradients as in [39]. This results in the following closed form updates for the variational parameters \((\mu,\sigma)\):
\[\mu_{t}=\mu_{t-1}-\eta\sigma_{t-1}^{2}\mathbb{E}_{\epsilon}\left[\frac{\partial (-\text{log}(p(D_{n}|\theta)))}{\partial\theta_{t}}\right] \tag{5}\]
\[\sigma_{t}=\sigma_{t-1}\sqrt{1+\left(\frac{1}{2}\sigma_{t-1}\mathbb{E}_{ \epsilon}\left[\frac{\partial(-\text{log}(p(D_{n}|\theta)))}{\partial\theta_ {t}}\epsilon_{t}\right]\right)^{2}}\]
\[-\frac{1}{2}\sigma_{t-1}^{2}\mathbb{E}_{\epsilon}\left[\frac{\partial(-\text {log}(p(D_{n}|\theta)))}{\partial\theta_{t}}\epsilon_{t}\right] \tag{6}\]
Where \(\eta\) is a learning rate. Since \(\frac{\partial(-\text{log}(p(D_{n}|\theta)))}{\partial\theta_{i}}\) depends on \(\mu_{i}\) and \(\sigma_{i}\), the solution is predicted by evaluating this derivative using the prior parameters. Finally, the expectations are estimated using the Monte Carlo method. For a more complete derivation of the method, we direct the reader to the original paper [9].
### _The BAdam Optimizer_
The update rule for \(\mu\) in BGD closely resembles the update rule in stochastic gradient descent, with the key difference being the variance weighting, which leads to smaller updates for highly certain parameters. This is the mechanism through which BGD lowers plasticity and prevents forgetting. To alleviate the catastrophic forgetting experienced by BGD, we propose a new update rule for \(\mu\) which leads to significantly less plasticity. This update rule is given in equation 7.
\[m_{t}=\frac{\beta_{1}\cdot m_{t-1}+(1-\beta_{1})\cdot\mathbb{E}_{\epsilon} \left[\frac{\partial L_{n}(\theta)}{\partial\theta_{t}}\right]}{1-\beta_{1}^{ t}}\]
\[v_{t}=\frac{\beta_{2}\cdot v_{t-1}+(1-\beta_{2})\cdot\left(\mathbb{E}_{\epsilon} \left[\frac{\partial L_{n}(\theta)}{\partial\theta_{t}}\right]\right)^{2}}{1- \beta_{2}^{t}}\]
\[\mu_{t}=\mu_{t-1}-\eta\sigma^{2}\cdot\frac{m_{t}}{\sqrt{v_{t}}+\gamma} \tag{7}\]
where \(L_{n}(\theta)=-\text{log}(p(D_{n}|\theta))\) is the negative log-likelihood loss, and \(\gamma\) is a small value, typically \(1e-8\). This is derived from the update rule in Adam [11], with \(m_{t}\) being the bias-corrected first moment (mean) of the gradient and \(v_{t}\) being the bias-corrected second moment. The approach introduces a per-parameter learning rate and a momentum characteristic, which facilitates a greater robustness to saddle points and areas of small curvature in the loss surface, as well as faster convergence [11].
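To make the update concrete, the following is a minimal PyTorch-style sketch of a single BAdam step. It assumes a `closure(theta)` that returns the negative log-likelihood for one Monte Carlo draw of the (flattened) weights, uses the standard Adam bias-correction convention for \(m_{t}\) and \(v_{t}\), and is intended only as an illustration of Eqs. (6) and (7), not as a reproduction of the implementation used for the experiments.

```python
import torch

def badam_step(mu, sigma, m, v, t, closure,
               eta=0.05, beta1=0.9, beta2=0.999, gamma=1e-8, n_mc=10):
    """One BAdam update for a flat parameter vector with variational parameters (mu, sigma)."""
    g_mean = torch.zeros_like(mu)       # Monte Carlo estimate of E_eps[dL/dtheta]
    g_eps_mean = torch.zeros_like(mu)   # Monte Carlo estimate of E_eps[dL/dtheta * eps]
    for _ in range(n_mc):
        eps = torch.randn_like(mu)
        theta = (mu + eps * sigma).detach().requires_grad_(True)
        loss = closure(theta)           # negative log-likelihood L_n(theta)
        g, = torch.autograd.grad(loss, theta)
        g_mean += g / n_mc
        g_eps_mean += g * eps / n_mc
    # Adam-style first and second moments of the Monte Carlo gradient (Eq. 7)
    m = beta1 * m + (1 - beta1) * g_mean
    v = beta2 * v + (1 - beta2) * g_mean ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    mu = mu - eta * sigma ** 2 * m_hat / (v_hat.sqrt() + gamma)
    # Closed-form variance update inherited from BGD (Eq. 6)
    a = 0.5 * sigma * g_eps_mean
    sigma = sigma * torch.sqrt(1 + a ** 2) - sigma * a
    return mu, sigma, m, v
```

In a full implementation, one \((\mu,\sigma)\) pair is maintained for every network weight, and the Monte Carlo loop above is shared across all parameters within each forward/backward pass, as in BGD.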
The benefits of BAdam's \(\mu\) update rule, in a continual learning context, are two-fold. Firstly, recent work has shown that Adam leads to considerably less plasticity than SGD [40]. While this can be a negative as it can limit learning in later tasks, we propose that this reduced plasticity could play a role in reducing catastrophic forgetting. Second, the variance of each parameter (\(\sigma\)) is minimised when \(\mu\) is at an optimal value, and since the plasticity of a parameter is controlled by this \(\sigma\) value, better and faster optimization of \(\mu\) leads to lower update rates for those parameters, which will reduce catastrophic forgetting. To better understand this, we analyse discussion from [9]. Using a Taylor expansion we can see that \(\mathbb{E}_{\epsilon}\left[\frac{\partial L_{n}(\theta)}{\partial\theta_{i}} \epsilon_{i}\right]\) closely approximates the curvature of the loss, for small values of \(\sigma\):
\[\mathbb{E}_{\epsilon}\left[\frac{\partial L_{n}(\theta)}{\partial\theta_{i}} \epsilon_{i}\right]\]
\[=\mathbb{E}_{\epsilon}\left[\left(\frac{\partial L_{n}(\mu)}{\partial\theta_{i}}+\sum_{j}\frac{\partial^{2}L_{n}(\mu)}{\partial\theta_{i}\partial\theta_{j}}\epsilon_{j}\sigma_{j}+O(||\sigma||^{2})\right)\epsilon_{i}\right]\]
\[=\frac{\partial^{2}L_{n}(\mu)}{\partial^{2}\theta_{i}}\sigma_{i}+O(||\sigma||^{2}) \tag{8}\]
where for \(\epsilon_{i}\) and \(\epsilon_{j}\) the following holds: \(\mathbb{E}_{\epsilon}[\epsilon_{i}]=0\) and \(\mathbb{E}_{\epsilon}[\epsilon_{i}\epsilon_{j}]=\delta_{ij}\). The consequence of this is that in areas with positive curvature, such as near local and global minima, the uncertainty will decrease. This is because the right-hand term to be subtracted in equation 6 will be positive, causing a reduction in \(\sigma\). Similarly, uncertainty will increase at points of negative curvature, such as near maxima or saddle points. Due to these characteristics of \(\sigma\), our method introduces key advantages over vanilla BGD. Since the curvature of the loss strongly depends on the value of \(\mu\), faster convergence of \(\mu\) will lead to the faster shrinking of \(\sigma\), which is desirable as it becomes more likely a task will be protected upon the introduction of data from new tasks. Finally, since saddle points are associated with higher uncertainty, and BAdam has the ability to better overcome saddle points, our method can avoid getting stuck in low-certainty states.
Figure 1 further highlights the benefits of BAdam's \(\mu\) update rule. The figure shows the average magnitude for all \(\mu\) parameters in a simple feedforward neural network trained on the SplitMNIST task. Two key observations are immediately obvious: first, BAdam yields far smaller parameter values, which is indicative of lower plasticity since parameters can't grow as quickly. Second, every time a new task is introduced, parameters optimized by BGD experience a sudden drop in magnitude, essentially 'soft resetting' the parameters, which is indicative of catastrophic forgetting. In contrast, parameters optimized by BAdam suffer a proportionally much smaller drop in magnitude, indicating that these parameters are either better protected, or that they occupy a position in parameter space that better generalizes to subsequent tasks. This is the key benefit of BAdam compared to other CL methods: BAdam better impedes the plasticity of model parameters to protect existing knowledge. As will be discussed in section V, BAdam achieves this without sacrificing model performance.
## IV Experiments
All experiments are conducted on an Nvidia RTX 4090 and results for continual learning experiments are averaged over 15 seeds \([1,15]\) (25 seeds, for the graduated experiments). In the first experiment, we compare the convergence properties of BAdam to that of BGD and Adam by training a small neural network with 2 convolutional and 3 fully connected layers for 10 epochs on the CIFAR10 dataset.
Next, BAdam's performance is evaluated against other prior-based methods on the three standard benchmark datasets: SplitMNIST, Split FashionMNIST, and PMNIST, as in [6]. Split MNIST and split FashionMNIST are framed as single-headed class-incremental problems. In split MNIST, the model must successfully classify each digit as in standard MNIST training, however the digits are presented in pairs, sequentially: \([0,1],[2,3],[4,5]\), and so forth. Split FashionMNIST shares the same formulation, but with clothing items. For completeness, we also evaluate on the domain-incremental PMNIST dataset [29], where a model must classify a sequential set of permuted MNIST digits. We evaluate BAdam against BGD, MAS, EWC, SI, and VCL (without an auxiliary coreset). We adopt the same architectures, hyper-parameters, and experimental setup found in [6], with the exception that we do not allow VCL to train for additional epochs on account of being Bayesian, since BGD and BAdam also use Bayesian neural networks but are not afforded additional training time. MAS, BAdam and BGD hyper-parameters were found via a gridsearch (typically for BAdam we found \(\eta\) values in the range \([0.01,0.1]\) and initial values for \(\sigma\) in the range \([0.005,0.02]\) to be most effective). The implementation utilizes PyTorch [41], and the Avalanche continual learning library [42].
Since there exists no standard robotic continual learning benchmark that is class-incremental and satisfies the desiderata of [6], we introduce a novel formulation of the standard SplitMNIST benchmarks that features specific conditions reflective of learning in the real world. Referring to these experiments henceforth as the graduated
Fig. 1: Change in average \(\mu\) magnitude during training. Catastrophic forgetting can be observed in BGD’s parameters, while this is less present for BAdam.
Fig. 2: Probability of a sample being taken from each task every batch for graduated SplitMNIST
experiments, we take the original SplitMNIST and Split FashionMNIST evaluations and impose the following constraints: first, each data sample is seen only once, since storing and extensively training on data samples is challenging for robotics and edge devices. Second, task labels are unavailable to the methods, which is reflective of the real-world where an autonomous agent does not necessarily have access to this meta-information. Finally, tasks may have some small overlap, which is representative of how real-world tasks and environments change gradually over time. The graduated boundaries are implemented by assigning draw probabilities to each task based on the index of the current sample. The probability falls off with the squared distance of the current sample index to the index of the centre point of each task. This creates peaks where a task is near-guaranteed to be selected, which slowly decays into a graduated boundary with the next task. This can be seen in figure 2. Since no task labels are available, regularization is recalculated after every batch. While these experiments bear a strong resemblance to experiments in [9], they are different in that only a single epoch is permitted and also that we restrict the experiments to single-headed class-incremental problems, which was not enforced in [9]. For these tasks, we train a fully-connected neural network with 2 hidden layers of 200 units each, and use the ReLU activation function. We conduct a grid-search for each method's hyper-parameters, optimising for final model performance. In addition to the methods evaluated in the labelled experiments, we also evaluate the Task-Free Continual Learning approach (TFCL), which allows MAS to not require task-labels.
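A possible implementation of the graduated data stream is sketched below. The exact weighting is not specified beyond the description above, so here the draw probability is assumed to fall off with the squared distance to each task's centre, which reproduces the peaked profiles of Fig. 2; the per-task `sample()` call is a hypothetical stand-in for drawing a labelled example from that task.

```python
import numpy as np

def task_probabilities(sample_idx, n_samples, n_tasks, eps=1e-3):
    """Draw probabilities over tasks for the current sample index."""
    centres = (np.arange(n_tasks) + 0.5) * n_samples / n_tasks
    # Weight decays with the squared distance to each task centre, giving
    # near-deterministic draws at the centres and graduated overlaps between tasks.
    weights = 1.0 / ((sample_idx - centres) ** 2 + eps)
    return weights / weights.sum()

def graduated_stream(task_datasets, n_samples, rng=np.random.default_rng(0)):
    n_tasks = len(task_datasets)
    for i in range(n_samples):
        p = task_probabilities(i, n_samples, n_tasks)
        task = rng.choice(n_tasks, p=p)
        yield task_datasets[task].sample()   # hypothetical per-task sampler
```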
## V Results
### _Cifar10 Convergence Analysis_
Figure 3 shows that BAdam converges much faster than BGD, demonstrating the efficacy of the amended update rule for \(\mu\). Furthermore, the convergence rates of BAdam are comparable to Adam, which is a strong property since BAdam utilizes a Bayesian neural network. The ability to converge quickly is of significant importance for all optimization tasks, but this is especially true for those likely to be undertaken on robotic hardware where compute power may be limited.
### _Standard Benchmark Experiments_
For split MNIST (figure 4), no previous prior-based method outperforms the naive baseline, all achieving \(\sim 20\%\) accuracy. In contrast, BAdam makes substantial improvements, reaching over \(40\%\) accuracy, doubling the efficacy of other methods. On the more challenging Split FashionMNIST benchmark, table II shows that existing methods once again perform similarly to the naive baseline, achieving \(\sim 21\%\) accuracy. BAdam improves upon this significantly, reaching \(31\%\) accuracy, being the only method to successfully retain some previous knowledge. These improvements are statistically significant at \(p=0.05\) using a standard t-test. While there remains clear room for improvement, BAdam is the first prior-based method that makes substantial steps towards solving these challenging single-headed class-incremental tasks. On the domain-incremental PMNIST task, BAdam is competitive with all other methods, whereas VCL's poor performance may be attributed to underfitting.
Fig. 4: CL method performance on class-incremental split MNIST
Fig. 5: CL method performance on class-incremental split MNIST with graduated boundaries, single epoch training, no task labels
Fig. 3: Optimizer convergence rate comparison on single-task CIFAR10
### _Graduated Experiments_
As seen in figure 5 and table II, BAdam outperforms all other methods on both tasks, and is considerably better than BGD. BAdam not only reaches superior performance as the number of tasks grows, but it is also the only method with stable and consistent improvement. Other methods achieve stronger intermediate performance, but they fail to maintain that level due to catastrophic forgetting. BAdam is seemingly slower to reach good performance compared to other methods, despite its strong convergence properties, however this is explained by method hyper-parameters being optimized for final performance on all tasks, not intermediate performance. BAdam's performance on SplitMNIST is perfectly preserved between the original and graduated experiments, while Split FashionMNIST is slightly worse, which is likely due to underfitting on the more challenging task. These findings indicate that the method is robust to the additional challenges posed by this experiment. Interestingly, VCL improves when applied to the graduated experiments.
## VI Discussion
BAdam exhibits state-of-the-art performance on single-headed, class-incremental tasks for prior-based methods. BAdam is the strongest method evaluated, both in the discrete, labelled domain, and the graduated, label-free, single epoch domain, and is also the first prior-based method to make convincing improvements over the baseline in these experiments. The ability to retain knowledge on these challenging benchmarks is an important step for prior-based methods, although there is still clear room for improvement.
Many previous contributions to regularization CL have been new parameter importance estimation methods, however existing prior-based approaches do not seem to have inaccurate importance estimation, and thus further contributions in this direction have failed to improve performance on class-incremental scenarios. The contributions in this work are instead focused around improving the convergence properties of BGD, this seems to be a promising avenue forward.
BAdam is promising, however it has two key limitations. Firstly, like BGD, BAdam has two hyper-parameters to optimize, and we found that both of these methods are fairly sensitive to changes in the initialisation value of the standard deviation. Further work on identifying good initialisation values a priori is worthwhile, to help ensure the optimal performance of BAdam may be realised in real-world challenges. Secondly, the primary barrier for progress is the rate at which neural networks can learn more challenging problems. If a model cannot learn a single task in a single epoch (such as complex graph, vision, or reinforcement learning problems), then solving continual learning tasks under such conditions is limited, since the upper bound on performance is unsatisfactory. Future research into fast convergence and concepts such as few-shot learning is important to the progress of online continual learning. Furthermore, since BAdam is a closed-form update rule, if improved learning rates and convergence is reached through advances in stochastic optimization, BAdam will likely need to be further improved with these characteristics.
## VII Conclusion
In this work we present BAdam, which unifies desirable properties of Adam and BGD, yielding a fast-converging continual learning method with no reliance on task labels. The extensions to BGD offered by BAdam are easy to implement, computationally efficient, and built upon familiar concepts from stochastic optimization literature.
We evaluated BAdam alongside a range of regularization CL methods for single-head class-incremental problems in both traditional continual learning conditions, and in a novel experimental setup focused on continual learning in a single epoch, without requiring task labels or a strongly structured data-stream. This is often more reflective of the real-world and is considerably more challenging than other CL environments [8]. We found BAdam to be the most efficacious approach in both setups, being the only prior-based continual learning method to improve upon the naive baseline and more than doubling the performance of other methods in the SplitMNIST task. While further work is required to fully solve class-incremental problems, BAdam takes the first steps towards retaining previous task knowledge in these challenging domains using prior-based methods, laying the groundwork for important future work in this direction. Further work could explore several
avenues. A simple direction would be to explore alternative ways to improve convergence by leveraging other concepts from stochastic optimization. A more interesting direction is to investigate other limitations of prior-based approaches that exist beyond the importance estimation methods.
|
2309.09880 | Error Reduction from Stacked Regressions | Stacking regressions is an ensemble technique that forms linear combinations
of different regression estimators to enhance predictive accuracy. The
conventional approach uses cross-validation data to generate predictions from
the constituent estimators, and least-squares with nonnegativity constraints to
learn the combination weights. In this paper, we learn these weights
analogously by minimizing a regularized version of the empirical risk subject
to a nonnegativity constraint. When the constituent estimators are linear
least-squares projections onto nested subspaces separated by at least three
dimensions, we show that thanks to an adaptive shrinkage effect, the resulting
stacked estimator has strictly smaller population risk than best single
estimator among them, with more significant gains when the signal-to-noise
ratio is small. Here "best" refers to an estimator that minimizes a model
selection criterion such as AIC or BIC. In other words, in this setting, the
best single estimator is inadmissible. Because the optimization problem can be
reformulated as isotonic regression, the stacked estimator requires the same
order of computation as the best single estimator, making it an attractive
alternative in terms of both performance and implementation. | Xin Chen, Jason M. Klusowski, Yan Shuo Tan | 2023-09-18T15:42:12Z | http://arxiv.org/abs/2309.09880v3 | # Error Reduction from Stacked Regressions+
###### Abstract
Stacking regressions is an ensemble technique that forms linear combinations of different regression estimators to enhance predictive accuracy. The conventional approach uses cross-validation data to generate predictions from the constituent estimators, and least-squares with nonnegativity constraints to learn the combination weights. In this paper, we learn these weights analogously by minimizing an estimate of the population risk subject to a nonnegativity constraint. When the constituent estimators are linear least-squares projections onto nested subspaces separated by at least three dimensions, we show that thanks to a shrinkage effect, the resulting stacked estimator has strictly smaller population risk than best single estimator among them. Here "best" refers to an estimator that minimizes a model selection criterion such as AIC or BIC. In other words, in this setting, the best single estimator is inadmissible. Because the optimization problem can be reformulated as isotonic regression, the stacked estimator requires the same order of computation as the best single estimator, making it an attractive alternative in terms of both performance and implementation.
## 1 Introduction
When performing regression, an analyst rarely knows a priori what the true model is, and instead starts by deriving a collection of candidate models \(\hat{\mu}_{1},\hat{\mu}_{2},\ldots,\hat{\mu}_{M}\). Classically, the next step in the process is to select the best model based on criteria such as complexity (e.g., AIC or BIC) or out-of-sample error (e.g., cross-validation). Such procedures come under the umbrella of model selection, and have been extensively studied (Hastie et al., 2009).
If predictive performance is the primary consideration, Wolpert (1992) realized that an alternate approach may be more fruitful. Instead of selecting the best model, one may use predictions from these estimators as inputs for another (combined) model, a scheme he called _stacked generalizations_. Breiman (1996) was able to operationalize Wolpert's stacking idea by restricting the
combined models to be of the form
\[\hat{f}_{\text{stack}}(\mathbf{x})=\sum_{k=1}^{M}\hat{\alpha}_{k}\hat{\mu}_{k}(\mathbf{x}), \tag{1}\]
and learning the weights \(\hat{\alpha}_{1},\hat{\alpha}_{2},\ldots,\hat{\alpha}_{M}\) using cross-validation. Through extensive experiments on real-world and simulated data, he consistently observed the following:
1. The stacked model (1) has lower mean squared error than the single model having lowest cross-validation error.
2. The optimal weights sum to approximately one, despite no explicit constraint to do so.
These initial results have proved to be quite robust and have been reproduced in many different settings. Indeed, stacking has found widespread applications in industry (often going by the name _blending_), and has also been a component of several successful solutions to data science competitions, including some on the platform Kaggle and, perhaps most famously, the Netflix Prize (Koren, 2009). It is therefore unfortunate that the beneficial impact of stacking has yet to be properly understood theoretically. In the statistical community, variations of stacking have been studied under the name of aggregation. However, the current state of the art is only able to bound the excess risk that the stacked model has with respect to the best model selected by an oracle. This does not quite address the empirical observations.
In our paper, we focus on a special setting in order to better illustrate these stylized features of stacking. We will focus on the case when the base estimators comprise a nested sequence of regression models. Such a sequence arises naturally in the context of sieve estimation (series, spline, polynomial, wavelet), stepwise regression, and pruning decision trees (e.g., CART). Hence, it is fairly general. In such a setting, we can rigorously prove both of Breiman's empirical observations for a variant of stacking that estimates the stacking weights via a regularized mean squared error. Our proof relies on a novel connection between the optimization program and isotonic regression. This connection also allows our version of the stacked regression problem to be solved efficiently.
## 2 Preliminaries
Throughout the paper, we consider a nonparametric regression model. Suppose we have collected data \((\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2}),\ldots,(\mathbf{x}_{n},y_{n})\) satisfying
\[y_{i}=f(\mathbf{x}_{i})+\sigma\varepsilon_{i},\quad i=1,2,\ldots,n,\]
where \(y_{i}\in\mathbb{R}\) is the label of the \(i\)-th observation, \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) is the \(d\)-dimensional feature vector of the \(i\)-th observation, \(\varepsilon_{i}\overset{\text{iid}}{\sim}\mathcal{N}(0,1)\) is the \(i\)-th unobserved noise variable following the standard normal distribution, and \(f\) is the unknown regression function to estimate. We consider the fixed design problem where the \(\mathbf{x}_{i}\)'s are deterministic. We also assume that the noise level \(\sigma^{2}\) is known a priori.
For a real-valued function \(f\), we define \(\mathbf{f}=(f(\mathbf{x}_{1}),\ldots,f(\mathbf{x}_{n}))^{\text{T}}\) to be the \(n\times 1\) vector of \(f\) evaluated
at the design points \(\mathbf{X}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})^{\mathrm{T}}\in\mathbb{R}^{n \times d}\). For real-valued functions \(f\) and \(g\), we let
\[\|\mathbf{f}\|^{2}=\frac{1}{n}\sum_{i=1}^{n}\left(f(\mathbf{x}_{i})\right)^{2}, \qquad\langle\mathbf{f},\mathbf{g}\rangle=\frac{1}{n}\sum_{i=1}^{n}f(\mathbf{x }_{i})g(\mathbf{x}_{i})\]
denote the squared empirical norm and empirical inner product, respectively. The response vector \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{n})^{\mathrm{T}}\) is viewed as a relation, defined on the design matrix \(\mathbf{X}\), that associates \(\mathbf{x}_{i}\) with \(y(\mathbf{x}_{i})=y_{i}\). Thus, we write \(\|\mathbf{y}-\mathbf{f}\|^{2}=\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-f(\mathbf{ x}_{i})\right)^{2}\) and \(\langle\mathbf{y},\mathbf{f}\rangle=\frac{1}{n}\sum_{i=1}^{n}y_{i}f(\mathbf{ x}_{i})\). The \(\ell_{0}\) norm of a vector, denoted by \(\|\cdot\|_{\ell_{0}}\), is defined as the number of its nonzero components. We write \((z)_{+}=\max\{0,z\}\) for the positive part of a real number \(z\).
### Nested Regression Models
In this paper, we investigate the problem of approximating a target variable \(y\) by stacking a sequence of least squares projections onto nested subspaces, each corresponding to a different level of approximation. Specifically, we consider a fixed sequence of orthonormal basis functions \(\{\psi_{l}\}_{l\in\mathbb{N}}\), that is, \(\|\psi_{l}\|^{2}=1\) and \(\langle\psi_{l},\psi_{l^{\prime}}\rangle=0\) for all \(l\neq l^{\prime}\). Let \(A_{1}\subseteq A_{2}\subseteq\cdots\subseteq A_{M}\) be a nested sequence of index sets and define the \(k\)-th linear subspace \(\mathcal{A}_{k}\) as \(\mathrm{span}(\{\psi_{l}:l\in A_{k}\})\). The nested structure implies that each subsequent subspace of dimension \(d_{k}=|A_{k}|\) expands the representation space of the previous one. Let \(\hat{\mu}_{k}(\mathbf{x})\) represent the projection of the response values \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{n})^{\mathrm{T}}\) onto the \(k\)-th linear subspace \(\mathcal{A}_{k}\), given explicitly by
\[\hat{\mu}_{k}(\mathbf{x})=\sum_{l\in A_{k}}\langle\mathbf{y},\psi_{l}\rangle \psi_{l}(\mathbf{x}). \tag{2}\]
We define the mean of \(\hat{\mu}_{k}\) by \(f_{k}(\mathbf{x})=\sum_{l\in A_{k}}\langle\mathbf{f},\psi_{l}\rangle\psi_{l}( \mathbf{x})\) and the empirical risk of \(\hat{\mu}_{k}\) by \(R_{k}=\|\mathbf{y}-\hat{\mu}_{k}\|^{2}\). For notational convenience in some of the upcoming expressions, we define \(d_{0}=0\), \(R_{0}=\|\mathbf{y}\|^{2}\), and \(\hat{\mu}_{0}(\mathbf{x})\equiv 0\), corresponding to the _null model_. Without loss of generality, we shall assume that the models are distinct, so that the set inclusions above are strict. In this case, because the representation spaces are nested, we have \(R_{0}\geq R_{1}\geq\cdots\geq R_{M}\), with strict inequality holding almost surely.1 We therefore assume throughout the paper that \(R_{k}<R_{k-1}\) for all \(k\), an event that holds with probability one.
Footnote 1: This is due to the fact that \((n/\sigma^{2})(R_{0}-R_{k})\sim\chi^{2}(d_{k},\theta)\) follows a (continuous) noncentral chi-squared distribution with \(d_{k}\) degrees of freedom and noncentrality parameter \(\theta=n(\|\mathbf{f}\|^{2}-\|\mathbf{f}-\mathbf{f}_{k}\|^{2})/\sigma^{2}\).
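To make the construction concrete, the following minimal Python sketch (NumPy only; the basis, index sets, dimensions, and noise level are illustrative choices of ours, not prescribed above) builds an orthonormal basis in the empirical inner product via a QR factorization and evaluates the nested projections (2); the resulting empirical risks are nonincreasing, as claimed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 12
X = rng.normal(size=(n, d))

# Orthonormalize in the empirical inner product <f, g> = (1/n) sum_i f(x_i) g(x_i):
# columns of Q have unit Euclidean norm, so sqrt(n) * Q has unit empirical norm.
Q, _ = np.linalg.qr(X)
psi = np.sqrt(n) * Q                       # psi[:, l] plays the role of psi_l at the design points

beta = np.zeros(d)
beta[:4] = [3.0, -2.0, 1.5, 0.5]
f = psi @ beta                             # true regression function evaluated at the design points
sigma = 1.0
y = f + sigma * rng.normal(size=n)

# Nested index sets A_1 c A_2 c ... c A_M: here simply the first d_k basis elements.
dims = [2, 5, 8, 12]
mu_hat, R = [], []
for d_k in dims:
    coef = psi[:, :d_k].T @ y / n          # empirical inner products <y, psi_l>
    mu_hat.append(psi[:, :d_k] @ coef)     # projection (2) onto span{psi_l : l in A_k}
    R.append(np.mean((y - mu_hat[-1]) ** 2))

print("empirical risks R_1 >= ... >= R_M:", np.round(R, 3))
```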
### Examples of Nested Regressions
While there are many nested families of regression models that one could potentially stack (e.g., sieve estimators), Breiman focused on two canonical forms that we now describe.
#### 2.2.1 Subset Regressions
Using validation data, perform stepwise deletion (elimination) on a linear model with \(d\) variables, that is, start with all \(d\) variables and then discard the one that reduces the empirical risk the least, one by one, until one is left with a simple linear model. This produces \(d\) (nested) linear models such that the \(k\)-th model has \(d-k+1\) nonzero coefficients. These nested regressions can then be stacked together using the training data.
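The following Python sketch (our own illustration; for simplicity it reuses the same data rather than a held-out validation set, and all names are ours) shows one way backward stepwise deletion produces such a nested sequence of linear models.

```python
import numpy as np

def backward_stepwise(X, y):
    """Greedy backward deletion: return a list of nested active sets,
    ordered from the smallest model to the full model."""
    n, d = X.shape
    active = list(range(d))
    path = [tuple(active)]
    while len(active) > 1:
        best_rss, best_drop = np.inf, None
        for j in active:
            cols = [c for c in active if c != j]
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = np.sum((y - X[:, cols] @ beta) ** 2)
            if rss < best_rss:
                best_rss, best_drop = rss, j
        active.remove(best_drop)
        path.append(tuple(active))
    return path[::-1]          # reversed, so each active set is contained in the next

rng = np.random.default_rng(1)
n, d = 120, 6
X = rng.normal(size=(n, d))
y = 2.0 * X[:, 0] - X[:, 2] + 0.5 * rng.normal(size=n)
for k, subset in enumerate(backward_stepwise(X, y), start=1):
    print(f"model {k}: variables {subset}")
```

Reversing the deletion path orders the models from smallest to largest, so each active set is contained in the next.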
#### 2.2.2 Decision Trees
A decision tree model can be viewed as a least squares projection onto a set of piecewise constant functions. The set of piecewise-constant functions is defined in terms of a (possibly data-dependent) partition of the input space induced by the splits in the tree. The tree output can therefore be written in the form (2), where the basis elements \(\psi_{t}\) are indexed by the internal nodes of the tree and depend only on the induced tree partition; see for example, (Cattaneo et al., 2022, Lemma 2.1).
Nested decision trees serve as a canonical example of nested regressions, since a refinement of a partition naturally defines an ordering. Each tree \(T_{k}\) is then a subtree of its successor: \(T_{1}\leq T_{2}\leq\cdots\leq T_{M}\), where \(T^{\prime}\preceq T\) means that \(T^{\prime}\) can be obtained from \(T\) by pruning. As each tree model is linear, \(d_{k}\) is equal to the number of internal nodes in its corresponding tree \(T_{k}\).
To form the nested sequence of trees in practice, Breiman (1996) recommended growing a large tree \(T_{M}\) with \(M\) terminal nodes (\(M-1\) internal nodes) and then pruning upwards so that \(T_{k}\) is the subtree of \(T_{M}\) having the smallest empirical risk among all subtrees of \(T_{M}\) with \(k\) terminal nodes (\(k-1\) internal nodes). This construction implies that the number of models in the stack could potentially be quite large, as \(M\) can grow polynomially with \(n\).
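As an illustration of this construction, the sketch below uses scikit-learn (our choice of tooling; the paper does not prescribe an implementation) to grow a single large regression tree and then refit it along its cost-complexity pruning path; since every pruned tree is a subtree of the same grown tree, the resulting models are nested in the sense used here.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
n = 400
X = rng.uniform(-1, 1, size=(n, 2))
y = np.sin(3 * X[:, 0]) + 0.3 * rng.normal(size=n)

# Grow one large tree, then refit along its cost-complexity pruning path; each
# larger ccp_alpha yields a subtree of the previous fit, so the models are nested.
grown = DecisionTreeRegressor(max_leaf_nodes=30, random_state=0).fit(X, y)
path = grown.cost_complexity_pruning_path(X, y)

trees = [DecisionTreeRegressor(max_leaf_nodes=30, ccp_alpha=a, random_state=0).fit(X, y)
         for a in np.unique(path.ccp_alphas)]

# Report the nested sequence from smallest to largest (d_k = number of internal nodes).
for t in sorted(trees, key=lambda t: t.get_n_leaves()):
    print(f"leaves = {t.get_n_leaves():2d}, internal nodes d_k = {t.get_n_leaves() - 1}")
```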
### Accounting for Data-adaptive Subspaces
To preserve the integrity of our theory, we require the sequence of subspaces to be statistically independent of the responses. On the other hand, subset regression and decision trees as implemented in practice both select these subspaces data-adaptively, which in the case of decision trees corresponds to selecting splits based on a data-dependent criterion, such as impurity decrease for CART. In order to maintain independence, one may perform data splitting, using a portion of the data to select the subspaces and the rest of the data to fit the stacked model. This assumption, sometimes known as the honesty condition, is often made in the literature for theoretical tractability (Athey and Imbens, 2016).
## 3 Learning the Stacking Weights
For a sequence of weights \(\boldsymbol{\alpha}=(\alpha_{1},\alpha_{2},\ldots,\alpha_{M})\), the empirical risk of the corresponding stacked model is given by
\[R(\boldsymbol{\alpha})=\left\|\mathbf{y}-\sum_{k=1}^{M}\alpha_{k}\hat{ \boldsymbol{\mu}}_{k}\right\|^{2}. \tag{3}\]
### Breiman's Stacking
It is easy to see that directly optimizing (3) leads to a weight vector that is a point mass on the most complex model \(\hat{\mu}_{M}\), and so Breiman (1996) advocated for optimizing the weights with respect to cross-validation error. Even so, optimizing over \(\boldsymbol{\alpha}\) without constraints often leads to stacked models that overfit. Recognizing this, Breiman advocated for some form of regularization. Because the \(\hat{\mu}_{k}\) values are highly correlated--since they aim to predict the same outcome--he initially experimented with ridge constraints. Ultimately, however, he settled on nonnegative constraints, i.e.,
\(\alpha_{k}\geq 0\) for \(k=1,2,\ldots,M\), for their seemingly best performance. Breiman argued that the reason nonnegative weights (that also sum to one) work well is because they make the stacked estimator "interpolating" in the sense that \(\min_{k}\hat{\mu}_{k}(\mathbf{x})\leq\hat{f}_{\text{stack}}(\mathbf{x})\leq \max_{k}\hat{\mu}_{k}(\mathbf{x})\). On the other hand, he did not impose the constraint that the weights had to sum to unity as this seemed to have little effect on performance.
While Breiman used cross-validation to learn the weights of combination, we can quickly see that such an approach is not appropriate for the fixed design setting because the cross-validation error does not give an unbiased estimate of the in-sample error. As such, we will instead study a variant of stacking that optimizes a penalized version of (3) based on model degrees of freedom and dimension. We believe this to be a small departure from Breiman's formulation of stacking, as the asymptotic model selection properties of cross-validation and penalized empirical risk have been shown to be similar under certain conditions (Stone, 1977; Shao, 1997). Because of this, our theory can and indeed does explain some of the empirical findings of Breiman.
### Dimension of a Model
We define the dimension of a function \(\mu\) by
\[\text{dim}(\mu)=\min_{A\subseteq\mathbb{N}}\,\big{\{}|A|:\mu\in\text{span}(\{\psi_{l}:l\in A\})\big{\}},\]
that is, the fewest number of basis elements needed to represent the function. If no such representation exists, then \(\text{dim}(\mu)=\infty\), and if \(\mu\equiv 0\), then \(\text{dim}(\mu)=0\). Using this definition and setting \(\alpha_{0}=1\), it is easy to see that a stacked model of nested regressions with fixed weight vector \(\boldsymbol{\alpha}\) has dimension \(\max_{k=0,1,\ldots,M}\,\{d_{k}:\alpha_{k}\neq 0\}\).
### Degrees of Freedom of a Stacked Model
In fixed design regression, the degrees of freedom of an estimator \(\hat{f}\) is defined as
\[\text{df}(\hat{f})=\sum_{i=1}^{n}\frac{\text{cov}(y_{i},\hat{f}(x_{i}))}{ \sigma^{2}}.\]
It quantifies the optimism bias of the empirical risk with respect to the mean squared error (Hastie et al., 2009, Section 7.5 & 7.7):
\[\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}\|^{2}]=\mathbb{E}[\|\mathbf{y}- \mathbf{\hat{f}}\|^{2}]+\frac{2\sigma^{2}\text{df}(\hat{f})}{n}-\sigma^{2}. \tag{4}\]
Although it is generally a population level quantity, in the case of linear regression, it is equivalent to the number of regressors which is known a priori; in particular, \(\text{df}(\hat{\mu}_{k})=d_{k}\) for each \(k=1,2,\ldots,M\). Moreover, because covariance is linear in each of its arguments, the degrees of freedom for a stacked model with fixed weight vector \(\boldsymbol{\alpha}\) likewise has a known formula. It is equal to
\[\text{df}(\boldsymbol{\alpha})=\sum_{k=1}^{M}\alpha_{k}d_{k}.\]
### Stacking via Complexity Penalization
The identity (4) immediately suggests that an unbiased estimator of the population risk of the stacked model is \(R(\alpha)+(2\sigma^{2}/n)\mathrm{df}(\alpha)\), which we may then optimize to learn the stacking weights. However, a major problem with this estimator is that unbiasedness holds as long as the stacking weights are nonadaptive. So there is no guarantee that \(R(\hat{\alpha})+(2\sigma^{2}/n)\mathrm{df}(\hat{\alpha})\) is an unbiased estimator of the expected population risk of the stacked model with adaptive weights \(\hat{\alpha}\), where \(\hat{\alpha}\) minimizes \(R(\alpha)+(2\sigma^{2}/n)\mathrm{df}(\alpha)\) subject to \(\alpha\geq\mathbf{0}\). In fact, it turns out that
\[\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}_{\mathrm{stack}}\|^{2}]=\mathbb{E} \Bigg{[}R(\hat{\alpha})+\frac{2\sigma^{2}}{n}\mathrm{df}(\hat{\alpha})+\frac{4 \sigma^{2}}{n}\|\hat{\alpha}\|_{\ell_{0}}-\frac{4\sigma^{2}}{n}\hat{\alpha}^{ \mathrm{T}}\hat{\alpha}_{0}-\sigma^{2}\Bigg{]}, \tag{5}\]
where \(\hat{\alpha}_{0}(k)=\sum_{j=1}^{k}\mathbf{1}(\hat{\alpha}_{j}\neq 0)\). It therefore becomes clear that an accurate estimator of the population risk of a stacked model with adaptive weights should involve more than just the empirical risk \(R(\alpha)\) and degrees of freedom \(\mathrm{df}(\alpha)\), and even be discontinuous in the weights. Inspired by this observation, we propose learning the weights via solving the following program:
\[\begin{split}\text{minimize}& R(\alpha)+\frac{2 \tau\sigma^{2}}{n}\mathrm{df}(\alpha)+\frac{(\lambda-\tau)_{+}^{2}}{\lambda} \frac{\sigma^{2}}{n}\mathrm{dim}(\alpha)\\ \text{subject to}&\alpha\geq\mathbf{0},\end{split} \tag{6}\]
where \(\mathrm{dim}(\alpha)=\max_{k=0,1,\ldots,M}\left\{d_{k}:\alpha_{k}\neq 0\right\}\) with \(\alpha_{0}=1\), and \(\tau,\lambda>0\) are tuning parameters chosen by the user. An equivalent way of formulating the program is to minimize \(R(\alpha)+(2\tau\sigma^{2}/n)\mathrm{df}(\alpha)\) subject to the constraints \(\alpha\geq\mathbf{0}\) and \(\mathrm{dim}(\alpha)\leq\mathrm{dim}(\hat{f}_{\mathrm{best}})=d_{\hat{m}}\).
The objective function (6) modulates the empirical risk of the stacked ensemble in two distinct ways. Firstly, similar to \(\ell_{1}\) regularization in nonnegative Lasso, it penalizes the weights according to the size of the models, as reflected by the term \(\mathrm{df}(\alpha)\). Secondly, similar to \(\ell_{0}\) regularization, the objective function also takes into account the number of basis functions needed to represent the model through the term \(\mathrm{dim}(\alpha)\). This promotes model parsimony by encouraging a sparse selection of models.
While program (6) is nonconvex due to \(\mathrm{dim}(\alpha)\), we can leverage the nested structure of the estimators and reduce it to an isotonic regression problem, which can be solved in \(O(M)\) time, i.e., linear time in the number of models. Note that this is orderwise the same complexity as finding the best single model (7). This means that we could potentially stack thousands of models and not worry about computational issues.
It is worth noting that, like conventional stacking, we do not impose any sum constraint on the weights. Indeed, as we shall see, such a constraint is essentially superfluous as the unconstrained solution always satisfies \(\sum_{k=1}^{M}\hat{\alpha}_{k}<1\) with near equality in most cases. This fact also corroborates with Breiman's experiments, in particular, when stacking nested decision trees or linear regressions determined by stepwise deletion. We have also found both empirically and theoretically that enforcing the equality constraint can lead to inferior performance, as otherwise one is limiting the potentially beneficial effects of shrinkage.
### Best Single Model
In our analysis, we will compare the performance of the stacked model (6) to that of the _best single model_, defined as \(\hat{f}_{\text{best}}(\mathbf{x})=\hat{\mu}_{\hat{m}}(\mathbf{x})\), where
\[\hat{m}\in\operatorname*{argmin}_{k=0,1,\ldots,M}\ R_{k}+\lambda\frac{\sigma^{ 2}d_{k}}{n}, \tag{7}\]
and \(\lambda>0\) is a tuning parameter whose value we will take to be the same in (6) and (7). If ties exist in (7), we choose the smallest \(\hat{m}\). Note that we have also included the null model (i.e., \(\hat{\mu}_{0}\equiv 0\)) as a candidate model, which will simplify some of the forthcoming theory.
The criterion \(R_{k}+\lambda\sigma^{2}d_{k}/n\) holds significance in various model selection contexts. Specifically, for \(\lambda=2\), it matches Mallows's \(C_{p}\). In the present Gaussian linear regression setting, if \(\lambda=2\), it is equivalent to the Akaike Information Criterion (AIC) and Stein's Unbiased Risk Estimate (SURE) (Stein, 1981), and if \(\lambda=\log(n)\), it corresponds to the Bayesian Information Criterion (BIC). These model selection criteria operate on a single model at a time, seeking the one that optimally balances goodness of fit and complexity. In contrast, stacking leverages the diverse strengths of multiple models through an optimal linear combination.
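For concreteness, here is a small Python sketch of the selection rule (7); the risks and dimensions below are made-up illustrative numbers, and \(\lambda=2\) corresponds to AIC/Mallows's \(C_{p}\) in this setting.

```python
import numpy as np

def best_single_model(R, dims, sigma2, n, lam=2.0):
    """Selection rule (7): minimize R_k + lam * sigma^2 * d_k / n over k = 0, 1, ..., M.
    R[0] and dims[0] correspond to the null model (R_0 = ||y||^2, d_0 = 0).
    Ties are broken in favour of the smallest index."""
    crit = np.asarray(R) + lam * sigma2 * np.asarray(dims) / n
    return int(np.argmin(crit)), crit

# Illustrative numbers (not from the paper): risks for the null model and M = 4 nested fits.
R    = [4.10, 2.05, 1.40, 1.32, 1.31]
dims = [0,    2,    5,    8,    12 ]
m_hat, crit = best_single_model(R, dims, sigma2=1.0, n=200, lam=2.0)
print("criterion values:", np.round(crit, 3))
print("selected index m_hat =", m_hat, "with dimension", dims[m_hat])
```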
### Stacking Versus Pruning Decision Trees
When the nested subspaces arise from a sequence of decision trees obtained via Breiman's approach (see Section 2.2.2), model selection via (7) corresponds exactly to cost-complexity pruning (Hastie et al., 2009, Section 9.2.2). Meanwhile, the stacked model \(\hat{f}_{\text{stack}}(\mathbf{x})\) also takes the form of a decision tree with the same structure as the largest tree in the stack. It consists of \(\dim(\hat{f}_{\text{stack}})\) internal nodes and outputs \(\hat{f}_{\text{stack}}(\mathbf{x})=\sum_{k=1}^{M}\hat{\alpha}_{k}\overline{y}_ {k}\), where \(\overline{y}_{k}\) represents the output of tree \(T_{k}\) at \(\mathbf{x}\), which is the sample mean of observations in the cell containing \(\mathbf{x}\). Since for \(k=1,2,\ldots,M\), the leaf of \(T_{k}\) containing \(\mathbf{x}\) is an ancestor of that containing \(\mathbf{x}\) in \(T_{M}\), this has the effect of shrinking the predictions over each leaf in \(T_{M}\) to its ancestors' values, similar to the method of Agarwal et al. (2022).
## 4 Main Results
Our main result is that stacked model with weights from (6) _strictly_ outperforms the best single model (7) provided the dimensions of the individual models differ by a constant. Put another way, in the parlance of statistical decision theory, the best single estimator \(\hat{f}_{\text{best}}\) (and by implication, the estimator selected by AIC and BIC) is _inadmissible_. As far as we are aware, this is the first result showing superior performance of the stacked model compared to the best single model in any context. We emphasize again that both \(\hat{f}_{\text{best}}\) and \(\hat{f}_{\text{stack}}\) have the same computational complexity, that is, \(O(M)\).
**Theorem 4.1**.: _Suppose \(0<\tau<2\) and \(d_{k}\geq d_{k-1}+4/(2-\tau)\) for all \(k\). The population risk of the stacked model with weights from (6) is strictly less than the population risk of the best single model (7); that is,_
\[\mathbb{E}\big{[}\|\mathbf{f}-\mathbf{\hat{f}}_{stack}\|^{2}\big{]}<\mathbb{E} \big{[}\|\mathbf{f}-\mathbf{\hat{f}}_{best}\|^{2}\big{]}.\]
**Remark 1**.: _Since the \(d_{k}\) are integers, the hypothesis of Theorem 4.1 becomes \(d_{k}\geq d_{k-1}+3\) when \(0<\tau\leq 2/3\). Thus, a sufficient condition for the stacked model to be provably superior to the best single model is that the constituent models differ by at least three dimensions. We presently do not know if the condition is also necessary. If we knew the theoretically optimal \(\tau\), which involves unknown population level quantities, then the condition can be improved to \(d_{k}\geq d_{k-1}+2\)._
The proof hinges on a few novel ingredients that we briefly introduce here, and will explain further in the next section. First, the nestedness of the base estimators and the nonnegative weight constraints allow us to recast the optimization program (6) as isotonic regression (see Lemma 5.1). We then build on known properties of solutions to isotonic regression to obtain an explicit representation of \(\hat{f}_{\text{stack}}\) and compare it to that of \(\hat{f}_{\text{best}}\) (see Theorem 5.2). Finally, to compare the population risks of the two representations, we apply an extension of Stein's Lemma for discontinuous functions (Tibshirani, 2015), which allows us to decompose the model degrees of freedom into two portions, one of which is attributable to the model selection mechanism (Tibshirani called this contribution the _search degrees of freedom_).
This proof strategy in fact yields the following lower bound for the risk gap \(\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}_{\text{best}}\|^{2}]-\mathbb{E}[\| \mathbf{f}-\mathbf{\hat{f}}_{\text{stack}}\|^{2}]\):
\[\frac{\sigma^{2}\tau(2-\tau)}{n}\mathbb{E}\Bigg{[}\min_{1\leq k\leq M}\frac{( d_{k}-4k/(2-\tau))^{2}}{(n/\sigma^{2})(R_{0}-R_{k})}\Bigg{]}+2\min\Big{\{}1, \frac{\tau}{\lambda}\Big{\}}\frac{\sigma^{2}}{n}\text{sdf}(\hat{f}_{\text{ best}}). \tag{8}\]
Here, the second term is in terms of the search degrees of freedom of \(\hat{f}_{\text{best}}\). Meanwhile, the first term is reminiscent of the improvement that would arise from applying James-Stein shrinkage (James and Stein, 1961) to an individual estimator, viz.,
\[\hat{\mu}_{k}^{\text{JS}}(\mathbf{x})=\left(1-\frac{d_{k}-2}{(n/\sigma^{2})(R _{0}-R_{k})}\right)\hat{\mu}_{k}(\mathbf{x}),\]
with risk gap \(\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{\mu}}_{k}\|^{2}]-\mathbb{E}[\|\mathbf{f }-\mathbf{\hat{\mu}}_{k}^{\text{JS}}\|^{2}]\) given by
\[\frac{\sigma^{2}}{n}\mathbb{E}\Bigg{[}\frac{(d_{k}-2)^{2}}{(n/\sigma^{2})(R_{ 0}-R_{k})}\Bigg{]}. \tag{9}\]
The similarity between (8) and (9) is not a coincidence. Our proof strategy shows that one may think of stacking, in the nested regressions setting, as essentially performing shrinkage to an adaptively selected subset of models. We will discuss this connection further in the next two sections. As with usual James-Stein shrinkage (9), the risk gap (8) tends to be larger when the signal-to-noise ratio \(\|\mathbf{f}\|/\sigma\) or sample size \(n\) is small--agreeing with the adage that ensemble methods tend to work particularly well in these settings. This is suggested analytically from the fact that, for each \(k\), the quantity \((n/\sigma^{2})(R_{0}-R_{k})\sim\chi^{2}(d_{k},\theta)\) is stochastically increasing in \(\theta=n(\|\mathbf{f}\|^{2}-\|\mathbf{f}-\mathbf{f}_{k}\|^{2})/\sigma^{2}\).
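The following Monte Carlo sketch (Python; the subspace dimension, signal strength, and noise level are illustrative choices of ours) applies the James-Stein factor above to a single projection estimator; it typically reproduces the risk reduction (9), most visibly when \(\|\mathbf{f}\|/\sigma\) is small.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d_k, sigma = 200, 10, 2.0
reps = 2000

# Orthonormal basis (empirical inner product) and a weak signal living in the subspace.
Q, _ = np.linalg.qr(rng.normal(size=(n, d_k)))
psi = np.sqrt(n) * Q
f = psi @ (0.2 * rng.normal(size=d_k))           # small ||f|| / sigma, where shrinkage helps most

risk_proj, risk_js = 0.0, 0.0
for _ in range(reps):
    y = f + sigma * rng.normal(size=n)
    coef = psi.T @ y / n
    mu_k = psi @ coef                            # projection estimator mu_hat_k
    R0_minus_Rk = np.mean(mu_k ** 2)             # R_0 - R_k = ||mu_hat_k||^2 by Pythagoras
    shrink = 1.0 - (d_k - 2) * sigma ** 2 / (n * R0_minus_Rk)
    mu_js = shrink * mu_k                        # James-Stein-shrunk estimator
    risk_proj += np.mean((f - mu_k) ** 2) / reps
    risk_js += np.mean((f - mu_js) ** 2) / reps

print(f"risk of projection estimator : {risk_proj:.4f}")
print(f"risk of James-Stein version  : {risk_js:.4f}")
```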
**Remark 2**.: _We conjecture that the risk gap is uniformly bounded away from zero as the number of models increases, i.e.,_
\[\liminf_{M\to\infty}\Big{[}\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}_{best}\|^ {2}]-\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}_{stack}\|^{2}]\Big{]}>0,\]
_which would imply that stacking offers a consistent advantage in predictive performance over the best single model._
Up until now, we have focused on the virtues of stacking in a relative sense; that is, how well it does relative to another estimator. Here we show how well it does relative to an oracle, i.e., the stacked model with oracle weights.
**Theorem 4.2**.: _The risk of the stacked model with weights from (6) when \(\tau=\lambda=1\) satisfies the following oracle inequality:_
\[\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}_{\text{stack}}\|^{2}]\leq\mathbb{E} \bigg{[}\bigg{\|}\mathbf{f}-\sum_{k=1}^{M}\alpha_{k}^{\star}\mathbf{\hat{\mu}} _{k}\bigg{\|}^{2}\bigg{]}+\frac{4\sigma^{2}\mathbb{E}[\|\hat{\alpha}\|_{\ell_{0 }}]}{n},\]
_where \(\alpha^{\star}=(\alpha_{1}^{\star},\alpha_{2}^{\star},\ldots,\alpha_{M}^{ \star})^{\mathrm{T}}\) is the minimizer of the population risk._
Theorem 4.2 says that the expected number of stacked models, or \(\mathbb{E}[\|\hat{\alpha}\|_{\ell_{0}}]\), controls the error from the oracle stacked model. Breiman observed that this quantity was surprisingly small. For example, in his experiments with stacked subset regressions (\(M=40\)), he found \(\mathbb{E}[\|\hat{\alpha}\|_{\ell_{0}}]\approx 3.1\). The size of \(\|\hat{\alpha}\|_{\ell_{0}}\) is equal to the number of unique elements of the isotonic sequence \(\hat{\gamma}\) that are less than one, defined in (11) below. Finding useful bounds (other than the number of models \(M\)) appears to be nontrivial. While previous shape-constrained estimation literature has shown that similar isotonic minimax sequences can have, on average, \(o(M)\) distinct elements (Meyer and Woodroofe, 2000), we have yet to establish this for \(\hat{\gamma}\).
## 5 Reduction to Isotonic Regression
Due to the nested structure of the models and the nonnegative weight constraint, the problem of determining the stacking weights from problem (6) can be recast as isotonic regression. To see this, let \(\Delta d_{k}=d_{k}-d_{k-1}\) and \(\Delta R_{k}=R_{k-1}-R_{k}\). Recall that \(R_{k}<R_{k-1}\) for all \(k\), implying that \(\Delta R_{k}>0\) for all \(k\). Making the change of variables \(\alpha_{k}=\beta_{k+1}-\beta_{k}\), for \(k=1,2,\ldots,M\), with \(\beta_{M+1}=1\), the nonnegativity constraint on the weights \(\alpha_{k}\) becomes an isotonic constraint on the \(\beta_{k}\). Some further algebraic manipulation gives the following equivalence of optimization programs.
**Lemma 5.1**.: _Program (6) is equivalent to:_
\[\begin{split}\text{minimize}&\sum_{k=1}^{M}w_{k}(z_ {k}-\beta_{k})^{2}+\xi\sum_{k=1}^{M}\Delta d_{k}\mathbf{1}(\beta_{k}\neq 1)\\ \text{subject to}&\beta_{1}\leq\beta_{2}\leq \cdots\leq\beta_{M}\leq 1,\end{split} \tag{10}\]
_where \(w_{k}=\Delta R_{k}>0\), \(z_{k}=(\tau\sigma^{2}/n)(\Delta d_{k}/\Delta R_{k})>0\), and \(\xi=(\sigma^{2}/n)((\lambda-\tau)_{+}^{2}/\lambda)\)._
We shall henceforth refer to the stacking weights \(\hat{\alpha}\) as _the_ solution to program (6), since it turns out to be almost surely unique. Crucially, using known properties of isotonic regression, the solution admits a closed form expression in terms of the following nondecreasing minimax sequence (11), which allows us to connect \(\hat{f}_{\text{stack}}\) and \(\hat{f}_{\text{best}}\):
\[\hat{\gamma}_{k}=\frac{\sigma^{2}}{n}\min_{k\leq i\leq M}\max_{0\leq j\leq k} \frac{d_{i}-d_{j}}{R_{j}-R_{i}},\quad k=1,2,\ldots,M. \tag{11}\]
The vector of this sequence is denoted as \(\hat{\gamma}=(\hat{\gamma}_{1},\hat{\gamma}_{2},\ldots,\hat{\gamma}_{M})^{ \mathrm{T}}\). In what follows, it will be convenient to define \(\hat{\gamma}_{0}=0\) and \(\hat{\gamma}_{M+1}=\infty\).
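For reference, the minimax sequence can be evaluated directly from (11), as in the short Python sketch below (quadratic time per entry, but transparent); we read the inner maximum as running over \(j<k\) to avoid the degenerate \(i=j=k\) term, and the numbers are illustrative.

```python
import numpy as np

def gamma_hat(R, dims, sigma2, n):
    """Direct evaluation of the minimax sequence (11); R[0], dims[0] are the null model,
    and entries for k = 1, ..., M are returned."""
    R, dims = np.asarray(R, float), np.asarray(dims, float)
    M = len(R) - 1
    out = np.empty(M)
    for k in range(1, M + 1):
        out[k - 1] = min(
            max((dims[i] - dims[j]) / (R[j] - R[i]) for j in range(0, k))  # inner max over j < k
            for i in range(k, M + 1)
        )
    return sigma2 / n * out

# Illustrative numbers (not from the paper).
R    = [4.10, 2.05, 1.40, 1.32, 1.31]
dims = [0,    2,    5,    8,    12 ]
print(np.round(gamma_hat(R, dims, sigma2=1.0, n=200), 4))
```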
**Theorem 5.2**.: _The (almost surely) unique solution to program (6) is_
\[\hat{\alpha}_{k}=(1-\tau\hat{\gamma}_{k})\mathbf{1}(\hat{\gamma}_{k}<\gamma)-(1- \tau\hat{\gamma}_{k+1})\mathbf{1}(\hat{\gamma}_{k+1}<\gamma),\quad k=1,2,\ldots,M, \tag{12}\]
_where \(\gamma=\min\{1/\tau,1/\lambda\}\), and, consequently, \(\sum_{k=1}^{M}\hat{\alpha}_{k}=(1-\tau\hat{\gamma}_{1})\mathbf{1}(\hat{\gamma}_ {1}<\gamma)<1\). Furthermore, the stacked model can be written as_
\[\hat{f}_{stack}(\mathbf{x})=\sum_{k=1}^{M}(\hat{\mu}_{k}(\mathbf{x})-\hat{\mu} _{k-1}(\mathbf{x}))(1-\tau\hat{\gamma}_{k})\mathbf{1}(\hat{\gamma}_{k}<\gamma), \tag{13}\]
_and the best single model can be written as_
\[\hat{f}_{best}(\mathbf{x})=\sum_{k=1}^{M}(\hat{\mu}_{k}(\mathbf{x})-\hat{\mu} _{k-1}(\mathbf{x}))\mathbf{1}(\hat{\gamma}_{k}<1/\lambda). \tag{14}\]
According to (14), the best single estimator can be written as a telescoping sum of predictive differences \(\hat{\mu}_{k}(\mathbf{x})-\hat{\mu}_{k-1}(\mathbf{x})\) across successive submodels, up to the selected model \(\hat{\mu}_{\hat{m}}\). In contrast, according to (13), stacking additionally shrinks these predictive differences towards zero, and because \(\{\hat{\gamma}_{k}\}\) is an increasing sequence, larger models are shrunk more than smaller models. The adaptive shrinkage factor \(1-\tau\hat{\gamma}_{k}\) in (13) is the driving force behind its superior performance. So in summary, in the setting of nested regressions, _stacking performs both model selection and shrinkage simultaneously._
### Implementation
Because we know the general form of the solution (12) in terms of \(\hat{\gamma}\), a quantity free of \(\lambda\) and \(\tau\), we simply need solve for \(\hat{\gamma}\) using the related program
\[\begin{split}\text{minimize}&\sum_{k=1}^{M}w_{k}(z _{k}-\beta_{k})^{2}\\ \text{subject to}&\beta_{1}\leq\beta_{2}\leq\cdots \leq\beta_{M},\end{split} \tag{15}\]
which we recognize as a weighted isotonic regression problem. The solution admits a closed form expression \(\hat{\mathbf{\beta}}=\hat{\gamma}\), the minimax sequence (11), from which the general solution (12) can be directly obtained. Program (15) can be solved in \(O(M)\) time using the Pooled Adjacent Violators Algorithm (PAVA) (Barlow et al., 1972), thereby permitting \(M\) to be in the thousands without incurring worrisome computational burden. The isoreg() function from base R will implement program (15).
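As a self-contained alternative to isoreg(), the following Python sketch (NumPy only; the function names, the default \(\tau=\lambda=2\), and the toy numbers are ours rather than the paper's) implements the pool-adjacent-violators step for (15) and then reads off the quantities appearing in (12) and (14).

```python
import numpy as np

def pava(z, w):
    """Weighted isotonic regression via pool-adjacent-violators:
    returns the nondecreasing fit to z with weights w."""
    vals, wts, sizes = [], [], []
    for zk, wk in zip(z, w):
        vals.append(zk); wts.append(wk); sizes.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:   # merge blocks that violate monotonicity
            merged = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / (wts[-2] + wts[-1])
            vals[-2], wts[-2], sizes[-2] = merged, wts[-2] + wts[-1], sizes[-2] + sizes[-1]
            vals.pop(); wts.pop(); sizes.pop()
    return np.repeat(vals, sizes)

def stack_nested(R, dims, sigma2, n, tau=2.0, lam=2.0):
    """Stacking weights (12) and best single model (14) from the risks R_0, ..., R_M
    and dimensions d_0, ..., d_M of the nested fits (index 0 is the null model)."""
    R, dims = np.asarray(R, float), np.asarray(dims, float)
    dR, dd = -np.diff(R), np.diff(dims)                 # Delta R_k > 0 and Delta d_k > 0
    gamma_hat = pava((sigma2 / n) * dd / dR, dR)        # minimax sequence (11)
    gamma_bar = min(1.0 / tau, 1.0 / lam)
    c = np.where(gamma_hat < gamma_bar, 1.0 - tau * gamma_hat, 0.0)  # coefficients in (13)
    alpha = c - np.append(c[1:], 0.0)                   # weights alpha_k from (12)
    m_best = int(np.sum(gamma_hat < 1.0 / lam))         # selected index in (7)/(14)
    return alpha, gamma_hat, m_best

# Illustrative numbers (not from the paper).
R    = [4.10, 2.05, 1.40, 1.32, 1.31]
dims = [0,    2,    5,    8,    12 ]
alpha, gamma_hat, m_best = stack_nested(R, dims, sigma2=1.0, n=200)
print("gamma_hat       :", np.round(gamma_hat, 4))
print("stacking weights:", np.round(alpha, 3), " sum =", round(float(alpha.sum()), 3))
print("best single model: k =", m_best, "with dimension", dims[m_best])
```

On these toy numbers the weights sum to just below one and the selected dimension is \(d_{\hat{m}}=8\), in line with Theorem 5.2.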
## 6 Model Selection and Shrinkage in Stacking
According to Theorem 5.2, the version of stacking we analyzed performs both model selection and shrinkage simultaneously. As discussed after Theorem 4.1 and as will be proved in the appendix, the improvement in prediction performance over the best single model derives almost entirely from shrinking the weights in the telescoping sum (14). Since the shrinkage factors are optimized,
they naturally resemble James-Stein shrinkage factors, leading to a similar formula for the risk gap (8). In this section, we compare shrinkage from stacking with prior work on shrinkage and discuss the generality of this interpretation of stacking.
Agarwal et al. (2022) introduced a hierarchical shrinkage procedure to regularize decision tree models. When viewing the tree model as a linear regression onto orthogonal basis elements \(\psi_{l}\) indexed by the internal nodes of the tree, they showed that their procedure was equivalent to performing ridge regression instead of linear regression on this basis. Since the amount of shrinkage in ridge regression is controlled by a single parameter, theirs is therefore a more constrained form of shrinkage compared to what arises from stacking nested subtrees. The latter optimizes the amount of shrinkage over \(M\) different parameters, each controlling the amount of shrinkage over a separate block of the regression coefficients. On the other hand, hierarchical shrinkage allows the basis elements \(\psi_{l}\) to be unnormalized, with \(\|\psi_{l}\|^{2}=N_{l}/n\), where \(N_{l}\) is the number of samples contained in node \(l\). The ridge regression shrinkage factor for the coefficient of \(\psi_{l}\) is then \(\frac{N_{l}}{N_{l}+\lambda}\), which means that splits on nodes with fewer samples are penalized more strongly. A theoretical or empirical performance comparison of the two forms of shrinkage is left for future work.
Agarwal et al. (2022) also showed empirically that the best performing tree model after shrinkage is usually much deeper than the best performing tree model prior to shrinkage. In other words, by allowing a more nuanced bias-variance tradeoff, shrinkage allows for a "larger model" and makes use of features whose inclusion would not be justified otherwise. Breiman (1996)'s empirical findings for stacked subtrees present a similar story. In one experiment he found that, among a collection of \(M=50\) subtrees, the stacked tree had 33 internal nodes (the _dimension of the model_ in our terminology), whereas the (underperforming) best single tree had only 22. Breiman was surprised by this, and explained it as the prediction in the stacked tree "gathering strength" from the data among ancestor terminal nodes in the smaller stacked subtrees.
Our work formalizes Breiman's notion of "gathering strength" as shrinkage, and rigorously proves its benefits. On the other hand, we are not yet able to show theoretically how the prediction performance of stacking benefits from allowing for a larger model. The optimization program (6) was explicitly designed so that the additive representations (13) and (14) for \(\hat{f}_{\text{stack}}\) and \(\hat{f}_{\text{best}}\) truncate at the same level. This property is crucial in our proof of Theorem 4.1 because it allows us to compare the search degrees of freedom of the two estimators, \(\text{sdf}(\hat{f}_{\text{stack}})\) and \(\text{sdf}(\hat{f}_{\text{best}})\), which would otherwise be challenging. Recall that these quantities arise because we use an extension of Stein's Lemma for discontinuous functions (Tibshirani, 2015). We need this to cope with the fact that both \(\hat{f}_{\text{stack}}\) and \(\hat{f}_{\text{best}}\) are discontinuous with respect to the \(y\)-data--they perform hard thresholding on a data-dependent quantity, arising from the model selection mechanism. In general, bounding the search degrees of freedom (e.g., from best subset selection) is known to be quite challenging (Mikkelsen and Hansen, 2018; Tibshirani and Rosset, 2019). Truncating the additive representations for \(\hat{f}_{\text{stack}}\) and \(\hat{f}_{\text{best}}\) at the same level places their discontinuities (with respect to the \(y\)-data) at exactly the same locations, which causes their search degrees of freedom to be positive and proportional to each other, that is, \(\text{sdf}(\hat{f}_{\text{stack}})=(1-\tau/\lambda)_{+}\text{sdf}(\hat{f}_{ \text{best}})\geq 0\). Nonetheless, we conjecture that stacking nested regressions without this artificial constraint does lead to a larger and more predictive model. Some partial results in this direction are presented in the next section.
Furthermore, while we have analyzed stacking in the context of nested regressions, the benefits of
stacking have been empirically observed in much more general settings without any nested structure. The shrinkage effect of stacking likely continues to play some role in improving prediction performance, but it is surely not the only factor at play. We hypothesize that stacking in these settings benefits from combining different models, some more complex than \(\hat{f}_{\text{best}}\), where each captures separate nuances in the data.
## 7 Size of the Stacked Model
In this section, we will provide theoretical evidence of the aforementioned phenomenon observed by Breiman and show that, in addition to having a sum less than one, for certain stacking weights, the stacked model puts nonzero weight on some constituent models that have higher complexity than the best single model when \(\lambda=2\) (e.g., the model selected by AIC, SURE, or Mallows's \(C_{p}\)). To this end, note that the quantity inside the expectation in the right hand side of (5) is an unbiased estimator of the expected prediction error of the stacked model with adaptive weights \(\hat{\mathbf{\alpha}}\), where \(\hat{\mathbf{\alpha}}\) minimizes \(R(\mathbf{\alpha})+(2\sigma^{2}/n)\text{df}(\mathbf{\alpha})\) subject to \(\mathbf{\alpha}\geq\mathbf{0}\). Dropping the negative term involving \(0\leq\hat{\mathbf{\alpha}}^{\text{T}}\hat{\mathbf{\alpha}}_{0}\ll\|\hat{\mathbf{\alpha}}\| _{\ell_{0}}\), we may thus use
\[R(\mathbf{\alpha})+\frac{2\sigma^{2}}{n}\text{df}(\mathbf{\alpha})+\frac{4\sigma^{2}} {n}\|\mathbf{\alpha}\|_{\ell_{0}},\]
to hopefully better estimate the population risk of an (adaptive) stacked model, and solve the program
\[\begin{split}\text{minimize}& R(\mathbf{\alpha})+\frac{2 \sigma^{2}}{n}\text{df}(\mathbf{\alpha})+\frac{4\sigma^{2}}{n}\|\mathbf{\alpha}\|_{ \ell_{0}}\\ \text{subject to}&\mathbf{\alpha}\geq\mathbf{0}.\end{split} \tag{16}\]
We see that the objective function penalizes the number of included models through \(\|\mathbf{\alpha}\|_{\ell_{0}}\) as well as the complexity of the included models through \(\text{df}(\mathbf{\alpha})\). The optimization problem (16) is equivalent to (weighted) reduced isotonic regression (Gao et al., 2020, Haiminen et al., 2008) in which one fits an isotonic sequence subject to a constraint on the number of distinct values it can assume, i.e.,
\[\begin{split}\text{minimize}&\sum_{k=1}^{M}w_{k}(z_ {k}-\beta_{k})^{2}+\frac{4\sigma^{2}}{n}\sum_{k=1}^{M}\mathbf{1}(\beta_{k}\neq \beta_{k+1})\\ \text{subject to}&\beta_{1}\leq\beta_{2}\leq\cdots \leq\beta_{M}\leq 1,\end{split} \tag{17}\]
where \(w_{k}=\Delta R_{k}>0\) and \(z_{k}=(\sigma^{2}/n)(\Delta d_{k}/\Delta R_{k})>0\).
Reduced isotonic regression can be implemented in \(O(M^{3})\) time using a two-stage approach consisting of Pooled Adjacent Violators Algorithm (PAVA) (Barlow et al., 1972) followed by dynamic programming for the \(K\)-segmentation problem (Bellman, 1961), e.g., see the procedure of (Haiminen et al., 2008). The scar package in R provides a function for reduced isotonic regression.
It turns out that the stacked model with weights from solving (16) will always assign nonzero weight to models with dimension at least as large as that of the best single model, agreeing with what Breiman observed using cross-validated data.
**Theorem 7.1**.: _The dimension of the stacked model with weights from (16) is no less than the dimension of the best single model (7) with \(\lambda=2\); that is,_
\[\text{dim}(\hat{f}_{\text{stack}})\geq\text{dim}(\hat{f}_{\text{best}}).\]
_Furthermore, the weights sum to less than one, i.e., \(\sum_{k=1}^{M}\hat{\alpha}_{k}<1\)._
## 8 Other Connections
In this section, we discuss connections between stacking and other ways of combining estimators.
### Aggregation
Another line of research known as _aggregation_ covers similar ground, albeit without an examination of shrinkage and model complexity. The goal of this research also differs in that it benchmarks against the error of the best single oracle model, namely, \(\min_{1\leq k\leq M}\mathbb{E}[\|\mathbf{f}-\hat{\mathbf{\mu}}_{k}\|^{2}]\), rather than what is achieved by conventional model selection criteria, namely, \(\hat{f}_{\text{best}}\).
In this case, the weights assigned to the model (13) are additionally constrained to sum to one, and the risk bounds that are sought take the form
\[\min_{1\leq k\leq M}\mathbb{E}[\|\mathbf{f}-\hat{\mathbf{\mu}}_{k}\|^{2}]+\frac{ \Lambda\sigma^{2}}{n},\]
where \(\Lambda\) is a constant that possibly depends on \(M\). The term \(\Lambda\sigma^{2}/n\) is the price to pay for aggregating the various models and from not knowing the oracle model that minimizes the population risk. Evaluating the relative performance of \(\hat{f}_{\text{stack}}\) and \(\hat{f}_{\text{best}}\) using this bound presents challenges, as existing inequalities in the literature only allow for a comparison of the risks of \(\hat{f}_{\text{best}}\) and the best single oracle model up to constants (Cavalier et al., 2002). For an unstructured collection of models, the constant \(\Lambda\) in the bound can be taken to be of order \(\log(M)\)(Leung and Barron, 2006; Dai et al., 2012; Bellec, 2018). However, Bellec and Yang (2020) have recently demonstrated that if the constituent models are ordered linear smoothers (Kneip, 1994), which includes the nested linear least squares models examined in this paper, then the constant is universal. Bellec (2018) provided a best combination oracle inequality for aggregation over the simplex similar to Theorem 4.2, though paying a price of \(\sigma^{2}\sqrt{\log(M)/n}\), uniformly over all models, instead of the distribution-dependent \(4\sigma^{2}\mathbb{E}[\|\hat{\mathbf{\alpha}}\|_{\ell_{0}}]/n\).
Inspired by the \(Q\)-aggregation procedure (Dai et al., 2012; Rigollet, 2012), recent work (Bellec, 2018; Bellec and Yang, 2020) adds an additional term to the unbiased estimator of the population risk \(R(\alpha)+(2\sigma^{2}/n)\text{df}(\alpha)\), namely, \(\eta\sum_{m=1}^{M}\alpha_{m}\big{\|}\hat{\mathbf{\mu}}_{m}-\sum_{k=1}^{M}\alpha_{ k}\hat{\mathbf{\mu}}_{k}\big{\|}^{2}\) with \(\eta=1/2\), which encourages the solution to be closer to the vertices of the probability simplex (the search space for \(\mathbf{\alpha}\)). While this does lead to sparser solutions, the mechanism by which it is done is quite different from (17), which in contrast explicitly penalizes the number of nonzero weights. To see why, suppose the individual models are linear least squares projections onto nested subspaces. Simple algebra (see
the appendix) shows that the program from (Bellec, 2018; Bellec and Yang, 2020) is equivalent to
\[\begin{split}\text{minimize}&\sum_{k=1}^{M}w_{k}(z_{k} -\beta_{k})^{2}\\ \text{subject to}& 0=\beta_{1}\leq\beta_{2}\leq\cdots \leq\beta_{M}\leq 1,\end{split} \tag{18}\]
where \(w_{k}=\Delta R_{k}\) and \(z_{k}=(1/(1-\eta))(\eta/2+(\sigma^{2}/n)(\Delta d_{k}/\Delta R_{k}))\). The solution has a closed form expression that we derive in the appendix:
\[\tilde{\beta}_{1}=0,\quad\tilde{\beta}_{k}=1-\phi\bigg{(}\frac{1}{1-\eta} \bigg{(}1-\eta/2-\frac{\sigma^{2}}{n}\min_{k\leq i\leq M}\max_{1\leq j<k}\frac {d_{i}-d_{j}}{R_{j}-R_{i}}\bigg{)}\bigg{)},\quad k=2,3,\ldots,M, \tag{19}\]
where \(\phi(z)=\min\{1,\max\{0,z\}\}\) is the clip function for \(z\) at \(0\) and \(1\). As can be seen from (19) and the fact that \(\phi\big{(}\pm\frac{\text{const.}}{1-\eta}\big{)}\in\{0,1\}\) for sufficiently large \(\eta\), the additional term tends to push \(\tilde{\beta}_{k}\) into the corners \(\{0,1\}\), making it more likely that \(\tilde{\alpha}_{k}=\tilde{\beta}_{k+1}-\tilde{\beta}_{k}=0\). However, there are other ways to achieve sparsity beyond \(\tilde{\beta}_{k}=\tilde{\beta}_{k+1}=0\) or \(\tilde{\beta}_{k}=\tilde{\beta}_{k+1}=1\). For example, whenever \(\tilde{\beta}_{k}=\tilde{\beta}_{k+1}\), i.e., the isotonic solution is constant across two successive values, \(\tilde{\alpha}_{k}\) will be zero. Thus, sparsity can also be modulated by controlling the number of constant pieces of the isotonic solution \(\tilde{\beta}_{k}\)--precisely what the objective function in program (17) seeks to do.
### Randomized Ensembles
A hallmark of random forests is that the optimization process for choosing good constituent tree models (i.e., splitting variables) is randomized. That is, instead of selecting the best splitting variable among all variables at each node, only a randomly generated subset of mtry of them are considered each time. We can use a similar idea here in the context of nested regression models. To this end, let \(m\) be an integer less than or equal to \(M\), which can be thought of as analogous to the mtry parameter from random forests. Generate a random subset \(I\) from \(\{1,2,\ldots,M\}\) with cardinality \(m-1\) and set \(J=\{0\}\cup I\). Define
\[\hat{m}\in\underset{k\in J}{\text{argmin}}\;\;R_{k}+\lambda\frac{\sigma^{2}d_{ k}}{n},\]
so that \(\hat{\mu}_{\hat{m}}\) is the best single model among \(\{\hat{\mu}_{k}:k\in J\}\). Repeating this process independently \(B\) times and averaging the resulting model predictions yields a randomized ensemble estimator
\[\hat{f}_{\text{rand}}(\mathbf{x})=\frac{1}{B}\sum_{b=1}^{B}\hat{\mu}_{\hat{m}_ {b}}(\mathbf{x}).\]
Using the form (14) of the best single model but specialized to a subset of the models, we may write
\[\hat{\mu}_{\hat{m}}(\mathbf{x})=\sum_{k=1}^{M}(\hat{\mu}_{k}(\mathbf{x})-\hat{ \mu}_{k-1}(\mathbf{x}))\mathbf{1}(\hat{\gamma}_{k}(I)<1/\lambda),\]
where \(\hat{\gamma}_{k}(I)\) is the isotonic sequence (11) with knots in \(I\), that is, for each \(k\in I\),
\[\hat{\gamma}_{k}(I)=\frac{\sigma^{2}}{n}\min_{i\in I,\;i\geq k}\;\max_{j\in J, \;j<k}\frac{d_{i}-d_{j}}{R_{j}-R_{i}},\]
and \(\hat{\gamma}_{k}(I)\) is constant for \(k\notin I\) between successive indices in \(I\), taking on the smaller of the two adjacent \(\hat{\gamma}_{l}(I)\) values. When \(B\) is large, we have the approximation
\[\hat{f}_{\text{rand}}(\mathbf{x})\approx\sum_{k=1}^{M}(\hat{\mu}_{k}(\mathbf{x} )-\hat{\mu}_{k-1}(\mathbf{x}))\mathbb{P}_{I}(\hat{\gamma}_{k}(I)<1/\lambda), \tag{20}\]
where \(\mathbb{P}_{I}\) denotes the probability measure induced from random sampling. As can be seen from comparing (20) and (13), the randomized ensemble and stacking both reduce the instability in the best single model, but do so in different ways. Recall the form of the best single estimator (14) in which the successive predictive differences \(\hat{\mu}_{k}(\mathbf{x})-\hat{\mu}_{k-1}(\mathbf{x})\) across successive submodels are multiplied by \(\mathbf{1}(\hat{\gamma}_{k}<1/\lambda)\) and then summed up to the selected model index \(\hat{m}\). The randomized ensemble smooths the jump discontinuity in the indicator function \(\mathbf{1}(\hat{\gamma}_{k}<1/\lambda)\) by instead averaging over the indicator functions \(\mathbf{1}(\hat{\gamma}_{k}(I)<1/\lambda)\) resulting from many (locally) best single models. On the other hand, stacking shrinks \(\mathbf{1}(\hat{\gamma}_{k}<1/\lambda)\) towards zero, via \((1-\tau\hat{\gamma}_{k})\mathbf{1}(\hat{\gamma}_{k}<1/\lambda)\), making the jump discontinuity less pronounced. An interesting question for future work would be to analytically compare the factors \(\mathbb{P}_{I}(\hat{\gamma}_{k}(I)<1/\lambda)\) from (20) and \((1-\tau\hat{\gamma}_{k})\mathbf{1}(\hat{\gamma}_{k}<1/\lambda)\) from (13) on predictive accuracy.
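A small simulation sketch of this construction follows (Python; the subset size, number of replicates, and toy numbers are illustrative): each replicate selects the best single model within a random subset of candidates via (7), and averaging the resulting indicators approximates the coefficients in (20).

```python
import numpy as np

def randomized_ensemble_coefficients(R, dims, sigma2, n, m, B=2000, lam=2.0, seed=0):
    """Monte Carlo approximation of the coefficients in (20): for each replicate,
    select the best single model within a random subset of m - 1 candidates via (7),
    and average the indicators attached to the predictive differences mu_k - mu_{k-1}."""
    rng = np.random.default_rng(seed)
    R, dims = np.asarray(R, float), np.asarray(dims, float)
    M = len(R) - 1
    coef = np.zeros(M)
    for _ in range(B):
        I = rng.choice(np.arange(1, M + 1), size=m - 1, replace=False)
        J = np.append(0, np.sort(I))                  # candidate indices, always including the null model
        crit = R[J] + lam * sigma2 * dims[J] / n
        m_hat = int(J[np.argmin(crit)])               # best single model within the subset
        coef[:m_hat] += 1.0 / B                       # its telescoping sum keeps differences up to m_hat
    return coef

# Illustrative numbers (not from the paper).
R    = [4.10, 2.05, 1.40, 1.32, 1.31]
dims = [0,    2,    5,    8,    12 ]
print(np.round(randomized_ensemble_coefficients(R, dims, sigma2=1.0, n=200, m=3), 3))
```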
We mention in closing that previous work (Buhlmann and Yu, 2002) has also revealed the smoothing effect of ensembling on indicator functions, but from subsampling the data (i.e., bagging) as opposed to subsampling the candidate models.
## 9 Discussion and Conclusion
The findings from this research provide compelling evidence for the enhanced predictive accuracy of stacked regressions. As we continually seek cheap and simple ways to improve model performance, ensemble methods like stacking emerge as valuable tools. Our research reiterates the seminal works of Wolpert (1992) and Breiman (1996), both of whom touched upon the potential of stacking.
The fact that the stacked model provably outperforms the best single model for _any_ regression function \(f\), noise level \(\sigma^{2}\), and sample size \(n\) is of course noteworthy. However, it is essential to consider the conditions under which this advantage is true. Perhaps most limiting was that we stack models resulting from least squares projections onto nested subspaces and that the errors are Gaussian. For tractable theoretical analysis, it is likely the Gaussian assumption cannot be dropped, but the structure of the models can most likely be extended. For example, Breiman also considered stacking ridge regressions and reached similar though not quite as strong conclusions on its merits. More generally, ridge and nested regression models are both instances of ordered linear smoothers (Kneip, 1994), i.e., families of models that can be written as
\[\hat{\mu}_{k}(\mathbf{x})=\sum_{l=1}^{n}c_{k,l}\langle\mathbf{y},\boldsymbol{ \psi}_{l}\rangle\psi_{l}(\mathbf{x}),\quad 0\leq c_{k,l}\leq c_{k+1,l}\leq 1.\]
Interestingly, the representation (14) for the best single model still holds if \(d_{k}\) is replaced by \(\text{df}(\hat{\mu}_{k})=\sum_{l=1}^{n}c_{k,l}\). However, it is not true anymore that the stacking estimator has the form (13), unless \(c_{k,l}\in\{0,1\}\). We leave these considerations for future work.
Proofs
In this section, we provide full proofs of all the main results and other statements in the main text. We first establish some notation. Given a sequence of indices \(s_{0}=0,s_{1},\ldots,s_{u}=M\), we define \(\Delta d_{s_{l}}=d_{s_{l}}-d_{s_{l-1}}\) and \(\Delta R_{s_{l}}=R_{s_{l-1}}-R_{s_{l}}\).
Proof of Lemma 5.1.: The proof is based on the following lemma.
**Lemma A.1**.: _Suppose \(h(\mathbf{x})\) has the form_
\[h(\mathbf{x})=\sum_{k=1}^{M}(\hat{\mu}_{k}(\mathbf{x})-\hat{\mu}_{k-1}( \mathbf{x}))c_{k}.\]
_Then,_
\[\|\mathbf{y}-\mathbf{h}\|^{2}=R_{0}+\sum_{k=1}^{M}\Delta R_{k}(c_{k}^{2}-2c_{k }).\]
Proof of Lemma A.1.: Note that for any \(k\neq l\),
\[\langle\hat{\mathbf{\mu}}_{k}-\hat{\mathbf{\mu}}_{k-1},\hat{\mathbf{\mu}}_{l}-\hat{\mathbf{\mu }}_{l-1}\rangle=\left\langle\sum_{i\in A_{k}\backslash A_{k-1}}\langle\mathbf{ y},\mathbf{\psi}_{i}\rangle\,\psi_{i},\,\sum_{j\in A_{l}\backslash A_{l-1}} \langle\mathbf{y},\mathbf{\psi}_{j}\rangle\psi_{j}\right\rangle=0,\]
since \(\langle\mathbf{\psi}_{i},\mathbf{\psi}_{j}\rangle=0\) for any \(i\neq j\), and
\[\|\hat{\mathbf{\mu}}_{k}-\hat{\mathbf{\mu}}_{k-1}\|^{2}=\sum_{j\in A_{k} \backslash A_{k-1}}\langle\mathbf{y},\mathbf{\psi}_{j}\rangle^{2}=\Delta R_{k}.\]
In addition,
\[\langle\mathbf{y},\hat{\mathbf{\mu}}_{k}-\hat{\mathbf{\mu}}_{k-1}\rangle=\left\langle \mathbf{y},\,\sum_{j\in A_{k}\backslash A_{k-1}}\langle\mathbf{y},\mathbf{\psi}_{ j}\rangle\psi_{j}\right\rangle=\sum_{j\in A_{k}\backslash A_{k-1}}\langle \mathbf{y},\mathbf{\psi}_{j}\rangle^{2}=\Delta R_{k}.\]
Thus,
\[\|\mathbf{y}-\mathbf{h}\|^{2} =\left\|\mathbf{y}-\sum_{k=1}^{M}(\hat{\mathbf{\mu}}_{k}-\hat{\mathbf{\mu }}_{k-1})c_{k}\right\|^{2}\] \[=\|\mathbf{y}\|^{2}+\sum_{k=1}^{M}c_{k}^{2}\|\hat{\mathbf{\mu}}_{k}- \hat{\mathbf{\mu}}_{k-1}\|^{2}-2\sum_{k=1}^{M}c_{k}\langle\mathbf{y},\hat{\mathbf{\mu }}_{k}-\hat{\mathbf{\mu}}_{k-1}\rangle\] \[=R_{0}+\sum_{k=1}^{M}\Delta R_{k}(c_{k}^{2}-2c_{k}).\qed\]
Let \(c_{k}=\sum_{i=k}^{M}\alpha_{i}\), for \(k=1,2,\ldots,M\). Define \(\beta_{k}=1-c_{k}\), with \(\beta_{M+1}=1\). Note that \(\alpha_{k}=\beta_{k+1}-\beta_{k}\). Using summation by parts, we know
\[\sum_{k=1}^{M}\alpha_{k}\hat{\mu}_{k}(\mathbf{x})=\sum_{k=1}^{M}( \hat{\mu}_{k}(\mathbf{x})-\hat{\mu}_{k-1}(\mathbf{x}))c_{k},\quad\sum_{k=1}^{M }\alpha_{k}d_{k}=\sum_{k=1}^{M}c_{k}\Delta d_{k},\]
where \(\hat{\mu}_{0}\equiv 0\). In addition, note that if there exists some \(1\leq a\leq M\) such that \(\alpha_{a}>0=\alpha_{a+1}=\cdots\alpha_{M}\), then we have \(c_{a}>0=c_{a+1}=\cdots=c_{M}\). Thus,
\[\dim(\alpha)=\sum_{k=1}^{M}\Delta d_{k}\mathbf{1}(c_{k}\neq 0)=\sum_{k=1}^{M }\Delta d_{k}\mathbf{1}(\beta_{k}\neq 1).\]
Therefore, with Lemma A.1 we can establish the equivalence between the solution of (6) and the solution of the following programs:
minimize \[R_{0}-\sum_{k=1}^{M}\Delta R_{k}(2c_{k}-c_{k}^{2})+\frac{2\tau \sigma^{2}}{n}\sum_{k=1}^{M}c_{k}\Delta d_{k}+\xi\sum_{k=1}^{M}\Delta d_{k} \mathbf{1}(c_{k}\neq 0)\] subject to \[c_{1}\geq c_{2}\geq\cdots\geq c_{M}\geq 0\] \[\Leftrightarrow\ \mbox{minimize} \sum_{k=1}^{M}\Delta R_{k}\bigg{(}c_{k}^{2}-2c_{k}+c_{k}\frac{2 \tau\sigma^{2}}{n}\frac{\Delta d_{k}}{\Delta R_{k}}\bigg{)}+\xi\sum_{k=1}^{M} \Delta d_{k}\mathbf{1}(c_{k}\neq 0)\] subject to \[c_{1}\geq c_{2}\geq\cdots\geq c_{M}\geq 0\] \[\Leftrightarrow\ \mbox{minimize} \sum_{k=1}^{M}\Delta R_{k}\bigg{(}c_{k}-\bigg{(}1-\frac{\tau \sigma^{2}}{n}\frac{\Delta d_{k}}{\Delta R_{k}}\bigg{)}\bigg{)}^{2}+\xi\sum_{ k=1}^{M}\Delta d_{k}\mathbf{1}(c_{k}\neq 0)\] subject to \[c_{1}\geq c_{2}\geq\cdots\geq c_{M}\geq 0\] \[\Leftrightarrow\ \mbox{minimize} \sum_{k=1}^{M}w_{k}(z_{k}-\beta_{k})^{2}+\xi\sum_{k=1}^{M}\Delta d_{k} \mathbf{1}(\beta_{k}\neq 1)\] subject to \[\beta_{1}\leq\beta_{2}\leq\cdots\leq\beta_{M}\leq 1,\]
where \(w_{k}=\Delta R_{k}>0\), \(z_{k}=(\tau\sigma^{2}/n)(\Delta d_{k}/\Delta R_{k})>0\), and \(\xi=(\sigma^{2}/n)((\lambda-\tau)_{+}^{2}/\lambda)\).
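The following short Python sketch illustrates Lemma A.1 and the summation-by-parts step numerically, for nested projection estimators built from an orthonormal basis under the empirical inner product \(\langle u,v\rangle=\frac{1}{n}\sum_{i}u_{i}v_{i}\). The simulated design, dimensions, and variable names below are illustrative assumptions rather than objects from the main text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, M, sigma = 200, 30, 6, 1.0
d = np.sort(rng.choice(np.arange(1, D + 1), size=M, replace=False))   # nested sizes d_1 < ... < d_M
Psi = np.linalg.qr(rng.standard_normal((n, D)))[0] * np.sqrt(n)       # (1/n) Psi^T Psi = I
f = Psi @ rng.standard_normal(D)                                      # some regression function values
y = f + sigma * rng.standard_normal(n)

inner = lambda u, v: u @ v / n                                        # empirical inner product
coef = Psi.T @ y / n                                                  # <y, psi_j>, j = 1..D
mu_hat = [Psi[:, :dk] @ coef[:dk] for dk in d]                        # \hat\mu_k, k = 1..M
R = np.array([inner(y - m, y - m) for m in mu_hat])                   # R_k
R0 = inner(y, y)
dR = -np.diff(np.concatenate(([R0], R)))                              # Delta R_k = R_{k-1} - R_k

c = np.sort(rng.uniform(size=M))[::-1]                                # any c_1 >= ... >= c_M > 0
h = sum((mu_hat[k] - (mu_hat[k - 1] if k > 0 else 0.0)) * c[k] for k in range(M))
print(np.allclose(inner(y - h, y - h), R0 + np.sum(dR * (c ** 2 - 2 * c))))   # Lemma A.1 identity

alpha = np.append(-np.diff(c), c[-1])                                 # alpha_k = c_k - c_{k+1} >= 0
dd = np.diff(np.concatenate(([0], d)))                                # Delta d_k
print(np.allclose(sum(a * m for a, m in zip(alpha, mu_hat)), h))      # summation by parts
print(np.allclose(np.sum(alpha * d), np.sum(c * dd)))                 # sum_k alpha_k d_k = sum_k c_k Delta d_k
```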
Proof of Theorem 5.2.: We first show the formula for the best single model (14). It can be directly obtained from the following lemma, which characterizes the selected index \(\hat{m}\).
**Lemma A.2**.: _Let \(\tilde{m}=\max\left\{k\in\{0,1,\ldots,M\}:\hat{\gamma}_{k}<1/\lambda\right\}\) where \(\{\hat{\gamma}_{k}\}\) is defined in (11). Then, for any \(0\leq j<\tilde{m}\),_
\[R_{\tilde{m}}+\frac{\lambda\sigma^{2}}{n}d_{\tilde{m}}<R_{j}+ \frac{\lambda\sigma^{2}}{n}d_{j}.\]
_and for any \(\tilde{m}<j\leq M\)_
\[R_{\tilde{m}}+\frac{\lambda\sigma^{2}}{n}d_{\tilde{m}}\leq R_{j }+\frac{\lambda\sigma^{2}}{n}d_{j}.\]
_This shows \(\hat{m}=\tilde{m}\)._
Proof of Lemma a.2.: A well-known geometric property of isotonic regression is that the solution is the left slope of the _greatest convex minorant_ of the _cumulative sum diagram_ [2]. Let \(0=s_{0}<s_{1}<\cdots<s_{u}=M\) be the values of \(k\in\{0,1,\ldots,M\}\) for which \(\hat{\gamma}_{k+1}>\hat{\gamma}_{k}\), with the conventions \(\hat{\gamma}_{0}=0\) and \(\hat{\gamma}_{M+1}=\infty\); that is, \(s_{0},s_{1},\ldots,s_{u}\) are the change points of the isotonic solution. Then, according to this property,
\[\hat{\gamma}_{s_{l}}=\frac{\sigma^{2}}{n}\frac{d_{s_{l}}-d_{s_{l-1}}}{R_{s_{l- 1}}-R_{s_{l}}},\quad l=1,2,\ldots,u. \tag{21}\]
The definition of \(\tilde{m}\) implies it should be one of the change points. We assume \(\tilde{m}=s_{a}\). According to (21) and the definition of \(\hat{\gamma}_{s_{a}}\), we know for \(0\leq j<\tilde{m}\),
\[\frac{\sigma^{2}}{n}\frac{d_{s_{a}}-d_{s_{a-1}}}{R_{s_{a-1}}-R_{s_{a}}}=\hat{ \gamma}_{s_{a}}=\max_{1\leq i\leq s_{a}-1}\frac{\sigma^{2}}{n}\frac{d_{s_{a}} -d_{i}}{R_{i}-R_{s_{a}}}\geq\frac{\sigma^{2}}{n}\frac{d_{s_{a}}-d_{j}}{R_{j}- R_{s_{a}}}.\]
Thus,
\[R_{s_{a}}+\frac{\lambda\sigma^{2}}{n}d_{s_{a}} \leq R_{j}+R_{s_{a}}-R_{j}+\frac{\lambda\sigma^{2}}{n}d_{j}+ \lambda\hat{\gamma}_{s_{a}}(R_{j}-R_{s_{a}})\] \[=R_{j}+\frac{\lambda\sigma^{2}}{n}d_{j}+(R_{j}-R_{s_{a}})(\lambda \hat{\gamma}_{s_{a}}-1)\] \[<R_{j}+\frac{\lambda\sigma^{2}}{n}d_{j}.\]
where the last inequality holds since \(\hat{\gamma}_{s_{a}}=\hat{\gamma}_{\tilde{m}}<1/\lambda\). Similarly, for any \(\tilde{m}<j\leq M\),
\[\frac{\sigma^{2}}{n}\frac{d_{s_{a+1}}-d_{s_{a}}}{R_{s_{a}}-R_{s_{a+1}}}=\hat{ \gamma}_{s_{a}+1}=\min_{s_{a}+1\leq i\leq M}\frac{\sigma^{2}}{n}\frac{d_{i}-d _{s_{a}}}{R_{s_{a}}-R_{i}}\leq\frac{\sigma^{2}}{n}\frac{d_{j}-d_{s_{a}}}{R_{s _{a}}-R_{j}}.\]
Thus,
\[R_{s_{a}}+\frac{\lambda\sigma^{2}}{n}d_{s_{a}} \leq R_{j}+R_{s_{a}}-R_{j}+\frac{\lambda\sigma^{2}}{n}d_{j}-\lambda\hat{\gamma}_{s_{a}+1}(R_{s_{a}}-R_{j})\] \[=R_{j}+\frac{\lambda\sigma^{2}}{n}d_{j}+(R_{s_{a}}-R_{j})(1-\lambda\hat{\gamma}_{s_{a}+1})\] \[\leq R_{j}+\frac{\lambda\sigma^{2}}{n}d_{j},\]
where the last inequality holds since \(\hat{\gamma}_{s_{a}+1}=\hat{\gamma}_{\tilde{m}+1}\geq 1/\lambda\) and \(R_{s_{a}}-R_{j}\geq 0\).
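As an informal check of Lemma A.2, the sketch below compares the index minimizing \(R_{k}+\lambda\sigma^{2}d_{k}/n\) with \(\tilde{m}=\max\{k:\hat{\gamma}_{k}<1/\lambda\}\) on simulated nested models. Since (11) is not reproduced in this appendix, the min-max formula coded for \(\hat{\gamma}_{k}\) is inferred from the expressions used in this section and should be read as an assumption; all simulated quantities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, D, M, sigma, lam = 300, 40, 8, 1.0, 2.0
d = np.sort(rng.choice(np.arange(1, D + 1), size=M, replace=False))
Psi = np.linalg.qr(rng.standard_normal((n, D)))[0] * np.sqrt(n)
f = Psi[:, :10] @ (2.0 * rng.standard_normal(10))          # true signal on the first 10 basis functions
y = f + sigma * rng.standard_normal(n)

coef = Psi.T @ y / n
R = np.array([np.sum((y - Psi[:, :dk] @ coef[:dk]) ** 2) / n for dk in d])
R_full = np.concatenate(([np.sum(y ** 2) / n], R))          # [R_0, R_1, ..., R_M]
d_full = np.concatenate(([0], d))                           # [d_0, d_1, ..., d_M]

def gamma_hat(k):
    # assumed (11)-style min-max expression over nested models
    return (sigma ** 2 / n) * min(
        max((d_full[j] - d_full[i - 1]) / (R_full[i - 1] - R_full[j]) for i in range(1, k + 1))
        for j in range(k, M + 1))

gam = np.array([gamma_hat(k) for k in range(1, M + 1)])
m_tilde = max([0] + [k for k in range(1, M + 1) if gam[k - 1] < 1.0 / lam])
m_hat = int(np.argmin(R_full + lam * sigma ** 2 / n * d_full))
print(m_tilde, m_hat, m_tilde == m_hat)
```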
Next we show (13). According to Lemma 5.1, we can focus on program (10). We need the following lemmas: Lemma A.3 solves the weighted isotonic regression problem with bounded constraints and Lemma A.4 paves the way for establishing almost sure uniqueness.
**Lemma A.3**.: _Let \(w_{k}>0\) (\(k=1,2,\ldots,M\)). Then the solution to the program_
\[\begin{split}\text{minimize}&\sum_{k=1}^{M}w_{k} \left(z_{k}-\mu_{k}\right)^{2}\\ \text{subject to}& a\leq\mu_{1}\leq\cdots\leq\mu_{M} \leq b,\end{split} \tag{22}\]
_is \(\tilde{\mu}_{k}=\phi_{a,b}\Big{(}\max\limits_{1\leq i\leq k}\min\limits_{k\leq j\leq M}\frac{\sum_{l=i}^{j}w_{l}z_{l}}{\sum_{l=i}^{j}w_{l}}\Big{)}\) for \(k=1,2,\ldots,M\), where \(\phi_{a,b}(z)=\min\{b,\max\{a,z\}\}\) is the clip function for \(z\) at \(a\) and \(b\). In particular, when \(a=0\) and \(b=\infty\), we have \(\tilde{\mu}_{k}=\Big{(}\max\limits_{1\leq i\leq k}\min\limits_{k\leq j\leq M}\frac{\sum_{l=i}^{j}w_{l}z_{l}}{\sum_{l=i}^{j}w_{l}}\Big{)}_{+}\) for \(k=1,2,\ldots,M\)._
Proof of Lemma a.3.: We first consider another program
\[\begin{split}\text{minimize}&\sum_{k=1}^{M}w_{k} \left(z_{k}-\mu_{k}\right)^{2}\\ \text{subject to}&\mu_{1}\leq\mu_{2}\leq\cdots\leq\mu_ {M},\end{split} \tag{23}\]
A well-known result (Barlow and Brunk, 1972, Equation 2.8) shows the solution of (23), denoted by \(\mathbf{\mu}^{*}\), is \(\mu_{k}^{*}=\max\limits_{1\leq i\leq k}\min\limits_{k\leq j\leq M}\frac{\sum_{l=i}^{j}w_{l}z_{l}}{\sum_{l=i}^{j}w_{l}}\) for \(1\leq k\leq M\). Let \(\mathbf{\eta}=(\eta_{0},\eta_{1},\ldots,\eta_{M})\) and \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{M-1})\) with each entry of \(\mathbf{\eta}\) and \(\lambda\) nonnegative. We denote the Lagrangians of (22) and (23) respectively by
\[L_{1}(\mathbf{\mu},\mathbf{\eta})=\sum_{k=1}^{M}w_{k}(z_{k}-\mu_{k})^{2}-\eta_{0}(\mu_ {1}-a)+\sum_{k=1}^{M-1}\eta_{k}(\mu_{k}-\mu_{k+1})+\eta_{M}(\mu_{M}-b)\]
and
\[L_{2}(\mathbf{\mu},\lambda)=\sum_{k=1}^{M}w_{k}(z_{k}-\mu_{k})^{2}+\sum_{k=1}^{M- 1}\lambda_{k}(\mu_{k}-\mu_{k+1}).\]
Because the program (23) is convex, there exists \(\lambda^{*}=(\lambda_{1}^{*},\lambda_{2}^{*},\ldots,\lambda_{M-1}^{*})\) such that \((\mathbf{\mu}^{*},\lambda^{*})\) satisfies the Karush-Kuhn-Tucker (KKT) conditions, i.e.,
\[\nabla L_{2}(\mathbf{\mu}^{*},\lambda^{*})=0 \Rightarrow-2w_{k}(z_{k}-\mu_{k}^{*})-\lambda_{k-1}^{*}+\lambda_ {k}^{*}=0,\quad k=1,2,\ldots,M \tag{24}\] \[\lambda^{*}\geq 0,\] \[\mu_{1}^{*}\leq\mu_{2}^{*}\leq\cdots\leq\mu_{M}^{*},\] \[\lambda_{k}^{*}(\mu_{k}^{*}-\mu_{k+1}^{*})=0,\quad k=1,2,\ldots,M-1, \tag{25}\]
where we let \(\lambda_{0}^{*}=\lambda_{M}^{*}=0\). Let \(m_{1}=\max\{k\in[M]:\mu_{k}^{*}<a\}\) and \(m_{2}=\max\{k\in[M]:\mu_{k}^{*}\leq b\}\). If \(\mu_{k}^{*}\geq a\) for all \(k=1,2,\ldots,M\), then let \(m_{1}=0\). Since \(\{\mu_{k}^{*}\}\) is nondecreasing and \(a\leq b\), we have \(m_{1}\leq m_{2}\). By (25) and the facts that \(\mu_{m_{1}+1}^{*}-\mu_{m_{1}}^{*}>0\) and \(\mu_{m_{2}+1}^{*}-\mu_{m_{2}}^{*}>0\), we know \(\lambda_{m_{1}}^{*}=\lambda_{m_{2}}^{*}=0\). [Here we define \(\mu_{M+1}^{*}=\infty\).] Now we construct an \(\mathbf{\eta}^{*}=(\eta_{0}^{*},\eta_{1}^{*},\ldots,\eta_{M}^{*})\) satisfying
\[\eta_{k}^{*}=\begin{cases}2w_{k}z_{k}-2w_{k}b+\eta_{k-1}^{*}&k=m_{2}+1,m_{2}+ 2,\ldots,M\\ \lambda_{k}^{*}&k=m_{1},m_{1}+1,\ldots,m_{2}\\ -2w_{k+1}z_{k+1}+2w_{k+1}a+\eta_{k+1}^{*}&k=0,1,\ldots,m_{1}-1.\end{cases}\]
In what follows, we verify that \((\mathbf{\tilde{\mu}},\mathbf{\eta}^{*})\) satisfies the KKT conditions for (22). Since \(\mu_{k}^{*}\) is nondecreasing, it is easy to see \(a\leq\tilde{\mu}_{1}\leq\cdots\leq\tilde{\mu}_{M}\leq b\), which confirms the primal feasibility. Note that
\(\eta_{k}^{*}(\tilde{\mu}_{k}-\tilde{\mu}_{k+1})=\lambda_{k}^{*}(\mu_{k}^{*}-\mu_{k+1}^{*})=0\) for \(m_{2}>k>m_{1}\). Since \(\eta_{m_{1}}^{*}=\lambda_{m_{1}}^{*}=\eta_{m_{2}}^{*}=\lambda_{m_{2}}^{*}=0\) and \(\tilde{\mu}_{k}=a\) (\(k\leq m_{1}\)), and \(\tilde{\mu}_{k}=b\) (\(k>m_{2}\)), we also know \(\eta_{k}^{*}(\tilde{\mu}_{k}-\tilde{\mu}_{k+1})=0\) for \(k\leq m_{1}\) and \(k\geq m_{2}\). These confirm the complementary slackness. Furthermore, for \(m_{2}\geq k>m_{1}\),
\[\frac{\partial}{\partial\mu_{k}}L_{1}(\tilde{\mathbf{\mu}},\mathbf{\eta}^{*})=-2w_{k} (z_{k}-\tilde{\mu}_{k})-\eta_{k-1}^{*}+\eta_{k}^{*}=-2w_{k}(z_{k}-\mu_{k}^{*}) -\lambda_{k-1}^{*}+\lambda_{k}^{*}=0,\]
and for \(k\leq m_{1}\),
\[\frac{\partial}{\partial\mu_{k}}L_{1}(\tilde{\mathbf{\mu}},\mathbf{\eta}^{*})=-2w_{k} (z_{k}-\tilde{\mu}_{k})-\eta_{k-1}^{*}+\eta_{k}^{*}=-2w_{k}z_{k}+2w_{k}a-\eta_ {k-1}^{*}+\eta_{k}^{*}=0,\]
where we use the definition of \(\eta_{k}^{*}\) and \(\tilde{\mu}_{k}=a\) (\(k\leq m_{1}\)) for the above equality. Similarly, for \(k>m_{2}\), we also have \(\frac{\partial}{\partial\mu_{k}}L_{1}(\tilde{\mathbf{\mu}},\mathbf{\eta}^{*})=0\). These lead to \(\nabla L_{1}(\tilde{\mathbf{\mu}},\mathbf{\eta}^{*})=0\). Lastly, to show \(\mathbf{\eta}^{*}\geq 0\), it suffices to show \(\eta_{k}^{*}\geq 0\) for \(k\leq m_{1}-1\) and \(k\geq m_{2}+1\) since \(\eta_{k}^{*}=\lambda_{k}^{*}\geq 0\) (\(m_{2}\geq k\geq m_{1}\)). Note that
\[\eta_{m_{1}-1}^{*}=-2w_{m_{1}}z_{m_{1}}+2w_{m_{1}}a+\eta_{m_{1}}^{*}\geq-2w_{m _{1}}z_{m_{1}}+\eta_{m_{1}}^{*}+2w_{m_{1}}\mu_{m_{1}}^{*}=\lambda_{m_{1}-1}^{* }\geq 0, \tag{26}\]
where we use facts that \(\mu_{m_{1}}^{*}<a\) and \(\eta_{m_{1}}^{*}=\lambda_{m_{1}}^{*}\), and the stationary condition (24) for program (23). In addition,
\[\eta_{m_{1}-2}^{*}=-2w_{m_{1}-1}z_{m_{1}-1}+2w_{m_{1}-1}a+\eta_{m_{1}-1}^{*} \geq-2w_{m_{1}-1}z_{m_{1}-1}+\lambda_{m_{1}-1}^{*}+2w_{m_{1}-1}\mu_{m_{1}-1}^{ *}=\lambda_{m_{1}-2}^{*}\geq 0,\]
where for the first inequality we use (26) and the fact that \(\mu_{m_{1}-1}^{*}\leq\mu_{m_{1}}^{*}<a\). For the last equality we use (24) again. Therefore, with an induction argument we can establish \(\eta_{k}^{*}\geq 0\) for \(k\leq m_{1}-1\). Similarly, we know \(\eta_{k}^{*}\geq 0\) for \(k\geq m_{2}+1\), which implies \(\mathbf{\eta}^{*}\geq 0\). In conclusion, we show that \((\tilde{\mathbf{\mu}},\mathbf{\eta}^{*})\) satisfies the KKT conditions for program (22). Since (22) is a convex program, we know \(\tilde{\mathbf{\mu}}\) and \(\mathbf{\eta}^{*}\) must be its primal and dual optimizers, which completes the proof.
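The following sketch checks Lemma A.3 numerically on random instances: the clipped min-max formula is compared against a generic constrained quadratic solver. The solver, tolerances, and data are illustrative assumptions; this is not part of the proof.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
M, a, b = 8, 0.2, 0.9
w = rng.uniform(0.5, 2.0, size=M)
z = rng.uniform(0.0, 1.5, size=M)

def minmax_clipped(w, z, a, b):
    # clipped min-max formula from Lemma A.3
    M = len(w)
    mu = np.empty(M)
    for k in range(M):
        mu[k] = max(min(np.sum(w[i:j + 1] * z[i:j + 1]) / np.sum(w[i:j + 1])
                        for j in range(k, M)) for i in range(k + 1))
    return np.clip(mu, a, b)

mu_formula = minmax_clipped(w, z, a, b)

obj = lambda mu: np.sum(w * (z - mu) ** 2)
cons = [{"type": "ineq", "fun": (lambda mu, k=k: mu[k + 1] - mu[k])} for k in range(M - 1)]
res = minimize(obj, x0=np.full(M, (a + b) / 2), bounds=[(a, b)] * M,
               constraints=cons, method="SLSQP")
print(np.max(np.abs(mu_formula - res.x)))       # ~0 up to solver tolerance
print(obj(mu_formula) <= obj(res.x) + 1e-8)     # the formula attains the optimal objective
```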
**Lemma A.4**.: _Given any constant \(c>0\), the probability that there exists an index \(1\leq k\leq M\) such that \(\hat{\gamma}_{k}=c\) is zero._
Proof of Lemma a.4.: We show that given any constant \(c>0\), the probability that there exist two indices \(0\leq a<b\leq M\) such that \(\frac{\sigma^{2}}{n}\frac{d_{b}-d_{a}}{R_{a}-R_{b}}=c\) is zero. To this end, given indices \(a\) and \(b\), note that \((n/\sigma^{2})(R_{a}-R_{b})\) follows a noncentral chi-squared distribution with \(d_{b}-d_{a}\) degrees of freedom and noncentrality parameter \((n/\sigma^{2})(\|\mathbf{f}-\mathbf{f}_{a}\|^{2}-\|\mathbf{f}-\mathbf{f}_{b}\|^{2})\). Since this distribution is continuous, we have \(\mathbb{P}(\frac{\sigma^{2}}{n}\frac{d_{b}-d_{a}}{R_{a}-R_{b}}=c)=0\). The lemma can thus be obtained by noticing that a countable union of measure-zero sets has measure zero.
Now we start to prove (13). We begin by considering the case that \(\lambda\leq\tau\). In this case, program (10) is reduced to the bounded isotonic regression program
minimize \[\sum_{k=1}^{M}\Delta R_{k}\Big{(}\frac{\tau\sigma^{2}}{n}\frac{ \Delta d_{k}}{\Delta R_{k}}-\beta_{k}\Big{)}^{2}\] subject to \[\beta_{1}\leq\beta_{2}\leq\cdots\leq\beta_{M}\leq 1.\]
From Lemma A.3, we know its solution is \(\{\tau\hat{\gamma}_{k}\mathbf{1}(\hat{\gamma}_{k}\leq 1/\tau)+\mathbf{1}(\hat{ \gamma}_{k}>1/\tau),\ k=1,2,\ldots,M\}\), where \(\hat{\gamma}_{k}\) is defined in (11). Then the solution to program (6) is
\[\hat{\alpha}_{k} =(1-\tau\hat{\gamma}_{k})\mathbf{1}(\hat{\gamma}_{k}\leq 1/\tau)-(1- \tau\hat{\gamma}_{k+1})\mathbf{1}(\hat{\gamma}_{k+1}\leq 1/\tau)\] \[=(1-\tau\hat{\gamma}_{k})_{+}-(1-\tau\hat{\gamma}_{k+1})_{+},\quad k =1,2,\ldots,M.\]
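For the case \(\lambda\leq\tau\), the sketch below computes \(\hat{\beta}\) as the bounded weighted isotonic fit of \(z_{k}\) (equivalently, \(\tau\hat{\gamma}_{k}\) clipped at \(1\)) and recovers \(\hat{\alpha}\) by differencing with \(\hat{\beta}_{M+1}=1\). It assumes scikit-learn is available; the \((\Delta R_{k},\Delta d_{k})\) sequences below are synthetic placeholders.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)
M, n, sigma, tau = 8, 300, 1.0, 1.0
dR = rng.uniform(0.05, 0.5, size=M)                   # Delta R_k > 0 (synthetic)
dd = rng.integers(1, 5, size=M)                       # Delta d_k (synthetic)
z = tau * sigma ** 2 / n * dd / dR                    # z_k
w = dR                                                # weights Delta R_k

iso = IsotonicRegression(increasing=True, y_max=1.0)  # isotonic fit with the upper bound beta_M <= 1
iso.fit(np.arange(M), z, sample_weight=w)
beta_hat = iso.predict(np.arange(M))

beta_ext = np.append(beta_hat, 1.0)                   # beta_{M+1} = 1
alpha_hat = np.diff(beta_ext)                         # alpha_k = beta_{k+1} - beta_k = (1-tau*gamma_k)_+ - (1-tau*gamma_{k+1})_+
print(np.all(alpha_hat >= -1e-12), alpha_hat.sum())   # nonnegative, sums to 1 - beta_1 <= 1
```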
We next consider the case that \(\lambda>\tau\). Recall that in the proof of Lemma A.2, we assume \(s_{0}=0,s_{1},\ldots,s_{u}=M\) correspond to the change points of \(\hat{\gamma}\) and also assume \(\hat{m}=\tilde{m}=s_{a}\) where \(0\leq a\leq u\). Let \(\hat{\mathbf{\beta}}\) be the solution to program (10) and suppose \(r_{q}=\max\left\{k\in\{0,1,\ldots,M\}:\hat{\beta}_{k}<1\right\}\). Note that we want to show almost surely \(\hat{\alpha}_{k}=(1-\tau\hat{\gamma}_{k})\mathbf{1}(\hat{\gamma}_{k}<1/\lambda)-(1-\tau\hat{\gamma}_{k+1})\mathbf{1}(\hat{\gamma}_{k+1}<1/\lambda)\), which implies that \(\hat{\alpha}_{k}=0\) and \(\hat{\beta}_{k}=1\) when \(k>s_{a}\). Hence, we first show that almost surely, \(r_{q}=\hat{m}=s_{a}\). The general idea of the proof is that if \(r_{q}\neq s_{a}\), we can always construct an alternative valid solution with a smaller objective value, thereby establishing a contradiction.
Suppose \(r_{q}>s_{a}\). Note that when given \(r_{q}\), program (10) is reduced to the problem
\[\begin{split}\text{minimize}&\sum_{k=1}^{r_{q}} \Delta R_{k}\Big{(}\frac{\tau\sigma^{2}}{n}\frac{\Delta d_{k}}{\Delta R_{k}}- \beta_{k}\Big{)}^{2}+\frac{(\lambda-\tau)^{2}}{\lambda}\frac{\sigma^{2}}{n} \sum_{k=1}^{r_{q}}\Delta d_{k}\mathbf{1}(\beta_{k}\neq 1)\\ \text{subject to}&\beta_{1}\leq\beta_{2}\leq\cdots \leq\beta_{r_{q}}\leq 1.\end{split} \tag{27}\]
Suppose the change points that correspond to its solution \(\hat{\mathbf{\beta}}\) are \(r_{0}=0,r_{1},\ldots,r_{q}\) and also suppose \(s_{b-1}<r_{q}\leq s_{b}\) where \(b>a\). The solution \(\hat{\mathbf{\beta}}\) has the form \(\hat{\beta}_{k}=\frac{\tau\sigma^{2}}{n}\frac{d_{r_{l}}-d_{r_{l-1}}}{R_{r_{l-1}}-R_{r_{l}}}\) when \(r_{l-1}<k\leq r_{l}\) (\(l\leq q\)). Note that for \(s_{b-1}<k\leq r_{q}\),
\[\hat{\beta}_{k}=\frac{\tau\sigma^{2}}{n}\min_{k\leq j\leq r_{q}}\max_{1\leq i \leq k}\frac{d_{j}-d_{i-1}}{R_{i-1}-R_{j}}\geq\tau\hat{\gamma}_{k}\geq\tau\hat {\gamma}_{s_{a}+1}\geq\frac{\tau}{\lambda}\]
where we use the fact that \(k\geq s_{b-1}+1\geq s_{a}+1\). Thus, leveraging Lemma A.4, we conclude almost surely \(\tau/\lambda<\hat{\beta}_{r_{q}}<1\). Let \(\Delta R_{r_{l}}=R_{r_{l-1}}-R_{r_{l}}\) and \(\Delta d_{r_{l}}=d_{r_{l}}-d_{r_{l-1}}\) for any \(l\leq q\). Note that
\[\begin{split}&\sum_{k=r_{q-1}+1}^{r_{q}}\Delta R_{k}\Big{(}\frac{ \tau\sigma^{2}}{n}\frac{\Delta d_{k}}{\Delta R_{k}}-\hat{\beta}_{k}\Big{)}^{2}+ \frac{(\lambda-\tau)^{2}}{\lambda}\frac{\sigma^{2}}{n}(d_{r_{q}}-d_{r_{q-1}}) -\sum_{k=r_{q-1}+1}^{r_{q}}\Delta R_{k}\Big{(}\frac{\tau\sigma^{2}}{n}\frac{ \Delta d_{k}}{\Delta R_{k}}-1\Big{)}^{2}\\ &=\ \Delta R_{r_{q}}\Big{(}-\hat{\beta}_{r_{q}}^{2}+\frac{( \lambda-\tau)^{2}}{\tau\lambda}\hat{\beta}_{r_{q}}-1+2\hat{\beta}_{r_{q}}\Big{)} \\ &=\ -\frac{\Delta R_{r_{q}}}{\tau\lambda}(\tau\hat{\beta}_{r_{q}}- \lambda)(\lambda\hat{\beta}_{r_{q}}-\tau)>0.\end{split} \tag{28}\]
This means if we change \(\hat{\beta}_{k}\) to \(1\) for any \(r_{q-1}<k\leq r_{q}\), then we will get a smaller objective value of program (10). This establishes a contradiction, leading to the conclusion that \(r_{q}\leq s_{a}\).
Suppose \(r_{q}<s_{a}\). Again, we assume \(s_{b-1}<r_{q}\leq s_{b}\) where \(b\leq a\). If there exists \(b<a\) such that \(r_{q}=s_{b}\), then for any \(s_{b}+1\leq k\leq s_{b+1}\), we consider changing \(\hat{\beta}_{k}\) from \(1\) to \(\tilde{\beta}=\frac{\tau\sigma^{2}}{n}\frac{d_{s_{b+1}}-d_{s_{b}}}{R_{s_{b}}- R_{b+1}}\). Following a similar inequality as (28), we know
\[\begin{split}&\sum_{k=s_{b+1}}^{s_{b+1}}\Delta R_{k}\Big{(}\frac{ \tau\sigma^{2}}{n}\frac{\Delta d_{k}}{\Delta R_{k}}-\tilde{\beta}\Big{)}^{2}+ \frac{(\lambda-\tau)^{2}}{\lambda}\frac{\sigma^{2}}{n}(d_{s_{b+1}}-d_{s_{b}}) -\sum_{k=s_{b}+1}^{s_{b+1}}\Delta R_{k}\Big{(}\frac{\tau\sigma^{2}}{n}\frac{ \Delta d_{k}}{\Delta R_{k}}-1\Big{)}^{2}\\ &=\ -\frac{R_{s_{b}}-R_{s_{b+1}}}{\tau\lambda}(\tau\tilde{\beta}-\lambda)( \lambda\tilde{\beta}-\tau)<0,\end{split}\]
where in the last inequality we use the fact that \(\tilde{\beta}<\tau/\lambda\) since \(b<a\). This means changing \(\hat{\beta}_{k}\) (\(s_{b}+1\leq k\leq s_{b+1}\)) from 1 to \(\tilde{\beta}\) can get a smaller objective value, which leads to a contradiction.
Thus, we can assume \(s_{b-1}<r_{q}<s_{b}\) where \(b\leq a\). When \(\hat{\beta}_{r_{q}}>\tau/\lambda\), with a similar argument as above, we can change \(\hat{\beta}_{k}\) to 1 for any \(r_{q-1}<k\leq r_{q}\) and get a smaller objective value of program (10). We thus only need to focus on the case of \(\hat{\beta}_{r_{q}}\leq\tau/\lambda\), and almost surely we can assume \(\hat{\beta}_{r_{q}}<\tau/\lambda\). For any \(k\leq s_{b}\) we let \(s_{c-1}<k\leq s_{c}\) where \(c\leq b\). Then, the solution \(\hat{\beta}_{k}\) to program (27) satisfies
\[\tau\hat{\gamma}_{k}=\frac{\tau\sigma^{2}}{n}\min_{k\leq j\leq M}\max_{1\leq i \leq k}\frac{d_{j}-d_{i-1}}{R_{i-1}-R_{j}}\leq\hat{\beta}_{k}=\frac{\tau\sigma ^{2}}{n}\min_{k\leq j\leq r_{q}}\max_{1\leq i\leq k}\frac{d_{j}-d_{i-1}}{R_{i -1}-R_{j}}\leq\frac{\tau\sigma^{2}}{n}\max_{1\leq i\leq k}\frac{d_{s_{c}}-d_{i -1}}{R_{i-1}-R_{s_{c}}}=\tau\hat{\gamma}_{k}, \tag{29}\]
which means \(\hat{\beta}_{k}=\tau\hat{\gamma}_{k}\) for any \(k\leq s_{b-1}\) and also \(r_{l}=s_{l}\) for any \(l\leq b-1\). Let \(\tilde{\beta}=\frac{\tau\sigma^{2}}{n}\frac{d_{s_{b}}-d_{s_{b-1}}}{R_{s_{b-1} }-R_{s_{b}}}\). We next show that if we change \(\hat{\beta}_{k}\) to \(\tilde{\beta}\) for any \(s_{b-1}<k\leq s_{b}\), then we obtain a smaller objective value of program (10). It is worth noting that \(\tilde{\beta}\geq\hat{\beta}_{k}\) for any \(k\leq s_{b-1}\) since they are the solutions of isotonic regression, and \(\tilde{\beta}<\tau/\lambda<1\). Hence, the replacement of \(\hat{\beta}_{k}\) (\(s_{b-1}<k\leq s_{b}\)) with \(\tilde{\beta}\) is indeed valid. Thus, it suffices to show
\[\begin{split}&\sum_{k=r_{b-1}+1=s_{b-1}+1}^{r_{q}}w_{k}(z_{k}- \hat{\beta}_{k})^{2}+\frac{(\lambda-\tau)^{2}}{\lambda}\frac{\sigma^{2}}{n}(d_ {r_{q}}-d_{r_{b-1}})+\sum_{k=r_{q}+1}^{s_{b}}w_{k}(z_{k}-1)^{2}\\ &>\ \sum_{k=r_{b-1}+1=s_{b-1}+1}^{s_{b}}w_{k}(z_{k}-\tilde{\beta})^{2}+ \frac{(\lambda-\tau)^{2}}{\lambda}\frac{\sigma^{2}}{n}(d_{s_{b}}-d_{s_{b-1}}) \end{split} \tag{30}\]
where \(w_{k}=\Delta R_{k}>0\) and \(z_{k}=(\tau\sigma^{2}/n)(\Delta d_{k}/\Delta R_{k})>0\), and we have constraints \(\tilde{\beta}<\hat{\beta}_{r_{b}}<\cdots<\hat{\beta}_{r_{q}}<\tau/\lambda\). By taking \(\hat{\beta}_{k}=\frac{\tau\sigma^{2}}{n}\frac{d_{r_{l}}-d_{r_{l-1}}}{R_{r_{l-1}}-R_{r_{l}}}\) when \(r_{l-1}<k\leq r_{l}\) (\(l\leq q\)) and \(\tilde{\beta}\) into (30), we know (30) is equivalent to
\[\sum_{u=b}^{q}\frac{(\sum_{k=r_{u-1}+1}^{r_{u}}w_{k}z_{k})^{2}}{\sum_{k=r_{u-1 }+1}^{r_{u}}w_{k}}\leq\frac{(\sum_{k=s_{b-1}+1}^{s_{b}}w_{k}z_{k})^{2}}{\sum_{k= s_{b-1}+1}^{s_{b}}w_{k}}+\sum_{k=r_{q}+1}^{s_{b}}w_{k}-\frac{\lambda^{2}+\tau^{2}}{ \lambda\tau}\sum_{k=r_{q}+1}^{s_{b}}w_{k}z_{k}. \tag{31}\]
For simplicity of notation, let \(p=q-b\), \(t_{i}=\frac{\sum_{k=r_{b+i-1}+1}^{r_{b+i}}w_{k}z_{k}}{\sum_{k=r_{b+i-1}+1}^{r_{b+i}}w_{k}}\) and \(c_{i}=\frac{\sum_{k=r_{b+i-1}+1}^{r_{b+i}}w_{k}}{\sum_{k=r_{b-1}+1}^{s_{b}}w_{k}}\) where \(i=0,1,\ldots,p\). In addition, let \(\tilde{b}=\frac{\sum_{k=r_{q}+1}^{s_{b}}w_{k}z_{k}}{\sum_{k=r_{q}+1}^{s_{b}}w_{k}}\) and \(\tilde{c}=\frac{\sum_{k=r_{q}+1}^{s_{b}}w_{k}}{\sum_{k=r_{b-1}+1}^{s_{b}}w_{k}}\). Then (31) becomes equivalent to
\[c_{0}t_{0}^{2}+c_{1}t_{1}^{2}+\cdots+c_{p}t_{p}^{2}\leq(c_{0}t_{0}+c_{1}t_{1}+ \cdots+c_{p}t_{p}+\tilde{c}\tilde{b})^{2}+\tilde{c}-\frac{\lambda^{2}+\tau^{2} }{\lambda\tau}\tilde{b}\tilde{c}, \tag{32}\]
and we have constraints \(\tilde{\beta}<\hat{\beta}_{r_{b}}<\cdots<\hat{\beta}_{r_{q}}<\tau/\lambda\) equivalent to \(c_{0}+c_{1}+\cdots+c_{p}+\tilde{c}=1\) and
\[c_{0}t_{0}+c_{1}t_{1}+\cdots+c_{p}t_{p}+\tilde{c}\tilde{b}<t_{0}<\cdots<t_{p} \leq\tau/\lambda. \tag{33}\]
Note that since \(r_{q}<s_{b}\), we have \(0<\tilde{c}\leq 1\). Then (33) can be rewritten as
\[\tilde{b}<\frac{t_{0}-(c_{0}t_{0}+c_{1}t_{1}+\cdots+c_{p}t_{p})}{\tilde{c}}\quad \text{and}\quad t_{0}<t_{1}<\cdots<t_{p}<\tau/\lambda.\]
Note that we can write the right hand side of (32) as a quadratic function of \(\tilde{b}\), which we denote by \(g(\tilde{b})\). Then,
\[g(\tilde{b})=\tilde{c}^{2}\tilde{b}^{2}+2\tilde{c}(c_{0}t_{0}+c_{1}t_{1}+\cdots+c _{p}t_{p}-\frac{\lambda^{2}+\tau^{2}}{2\lambda\tau})\tilde{b}+(c_{0}t_{0}+c_{1} t_{1}+\cdots+c_{p}t_{p})^{2}+\tilde{c}.\]
Since \(t_{0}<\tau/\lambda<1\leq\frac{\lambda^{2}+\tau^{2}}{2\lambda\tau}\) and \(\tilde{b}\leq\frac{t_{0}-(c_{0}t_{0}+c_{1}t_{1}+\cdots+c_{p}t_{p})}{\tilde{c}}\), we know \(g(\tilde{b})\geq g\Big{(}\frac{t_{0}-(c_{0}t_{0}+c_{1}t_{1}+\cdots+c_{p}t_{p}) }{\tilde{c}}\Big{)}\). Then to establish (32), it suffices to show
\[c_{0}t_{0}^{2}+c_{1}t_{1}^{2}+\cdots+c_{p}t_{p}^{2} <g\Big{(}\frac{t_{0}-(c_{0}t_{0}+c_{1}t_{1}+\cdots+c_{p}t_{p})}{ \tilde{c}}\Big{)}\] \[=t_{0}^{2}+\tilde{c}-\frac{\lambda^{2}+\tau^{2}}{\lambda\tau}(t_{ 0}-c_{0}t_{0}-c_{1}t_{1}-\cdots-c_{p}t_{p}),\]
which is equivalent to
\[\sum_{j=1}^{p}c_{j}h(t_{j})<(1-c_{0})t_{0}^{2}-\frac{\lambda^{2}+\tau^{2}}{ \lambda\tau}(1-c_{0})t_{0}+\tilde{c},\]
where we let \(h(z)=z^{2}-\frac{\lambda^{2}+\tau^{2}}{\lambda\tau}z\). Note that \(h(z)\) is a quadratic function that is decreasing on \((-\infty,\frac{\lambda^{2}+\tau^{2}}{2\lambda\tau}]\), and \(t_{0}<t_{j}<\tau/\lambda\leq\frac{\lambda^{2}+\tau^{2}}{2\lambda\tau}\) for any \(j=1,2,\ldots,p\). Then
\[\sum_{j=1}^{p}c_{j}h(t_{j})<\sum_{j=1}^{p}c_{j}h(t_{0}).\]
In addition, note that
\[\sum_{j=1}^{p}c_{j}h(t_{0})<(1-c_{0})t_{0}^{2}-\frac{\lambda^{2}+\tau^{2}}{ \lambda\tau}(1-c_{0})t_{0}+\tilde{c}\]
is equivalent to
\[\frac{\tilde{c}}{\lambda\tau}(\tau t_{0}-\lambda)(\lambda t_{0}-\tau)>0.\]
This is satisfied by the constraint that \(t_{0}<\tau/\lambda\), and thus we prove (32). Overall, we know changing \(\hat{\beta}_{k}\) to \(\tilde{\beta}\) for any \(s_{b-1}<k\leq s_{b}\) can lead to a smaller objective value of program (10), which gives a contradiction. Therefore, we proved almost surely \(r_{q}=s_{a}\). In this case, with a similar argument as (29), we know \(\hat{\beta}_{k}=\tau\hat{\gamma}_{k}\) for any \(k\leq r_{q}\). Thus,
\[\hat{\alpha}_{k}=(1-\tau\hat{\gamma}_{k})\mathbf{1}(\hat{\gamma}_{k}<1/ \lambda)-(1-\tau\hat{\gamma}_{k+1})\mathbf{1}(\hat{\gamma}_{k+1}<1/\lambda), \quad k=1,2,\ldots,M,\]
and this is the almost surely unique solution.
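The closed form of Theorem 5.2 is easy to evaluate; the sketch below does so for a simulated nested family with \(\lambda>\tau\). As before, the min-max expression used for \(\hat{\gamma}_{k}\) and the convention \(\hat{\gamma}_{M+1}=\infty\) are assumptions inferred from this appendix, and all simulated quantities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, D, M, sigma, tau, lam = 400, 50, 10, 1.0, 1.0, 2.0       # lambda > tau branch
d = np.sort(rng.choice(np.arange(1, D + 1), size=M, replace=False))
Psi = np.linalg.qr(rng.standard_normal((n, D)))[0] * np.sqrt(n)
y = Psi[:, :8] @ rng.standard_normal(8) + sigma * rng.standard_normal(n)

coef = Psi.T @ y / n
R = np.array([np.sum((y - Psi[:, :dk] @ coef[:dk]) ** 2) / n for dk in d])
R_full, d_full = np.concatenate(([np.sum(y ** 2) / n], R)), np.concatenate(([0], d))

gam = np.array([(sigma ** 2 / n) * min(
    max((d_full[j] - d_full[i - 1]) / (R_full[i - 1] - R_full[j]) for i in range(1, k + 1))
    for j in range(k, M + 1)) for k in range(1, M + 1)])
gam_ext = np.append(gam, np.inf)                            # assume gamma_hat_{M+1} = infinity

keep = gam_ext < 1.0 / lam                                  # indicator gamma_hat_k < 1/lambda
alpha_hat = (np.where(keep[:-1], 1.0 - tau * gam_ext[:-1], 0.0)
             - np.where(keep[1:], 1.0 - tau * gam_ext[1:], 0.0))
print(np.round(alpha_hat, 4))
print(np.all(alpha_hat >= -1e-12), alpha_hat.sum() <= 1 + 1e-12)   # valid stacking weights
```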
Proof of Theorem 4.1.: To prove this theorem, we need the following lemma, which is an extension of Stein's Lemma for discontinuous functions.
**Lemma A.5** (Corollary 1 in Tibshirani (2015)).: _Let \(X\sim N(\mu,\sigma^{2})\), where \(\mu\in\mathbb{R}\) and \(\sigma>0\). Let \(h\) be a piecewise absolutely continuous function, with discontinuity set \(\{\delta_{1},\delta_{2},\ldots,\delta_{m}\}\), and derivative \(h^{\prime}\) satisfying \(\mathbb{E}[|h^{\prime}(X)|]<\infty\). Then,_
\[\frac{1}{\sigma^{2}}\mathbb{E}[(X-\mu)h(X)]=\mathbb{E}[h^{\prime}(X)]+\frac{1 }{\sigma}\sum_{k=1}^{m}\phi\Big{(}\frac{\delta_{k}-\mu}{\sigma}\Big{)}[h( \delta_{k})_{+}-h(\delta_{k})_{-}],\]
_where \(\phi\) is the standard normal density function, and \(h(x)_{+}=\lim_{t\downarrow x}h(t)\) and \(h(x)_{-}=\lim_{t\uparrow x}h(t)\)._
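To make Lemma A.5 concrete, the following Monte Carlo sketch verifies the identity for the hard-threshold map \(h(x)=x\,\mathbf{1}(|x|>t)\), whose discontinuities at \(\pm t\) both have jump size \(t\). The example is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, t, N = 0.7, 1.3, 1.0, 2_000_000
X = rng.normal(mu, sigma, size=N)
h = X * (np.abs(X) > t)                                     # h(x) = x * 1(|x| > t)

lhs = np.mean((X - mu) * h) / sigma ** 2                    # (1/sigma^2) E[(X - mu) h(X)]
phi = lambda u: np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)    # standard normal density
rhs = np.mean(np.abs(X) > t) + (t * phi((t - mu) / sigma) + t * phi((-t - mu) / sigma)) / sigma
print(lhs, rhs)                                             # close up to Monte Carlo error
```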
We first characterize \(\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}_{\mathrm{best}}\|^{2}]\). By the usual population risk decomposition formula,
\[\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}_{\mathrm{best}}\|^{2}]=\mathbb{E}[ \|\mathbf{y}-\mathbf{\hat{f}}_{\mathrm{best}}\|^{2}]-\sigma^{2}+\frac{2 \sigma^{2}}{n}\mathrm{df}(\hat{f}_{\mathrm{best}}).\]
For simplicity of notation, we let \(\tilde{\beta}_{k}=\mathbf{1}(\hat{\gamma}_{k}\geq 1/\lambda)\), where \(k=1,2,\ldots,M\). Then \(\hat{f}_{\mathrm{best}}(\mathbf{x})=\sum_{k=1}^{M}(\hat{\mu}_{k}(\mathbf{x})- \hat{\mu}_{k-1}(\mathbf{x}))(1-\tilde{\beta}_{k})\). To compute \(\mathrm{df}(\hat{f}_{\mathrm{best}})\), note that
\[\mathrm{df}(\hat{f}_{\mathrm{best}})=\frac{1}{\sigma^{2}}\sum_{i=1}^{n}\mathrm{ cov}\left(\hat{f}_{\mathrm{best}}(\mathbf{x}_{i}),y_{i}\right)=\frac{1}{\sigma^{2}} \sum_{i=1}^{n}\sum_{k=1}^{M}\sum_{l\in A_{k}\setminus A_{k-1}}\mathrm{cov} \left(\left\langle\mathbf{y},\psi_{l}\right\rangle\psi_{l}(\mathbf{x}_{i})(1- \tilde{\beta}_{k}),y_{i}\right).\]
In addition,
\[\frac{1}{n}\sum_{i=1}^{n}\mathrm{cov}\left(\left\langle\mathbf{y},\psi_{l}\right\rangle\psi_{l}(\mathbf{x}_{i})(1-\tilde{\beta}_{k}),y_{i}\right)\] \[= \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[\left\langle\mathbf{y},\psi_{l}\right\rangle\psi_{l}(\mathbf{x}_{i})y_{i}(1-\tilde{\beta}_{k})\right]-\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[\left\langle\mathbf{y},\psi_{l}\right\rangle\psi_{l}(\mathbf{x}_{i})f(\mathbf{x}_{i})(1-\tilde{\beta}_{k})\right]\] \[= \mathrm{cov}\left(\left\langle\mathbf{y},\psi_{l}\right\rangle,\left\langle\mathbf{y},\psi_{l}\right\rangle(1-\tilde{\beta}_{k})\right).\]
Since \(\left\langle\mathbf{y},\psi_{l}\right\rangle\sim N(\left\langle\mathbf{f}, \psi_{l}\right\rangle,\sigma^{2}/n)\) and \(\left\langle\mathbf{y},\psi_{l}\right\rangle(1-\tilde{\beta}_{k})\) is a piecewise absolutely continuous function in \(\left\langle\mathbf{y},\psi_{l}\right\rangle\), we can leverage Lemma A.5. Specifically, we have
\[\mathrm{df}(\hat{f}_{\mathrm{best}}) =\frac{1}{\sigma^{2}}\sum_{i=1}^{n}\sum_{k=1}^{M}\sum_{l\in A_{k}\setminus A_{k-1}}\mathrm{cov}\left(\left\langle\mathbf{y},\psi_{l}\right\rangle\psi_{l}(\mathbf{x}_{i})(1-\tilde{\beta}_{k}),y_{i}\right)\] \[=\frac{n}{\sigma^{2}}\sum_{k=1}^{M}\sum_{l\in A_{k}\setminus A_{k-1}}\mathrm{cov}\left(\left\langle\mathbf{y},\psi_{l}\right\rangle,\left\langle\mathbf{y},\psi_{l}\right\rangle(1-\tilde{\beta}_{k})\right)\] \[=\sum_{k=1}^{M}\sum_{l\in A_{k}\setminus A_{k-1}}\mathbb{E}\left[\frac{\partial}{\partial\left\langle\mathbf{y},\psi_{l}\right\rangle}\left\langle\mathbf{y},\psi_{l}\right\rangle(1-\tilde{\beta}_{k})\right]+\mathrm{sdf}(\hat{f}_{\mathrm{best}}) \tag{34}\] \[=\sum_{k=1}^{M}\sum_{l\in A_{k}\setminus A_{k-1}}\mathbb{E}\left[1-\tilde{\beta}_{k}+\left\langle\mathbf{y},\psi_{l}\right\rangle\frac{\partial}{\partial\left\langle\mathbf{y},\psi_{l}\right\rangle}(1-\tilde{\beta}_{k})\right]+\mathrm{sdf}(\hat{f}_{\mathrm{best}})\] \[=\sum_{k=1}^{M}\Delta d_{k}\,\mathbb{E}\big{[}1-\tilde{\beta}_{k}\big{]}+\mathrm{sdf}(\hat{f}_{\mathrm{best}}),\]
where we denote by \(\mathrm{sdf}(\hat{f}_{\mathrm{best}})\) the term arising from the discontinuity points of \(\hat{f}_{\mathrm{best}}\), also called the _search degrees of freedom_ [15]. Note that when applying Lemma A.5 in the second equality above, it is necessary to first condition on \(\langle\mathbf{y},\mathbf{\psi}_{l^{\prime}}\rangle\) for all \(l^{\prime}\neq l\), and subsequently absorb the conditional expectation. For the sake of simplicity in notation, we will omit this step in the following discussion. The last equality holds since, on each piece, \(\frac{\partial}{\partial\langle\mathbf{y},\mathbf{\psi}_{l}\rangle}(1-\tilde{\beta}_{k})=0\) and \(|A_{k}\backslash A_{k-1}|=\Delta d_{k}\). Therefore, with Lemma A.1 and (34), we know
\[\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}_{\mathrm{best}}\|^{2}] =\mathbb{E}[\|\mathbf{y}-\mathbf{\hat{f}}_{\mathrm{best}}\|^{2}] -\sigma^{2}+\frac{2\sigma^{2}}{n}\mathrm{df}(\hat{f}_{\mathrm{best}})\] \[=\mathbb{E}\left[R_{0}+\sum_{k=1}^{M}\Delta R_{k}\left((1-\tilde{ \beta}_{k})^{2}-2(1-\tilde{\beta}_{k})\right)+\sum_{k=1}^{M}\frac{2\sigma^{2} }{n}\Delta d_{k}(1-\tilde{\beta}_{k})\right]+\frac{2\sigma^{2}}{n}\mathrm{sdf }(\hat{f}_{\mathrm{best}})-\sigma^{2}\] \[=\mathbb{E}\left[R_{0}+\sum_{k=1}^{M}\Delta R_{k}\left(\tilde{ \beta}_{k}^{2}-1\right)+\sum_{k=1}^{M}\frac{2\sigma^{2}}{n}\Delta d_{k}(1- \tilde{\beta}_{k})\right]+\frac{2\sigma^{2}}{n}\mathrm{sdf}(\hat{f}_{\mathrm{ best}})-\sigma^{2}.\]
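The decomposition above can also be illustrated by simulation: the sketch below estimates \(\mathrm{df}(\hat{f}_{\mathrm{best}})\) directly from its covariance definition and compares it with the expected selected dimension, the gap being a Monte Carlo estimate of the positive search degrees of freedom. The selection rule, design, and signal are synthetic assumptions chosen so that model selection is unstable.

```python
import numpy as np

rng = np.random.default_rng(6)
n, D, M, sigma, lam, reps = 100, 20, 5, 1.0, 2.0, 20000
d = np.array([2, 5, 9, 14, 20])
Psi = np.linalg.qr(rng.standard_normal((n, D)))[0] * np.sqrt(n)
f = Psi[:, :6] @ (0.3 * rng.standard_normal(6))            # weak signal -> unstable selection

fits = np.empty((reps, n))
resid = np.empty((reps, n))
sel_dim = np.empty(reps)
for r in range(reps):
    y = f + sigma * rng.standard_normal(n)
    coef = Psi.T @ y / n
    R = np.array([np.sum((y - Psi[:, :dk] @ coef[:dk]) ** 2) / n for dk in d])
    crit = np.concatenate(([np.sum(y ** 2) / n], R)) + lam * sigma ** 2 / n * np.concatenate(([0], d))
    m = int(np.argmin(crit))                               # selected single model (0 = null model)
    fits[r] = Psi[:, :d[m - 1]] @ coef[:d[m - 1]] if m > 0 else 0.0
    resid[r] = y - f
    sel_dim[r] = d[m - 1] if m > 0 else 0

# df = (1/sigma^2) * sum_i cov(f_hat(x_i), y_i), estimated across replications
df_mc = np.sum(np.mean((fits - fits.mean(axis=0)) * resid, axis=0)) / sigma ** 2
print(df_mc, sel_dim.mean())   # up to Monte Carlo error, df_mc exceeds the mean selected dimension
```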
We defer the characterization of \(\mathrm{sdf}(\hat{f}_{\mathrm{best}})\) to a later point. To compute \(\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}_{\mathrm{stack}}\|^{2}]\) we need the following lemma.
**Lemma A.6**.: _The following results hold._
1. _For any_ \(1\leq k\leq M\) _and_ \(l\in A_{M}\)_, given_ \(\langle\mathbf{y},\mathbf{\psi}_{l^{\prime}}\rangle\) _for all_ \(l^{\prime}\neq l\)_,_ \(\hat{\gamma}_{k}\) _is nonincreasing in_ \(|\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle|\)_._
2. _For any_ \(1\leq k\leq M\) _and_ \(l\in A_{M}\)_, given_ \(\langle\mathbf{y},\mathbf{\psi}_{l^{\prime}}\rangle\) _for all_ \(l^{\prime}\neq l\)_,_ \((1-\tau\hat{\gamma}_{k})\mathbf{1}(\hat{\gamma}_{k}<\gamma)\) _is piecewise absolutely continuous in_ \(\langle\mathbf{y},\mathbf{\psi}_{l}\rangle\)_._
Proof of Lemma A.6.: To prove part (i), suppose that \(l\in A_{a}\backslash A_{a-1}\) where \(1\leq a\leq M\). If \(\hat{\gamma}_{a}<\hat{\gamma}_{k}\), then as \(|\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle|\) increases, since \(l\in A_{a}\backslash A_{a-1}\) and \(\langle\mathbf{y},\mathbf{\psi}_{l}\rangle^{2}\) appears in the denominator \(R_{j-1}-R_{i}\) of \(\hat{\gamma}_{a}=\frac{\sigma^{2}}{n}\min_{a\leq i\leq M}\max_{1\leq j\leq a}\frac{d_{i}-d_{j-1}}{R_{j-1}-R_{i}}\), \(\hat{\gamma}_{a}\) keeps decreasing while \(\hat{\gamma}_{k}\) remains unchanged. If \(\hat{\gamma}_{a}\geq\hat{\gamma}_{k}\), then as \(|\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle|\) increases, \(\hat{\gamma}_{a}\) keeps decreasing and \(\hat{\gamma}_{k}\) still remains unchanged until \(\hat{\gamma}_{a}\) reaches \(\hat{\gamma}_{k}\). After this point, they remain equal and continue to decrease.
To prove part (ii), by symmetry it suffices to show the piecewise absolute continuity on \(\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle\in[0,\infty)\). Let \(\hat{\tau}_{k}=(1-\tau\hat{\gamma}_{k})\mathbf{1}(\hat{\gamma}_{k}<\gamma)\). We first consider the case of \(\lambda>\tau\). Since \(\hat{\gamma}_{k}\) is the solution of vanilla isotonic regression, it is continuous. By part (i), we know there is at most one discontinuity point when \(\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle\geq 0\). We denote it by \(t\) and note that it satisfies \(\hat{\gamma}_{k}=1/\lambda\) when \(\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle=t\). Then from part (i) we know \(\hat{\tau}_{k}=0\) when \(\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle\leq t\), and thus it is absolutely continuous there. When \(\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle>t\), note that we can write \(\hat{\gamma}_{k}=\frac{\sigma^{2}}{n}\frac{d_{c}-d_{b}}{R_{b}-R_{c}}\), where \(1\leq b<c\leq M\). Here \(b,c\) are two change points of \(\hat{\mathbf{\gamma}}\), and they are both functions of \(\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle\). Since \(b\) and \(c\) have only finitely many choices, we can thus rewrite \(\hat{\gamma}_{k}\) as
\[\hat{\gamma}_{k}=\sum_{i=1}^{K}\frac{\sigma^{2}}{n}\frac{d_{c_{i}}-d_{b_{i}}}{R_{b_{i}}-R_{c_{i}}}\mathbf{1}(t_{i-1}<\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle\leq t_{i})+\frac{\sigma^{2}}{n}\frac{d_{c_{K+1}}-d_{b_{K+1}}}{R_{b_{K+1}}-R_{c_{K+1}}}\mathbf{1}(t_{K}<\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle), \tag{35}\]
where \(1\leq b_{i}<c_{i}\leq M\) for any \(i=1,2,\ldots,K+1\) and \(t=t_{0}<t_{1}<\cdots<t_{K}\). Here \(b_{i},c_{i}\) are constants independent of \(\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle\) for any \(i\), and \(K\) is also a constant independent of \(\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle\). To
clarify, it is important to note that \(b_{i}\), \(c_{i}\), and \(K\) should depend on \(\left\langle\mathbf{y},\boldsymbol{\psi}_{l^{\prime}}\right\rangle\) where \(l^{\prime}\neq l\). However, in this particular lemma, we are provided with \(\{\left\langle\mathbf{y},\boldsymbol{\psi}_{l^{\prime}}\right\rangle\}_{l^{ \prime}\neq l}\). Therefore, for the purposes of this lemma, we treat \(b_{i},c_{i}\), and \(K\) as constants. Note that \(\frac{\sigma^{2}}{n}\frac{d_{c_{i}}-d_{b_{i}}}{R_{b_{i}}-R_{c_{i}}}\) is absolutely continuous. Hence, we know \(\hat{\gamma}_{k}\) is absolutely continuous when \(\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle>t\), and so is \((1-\tau\hat{\gamma}_{k})\). Therefore, we proved \(\hat{\tau}_{k}\) is piecewise absolutely continuous when \(\lambda>\tau\). In the case of \(\lambda\leq\tau\), we have \(\hat{\tau}_{k}=(1-\tau\hat{\gamma}_{k})\mathbf{1}(\tau\hat{\gamma}_{k}<1)\) is continuous. With a similar argument as before, we also know that \(\hat{\tau}_{k}\) is absolutely continuous.
Now we start to compute \(\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}_{\text{stack}}\|^{2}]\). Since
\[\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}_{\text{stack}}\|^{2}]=\mathbb{E}[\| \mathbf{y}-\mathbf{\hat{f}}_{\text{stack}}\|^{2}]+\frac{2\sigma^{2}}{n}\text {df}(\hat{f}_{\text{stack}})-\sigma^{2},\]
we need to compute \(\text{df}(\hat{f}_{\text{stack}})\). In the case of \(\lambda>\tau\), recall that in the proof of Theorem 5.2, we know \(\hat{\beta}_{k}=\tau\hat{\gamma}_{k}\) for any \(k\leq s_{a}=\tilde{m}\) where \(\tilde{m}=\max\left\{k\in\{0,1,\ldots,M\}:\hat{\gamma}_{k}<1/\lambda\right\}\) and \(\hat{\beta}_{k}=1\) for any \(k>\tilde{m}\). Using a similar argument as in the computation of \(\text{df}(\hat{f}_{\text{best}})\), we have
\[\begin{split}\text{df}(\hat{f}_{\text{stack}})&= \frac{1}{\sigma^{2}}\sum_{i=1}^{n}\sum_{k=1}^{M}\sum_{l\in A_{k}\backslash A_ {k-1}}\text{cov}\left(\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle \psi_{l}(\mathbf{x}_{i})(1-\hat{\beta}_{k}),y_{i}\right)\\ &=\frac{n}{\sigma^{2}}\sum_{k=1}^{M}\sum_{l\in A_{k}\backslash A _{k-1}}\text{cov}\left(\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle,\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle(1-\hat{\beta}_{k} )\right)\\ &=\sum_{k=1}^{M}\sum_{l\in A_{k}\backslash A_{k-1}}\mathbb{E} \left[\frac{\partial}{\partial\left\langle\mathbf{y},\boldsymbol{\psi}_{l} \right\rangle}\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle(1- \hat{\beta}_{k})\right]+\text{sdf}(\hat{f}_{\text{stack}})\\ &=\sum_{k=1}^{M}\sum_{l\in A_{k}\backslash A_{k-1}}\mathbb{E} \left[1-\hat{\beta}_{k}+\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle \frac{\partial}{\partial\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle }(1-\hat{\beta}_{k})\right]+\text{sdf}(\hat{f}_{\text{stack}}),\end{split} \tag{36}\]
where \(\text{sdf}(\hat{f}_{\text{stack}})\) is the search degrees of freedom of \(\hat{f}_{\text{stack}}\). To compute \(\mathbb{E}\left[\frac{\partial}{\partial\left\langle\mathbf{y},\boldsymbol{ \psi}_{l}\right\rangle}\hat{\beta}_{k}\right]\), according to (35) and by the symmetry of the case when \(\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle<0\), we have
\[\begin{split}&\mathbb{E}\left[\frac{\partial}{\partial\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle}\hat{\beta}_{k}\right]\\ &=\ \mathbb{E}\left[\left(\frac{\partial}{\partial\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle}\hat{\beta}_{k}\right)\left(\sum_{i=1}^{K}\mathbf{1}(t_{i-1}<|\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle|\leq t_{i})+\mathbf{1}(t_{K}<|\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle|)\right)\right]\\ &=\ \sum_{i=1}^{K}\mathbb{E}\left[\left(\frac{\partial}{\partial\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle}\frac{\tau\sigma^{2}}{n}\frac{d_{c_{i}}-d_{b_{i}}}{R_{b_{i}}-R_{c_{i}}}\right)\mathbf{1}(t_{i-1}<|\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle|\leq t_{i})\right]\\ &\quad+\mathbb{E}\Bigg{[}\left(\frac{\partial}{\partial\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle}\frac{\tau\sigma^{2}}{n}\frac{d_{c_{K+1}}-d_{b_{K+1}}}{R_{b_{K+1}}-R_{c_{K+1}}}\right)\mathbf{1}(t_{K}<|\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle|)\Bigg{]}.\end{split} \tag{37}\]
Note that in (36), we know \(l\in A_{k}\backslash A_{k-1}\) which means \(\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle^{2}\) always appears in the denominator \(R_{j-1}-R_{i}\) of \(\hat{\gamma}_{k}=\frac{\sigma^{2}}{n}\min_{k\leq i\leq M}\max_{1\leq j\leq k }\frac{d_{i}-d_{j-1}}{R_{j-1}-R_{i}}\). Thus, in (37), we know \(\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle\) should also always appear in \(R_{b_{i}}-R_{c_{i}}\) for any \(i=1,2,\ldots,K+1\). Recall that \(s_{0}=0,s_{1},\ldots,s_{u}=M\) correspond to
the change points of \(\hat{\mathbf{\gamma}}\). Suppose \(a(k)\) is the index for which \(s_{a(k)-1}<k\leq s_{a(k)}\). Here \(s_{0}\), \(s_{1},\ldots,s_{u}\), and \(a(k)\) are all functions of \(\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle\). Continuing from (37), on each piece we have \(\frac{\partial}{\partial\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle}\hat{\beta}_{k}=-2\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle\frac{\tau\sigma^{2}}{n}\frac{d_{s_{a(k)}}-d_{s_{a(k)-1}}}{\left(R_{s_{a(k)-1}}-R_{s_{a(k)}}\right)^{2}}\), since \(\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle^{2}\) appears in \(R_{s_{a(k)-1}}-R_{s_{a(k)}}\) with coefficient one. Plugging this into (36), we have
\[\mathrm{df}(\hat{f}_{\mathrm{stack}})\] \[=\ \mathbb{E}\left[\sum_{k=1}^{M}\Delta d_{k}(1-\hat{\beta}_{k})+2\sum_{k=1}^{s_{a}}\sum_{l\in A_{k}\backslash A_{k-1}}\left\langle\mathbf{y},\boldsymbol{\psi}_{l}\right\rangle^{2}\frac{\tau\sigma^{2}}{n}\frac{d_{s_{a(k)}}-d_{s_{a(k)-1}}}{\left(R_{s_{a(k)-1}}-R_{s_{a(k)}}\right)^{2}}\right]+\mathrm{sdf}(\hat{f}_{\mathrm{stack}})\] \[=\ \mathbb{E}\left[\sum_{k=1}^{M}\Delta d_{k}(1-\hat{\beta}_{k})+2\sum_{l=1}^{a}\sum_{j\in A_{s_{l}}\backslash A_{s_{l-1}}}\left\langle\mathbf{y},\boldsymbol{\psi}_{j}\right\rangle^{2}\frac{\tau\sigma^{2}}{n}\frac{d_{s_{l}}-d_{s_{l-1}}}{\left(R_{s_{l-1}}-R_{s_{l}}\right)^{2}}\right]+\mathrm{sdf}(\hat{f}_{\mathrm{stack}})\] \[=\ \mathbb{E}\left[\sum_{k=1}^{M}\Delta d_{k}(1-\hat{\beta}_{k})+2\sum_{l=1}^{a}\hat{\beta}_{s_{l}}\right]+\mathrm{sdf}(\hat{f}_{\mathrm{stack}}).\]
In the case of \(\lambda\leq\tau\), let \(\tilde{a}\) satisfy \(s_{\tilde{a}}=\max\left\{k\in\left\{0,1,\ldots,M\right\}:\hat{\gamma}_{k}<1/\tau\right\}\). Since \(\hat{\beta}_{k}\) is continuous in this case, it should hold that \(\mathrm{sdf}(\hat{f}_{\mathrm{stack}})=0\). Thus, with a similar argument as above, we have
\[\mathrm{df}(\hat{f}_{\mathrm{stack}})=\mathbb{E}\left[\sum_{k=1}^{M}\Delta d_{ k}(1-\hat{\beta}_{k})+2\sum_{l=1}^{\tilde{a}}\hat{\beta}_{s_{l}}\right]. \tag{38}\]
Next we start to characterize \(\mathrm{sdf}(\hat{f}_{\mathrm{best}})\) and \(\mathrm{sdf}(\hat{f}_{\mathrm{stack}})\). Given \(1\leq k\leq M\) and \(l\in A_{k}\backslash A_{k-1}\), we know both \(\hat{\beta}_{k}\) and \(\tilde{\beta}_{k}\) should have the same discontinuity points with respect to \(\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle\). In addition, if \(t\geq 0\) is a discontinuity point, then so is \(-t\). By part (i) of Lemma A.6, we know \(\hat{\gamma}_{k}\) is nonincreasing in \(|\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle|\) and thus, both \(\hat{\beta}_{k}\) and \(\tilde{\beta}_{k}\) have at most two discontinuity points. To indicate the dependence on \(l\) and \(k\), we denote the two discontinuity points by \(t_{l,k}\) and \(-t_{l,k}\). Again since \(\hat{\gamma}_{k}\) is nonincreasing in \(|\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle|\), we know \(\tilde{\beta}_{k}=\mathbf{1}(\left|\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle\right|\leq t_{l,k})\) and \(\hat{\beta}_{k}=\mathbf{1}(\left|\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle\right|\leq t_{l,k})+\tau\hat{\gamma}_{k}\mathbf{1}(\left|\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle\right|>t_{l,k})\). Therefore, applying Lemma A.5 to \(h(\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle)=\left\langle\mathbf{y},\mathbf{\psi}_{l}\right\rangle(1-\tilde{\beta}_{k})\), we have
\[\mathrm{sdf}(\hat{f}_{\mathrm{best}})=\frac{\sqrt{n}}{\sigma}\sum_{k=1}^{M} \sum_{l\in A_{k}\backslash A_{k-1}}\mathbb{E}\left[t_{l,k}\left(\phi\left( \frac{\sqrt{n}(t_{l,k}-\mu_{l})}{\sigma}\right)+\phi\left(\frac{\sqrt{n}(-t_{ l,k}-\mu_{l})}{\sigma}\right)\right)\right]>0,\]
where \(\mu_{l}=\langle\mathbf{f},\mathbf{\psi}_{l}\rangle\), cf. [11, 12]. Also, when \(\lambda>\tau\), we take \(h(\langle\mathbf{y},\mathbf{\psi}_{l}\rangle)=\langle\mathbf{y},\mathbf{\psi}_{l}\rangle\left(1-\hat{\beta}_{k}\right)\) and obtain
\[\mathrm{sdf}(\hat{f}_{\mathrm{stack}}) =\frac{\sqrt{n}}{\sigma}\sum_{k=1}^{M}\sum_{l\in A_{k}\backslash A _{k-1}}\mathbb{E}\bigg{[}t_{l,k}(1-\tau/\lambda)\bigg{(}\phi\bigg{(}\frac{ \sqrt{n}(t_{l,k}-\mu_{l})}{\sigma}\bigg{)}+\phi\bigg{(}\frac{\sqrt{n}(-t_{l,k} -\mu_{l})}{\sigma}\bigg{)}\bigg{)}\bigg{]}\] \[=(1-\tau/\lambda)\mathrm{sdf}(\hat{f}_{\mathrm{best}}).\]
When \(\lambda\leq\tau\), according to (38), we already know that \(\mathrm{sdf}(\hat{f}_{\mathrm{stack}})=0\). Now we are ready to show \(\mathbb{E}\big{[}\|\mathbf{f}-\mathbf{\hat{f}}_{\mathrm{stack}}\|^{2}\big{]}<\mathbb{E}\big{[}\|\mathbf{f}-\mathbf{\hat{f}}_{\mathrm{best}}\|^{2}\big{]}\). In the case of \(\lambda>\tau\), recall that \(0=s_{0},s_{1},\ldots,s_{u}=M\) correspond to the change points of \(\hat{\mathbf{\gamma}}\) and \(\tilde{\beta}_{k}=0\) when \(k\leq s_{a}\) and \(1\) otherwise. Thus, we know
\[\mathbb{E}\big{[}\|\mathbf{f}-\mathbf{\hat{f}}_{\mathrm{best}}\|^ {2}\big{]} =\mathbb{E}\left[R_{0}+\sum_{k=1}^{M}\Delta R_{k}\left(\tilde{ \beta}_{k}^{2}-1\right)+\sum_{k=1}^{M}\frac{2\sigma^{2}}{n}\Delta d_{k}(1- \tilde{\beta}_{k})\right]+\frac{2\sigma^{2}}{n}\mathrm{sdf}(\hat{f}_{\mathrm{ best}})-\sigma^{2}\] \[=\mathbb{E}\left[R_{0}+\sum_{l=1}^{a}\Delta R_{s_{l}}\left(\tilde {\beta}_{s_{l}}^{2}-1\right)+\sum_{l=1}^{a}\frac{2\sigma^{2}}{n}\Delta d_{s_{l }}(1-\tilde{\beta}_{s_{l}})\right]+\frac{2\sigma^{2}}{n}\mathrm{sdf}(\hat{f}_{ \mathrm{best}})-\sigma^{2},\]
where \(\tilde{\beta}_{s_{l}}=0\) for any \(1\leq l\leq a\). With a similar argument, we have
\[\mathbb{E}\big{[}\|\mathbf{f}-\mathbf{\hat{f}}_{\mathrm{stack}} \|^{2}\big{]}\] \[=\ \mathbb{E}\left[R_{0}+\sum_{l=1}^{a}\Delta R_{s_{l}}\left( \hat{\beta}_{s_{l}}^{2}-1\right)+\sum_{l=1}^{a}\frac{2\sigma^{2}}{n}\Delta d_{s _{l}}(1-\hat{\beta}_{s_{l}})+\frac{4\sigma^{2}}{n}\sum_{l=1}^{a}\hat{\beta}_{s _{l}}\right]+\frac{2\sigma^{2}}{n}\mathrm{sdf}(\hat{f}_{\mathrm{stack}})- \sigma^{2},\]
where \(\hat{\beta}_{s_{l}}=\tau\hat{\gamma}_{s_{l}}=\frac{\tau\sigma^{2}}{n}\frac{d_{s_{l}}-d_{s_{l-1}}}{R_{s_{l-1}}-R_{s_{l}}}\) for any \(1\leq l\leq a\). Define the quadratic function \(h_{l}(z)=\Delta R_{s_{l}}z^{2}-\frac{2\sigma^{2}}{n}(\Delta d_{s_{l}}-2)z\). Since \(\Delta d_{s_{l}}\geq\min_{k}\Delta d_{k}\geq 4/(2-\tau)\), we have \(h_{l}(\hat{\beta}_{s_{l}})\leq h_{l}(0)\) for all \(1\leq l\leq a\). Also, we have proved \(\mathrm{sdf}(\hat{f}_{\mathrm{stack}})=(1-\tau/\lambda)\mathrm{sdf}(\hat{f}_{\mathrm{best}})<\mathrm{sdf}(\hat{f}_{\mathrm{best}})\), since \(\mathrm{sdf}(\hat{f}_{\mathrm{best}})>0\). Thus, \(\mathbb{E}\big{[}\|\mathbf{f}-\mathbf{\hat{f}}_{\mathrm{stack}}\|^{2}\big{]}<\mathbb{E}\big{[}\|\mathbf{f}-\mathbf{\hat{f}}_{\mathrm{best}}\|^{2}\big{]}\) when \(\lambda>\tau\). In addition, the risk gap \(\mathbb{E}\big{[}\|\mathbf{f}-\mathbf{\hat{f}}_{\mathrm{best}}\|^{2}\big{]}-\mathbb{E}\big{[}\|\mathbf{f}-\mathbf{\hat{f}}_{\mathrm{stack}}\|^{2}\big{]}\) is equal to
\[\frac{\sigma^{4}\tau(2-\tau)}{n^{2}}\mathbb{E}\Bigg{[}\sum_{l=1}^{a}\frac{ \Delta d_{s_{l}}(\Delta d_{s_{l}}-4/(2-\tau))}{\Delta R_{s_{l}}}\Bigg{]}+\frac {2\tau}{\lambda}\frac{\sigma^{2}}{n}\mathrm{sdf}(\hat{f}_{\mathrm{best}}). \tag{39}\]
We can further lower bound the first term in (39), using \(\Delta d_{s_{l}}(\Delta d_{s_{l}}-4/(2-\tau))\geq(\Delta d_{s_{l}}-4/(2-\tau))^{2}\), by
\[\frac{\sigma^{4}\tau(2-\tau)}{n^{2}}\mathbb{E}\Bigg{[}\sum_{l=1}^ {a}\frac{(\Delta d_{s_{l}}-4/(2-\tau))^{2}}{\Delta R_{s_{l}}}\Bigg{]} \geq\ \frac{\sigma^{4}\tau(2-\tau)}{n^{2}}\mathbb{E}\Bigg{[}\frac{(d_{s_{a}}-4 a/(2-\tau))^{2}}{R_{0}-R_{s_{a}}}\Bigg{]} \tag{40}\] \[\geq\ \frac{\sigma^{4}\tau(2-\tau)}{n^{2}}\mathbb{E}\Bigg{[}\frac{(d_{s_{a}}-4 s_{a}/(2-\tau))^{2}}{R_{0}-R_{s_{a}}}\Bigg{]}\] \[\geq\ \frac{\sigma^{4}\tau(2-\tau)}{n^{2}}\mathbb{E}\Bigg{[}\min_{1\leq k \leq M}\frac{(d_{k}-4k/(2-\tau))^{2}}{R_{0}-R_{k}}\Bigg{]},\]
where we apply the Cauchy-Schwarz inequality to obtain the first inequality. This establishes (8) when \(\lambda>\tau\). In the case of \(\lambda\leq\tau\), by (38) we have
\[\mathbb{E}\big{[}\|\mathbf{f}-\mathbf{\hat{f}}_{\mathrm{stack}}\|^{2}\big{]} \tag{41}\] \[=\ \mathbb{E}\left[R_{0}+\sum_{l=1}^{\tilde{a}}\Delta R_{s_{l}}\left(\hat{\beta}_{s_{l}}^{2}-1\right)+\sum_{l=1}^{\tilde{a}}\frac{2\sigma^{2}}{n}\Delta d_{s_{l}}(1-\hat{\beta}_{s_{l}})+\frac{4\sigma^{2}}{n}\sum_{l=1}^{\tilde{a}}\hat{\beta}_{s_{l}}\right]-\sigma^{2},\]
where we recall \(\tilde{a}\) satisfies \(s_{\tilde{a}}=\max\left\{k\in\{0,1,\ldots,M\}:\hat{\gamma}_{k}<1/\tau\right\}\). Again, we know \(h_{l}(\hat{\beta}_{s_{l}})\leq h_{l}(0)\) for any \(1\leq l\leq\tilde{a}\) when \(\Delta d_{s_{l}}\geq 4/(2-\tau)\). Note that \(\tilde{a}\leq a\) and for any \(\tilde{a}<l\leq a\), according to the definition of \(\tilde{a}\), we have \(\hat{\gamma}_{s_{l}}=\frac{\sigma^{2}}{n}\frac{\Delta d_{s_{l}}}{\Delta R_{s_{l}}}\geq 1/\tau>1/2\). Thus for any \(\tilde{a}<l\leq a\),
\[\Delta R_{s_{l}}\left(\tilde{\beta}_{s_{l}}^{2}-1\right)+\frac{2\sigma^{2}}{n}\Delta d_{s_{l}}(1-\tilde{\beta}_{s_{l}})=-\Delta R_{s_{l}}+\frac{2\sigma^{2}}{n}\Delta d_{s_{l}}>0.\]
Therefore, we obtain \(\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}_{\text{stack}}\|^{2}]<\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}_{\text{best}}\|^{2}]\), and with a similar argument as in (40), we can lower bound the risk gap by
\[\frac{\sigma^{4}\tau(2-\tau)}{n^{2}}\mathbb{E}\Bigg{[}\sum_{l=1}^ {a}\frac{\Delta d_{s_{l}}(\Delta d_{s_{l}}-4/(2-\tau))}{\Delta R_{s_{l}}} \Bigg{]}+\frac{2\sigma^{2}}{n}\text{sdf}(\hat{f}_{\text{best}})\] \[> \frac{\sigma^{4}\tau(2-\tau)}{n^{2}}\mathbb{E}\Bigg{[}\min_{1\leq k \leq M}\frac{(d_{k}-4k/(2-\tau))^{2}}{R_{0}-R_{k}}\Bigg{]}+\frac{2\sigma^{2} }{n}\text{sdf}(\hat{f}_{\text{best}}),\]
which establishes (8) when \(\lambda\leq\tau\).
Proof of Theorem 4.2.: With equality (41) and the assumption that \(\tau=\lambda=1\), we know
\[\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}_{\text{stack}}\|^{2}]\] \[= \mathbb{E}\Bigg{[}R_{0}+\sum_{l=1}^{\tilde{a}}\Delta R_{s_{l}} \left(\hat{\beta}_{s_{l}}^{2}-1\right)+\sum_{l=1}^{\tilde{a}}\frac{2\sigma^{2 }}{n}\Delta d_{s_{l}}(1-\hat{\beta}_{s_{l}})+\frac{4\sigma^{2}}{n}\sum_{l=1}^ {\tilde{a}}\hat{\beta}_{s_{l}}\Bigg{]}-\sigma^{2}\] \[= \mathbb{E}\Bigg{[}R_{0}+\sum_{k=1}^{M}\Delta R_{k}\left(\hat{ \beta}_{k}^{2}-1\right)+\sum_{k=1}^{M}\frac{2\sigma^{2}}{n}\Delta d_{k}(1-\hat {\beta}_{k})+\frac{4\sigma^{2}}{n}\sum_{k=1}^{M}\hat{\beta}_{k}\mathbf{1}( \hat{\beta}_{k}\neq\hat{\beta}_{k+1})\Bigg{]}-\sigma^{2}\] \[\leq \mathbb{E}\Bigg{[}R_{0}+\sum_{k=1}^{M}\Delta R_{k}\left(\beta_{k} ^{2}-1\right)+\sum_{k=1}^{M}\frac{2\sigma^{2}}{n}\Delta d_{k}(1-\beta_{k}) \Bigg{]}+\mathbb{E}\Bigg{[}\frac{4\sigma^{2}}{n}\sum_{k=1}^{M}\hat{\beta}_{k} \mathbf{1}(\hat{\beta}_{k}\neq\hat{\beta}_{k+1})\Bigg{]}-\sigma^{2},\]
where the last inequality holds for any deterministic or random \(\beta_{1}\leq\beta_{2}\leq\cdots\leq\beta_{M}\leq 1\), since \(\hat{\beta}\) is the solution of program (10) and \(\tau=\lambda=1\). For the second equality, we use the fact that \(\hat{\beta}_{k}=1\) when \(k>s_{a}\). Recall that \(\alpha_{k}=\beta_{k+1}-\beta_{k}\geq 0\). Note that with deterministic \(\{\beta_{k}\}\) (and \(\{\alpha_{k}\}\)),
\[\mathbb{E}\Bigg{[}\left\|\mathbf{f}-\sum_{k=1}^{M}\alpha_{k} \mathbf{\hat{\mu}}_{k}\right\|^{2}\Bigg{]} =\mathbb{E}\Bigg{[}\left\|\mathbf{y}-\sum_{k=1}^{M}\alpha_{k} \mathbf{\hat{\mu}}_{k}\right\|^{2}\Bigg{]}-\sigma^{2}+\sum_{k=1}^{M}\frac{2 \sigma^{2}}{n}d_{k}\alpha_{k}\] \[=\mathbb{E}\Bigg{[}R_{0}+\sum_{k=1}^{M}\Delta R_{k}\left(\beta_{k} ^{2}-1\right)+\sum_{k=1}^{M}\frac{2\sigma^{2}}{n}\Delta d_{k}(1-\beta_{k}) \Bigg{]}-\sigma^{2}.\]
Since \(\boldsymbol{\alpha}^{\star}\) is deterministic, we thus have
\[\mathbb{E}[\|\mathbf{f}-\mathbf{\hat{f}}_{\text{stack}}\|^{2}]\leq\mathbb{E} \Bigg{[}\left\|\mathbf{f}-\sum_{k=1}^{M}\alpha_{k}^{\star}\mathbf{\hat{\mu}}_{ k}\right\|^{2}\Bigg{]}+\mathbb{E}\Bigg{[}\frac{4\sigma^{2}}{n}\sum_{k=1}^{M}\hat{ \beta}_{k}\mathbf{1}(\hat{\beta}_{k}\neq\hat{\beta}_{k+1})\Bigg{]}.\]
Theorem 4.2 follows from the fact that \(\sum_{k=1}^{M}\hat{\beta}_{k}\mathbf{1}(\hat{\beta}_{k}\neq\hat{\beta}_{k+1})= \sum_{k=1}^{M}\left(1-\sum_{m=k}^{M}\hat{\alpha}_{m}\right)\mathbf{1}(\hat{ \alpha}_{k}\neq 0)\), and the inequality
\[\mathbb{E}\left[\sum_{k=1}^{M}\left(1-\sum_{m=k}^{M}\hat{\alpha}_{m}\right) \mathbf{1}(\hat{\alpha}_{k}\neq 0)\right]\leq\mathbb{E}\left[\sum_{k=1}^{M} \mathbf{1}(\hat{\alpha}_{k}\neq 0)\right]=\mathbb{E}\left[\|\hat{\alpha}\|_{\ell_{0}} \right].\]
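The identity and bound used in this last step can be checked mechanically; the small sketch below does so for arbitrary nonnegative weights \(\alpha\) with some entries exactly zero (all values are synthetic).

```python
import numpy as np

rng = np.random.default_rng(7)
M = 10
alpha = rng.uniform(size=M) * rng.integers(0, 2, size=M)     # nonnegative, some entries exactly zero
alpha = alpha / max(1.0, alpha.sum())                        # keep the total weight at most 1

csum = np.cumsum(alpha[::-1])[::-1]                          # csum[k] = sum_{m >= k} alpha_m
beta = 1.0 - csum                                            # beta_k = 1 - sum_{m >= k} alpha_m
beta_next = np.append(beta[1:], 1.0)                         # beta_{M+1} = 1

lhs = np.sum(beta * (beta != beta_next))                     # sum_k beta_k 1(beta_k != beta_{k+1})
rhs = np.sum((1.0 - csum) * (alpha != 0))                    # sum_k (1 - sum_{m>=k} alpha_m) 1(alpha_k != 0)
print(np.isclose(lhs, rhs), lhs <= np.count_nonzero(alpha) + 1e-12)
```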
Proof of Theorem 7.1.: By setting \(\lambda=\tau=1\) in (6), and noting that \(\|\boldsymbol{\alpha}\|_{\ell_{0}}=\sum_{k=1}^{M}\mathbf{1}(\beta_{k}\neq\beta_{k+1})\), we can establish that the solution to (16) is equivalent to the solution of the following program:
minimize \[\sum_{k=1}^{M}w_{k}(z_{k}-\beta_{k})^{2}+\frac{4\sigma^{2}}{n} \sum_{k=1}^{M}\mathbf{1}(\beta_{k}\neq\beta_{k+1})\] subject to \[\beta_{1}\leq\beta_{2}\leq\cdots\leq\beta_{M}\leq 1,\]
where \(w_{k}=\Delta R_{k}>0\) and \(z_{k}=(\sigma^{2}/n)(\Delta d_{k}/\Delta R_{k})>0\). Recall that we assume \(r_{q}=\max\left\{k\in\{0,1,\ldots,M\}:\hat{\beta}_{k}<1\right\}\). According to Lemma A.2, to show \(\dim(\hat{f}_{\text{stack}})\geq\dim(\hat{f}_{\text{best}})\), it suffices to show \(r_{q}\geq\tilde{m}\). If \(r_{q}=M\), then \(M=r_{q}\geq\tilde{m}\) is already satisfied. If \(r_{q}<M\), we first show for any \(r_{q}<j\leq M\),
\[\frac{\hat{\beta}_{r_{q}}+1}{2}\leq\frac{\sigma^{2}}{n}\frac{d_{j}-d_{r_{q}}}{ R_{r_{q}}-R_{j}}. \tag{42}\]
Consider another sequence \(\boldsymbol{\tilde{\beta}}\) where \(\tilde{\beta}_{k}=\hat{\beta}_{k}\mathbf{1}(1\leq k\leq r_{q})+\hat{\beta}_{ r_{q}}\mathbf{1}(r_{q}<k\leq j)+\mathbf{1}(j<k\leq M)\). By the optimality of \(\hat{\beta}\), we have
\[\sum_{k=1}^{M}\Delta R_{k}\left(-\hat{\beta}_{k}+\frac{\sigma^{2 }}{n}\frac{\Delta d_{k}}{\Delta R_{k}}\right)^{2}+\frac{4\sigma^{2}}{n}\sum_{ k=1}^{M}\hat{\beta}_{k}\mathbf{1}(\hat{\beta}_{k}\neq\hat{\beta}_{k+1})\] \[\leq \sum_{k=1}^{M}\Delta R_{k}\left(-\tilde{\beta}_{k}+\frac{\sigma^{ 2}}{n}\frac{\Delta d_{k}}{\Delta R_{k}}\right)^{2}+\frac{4\sigma^{2}}{n}\sum_ {k=1}^{M}\tilde{\beta}_{k}\mathbf{1}(\tilde{\beta}_{k}\neq\tilde{\beta}_{k+1}).\]
This implies
\[\sum_{k=r_{q}+1}^{j}\Delta R_{k}\left(1-\frac{\sigma^{2}}{n} \frac{\Delta d_{k}}{\Delta R_{k}}\right)^{2}+\sum_{k=j+1}^{M}\Delta R_{k} \left(1-\frac{\sigma^{2}}{n}\frac{\Delta d_{k}}{\Delta R_{k}}\right)^{2}\] \[\leq \sum_{k=r_{q}+1}^{j}\Delta R_{k}\left(\hat{\beta}_{r_{q}}-\frac{ \sigma^{2}}{n}\frac{\Delta d_{k}}{\Delta R_{k}}\right)^{2}+\sum_{k=j+1}^{M} \Delta R_{k}\left(1-\frac{\sigma^{2}}{n}\frac{\Delta d_{k}}{\Delta R_{k}} \right)^{2}.\]
After simplification, we have
\[\left(R_{r_{q}}-R_{j}\right)\left(1-\frac{\sigma^{2}}{n}\frac{d_ {j}-d_{r_{q}}}{R_{r_{q}}-R_{j}}\right)^{2} \leq\left(R_{r_{q}}-R_{j}\right)\left(\hat{\beta}_{r_{q}}-\frac{ \sigma^{2}}{n}\frac{d_{j}-d_{r_{q}}}{R_{r_{q}}-R_{j}}\right)^{2}\] \[\Leftrightarrow\left|1-\frac{\sigma^{2}}{n}\frac{d_{j}-d_{r_{q}}} {R_{r_{q}}-R_{j}}\right| \leq\left|\hat{\beta}_{r_{q}}-\frac{\sigma^{2}}{n}\frac{d_{j}-d_ {r_{q}}}{R_{r_{q}}-R_{j}}\right|,\]
which implies (42) since \(\hat{\beta}_{r_{q}}<1\). Suppose \(r_{q}<\tilde{m}\). Then
\[\frac{1}{2}\geq\tilde{\gamma}_{\tilde{m}}=\max_{1\leq i\leq\tilde{m}}\frac{ \sigma^{2}}{n}\frac{d_{\tilde{m}}-d_{i-1}}{R_{i-1}-R_{\tilde{m}}}\geq\frac{ \sigma^{2}}{n}\frac{d_{\tilde{m}}-d_{r_{q}}}{R_{r_{q}}-R_{\tilde{m}}}.\]
Noting that \(\hat{\beta}_{r_{q}}>0\), we obtain a contradiction when taking \(j=\tilde{m}\) in (42). Therefore \(r_{q}\geq\tilde{m}\).
To demonstrate \(\sum_{k=1}^{M}\hat{\alpha}_{k}<1\), it suffices to show \(\hat{\beta}_{1}>0\). Using an argument similar to (Gao et al., 2020, Lemma 5.1) to characterize the form of the solution to (weighted) reduced isotonic regression, we can show that there exists an index \(1\leq k\leq M\) for which \(\hat{\beta}_{1}=\frac{\sigma^{2}}{n}\frac{d_{k}}{R_{0}-R_{k}}>0\).
Proof of the solution of program (18).: The program proposed in (Bellec, 2018, Bellec and Yang, 2020) is
\[\begin{split}&\text{minimize}\quad R(\alpha)+\frac{2\sigma^{2}}{n}\sum_{k=1}^{M}\alpha_{k}d_{k}+\eta\sum_{m=1}^{M}\alpha_{m}\frac{1}{n}\sum_{i=1}^{n}\left(\hat{\mu}_{m}(\mathbf{x}_{i})-\sum_{k=1}^{M}\alpha_{k}\hat{\mu}_{k}(\mathbf{x}_{i})\right)^{2}\\ &\text{subject to}\quad\alpha_{k}\geq 0,\ \ k=1,2,\ldots,M,\ \ \text{and}\ \ \ \sum_{k=1}^{M}\alpha_{k}=1,\end{split} \tag{43}\]
with \(\eta=1/2\). Thanks to a bias-variance decomposition, when \(\sum_{k=1}^{M}\alpha_{k}=1\),
\[\begin{split}& R(\alpha)+\frac{2\sigma^{2}}{n}\sum_{k=1}^{M} \alpha_{k}d_{k}+\eta\sum_{m=1}^{M}\alpha_{m}\frac{1}{n}\sum_{i=1}^{n}\left( \hat{\mu}_{m}(\mathbf{x}_{i})-\sum_{k=1}^{M}\alpha_{k}\hat{\mu}_{k}(\mathbf{x }_{i})\right)^{2}\\ &=\ (1-\eta)R(\alpha)+\frac{2\sigma^{2}}{n}\sum_{k=1}^{M}\alpha_{k}d_{k}+ \eta\sum_{k=1}^{M}\alpha_{k}R_{k}.\end{split}\]
Thus, program (43) corresponds to Lemma A.7, which also demonstrates its equivalence to program (18).
**Lemma A.7**.: _Consider the program_
\[\begin{split}&\text{minimize}\quad R(\alpha)+\sum_{k=1}^{M} \alpha_{k}\Bigg{(}\frac{\frac{2\sigma^{2}}{n}d_{k}+\eta R_{k}}{1-\eta}\Bigg{)} \\ &\text{subject to}\quad\alpha_{k}\geq 0,\ \ k=1,2,\ldots,M.\end{split} \tag{44}\]
_If \(0<\eta<1\) and \(\sum_{k=1}^{M}\alpha_{k}=1\), then the solution to (44) is_
\[\breve{\alpha}_{k}=\phi\Bigg{(}\frac{1}{2}\Bigg{(}1+\frac{1-2\tilde{\gamma}_{k}}{1-\eta}\Bigg{)}\Bigg{)}-\phi\Bigg{(}\frac{1}{2}\Bigg{(}1+\frac{1-2\tilde{\gamma}_{k+1}}{1-\eta}\Bigg{)}\Bigg{)},\]
_where \(\phi(z)=\min\{1,\max\{0,z\}\}\) is the clip function for \(z\) at 0 and 1, and_
\[\tilde{\gamma}_{1}=0,\quad\tilde{\gamma}_{k}=\frac{\sigma^{2}}{n}\min_{k\leq i \leq M}\max_{1\leq j<k}\frac{d_{i}-d_{j}}{R_{j}-R_{i}},\quad k=2,3,\ldots,M.\]
Proof.: Recall that \(\alpha_{k}=\beta_{k+1}-\beta_{k}\) and \(c_{k}=1-\beta_{k}\). Using summation by parts and Lemma A.1, we can establish the equivalence between the solution of (44) and the solution of the following program:
\[\begin{split}\text{minimize}&\frac{\eta}{1-\eta}R_{0} c_{1}+\sum_{k=1}^{M}\Delta R_{k}\bigg{(}c_{k}-\frac{1}{2}\bigg{(}\frac{2-\eta}{1- \eta}-\frac{2}{1-\eta}\frac{\sigma^{2}}{n}\frac{\Delta d_{k}}{\Delta R_{k}} \bigg{)}\bigg{)}^{2}\\ \text{subject to}& c_{1}\geq c_{2}\geq\cdots\geq c_{M} \geq 0.\end{split} \tag{45}\]
When \(0<\eta<1\) and \(\sum_{k=1}^{M}\alpha_{k}=1\), we note that \(c_{1}=\sum_{k=1}^{M}\alpha_{k}=1\) and \(\beta_{1}=1-c_{1}=0\). Thus, (45) can be further reduced to (18). By Lemma A.3, we know the solution is \(\tilde{\beta}_{1}=0\) and \(\tilde{\beta}_{k}=1-\phi\Big{(}\frac{1}{2}\Big{(}1+\frac{1-2\tilde{\gamma}_{ k}}{1-\eta}\Big{)}\Big{)}=1-\phi\Big{(}\frac{1}{1-\eta}\Big{(}1-\eta/2-\tilde{ \gamma}_{k}\Big{)}\Big{)}\) for \(k=2,3,\ldots,M\). Consequently,
\[\breve{\alpha}_{k}=\phi\bigg{(}\frac{1}{2}\bigg{(}1+\frac{1-2\tilde{\gamma}_{ k}}{1-\eta}\bigg{)}\bigg{)}-\phi\bigg{(}\frac{1}{2}\bigg{(}1+\frac{1-2\tilde{ \gamma}_{k+1}}{1-\eta}\bigg{)}\bigg{)},\quad k=1,2,\ldots,M.\]
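For completeness, the sketch below evaluates the closed-form weights of Lemma A.7 on synthetic \((R_{k},d_{k})\) sequences. The convention \(\tilde{\gamma}_{M+1}=\infty\) for the last weight is an assumption, as are the synthetic inputs.

```python
import numpy as np

rng = np.random.default_rng(8)
n, sigma, eta, M = 300, 1.0, 0.5, 8
d = np.sort(rng.choice(np.arange(1, 40), size=M, replace=False))     # d_1 < ... < d_M
R = np.sort(rng.uniform(0.5, 3.0, size=M))[::-1]                     # R_1 > ... > R_M (synthetic)

clip01 = lambda z: np.minimum(1.0, np.maximum(0.0, z))               # the clip function phi
gam = [0.0]                                                          # tilde gamma_1 = 0
for k in range(2, M + 1):                                            # 1-indexed k, as in the lemma
    gam.append((sigma ** 2 / n) * min(
        max((d[i - 1] - d[j - 1]) / (R[j - 1] - R[i - 1]) for j in range(1, k))
        for i in range(k, M + 1)))
gam = np.append(np.array(gam), np.inf)                               # assumed convention tilde gamma_{M+1} = infinity

vals = clip01(0.5 * (1.0 + (1.0 - 2.0 * gam) / (1.0 - eta)))
alpha_breve = vals[:-1] - vals[1:]
print(np.round(alpha_breve, 4), alpha_breve.sum())                   # nonnegative weights summing to 1
```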
2308.16451 | Optical flow-based vascular respiratory motion compensation | This paper develops a new vascular respiratory motion compensation algorithm,
Motion-Related Compensation (MRC), to conduct vascular respiratory motion
compensation by extrapolating the correlation between invisible vascular and
visible non-vascular. Robot-assisted vascular intervention can significantly
reduce the radiation exposure of surgeons. In robot-assisted image-guided
intervention, blood vessels are constantly moving/deforming due to respiration,
and they are invisible in the X-ray images unless contrast agents are injected.
The vascular respiratory motion compensation technique predicts 2D vascular
roadmaps in live X-ray images. When blood vessels are visible after contrast
agents injection, vascular respiratory motion compensation is conducted based
on the sparse Lucas-Kanade feature tracker. An MRC model is trained to learn
the correlation between vascular and non-vascular motions. During the
intervention, the invisible blood vessels are predicted with visible tissues
and the trained MRC model. Moreover, a Gaussian-based outlier filter is adopted
for refinement. Experiments on in-vivo data sets show that the proposed method
can yield vascular respiratory motion compensation in 0.032 sec, with an
average error 1.086 mm. Our real-time and accurate vascular respiratory motion
compensation approach contributes to modern vascular intervention and surgical
robots. | Keke Yang, Zheng Zhang, Meng Li, Tuoyu Cao, Maani Ghaffari, Jingwei Song | 2023-08-31T04:38:12Z | http://arxiv.org/abs/2308.16451v1 | # Optical flow-based vascular respiratory motion compensation
###### Abstract
This paper develops a new vascular respiratory motion compensation algorithm, Motion-Related Compensation (MRC), which conducts vascular respiratory motion compensation by extrapolating the correlation between the invisible vasculature and visible non-vascular tissues. Robot-assisted vascular intervention can significantly reduce the radiation exposure of surgeons. In robot-assisted image-guided intervention, blood vessels are constantly moving/deforming due to respiration, and they are invisible in the X-ray images unless contrast agents are injected. The vascular respiratory motion compensation technique predicts 2D vascular roadmaps in live X-ray images. When blood vessels are visible after contrast agent injection, vascular respiratory motion compensation is conducted based on the sparse Lucas-Kanade feature tracker. An MRC model is trained to learn the correlation between vascular and non-vascular motions. During the intervention, the invisible blood vessels are predicted from the visible tissues and the trained MRC model. Moreover, a Gaussian-based outlier filter is adopted for refinement. Experiments on in-vivo data sets show that the proposed method can yield vascular respiratory motion compensation in \(0.032\,\mathrm{sec}\), with an average error of \(1.086\,\mathrm{mm}\). Our real-time and accurate vascular respiratory motion compensation approach contributes to modern vascular intervention and surgical robots.
robot-assisted vascular interventions, vascular respiratory motion compensation, dynamic roadmapping, optical flow
## I Introduction
Robot-assisted vascular interventional therapy is a rapidly developing technology in the field of cardiovascular disease treatment [1, 2, 3]. Its value in peripheral blood vessels, particularly in tumor embolization therapy, is gaining increasing attention. Among these developments, robot-assisted vascular intervention reduces radiation exposure and has drawn more attention recently [4]. In the vascular intervention procedure, roadmapping is the process of superimposing a 2D vascular map on live fluoroscopic images, and it plays a key role in surgeries [5].
Vascular respiratory motion compensation (or dynamic roadmapping) technique pushes conventional **static** roadmaps to **dynamic** roadmaps and helps the interventionists/robots manipulate catheters and guidewires or place stents by visualization of the map and devices on one screen [5]. In typical interventions, catheters and guidewires are guided under live fluoroscopic images, which contain 2D device information [6]. During the process, contrast agents are injected to provide clear vascular structures [7]. However, contrast agents flow quickly, and the vascular no longer develops after the contrast agents disappear [3]. Moreover, deforming/moving roadmapping brings large errors in organs like the liver due to respiration motion and requires additional compensation. Fig. 1 shows a typical vascular respiratory motion compensation for handling the two issues in conventional static roadmapping. The green mask obtained from the contrasted image (the fluoroscopic image with contrast agent injection as shown in Fig. 1 contrasted sequences) is mapped onto the live image directly. The error is evident from the guidewire due to breathing motion. Vascular respiratory motion compensation can dynamically move 2D roadmaps to properly match the live fluoroscopic images, especially after the contrast agent disappears and vascular structures are not visible from fluoroscopic images. The red mask in Fig. 1 is the prediction of vascular respiratory motion compensation, which can provide immediate feedback to robots/physicians during surgeries.
Existing vascular respiratory motion compensation methods can be categorized as respiratory state-driven and motion modeling [7]. Compensation based on respiratory state extraction connects vascular motion with the respiratory state, which can be extracted through external devices or images. External devices include respiratory belts [8], surface markers [9, 10],
Fig. 1: The scenario of vascular respiratory motion compensation: input live image with invisible vascular, output roadmap of the live image with mapped red vascular mask. The roadmap can be obtained by contrasted sequences and mapped onto the live image directly, as indicated by the green mask. Vascular respiratory motion compensation utilizes contrasted sequences to predict motion between the live image and roadmap, as white arrows indicate. The red dotted line is for easy visualization of motion.
and electromagnetic sensors [11]. Although estimating the respiratory state with external devices is straightforward and applicable, it disrupts the clinical workflow and is not robust because the respiratory rate and amplitude are not fixed [7]. Instead of using additional devices, image-based approaches estimate the respiratory state from anatomical landmarks such as the diaphragm [12, 13, 7, 14]. [7] estimated the respiratory state based on the diaphragm location in the non-contrasted images and fitted a linear function between the vascular affine transformation and the respiratory state. For non-contrasted live images, an affine transformation of the vasculature was estimated from the respiratory state, which is computationally efficient. [7] reported an average one-frame processing time of \(17~{}ms\), which is the fastest. Image-based respiratory state extraction requires no additional clinical workflow but is limited by the Field-of-View (FoV). As [15] pointed out, although respiratory state-based methods are time-efficient, their accuracy and robustness are limited because motions over the respiratory cycle are generally less reproducible.
Different from respiratory state-driven approaches, model-based approaches build models to learn and predict the motion in fluoroscopic frames [3, 16, 17, 18, 19]. Model-based methods can be categorized as **catheter-based** and **catheter-free**. **Catheter-based** methods track the catheter tip to conduct vascular respiratory motion compensation. [16] learned displacements of the catheter tip by a Convolutional Neural Network (CNN). In their work, vascular motion was factorized into respiratory and heart motion. Heart motion compensation was done by ECG, and respiratory motion was predicted by the CNN. [17] conducted cardiac and respiratory-related vascular motion compensation by ECG alignment and catheter tip tracking in X-ray fluoroscopy, respectively. In particular, to realize accurate and robust tracking of the catheter tip, they proposed a new deep Bayesian filtering method that integrated the detection outcome of a CNN and sequential motion estimation using a particle filtering framework. **Catheter-free** methods conduct vascular respiratory motion compensation by soft tissue motion. [20, 21] utilized soft tissue around the heart to model vascular motion. These works are based on the assumption that vascular motion and soft tissue motion follow the same affine transformation, and the Lucas-Kanade (LK) tracker was improved to estimate the soft tissue affine transformation to conduct vascular respiratory motion compensation. [20] proposed special handling of static structures to recover soft tissue motion. [20] also applied the LK tracker on multiple observed fluoroscopic images to gain robustness. However, running the LK tracker on the entire X-ray image needs heavy computation. Meanwhile, vascular motion and soft tissue motion do not always coincide. To sum up, methods based on motion modeling can yield accurate and robust compensation without additional input, but their computational cost is large. To our knowledge, no study can achieve both real-time implementation and high accuracy in hepatic vascular respiratory motion compensation.
In this paper, we propose Motion-Related Compensation (MRC) algorithm to achieve real-time and accurate vascular respiratory motion compensation. To enable fast vascular respiratory motion compensation, feature points are extracted from the observed fluoroscopic image; deforming vascular motions are estimated based on the tracked feature points. Furthermore, a correlation model is built between vascular points motion and non-vascular points motion when the contrast agent develops in X-ray images. After the disappearance of the contrast agent, our trained model can predict the invisible motion of the vascular points based on the motion of the non-vascular points. Moreover, a Gaussian-based Outlier Filtering (GOF) technique is adopted to refine the correlation model's prediction. In summary, our main contributions are:
* To our knowledge, the proposed MRC is the first applicable method that is both real-time and accurate for respiratory motion compensation. It achieves \(31Hz\) and \(1.086mm\) for the typical fluoroscopic image size \(512\times 512\) on a modern desktop with Intel Core i5-10500 CPU at 3.10GHz.
* We propose a novel vascular MRC method without assuming that vascular and non-vascular motion are identical, which uses multi-frame contrasted images to learn the model of vascular-nonvascular motion correlation.
* GOF is adopted to improve the accuracy of our MRC predictions.
## II Preliminaries
Fig. 1 describes the vascular respiratory motion compensation process. Blood vessels are visible from a small sequence with the contrast agent. After the contrast agent flows, the vascular is not visible in the live X-ray images. Vascular respiratory motion compensation on live images significantly benefits surgeons and surgical robots. Thus, the purpose of this research is to implement vascular motion compensation on live X-ray images based on the limited images with visible vascular structures.
Denote the sequences with and without the contrast agent as 2D images \(\mathbf{I}=\{\mathbf{I}_{1},\mathbf{I}_{2},...,\mathbf{I}_{k}\}\) and \(\mathbf{R}=\{\mathbf{R}_{1},\mathbf{R}_{2},...,\mathbf{R}_{q},...\}\). The reference frame taken from the contrasted images is denoted as \(\mathbf{I}_{r}\), and its binary vascular mask as \(\mathbf{M}_{r}\). The vascular and non-vascular corners extracted from the reference frame \(\mathbf{I}_{r}\) by the Shi-Tomasi approach [22] are denoted as \(\mathbf{C}_{r}^{v}=[\mathbf{c}_{r}^{v(1)},...,\mathbf{c}_{r}^{v(\mathrm{N}_{v})}]^{\top}\) and \(\mathbf{C}_{r}^{n}=[\mathbf{c}_{r}^{n(1)},...,\mathbf{c}_{r}^{n(\mathrm{N}_{n})}]^{\top}\), where \(\mathbf{c}_{r}^{v(i)},\mathbf{c}_{r}^{n(i)}\in\mathbb{R}^{2\times 1}\) and \(\mathrm{N}_{v}\), \(\mathrm{N}_{n}\) are the numbers of vascular and non-vascular corners. The vascular corner motion flow between \(\mathbf{I}_{i}\) and \(\mathbf{I}_{r}\) is denoted as \(\mathbf{D}_{i}^{v}=[\mathbf{d}_{i}^{v(1)},...,\mathbf{d}_{i}^{v(\mathrm{N}_{v})}]^{\top}\in\mathbb{R}^{\mathrm{N}_{v}\times 2}\), where \(\mathbf{d}_{i}^{v(j)}\in\mathbb{R}^{2\times 1}\); the non-vascular motion flow is denoted as \(\mathbf{D}_{i}^{n}=[\mathbf{d}_{i}^{n(1)},...,\mathbf{d}_{i}^{n(\mathrm{N}_{n})}]^{\top}\in\mathbb{R}^{\mathrm{N}_{n}\times 2}\), where \(\mathbf{d}_{i}^{n(j)}\in\mathbb{R}^{2\times 1}\). The vascular corner motion flow between \(\mathbf{R}_{q}\) and \(\mathbf{I}_{r}\) is denoted as \(\mathbf{F}_{q}^{v}=[\mathbf{f}_{q}^{v(1)},...,\mathbf{f}_{q}^{v(\mathrm{N}_{v})}]^{\top}\in\mathbb{R}^{\mathrm{N}_{v}\times 2}\), where \(\mathbf{f}_{q}^{v(j)}\in\mathbb{R}^{2\times 1}\); the non-vascular motion flow is denoted as \(\mathbf{F}_{q}^{n}=[\mathbf{f}_{q}^{n(1)},...,\mathbf{f}_{q}^{n(\mathrm{N}_{n})}]^{\top}\in\mathbb{R}^{\mathrm{N}_{n}\times 2}\), where \(\mathbf{f}_{q}^{n(j)}\in\mathbb{R}^{2\times 1}\).
## III Method
### _System Overview_
Fig. 2 shows the pipeline of our proposed MRC algorithm consisting of three modules: **sparse corners alignment**, **motion-related model**, and **GOF**. **Sparse corners alignment**
module calculates motion flow between reference frame \(\mathbf{I}_{r}\) and live moving frames and splits motion flow into vascular motion flow and non-vascular motion flow with vascular mask \(\mathbf{M}_{r}\) extracted from the reference frame. **Motion-related model** builds the correlation between vascular motion flow and non-vascular motion flow. **GOF** module filters outliers with the obtained non-vascular motion flow to refine prediction.
### _Sparse Corners Alignment_
Affine motion parameterization is used in [20, 21] to model non-vascular motion incurred by respiration, which is regarded as vascular motion directly. Affine motion parameterization is also used in [7] to model vascular motion. However, the affine motion model may not simulate real vascular motion due to a low degree of freedom. Moreover, it is not accurate to treat vascular motion and non-vascular motion equally; for example, the amplitude of respiratory motion at the top of the liver is larger than at the bottom of the liver. Therefore, we adopt sparse optical flow as a tracker to obtain motion at the pixel level, which does not limit the freedom of motion nor assume the consistency of vascular motion and non-vascular motion. Sparse flow selects a sparse feature set of pixels (e.g., interesting features such as edges and corners) to track its motion vector, which only tracks much fewer points but achieves fast speed and high accuracy. Inspired by [23], Shi-Tomasi corner detection method [22] is used to extract sparse features, and LK [24] is used to track corners' motion sequence in this paper. And experiments in Section IV validate their efficiencies.
Sparse corner alignment calculates vascular and non-vascular motion flow separately. Firstly, the vascular corners \(\mathbf{C}_{r}^{v}\in\mathbb{R}^{\mathrm{N}_{v}\times 2}\) and the non-vascular corners \(\mathbf{C}_{r}^{n}\in\mathbb{R}^{\mathrm{N}_{n}\times 2}\) in reference frame \(\mathbf{I}_{r}\) are extracted using Shi-Tomasi [22]. Then, sparse vascular motion flow \(\mathbf{D}_{i}^{v}\in\mathbb{R}^{\mathrm{N}_{v}\times 2}\) and non-vascular motion flow \(\mathbf{D}_{i}^{n}\in\mathbb{R}^{\mathrm{N}_{n}\times 2}\) between other frame \(\mathbf{I}_{i}\) and \(\mathbf{I}_{r}\) can be estimated by LK [24]. Similarly, for any live image \(\mathbf{R}_{q}\), its non-vascular motion flow \(\mathbf{F}_{q}^{n}\) can be obtained by LK [24].
Although dense flow can also obtain motion at the pixel level, it computes the optical flow sequence for every pixel, which produces accurate results but runs much slower. In addition, ORB and SIFT are more efficient in aligning sparse features with scale and orientation differences. Experiments show that ORB and SIFT are less robust in vascular respiratory motion compensation.
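The corner extraction and tracking described above map directly onto standard OpenCV routines. The sketch below is a minimal illustration of this step, assuming grayscale `uint8` fluoroscopic frames and the binary reference vascular mask \(\mathbf{M}_{r}\) as inputs; the parameter values and helper structure are our own choices, not the authors' exact settings.

```python
import cv2
import numpy as np

def sparse_corner_alignment(I_r, I_i, M_r, max_corners=400):
    """Extract Shi-Tomasi corners on the reference frame I_r, split them into
    vascular / non-vascular sets with the binary vascular mask M_r, and track
    them into frame I_i with the pyramidal Lucas-Kanade tracker."""
    corners = cv2.goodFeaturesToTrack(I_r, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=5)
    corners = corners.reshape(-1, 2)                              # (N, 2), (x, y)
    on_vessel = M_r[corners[:, 1].astype(int), corners[:, 0].astype(int)] > 0
    C_v, C_n = corners[on_vessel], corners[~on_vessel]

    def lk_flow(pts):
        # Track points from I_r to I_i and return per-point displacements.
        p0 = pts.reshape(-1, 1, 2).astype(np.float32)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(I_r, I_i, p0, None)
        ok = status.reshape(-1).astype(bool)
        return (nxt.reshape(-1, 2) - pts)[ok], ok

    D_v, _ = lk_flow(C_v)   # vascular flow (only meaningful on contrasted frames)
    D_n, _ = lk_flow(C_n)   # non-vascular flow
    return C_v, C_n, D_v, D_n
```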
### _Motion-Related Model_
The motion-related model first formulates a correlation between vascular and non-vascular motion and then predicts vascular motion after the contrast agents disappear. We assume the motion of soft tissues caused by breathing is smooth; for example, the hepatic artery follows the diaphragm in a correlated motion. Inspired by this, we assume that vascular motion is caused by the motion of the surrounding non-vascular tissues. Enforcing the smoothness presumption, our motion-related model retrieves vascular motion and non-vascular motion based on the sequence with visible vascular structures. It should be noted that the model parameters need to be updated for each injection of the contrast agent for each patient. Our model can only carry out vascular respiratory motion compensation for the blood vessels that have been observed. It is reasonable to perform vascular respiratory motion compensation using the peripheral non-vascular motion, which is elastic and connected to the blood vessels [25]. It should be noted that contrast agents flow quickly, and our algorithm can reduce the use of contrast agents.
[20] retrieves vascular respiratory motion by considering vascular and non-vascular motion as the same. This is not accurate in practice due to the different motion magnitudes incurred by respiration. To establish a relation between vascular and non-vascular motion, either a linear or a non-linear regression can be used. The slow speed [26] and hyperparameter sensitivity of non-linear regression, together with an accuracy similar to linear regression, drive us to select a linear model. In addition, [25] used the linear elastic
Fig. 2: Overview of the proposed MRC algorithm.
model, which indicates a linear regression. Experiments also verify the effectiveness of the linear regression model.
Specifically, the Pearson coefficient is adopted to quantify the correlation [27]. The \(i\)th vascular corner motion flows on \(\mathrm{k}\) frames are denoted as \(\mathbf{Y}_{i}=[\mathbf{d}_{1}^{v(i)},...,\mathbf{d}_{k}^{v(i)}]\in\mathbb{R}^{ 2\times k}\). The \(j\)th non-vascular motion flows on \(\mathrm{k}\) frames are denoted as \(\mathbf{X}_{j}=[\mathbf{d}_{1}^{n(j)},...,\mathbf{d}_{k}^{n(j)}]\in\mathbb{R} ^{2\times k}\). Pearson coefficient between the \(i\)th vascular corner motion flows, and the \(j\)th non-vascular motion flows can be calculated by
\[\rho_{i,j}=\frac{\mathrm{cov}(\mathbf{X}_{j}|_{x},\mathbf{Y}_{i}|_{x})}{\sigma_{\mathbf{X}_{j}|_{x}}\sigma_{\mathbf{Y}_{i}|_{x}}}\cdot\frac{\mathrm{cov}(\mathbf{X}_{j}|_{y},\mathbf{Y}_{i}|_{y})}{\sigma_{\mathbf{X}_{j}|_{y}}\sigma_{\mathbf{Y}_{i}|_{y}}}, \tag{1}\]
where \(|_{x}\) represents its first component and \(|_{y}\) represents its second component, \(\mathrm{cov}(\cdot,\cdot)\) calculates the covariance between two vectors. The \(j\)th non-vascular corner is used to predict the \(i\)th vascular corner motion flow if \(\rho_{i,j}>\rho_{th}\) where \(\rho_{th}\) is a predefined threshold. Then, least square serves as linear regressor between \(\mathbf{X}_{j}\) and \(\mathbf{Y}_{i}\) as (2) shows. And \(\rho\) and \(\mathbf{\hat{Q}}\) are correlation model parameters, which can be used to predict vascular corner motion flow by non-vascular motion flow.
\[\mathbf{\hat{Q}}_{i,j}=\begin{bmatrix}\hat{a}_{x}&\hat{a}_{y}\\ \hat{b}_{x}&\hat{b}_{y}\end{bmatrix}, \tag{2}\] \[\hat{a}_{x},\hat{b}_{x}=\operatorname*{arg\,min}_{a,b}\sum_{m=1}^{\mathrm{k}}\|a\cdot\mathbf{d}_{m}^{n(j)}|_{x}+b-\mathbf{d}_{m}^{v(i)}|_{x}\|^{2},\] \[\hat{a}_{y},\hat{b}_{y}=\operatorname*{arg\,min}_{a,b}\sum_{m=1}^{\mathrm{k}}\|a\cdot\mathbf{d}_{m}^{n(j)}|_{y}+b-\mathbf{d}_{m}^{v(i)}|_{y}\|^{2}\]
In summary, for each vascular corner, \(\mathbf{c}_{r}^{v(i)}\) in \(\mathbf{I}_{r}\), non-vascular points with high correlation are selected based on pre-defined threshold \(\rho_{th}\). Then, fitting parameters between this vascular corner and each selected non-vascular corner point are calculated by (2). The procedure of establishing the motion-related model is shown in Algorithm 1.
```
0: reference frame \(\mathbf{I}_{r}\), contrasted sequences \(\mathbf{I}\)
0: weight matrix \(\mathbf{W}\in\mathbb{R}^{\mathrm{N}_{v}\times\mathrm{N}_{n}}\) and fitting parameter matrices \(\mathbf{L}^{A},\mathbf{L}^{B}\in\mathbb{R}^{\mathrm{N}_{v}\times\mathrm{N}_{n}\times 2}\)
1: Initialize \(\mathbf{W}\) and \(\mathbf{L}^{A},\mathbf{L}^{B}\) with zeros matrix
2: Calculate training vascular motion flows \(\{\mathbf{D}_{1}^{v},\mathbf{D}_{2}^{v},...,\mathbf{D}_{k}^{v}\}\) and training non-vascular motion flows \(\{\mathbf{D}_{1}^{n},\mathbf{D}_{2}^{n},...,\mathbf{D}_{k}^{n}\}\)
3:for each vascular corner \(i\in[1,\mathrm{N}_{v}]\)do
4:for each non-vascular corner \(j\in[1,\mathrm{N}_{\mathrm{n}}]\)do
5: Calculate Pearson coefficient \(\rho_{i,j}\) by (1)
6:if\(\rho_{i,j}>\rho_{th}\)then
7:\(\mathbf{W}_{i,j}=\rho_{i,j}\)
8: Calculate fitting parameters \(\mathbf{\hat{Q}}_{i,j}\) by (2)
9:\(\mathbf{L}_{i,j,..}^{A}=[\hat{a}_{x},\hat{a}_{y}]\), \(\mathbf{L}_{i,j,..}^{B}=[\hat{b}_{x},\hat{b}_{y}]\)
10:endif
11:endfor
12: Normalize weight vector \(\mathbf{W}_{i,:}=\frac{\mathbf{W}_{i,:}}{\sum_{j=1}^{\mathrm{N}_{n}}\mathbf{W}_{i,j}}\)
13:endfor
```
**Algorithm 1** Motion-related model estimation process
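A minimal sketch of the model-estimation loop of Algorithm 1 is given below. It implements the per-axis Pearson screening of Eq. (1) and the per-axis least-squares fit of Eq. (2); the array layout (frames \(\times\) corners \(\times\) 2) and the variable names are illustrative assumptions.

```python
import numpy as np

def fit_motion_related_model(D_v, D_n, rho_th=0.9):
    """D_v: (k, N_v, 2) vascular flows and D_n: (k, N_n, 2) non-vascular flows
    over k contrasted training frames. Returns weights W (N_v, N_n) and linear
    fitting parameters L_A, L_B (N_v, N_n, 2), cf. Algorithm 1."""
    k, N_v, _ = D_v.shape
    N_n = D_n.shape[1]
    W = np.zeros((N_v, N_n))
    L_A = np.zeros((N_v, N_n, 2))
    L_B = np.zeros((N_v, N_n, 2))

    for i in range(N_v):
        Y = D_v[:, i, :]                       # (k, 2) flow of vascular corner i
        for j in range(N_n):
            X = D_n[:, j, :]                   # (k, 2) flow of non-vascular corner j
            # Product of per-axis Pearson coefficients, Eq. (1).
            rho = np.corrcoef(X[:, 0], Y[:, 0])[0, 1] * np.corrcoef(X[:, 1], Y[:, 1])[0, 1]
            if rho > rho_th:
                W[i, j] = rho
                for axis in (0, 1):            # per-axis least squares, Eq. (2)
                    a, b = np.polyfit(X[:, axis], Y[:, axis], deg=1)
                    L_A[i, j, axis], L_B[i, j, axis] = a, b
        s = W[i].sum()
        if s > 0:
            W[i] /= s                          # normalize the weight row
    return W, L_A, L_B
```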
The \(i\)th vascular corner motion flow is predicted by the weighted average of all selected non-vascular motion flows according to
\[\hat{\mathbf{f}}_{q}^{v(i)}=(\mathbf{W}_{i,..}\cdot(\mathbf{L}_{i,..}^{A} \odot\mathbf{F}_{q}^{n}+\mathbf{L}_{i,..}^{B}))^{\top}, \tag{3}\]
where \(\odot\) denotes Hadamard product (element-wise multiplication), \(\hat{\mathbf{f}}_{q}^{v(i)}\in\mathbb{R}^{2\times 1}\) is the predicted \(i\)th vascular corner motion flow between \(\mathbf{R}_{q}\) and \(\mathbf{I}_{r}\). However, sparse motion flows of some non-vascular corner points between \(\mathbf{R}_{q}\) and \(\mathbf{I}_{r}\) may have large errors, which makes the predicted vascular motion flow \(\hat{\mathbf{f}}_{q}^{v(i)}\) sometimes inaccurate. To refine vascular motion flow prediction, we propose GOF-based flow motion prediction in the following subsection.
### _GOF-based Motion Flow Predicting_
The GOF-based process refines the vascular motion prediction \(\hat{\mathbf{f}}_{q}^{v(i)}\) from the motion-related model and deletes outliers based on a Gaussian distribution. In order to reduce the influence of large errors between \(\mathbf{R}_{q}\) and \(\mathbf{I}_{r}\), we assume the predictions are i.i.d. and follow a Gaussian distribution.
For the \(i\)th vascular corner, with fitting coefficients \(\mathbf{L}^{A}\) and \(\mathbf{L}^{B}\), its non-vascular prediction \(\hat{\mathbf{P}}_{i}=[\hat{\mathbf{p}}_{i}^{(1)},\hat{\mathbf{p}}_{i}^{(2)},..., \hat{\mathbf{p}}_{i}^{(\mathrm{N}_{n})}]^{\top}\in\mathbb{R}^{\mathrm{N}_{n} \times 2}\) is inferred by
\[\hat{\mathbf{P}}_{i}=\mathbf{L}_{i,..}^{A}\odot\mathbf{F}_{q}^{n}+\mathbf{L}_{i,..}^{B}, \tag{4}\]
where \(\hat{\mathbf{p}}_{i}^{(j)}\in\mathbb{R}^{2\times 1}\) represents the \(j\)th non-vascular prediction for the \(i\)th vascular corner. Ideally, for any \(j\in[1,\mathrm{N}_{n}]\), the values of \(\hat{\mathbf{p}}_{i}^{(j)}\) should be equal since they belong to the same vascular point. Therefore, each element in \(\hat{\mathbf{P}}_{i}\) should obey a Gaussian distribution in both directions. That is, \(\hat{\mathbf{P}}_{i}|_{x}\sim\mathcal{N}(\mu_{i}|_{x},\sigma_{i}^{2}|_{x})\) and \(\hat{\mathbf{P}}_{i}|_{y}\sim\mathcal{N}(\mu_{i}|_{y},\sigma_{i}^{2}|_{y})\), where \(\mu_{i},\sigma_{i}\in\mathbb{R}^{2}\) are the mean and standard deviation of the \(i\)th vascular corner prediction \(\hat{\mathbf{P}}_{i}\). \(\mu_{i}\) and \(\sigma_{i}\) can be statistically calculated by (5). A random variable \(\mathrm{Z}\sim\mathcal{N}(\mu,\sigma^{2})\) lies within the range \((\mu-3\sigma,\mu+3\sigma)\) with probability 0.9974. Therefore, outliers in \(\hat{\mathbf{P}}_{i}\) caused by non-vascular motion flow estimation errors can be deleted based on the \(3\sigma\) bound, which updates the motion-related model \(\mathbf{W}\) as shown in Algorithm 2. Then each vascular corner motion flow prediction can be refined by (6) utilizing the updated weight \(\hat{\mathbf{W}}\). Algorithm 2 describes how to predict vascular motion flows based on GOF, which outputs vascular motion flows \(\tilde{\mathbf{F}}_{q}^{v}\in\mathbb{R}^{\mathrm{N}_{v}\times 2}\). Finally, the reference frame vascular mask \(\mathbf{M}_{r}\) is mapped onto the live image \(\mathbf{R}_{q}\) according to the predicted vascular motion flows \(\tilde{\mathbf{F}}_{q}^{v}\).
\[\mu_{i}|_{x}=\frac{\sum_{j=1}^{\mathrm{N}_{n}}\hat{\mathbf{p}}_{i}^{(j)}|_{x}}{\mathrm{N}_{n}},\quad\sigma_{i}|_{x}=\sqrt{\frac{\sum_{j=1}^{\mathrm{N}_{n}}(\hat{\mathbf{p}}_{i}^{(j)}|_{x}-\mu_{i}|_{x})^{2}}{\mathrm{N}_{n}}}, \tag{5}\]
\[\mu_{i}|_{y}=\frac{\sum_{j=1}^{\mathrm{N}_{n}}\hat{\mathbf{p}}_{i}^{(j)}|_{y}}{\mathrm{N}_{n}},\quad\sigma_{i}|_{y}=\sqrt{\frac{\sum_{j=1}^{\mathrm{N}_{n}}(\hat{\mathbf{p}}_{i}^{(j)}|_{y}-\mu_{i}|_{y})^{2}}{\mathrm{N}_{n}}},\]
\[\tilde{\mathbf{f}}_{q}^{v(i)}=(\hat{\mathbf{W}}_{i,:}\cdot\hat{\mathbf{P}}_{i})^{\top}. \tag{6}\]
```
0: weight matrix \(\mathbf{W}\), linear fitting coefficient \(\mathbf{L}^{A},\mathbf{L}^{B}\), live image \(\mathbf{R}_{q}\), reference frame \(\mathbf{I}_{r}\)
0: predicted vascular motion flows \(\tilde{\mathbf{F}}_{q}^{v}\in\mathbb{R}^{\mathrm{N}_{v}\times 2}\) between \(\mathbf{R}_{q}\) and \(\mathbf{I}_{r}\)
1: Calculate non-vascular motion flows \(\mathbf{F}_{q}^{n}\in\mathbb{R}^{\mathrm{N}_{n}\times 2}\) between \(\mathbf{R}_{q}\) and \(\mathbf{I}_{r}\)
2:for each vascular corner \(i\in[1,\mathrm{N}_{v}]\)do
3: Calculate the non-vascular prediction \(\hat{\mathbf{P}}_{i}\) by (4)
4: /* Delete outlier */
5: Calculate \(\mu_{i},\sigma_{i}\) by (5)
6:for each non-vascular prediction \(j\in[1,\mathrm{N}_{n}]\)do
7:if\(\tilde{\mathbf{p}}_{i}^{(j)}|_{x}\notin(\mu_{i}|_{x}-3\sigma_{i}|_{x},\mu_{i} |_{x}+3\sigma_{i}|_{x})\) or \(\tilde{\mathbf{p}}_{i}^{(j)}|_{y}\notin(\mu_{i}|_{y}-3\sigma_{i}|_{y},\mu_{i} |_{y}+3\sigma_{i}|_{y})\)then
8:\(\mathbf{W}_{i,j}=0\)
9:endif
10:endfor
11: Re-normalize weight \(\hat{\mathbf{W}}_{i,:}=\frac{\mathbf{W}_{i,:}}{\sum_{j=1}^{\mathrm{N}_{n}}\mathbf{W}_{i,j}}\)
12: Predict vascular motion flow \(\tilde{\mathbf{f}}_{q}^{v(i)}\) by (6)
13:endfor
```
**Algorithm 2** Predicting vascular motion flows based on GOF.
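The prediction stage of Algorithm 2 can be sketched as follows, combining Eqs. (4)-(6) with per-axis \(3\sigma\) outlier rejection. Restricting the statistics to the non-vascular corners selected during training (those with non-zero weight) is our reading of the algorithm, and the array layout mirrors the previous sketch.

```python
import numpy as np

def predict_vascular_flow(W, L_A, L_B, F_n):
    """W: (N_v, N_n) normalized weights; L_A, L_B: (N_v, N_n, 2) fitting
    parameters; F_n: (N_n, 2) live non-vascular flow. Returns (N_v, 2)."""
    N_v = W.shape[0]
    F_v = np.zeros((N_v, 2))
    for i in range(N_v):
        sel = W[i] > 0                                 # corners selected during training
        if not sel.any():
            continue
        P = L_A[i, sel] * F_n[sel] + L_B[i, sel]       # Eq. (4), per-corner predictions
        mu, sigma = P.mean(axis=0), P.std(axis=0)      # Eq. (5)
        inlier = np.all(np.abs(P - mu) <= 3.0 * sigma + 1e-12, axis=1)
        w = np.where(inlier, W[i, sel], 0.0)           # GOF: 3-sigma outlier rejection
        if w.sum() > 0:
            w = w / w.sum()                            # re-normalize (Algorithm 2, line 11)
        F_v[i] = w @ P                                 # Eq. (6), weighted prediction
    return F_v
```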
## IV Experiment Results
### _Experimental Setup_
To validate our proposed MRC algorithm, X-ray sequences generated during the hepatic vascular surgeries TACE and TIPS at Zhongshan Hospital were collected, and we also obtained X-ray image sequences from a male porcine subject. 13 image sequences with contrast agents were screened. These image sequences include frames with and without contrast agents. During the process, the patient breathed freely. The images have a size of either \(512\times 512\) or \(1024\times 1024\), with a pixel resolution of \(0.746mm\), \(0.308mm\), or \(0.390mm\). The Region of Interest of the image ranges from \(216\times 231\) to \(562\times 726\). For the images without the contrast agent, physicians 1 manually labeled the vascular centerlines for each sequence, which were used as _the reference_ for validation. We performed the proposed MRC on a total of \(520\) frames and quantitative evaluation on \(222\) labeled frames. Table I lists detailed information on the 13 sequences used for MRC. We also collected X-ray image sequences generated during coronary artery surgeries, but we did not include them due to poor image quality and large deformation.
Footnote 1: The second and third authors obtained M.D. degrees and did the score.
Our MRC algorithm was compared with two other state-of-the-art algorithms, WSSD [20] and CRD [7]. To show that linear regression is suitable for robot application, we compare it with a typical nonlinear regression method named Gaussian Process Regression (GPR)2[28]. All the hyperparameters were pre-tuned to guarantee the best performance. All experiments were conducted on a commercial desktop with Intel Core i5-10500 CPU at 3.10GHz with 16Gb memory. The hyperparameter of our proposed MRC \(\rho_{th}\) was set as \(0.9\) in our experiments.
Footnote 2: Details can be found in the supplementary material.
### _Evaluation criteria_
Vascular MRC algorithms can be evaluated in terms of time and accuracy. The accuracy can be evaluated qualitatively by experienced physicians and quantitatively with the average Euclidean distance between predicted and _reference_ frames. In this paper, we adopt the ratio \(\mathrm{R}\) and the mean Euclidean distance \(\mathrm{MD}\) to quantitatively evaluate algorithm accuracy, which are calculated by
\[\mathrm{R}=\frac{|\mathbf{M}_{gt}\cap\mathcal{M}(\mathbf{M}_{r},\tilde{ \mathbf{F}}_{q}^{v})|}{|\mathbf{M}_{gt}|}, \tag{7}\]
\[\mathrm{MD}=\frac{1}{\mathrm{N}_{p}}\sum_{m=1}^{\mathrm{N}_{p}}\|g(\mathbf{M}_{ gt})(m)-g(\mathcal{M}(\mathbf{M}_{r},\tilde{\mathbf{F}}_{q}^{v}))(m)\|_{2}, \tag{8}\]
where \(\mathbf{M}_{gt}\) is the labeled _reference_ centerline, function \(\mathcal{M}\) maps \(\mathbf{M}_{r}\) onto live image \(\mathbf{R}_{q}\) using motion \(\tilde{\mathbf{F}}_{q}^{v}\), \(g\) extracts centerline point coordinates of mask, \(\mathrm{N}_{p}\) is the number of centerline points in \(\mathbf{M}_{gt}\).
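Under the simplifying assumption that the warped mask is available as a binary image and that the two centerlines are sampled as corresponding ordered point sets, the two criteria can be computed roughly as below; the default pixel spacing is only an example taken from the resolutions listed above.

```python
import numpy as np

def ratio_R(M_gt, M_pred):
    """Eq. (7): fraction of the reference mask/centerline covered by the
    prediction. Both inputs are boolean arrays of the same shape."""
    return np.logical_and(M_gt, M_pred).sum() / max(M_gt.sum(), 1)

def mean_distance_MD(pts_gt, pts_pred, pixel_mm=0.746):
    """Eq. (8): mean Euclidean distance (in mm) between corresponding
    centerline points; pts_* are (N_p, 2) arrays in pixel coordinates."""
    d = np.linalg.norm(pts_gt - pts_pred, axis=1)
    return pixel_mm * d.mean()
```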
### _Experiment Results_
We conducted vascular compensation experiments on 13 sequences. It should be noted that CRD [7] was tested on only 4 sequences because its input requires the image to contain the top of the liver. For Seq.L and Seq.M, GPR was not conducted because of the very long training time caused by the large amount of data. Moreover, our MRC's error is already low on these sequences (mean \(MD=0.652mm\) and mean \(MD=0.523mm\)). Even if GPR's error on Seq.L and Seq.M were zero, the average accuracy of GPR would still be lower than that of our MRC.
#### Iv-C1 Accuracy
To assess the compensation accuracy, two clinicians scored them using the compensated roadmaps. Table II shows the mean score, which indicates that MRC possesses the best visualization for clinicians. Fig. 3 shows some sample results.3 It should be noted that in order to provide better visualization, only the image of the vascular region is shown. As Fig. 3 indicates, there are cases where the results of WSSD, GPR, and MRC look the same, such as for sequences A, C, D, E, H, I, and M. In some cases, MRC works better than WSSD and GPR, such as in sequences B, F, and J. For sequences G, K, L, and M, MRC and GPR look the same and better than WSSD. CRD is worse than WSSD and MRC in sequences H, L, and M. According to the frame image shown in the figure, the result of CRD is the same as WSSD and MRC for sequences E. However, in some frames of sequence E, the visualization results are worse than that of WSSD and MRC. And its overall results vary considerably, which can be concluded from Fig. 4 and Fig. 5. Although the error variance of CRD can be decreased by increasing the number of training X-ray images, patients suffer from higher doses of X-ray.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline & Seq.A & Seq.B & Seq.C & Seq.D & Seq.E & Seq.F & Seq.G & Seq.H & Seq.I & Seq.J & Seq.K & Seq.L & Seq.M \\ \hline Training & 18 & 12 & 14 & 11 & 11 & 16 & 12 & 10 & 11 & 8 & 11 & 7 & 11 \\ Testing & 42 & 118 & 11 & 103 & 44 & 40 & 83 & 20 & 15 & 7 & 20 & 8 & 9 \\ Labeled & 22 & 22 & 11 & 22 & 22 & 22 & 22 & 20 & 15 & 7 & 20 & 8 & 9 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Number of images used to compensate vascular motion. Seq.A-Seq.M represents different sequences.
Results of the mean Euclidean distance on 13 sequences are shown in Fig. 4. It can be seen that \(\mathrm{MD}\) of CRD is dispersed, and its mean and median are higher. For sequences A, B, F, G, H, I, K, L, and M MRC performs best on \(\mathrm{MD}\). For sequences C, D, and E, GPR possesses minimum error on \(\mathrm{MD}\). For sequences J, WSSD performs best on \(\mathrm{MD}\). Fig. 4 (b) shows that MRC achieves the best accuracy.
Fig. 5 shows the results of ratio \(\mathrm{R}\). 1 means the best prediction, and 0 indicates failure. It can be seen that \(\mathrm{R}\) of CRD is lower than the other three algorithms. For sequences B, D, F, G, H, I, K, and L, MRC possesses the highest \(\mathrm{R}\). For sequences A, C, E, J, and M, WSSD performs best on \(\mathrm{R}\) than the other three algorithms. In general, our proposed MRC performs best on \(\mathrm{R}\) as shown in Fig. 5 (b). The error of the sparse corners alignment algorithm varies with the image and can not be estimated, which makes GPR's accuracy vary with the accuracy of the additive Gaussian noise prior. To sum up, our MRC performs best on accuracy, which benefits from a reasonable motion model in that we consider the difference of motion in different positions incurred by respiration.
#### Iv-A2 Time Consumption
Real-time performance is critical for clinical usage. Both learning time consumption and motion
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline & Seq.A & Seq.B & Seq.C & Seq.D & Seq.E & Seq.F & Seq.G & Seq.H & Seq.I & Seq.J & Seq.K & Seq.L & Seq.M & mean \\ \hline WSSD & 3 & 2 & 3.5 & 3.5 & 4.5 & 2 & 2 & 3.5 & 3 & 2.5 & 3 & 4 & 3.5 & 3.077 \\ MRC & 4 & 3 & 4 & 3.5 & 4.5 & 4 & 3 & 4.5 & 4 & 4.5 & 4 & 4.5 & 4 & 3.962 \\ GPR & 2.5 & 2 & 3 & 2 & 4 & 1.5 & 1.5 & 4 & 3.5 & 3 & 3.5 & - & - & 2.773 \\ CRD & - & - & - & - & 2.5 & - & - & 0.1 & - & - & - & 0.1 & 0.1 & 0.7 \\ \hline \hline \end{tabular}
\end{table} TABLE II: The mean score of two clinicians grading according to the visualization roadmap. 5 is a perfect score. Seq.A-Seq.M represents different sequences.
Fig. 3: Visualization of vascular respiratory motion compensation. Each row represents one sample frame. The first column is the original image, the second column is the _reference_ labeled by experts, and the remaining eight columns are the results of WSSD, MRC, GPR, and CRD algorithms. For every algorithm, the compensation column uses bright red to emphasize the vascular mask, and the compensation + X-ray column uses dark red to observe accuracy.
prediction time consumption are tested. The model learning time is shown in Table III. The proposed MRC performs well on model learning time. The mean time per frame for predicting vascular motion is shown in Table IV. It demonstrates that the prediction time of MRC is less than that of WSSD and GPR on all 13 sequences. Although GPR can be accelerated [26, 29], it cannot be faster than linear regression. For sequences E, H, L, and M, the prediction time of CRD is less than that of MRC, but its robustness and accuracy are limited. Moreover, the application scenarios of CRD require the FoV of the image to include the top of the liver. In summary, our MRC performs best on learning time and second best on prediction time because of its relatively simple modeling and sparse point tracker. CRD is the fastest approach since it estimates the respiratory state from the image to predict motion. However, it is not robust if the respiration pattern changes.
#### Iv-C3 Ablation Study
Ablation experiments were conducted to verify the efficiency of our GOF and sparse optical flow. The results are shown in Table V. Our MRC is tested with and without GOF. We also test the choice of dense and sparse optical flow. Results indicate that sparse optical flow can significantly reduce time consumption. GOF can improve the accuracy of sparse optical flow prediction with some time consumption. Still, GOF is basically ineffective for dense optical flow prediction due to the more accurate dense optical flow algorithm. Thus, sparse motion flow with GOF possesses compromised accuracy and time.
### _Limitations & Future Works_
Our proposed MRC comes with three drawbacks. First, only the vascular mask of a single contrasted frame is mapped onto the live image, so the mapped vascular mask may be small. Future work will fuse vascular information from multiple contrasted frames to enrich the vascular mask. Secondly, the predicted vascular motion flow is not smooth enough due to the sparsity of the vascular flow. Although the local region is not smooth, it does not affect physicians' overall judgment. Lastly, it cannot handle scenarios with heart-beat
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & MD[mm] & R & prediction time[s] & Learning time[s] \\ \hline sparse with GOF & **1.086** & **0.595** & 0.032 & 0.092 \\ sparse without GOF & 1.565 & 0.432 & **0.012** & **0.091** \\ dense with GOF & 1.192 & 0.585 & 10.587 & 26.691 \\ dense without GOF & 1.192 & 0.587 & 4.842 & 21.524 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Results of ablation experiments in accuracy and time. Sparse and dense represent sparse motion flow and dense motion flow.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline & Seq.A & Seq.B & Seq.C & Seq.D & Seq.E & Seq.F & Seq.G & Seq.H & Seq.I & Seq.J & Seq.K & Seq.L & Seq.M & Mean \\ \hline WSSD & 0.028 & 0.050 & 0.041 & 0.033 & 0.134 & 0.130 & 0.355 & 0.091 & 0.124 & 0.090 & 0.093 & 0.156 & 0.143 & 0.116 \\ MRC & **0.012** & **0.019** & **0.008** & **0.005** & 0.030 & **0.124** & **0.046** & 0.016 & **0.011** & **0.035** & **0.029** & **0.040** & 0.155 & 0.032 \\ GPR & 17.265 & 7.785 & 22.405 & 12.852 & 47.526 & 520.735 & 328.753 & 50.823 & 8.835 & 27.107 & 17.027 & - & - & 109.543 \\ CRD & - & - & - & - & **0.002** & - & - & **0.001** & - & - & - & 0.002 & 0.002 & 0.002 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: The mean of time[s] of each frame to predict vascular motion on 13 sequences. Seq.A-Seq.M represents different sequences.
Fig. 4: The box chart of mean Euclidean distance[mm] on 13 sequences. (a) shows \(\mathrm{MD}\) on 13 sequences respectively, Seq.A-Seq.M represents different sequences. (b) shows the overall results on all 13 sequences. It should be noted that the overall results of CRD are on 4 sequences.
Fig. 5: The box chart of value of ratio \(\mathrm{R}\) on 13 sequences. (a) shows \(\mathrm{R}\) on 13 sequences respectively, Seq.A-Seq.M represents different sequences. (b) shows the overall results on all 13 sequences. It should be noted that the overall results of CRD are on 4 sequences.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline & Seq.A & Seq.B & Seq.C & Seq.D & Seq.E & Seq.F & Seq.G & Seq.H & Seq.I & Seq.J & Seq.J & Seq.K & Seq.L & Seq.M \\ \hline WSSD & 110.127 & 82.646 & 121.894 & 68.250 & 272.435 & 376.654 & 593.059 & 232.944 & 262.118 & 117.140 & 150.492 & 152.328 & 235.327 \\ MRC & **0.077** & **0.029** & **0.041** & **0.029** & **0.060** & **0.404** & **0.221** & **0.071** & **0.076** & **0.044** & **0.039** & **0.044** & **0.089** \\ GPR & 10935.236 & 4978.496 & 13922.305 & 7642.906 & 28351.750 & 268172.254 & 194248.282 & 29759.035 & 5347.611 & 15609.621 & 10876.458 & - & - \\ CRD & - & - & - & - & 12.641 & - & - & 10.755 & - & - & - & 5.105 & 18.140 \\ \hline \hline \end{tabular}
\end{table} TABLE III: The model learning time[s] for different sequences and different methods. Seq.A-Seq.M represents different sequences.
because of poor image quality, which leads to errors in the optical flow4. We plan to utilize deep learning-based optical flow to compute the motion flow.
Footnote 4: The last video clip shows the failure case.
## V Conclusion
We propose MRC to conduct vascular respiratory motion compensation in real time, predicting the vascular roadmap on live fluoroscopic images in which the vessels are invisible. Based on the linear correlation between vascular motion flow and non-vascular motion flow, multi-frame contrasted images are used to train a motion-related model. In the prediction stage, predictions from non-vascular points are refined with GOF. The proposed method was tested on 13 in-vivo vascular intervention fluoroscopic sequences. Results show that the proposed method achieves a compensation accuracy of \(1.086\ mm\) in \(0.032\ s\). Our approach provides a practical real-time solution to assist physicians/robots in vascular interventions. This work is in the process of commercialization by the company United Imaging of Health Co., Ltd.
|
2309.09658 | A Novel Method of Fuzzy Topic Modeling based on Transformer Processing | Topic modeling is admittedly a convenient way to monitor market trends.
Conventionally, Latent Dirichlet Allocation, LDA, is considered a must-do model
to gain this type of information. Given LDA's merit of deducing keywords from
token conditional probabilities, we can identify the most probable or essential
topic. However, the results are not intuitive because the given topics cannot
wholly fit human knowledge. LDA offers the first possible relevant keywords,
which also brings out another problem of whether the connection is reliable
based on the statistical probability. It is also hard to decide the topic
number manually in advance. Following the booming trend of using fuzzy
membership to cluster and using transformers to embed words, this work presents
fuzzy topic modeling based on soft clustering and document embedding from a
state-of-the-art transformer-based model. In our practical application to press
release monitoring, the fuzzy topic modeling gives a more natural result than
the traditional output from LDA. | Ching-Hsun Tseng, Shin-Jye Lee, Po-Wei Cheng, Chien Lee, Chih-Chieh Hung | 2023-09-18T10:52:54Z | http://arxiv.org/abs/2309.09658v1 | # A Novel Method of Fuzzy Topic Modeling based on Transformer Processing
###### Abstract
Topic modeling is admittedly a convenient way to monitor market trends. Conventionally, Latent Dirichlet Allocation, LDA, is considered a must-do model to gain this type of information. Given LDA's merit of deducing keywords from token conditional probabilities, we can identify the most probable or essential topic. However, the results are not intuitive because the given topics cannot wholly fit human knowledge. LDA offers the first possible relevant keywords, which also brings out another problem of whether the connection is reliable based on the statistical probability. It is also hard to decide the topic number manually in advance. Following the booming trend of using fuzzy membership to cluster and using transformers to embed words, this work presents fuzzy topic modeling based on soft clustering and document embedding from a state-of-the-art transformer-based model. In our practical application to press release monitoring, the fuzzy topic modeling gives a more natural result than the traditional output from LDA.
Fuzzy, Soft Clustering, Transformer
## I Introduction
In natural language processing, the typical approach is to collect all vocabulary into a dictionary and then tokenize each word against that dictionary. The typical way to extract topics from a collection of articles is Latent Dirichlet Allocation, LDA. By counting the occurrences of each word, LDA offers topic contributions based on word-to-word conditional probability. Although this approach is admittedly straightforward on a small dataset, the output might be unintuitive on an enormous dataset because a significant number of words could share similar conditional probabilities. Also, building an enormous dictionary typically incurs a substantial computing cost. Then, with such a sparse vector for each word, the chances of experiencing the curse of dimensionality are notoriously high. Even worse, in a real-world task, it is necessary to update the dictionary and rerun LDA from scratch. Thus, a better way to implement topic analysis is expected to give a better result without setting the topic number in advance.
Faced with this dilemma, this work applies a novel approach that extracts a more meaningful vector for each word or document, instead of relying on probabilities alone. Because of the booming development of transformers, such as Bidirectional Encoder Representations from Transformers, BERT [4], and the merit of the attention mechanism, a vector not only represents the meaning of the word itself but also encodes its relationship to every other word in a document. By extracting the output of a hidden layer of the transformer, each word embedding is represented by multiple continuous values, which can be viewed as relative fuzzy indices along each direction because of the multi-head attention in a transformer. Whereas training a transformer from scratch still requires building a token dictionary, it is, as a result, more efficient and accessible to use the pre-trained weights of BERT released by Google. Most importantly, one significant advantage of using a transformer is that the dot-products in multi-head attention can be parallelized on GPUs, which speeds up the process.
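As a rough illustration of this embedding step, the sketch below extracts last-hidden-layer token vectors from a pre-trained BERT via the Hugging Face `transformers` library and pools them into one vector per document (the mean pooling used later in this work); the checkpoint name and pooling details are our assumptions rather than the authors' exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_documents(texts):
    """Return one fixed-size vector per document by mean-pooling the last
    hidden layer of BERT over the non-padding tokens."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state        # (B, T, 768)
    mask = batch["attention_mask"].unsqueeze(-1)         # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (B, 768)
```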
On the other hand, mean pooling is applied in this work to reshape each document embedding to the same dimension. It is hard to visualize embeddings in such a high dimension, so it is necessary to apply dimensionality reduction, projecting the vectors into a low-dimensional space. Therefore, visualizing the relative locations of documents and clustering similar data points in 2-D space becomes possible, since it is not wise to cluster data in a high-dimensional space directly. Typically, the standard ways are linear discriminant analysis and principal component analysis. However, most dimensionality reduction methods assume the data is either linearly separable or follows a normal distribution. Also, although the methods above can successfully reduce the vectors' dimension, the resulting data locations cannot entirely reflect the actual distances between vectors in the original dimension.
In contrast, t-distributed stochastic neighbor embedding can handle these concerns well because it projects the data through a more elaborate, step-by-step procedure. It models each data point's neighborhood distribution in the original space and groups similar data in the low-dimensional space according to a t-distribution. Our experiments reveal that t-SNE usually gives a better result than PCA and LDA, as t-SNE helps cluster data based on the distribution in advance instead of merely projecting the data.
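A corresponding projection step might look as follows with scikit-learn's t-SNE; the perplexity, initialization, and distance metric are illustrative choices rather than prescribed settings.

```python
import numpy as np
from sklearn.manifold import TSNE

def project_2d(doc_embeddings, perplexity=30, seed=0):
    """Project high-dimensional document embeddings to 2-D with t-SNE so that
    neighborhood structure (and hence topical similarity) is preserved."""
    tsne = TSNE(n_components=2, perplexity=perplexity, init="pca",
                random_state=seed, metric="cosine")
    return tsne.fit_transform(np.asarray(doc_embeddings))
```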
After a series of preprocessing, the goal of this work is to find a better topic among a large amount articles. Instead of using conditional probability to indicate the possible topic, clustering the document embedding of each article in low dimension without setting the wanted topic number in advance is applied. Among a range of clustering models, including K-means, KNN, and DBSCAN, HDBSCAN |
2309.15271 | Kinematic Modularity of Elementary Dynamic Actions | In this paper, a kinematically modular approach to robot control is
presented. The method involves structures called Elementary Dynamic Actions and
a network model combining these elements. With this control framework, a rich
repertoire of movements can be generated by combination of basic modules. The
problems of solving inverse kinematics, managing kinematic singularity and
kinematic redundancy are avoided. The modular approach is robust against
contact and physical interaction, which makes it particularly effective for
contact-rich manipulation. Each kinematic module can be learned by Imitation
Learning, thereby resulting in a modular learning strategy for robot control.
The theoretical foundations and their real robot implementation are presented.
Using a KUKA LBR iiwa14 robot, three tasks were considered: (1) generating a
sequence of discrete movements, (2) generating a combination of discrete and
rhythmic movements, and (3) a drawing and erasing task. The results obtained
indicate that this modular approach has the potential to simplify the
generation of a diverse range of robot actions. | Moses C. Nah, Johannes Lachner, Federico Tessari, Neville Hogan | 2023-09-26T21:10:12Z | http://arxiv.org/abs/2309.15271v2 | # Kinematic Modularity of Elementary Dynamic Actions
###### Abstract
In this paper, a kinematically modular approach to robot control is presented. The method involves structures called Elementary Dynamic Actions and a network model combining these elements. With this control framework, a rich repertoire of movements can be generated by combination of basic kinematic modules. Each module can be learned by Imitation Learning, thereby resulting in a modular learning strategy for robot control. The theoretical foundations and their real robot implementation are presented. Using a KUKA LBR iiwa14 robot, three tasks were considered: (1) generating a sequence of discrete movements, (2) generating a combination of discrete and rhythmic movements, and (3) a drawing and erasing task. The obtained results indicate that this modular approach has the potential to simplify the generation of a diverse range of robot actions.
## I Introduction
To generate complex motor behavior that can match that of humans, robot control based on motor primitives has been proposed [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. The method originates from motor neuroscience research, where the complex motor behavior of biological systems appears to be generated by a combination of fundamental building blocks [12, 13, 14, 15, 16, 17, 18, 19]. By parameterizing a controller using motor primitives, robots can efficiently learn, adapt, and execute a wide range of tasks.
In robotics, two distinct motor-primitive approaches have been identified: Elementary Dynamic Actions (EDA)1[8, 9, 10] and Dynamic Movement Primitives (DMP) [6, 20]. EDA provides a modular framework for robot control that also accounts for physical interaction [8, 9, 10, 11, 21]. One of its applications, impedance control [22, 23, 24], has been a prominent approach for tasks involving contact and physical interaction. DMP provides a rigorous mathematical framework to generate movements of arbitrary complexity [6, 20]. Its prominent application, Imitation Learning (or Learning from Demonstration [4, 5]), provides a systematic method to learn (or imitate) trajectories that are provided by demonstration.
Footnote 1: The original name suggested by Hogan and Sternad [8] was “Dynamic _Motor_ Primitives.” However, to avoid confusion due to the similarity to “Dynamic _Movement_ Primitives,” we instead use “Elementary Dynamic Actions.”
In this paper, we combine EDA with DMP to achieve a modular learning approach for robot control. Using this control framework, a wide range of robot movements can be produced by combining basic modules. Each module can be learned with Imitation Learning, resulting in "modular" Imitation Learning. This approach also preserves the advantages of EDA for tasks involving contact and physical interaction.
A theoretical overview of the proposed modular strategy is presented, together with an actual robot implementation. Using a KUKA LBR iiwa14 robot, modular control in task-space is demonstrated. Three tasks are considered: (1) generating a sequence of discrete movements, (2) generating a combination of discrete and rhythmic movements, and (3) a drawing and erasing task. The first two tasks highlight the kinematic modularity offered by EDA; the third task serves as an illustrative example of combining the modular property of EDA with DMP's Imitation Learning. These demonstrations show the proposed approach has the potential to simplify the generation of a range of diverse robot motions.
## II Theoretical Foundations
In this section, a brief overview of EDA and DMP is provided. A torque-actuated \(n\) degrees of freedom (DOFs) open-chain robotic manipulator is considered.
### _Elementary Dynamic Actions and the Norton Equivalent Network Model_
EDA, introduced by Hogan and Sternad [8, 9, 10], consist of (at least) three distinct classes of primitives (Figure 1A):
* Submovements for discrete movements [25].
* Oscillations for rhythmic movements [25].
* Mechanical impedances to manage physical interaction [11].
This paper will focus on submovements and oscillations. Only the necessary details of mechanical impedances are briefly presented.
#### Ii-A1 Kinematic Primitives
_Submovements and Oscillations_:
A submovement \(\mathbf{x}_{0}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}\) is a smooth trajectory with a time derivative that is a unimodal function, i.e., has a single peak value:
\[\mathbf{\dot{x}}_{0}(t)=\mathbf{v}\ \boldsymbol{\hat{\sigma}}(t) \tag{1}\]
In this equation, \(t\in\mathbb{R}_{\geq 0}\) is time; \(\boldsymbol{\hat{\sigma}}:\mathbb{R}_{\geq 0}\rightarrow[0,1]\) denotes a smooth unimodal basis function with peak value \(1\); \(\mathbf{v}\in\mathbb{R}^{n}\) is the velocity amplitude of each DOF. Since submovements model discrete movement, \(\boldsymbol{\hat{\sigma}}(t)\) has a finite support, i.e., there exists \(T>0\) such that \(\boldsymbol{\hat{\sigma}}(t)=0\) for \(t\geq T\).
An oscillation \(\mathbf{x}_{0}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}\) is a smooth non-zero trajectory which is a periodic function:
\[\forall t>0:\ \ \exists T>0:\ \ \mathbf{x}_{0}(t)=\mathbf{x}_{0}(t+T) \tag{2}\]
Compared to submovements, oscillations model rhythmic and repetitive motions.
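As a concrete example, a commonly used unimodal basis is the minimum-jerk-type bump in the sketch below; the specific basis is our illustrative choice, since EDA does not prescribe one. The sketch also includes a simple sinusoidal oscillation satisfying Eq. (2).

```python
import numpy as np

def submovement_velocity(t, v, t_start, T):
    """Velocity of a single submovement, Eq. (1): v * sigma_hat(t), where sigma_hat
    is a smooth unimodal bump with unit peak and support [t_start, t_start + T].
    Here sigma_hat(s) = 16*s^2*(1-s)^2 (a minimum-jerk-type profile) is assumed."""
    s = np.clip((np.asarray(t, dtype=float) - t_start) / T, 0.0, 1.0)
    sigma_hat = 16.0 * s**2 * (1.0 - s)**2        # peak value 1 at s = 0.5, zero outside
    return np.outer(sigma_hat, v)                 # (len(t), n) velocity samples

# Example virtual-trajectory ingredients: a 3-D submovement and a 1-D oscillation.
t = np.linspace(0.0, 2.0, 501)
xdot_sub = submovement_velocity(t, v=np.array([0.1, 0.0, 0.05]), t_start=0.5, T=1.0)
x_osc = 0.05 * np.sin(2.0 * np.pi * 1.5 * t)      # periodic with period 1/1.5 s
```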
#### Ii-A2 Interactive Primitive -- Mechanical Impedances
Mechanical impedance \(\mathbf{Z}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is an operator which maps (generalized) displacement \(\Delta\mathbf{x}(t)\in\mathbb{R}^{n}\) to (generalized) force \(\mathbf{F}(t)\in\mathbb{R}^{n}\)[10, 11, 21]:
\[\mathbf{Z}:\Delta\mathbf{x}(t)\longrightarrow\mathbf{F}(t)\]
In this equation, \(\Delta\mathbf{x}(t)\) is the displacement of an actual trajectory of (generalized) position \(\mathbf{x}(t)\) from a virtual trajectory \(\mathbf{x}_{0}(t)\) to which the mechanical impedance is connected, i.e., \(\Delta\mathbf{x}(t)=\mathbf{x}_{0}(t)-\mathbf{x}(t)\). Loosely speaking, mechanical impedance is a generalization of stiffness to encompass nonlinear dynamic behavior [10].
Mechanical impedances can be linearly superimposed even though each mechanical impedance is a nonlinear operator. This is the superposition principle of mechanical impedances [10, 11, 21, 23]:
\[\mathbf{Z}=\sum\mathbf{Z}_{i} \tag{3}\]
Note that the impedance operators can include transformation maps (i.e., Jacobian matrices) (Section III).
While mechanical impedance introduces favorable properties which simplify a wide range of control tasks, in this paper we will focus on kinematic primitives of EDA, i.e., submovements and oscillations. For more details on the use of mechanical impedance, readers are referred to [10, 21, 26, 27, 28, 29].
#### Ii-A3 Norton Equivalent Network Model
The three distinct classes of EDA can be combined using a Norton equivalent network model [10], which provides an effective framework to relate the three classes of EDA (Figure 1B). The forward-path dynamics specifies the virtual trajectory \(\mathbf{x}_{0}(t)\), which consists of submovements and/or oscillations. The mechanical impedance \(\mathbf{Z}\), determines \(\mathbf{F}(t)\) with the virtual trajectory \(\mathbf{x}_{0}(t)\), and its (generalized) force output \(\mathbf{F}(t)\) is used as an input to the robot. Hence, a key objective of EDA is to find appropriate choices of \(\mathbf{x}_{0}(t)\) and \(\mathbf{Z}\) to generate the desired robot behavior.
By combining EDA with the Norton equivalent network model, submovements and/or oscillations can be directly superimposed at the level of the virtual trajectory \(\mathbf{x}_{0}(t)\). This is the core concept that provides kinematic modularity of EDA, which simplifies generating a wide range of movements (Section III).
### _Dynamic Movement Primitives and Imitation Learning_
DMP, introduced by Ijspeert, Schaal, et al. [6] consists of three distinct classes of primitives: canonical system, nonlinear forcing term and transformation system. Using these three primitives, DMP can represent both discrete and rhythmic movements. However, different choices of canonical systems and nonlinear forcing terms are employed for these movements [6]. For this overview, we focus on DMP for discrete movement.
#### Ii-B1 Dynamic Movement Primitives
A canonical system for discrete movement \(s:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) is a scalar variable governed by a stable first-order differential equation:
\[\tau\dot{s}(t)=-\alpha_{s}s(t)\]
In this equation, \(\alpha_{s}\) and \(\tau\) are positive constants.
A nonlinear forcing term for discrete movement, \(f:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}\), which takes the canonical system \(s(t)\) as the function argument, is defined by:
\[\begin{split} f(s(t))&=\frac{\sum_{i=1}^{N}w_{i}\phi _{i}(s(t))}{\sum_{i=1}^{N}\phi_{i}(s(t))}s(t)(g-y_{0})\\ \phi_{i}(s(t))&=\exp\big{\{}-h_{i}(s(t)-c_{i})^{2} \big{\}}\end{split} \tag{4}\]
In these equations, \(\phi_{i}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) is the \(i\)-th basis function of the nonlinear forcing term which is a Gaussian function; \(N\) is the number of basis functions; \(y_{0}\) and \(g\) are the initial and final positions of the discrete movement, respectively; \(w_{i}\) is the weight and \(c_{i}\), \(h_{i}\) determine the center, width of the \(i\)-th basis function, respectively.
A transformation system is a second-order differential equation with a nonlinear forcing term \(f(s(t))\) as an input:
\[\begin{split}\tau\dot{y}(t)&=z(t)\\ \tau\dot{z}(t)&=\alpha_{z}\{\beta_{z}(g-y(t))-z(t)\} +f(s(t))\end{split} \tag{5}\]
In these equations, \(\alpha_{z}\) and \(\beta_{z}\) are positive constants; \(y(t)\) and \(z(t)\) are state variables which correspond to position and (time-scaled) velocity of the transformation system, respectively. \(y(t)\) can represent both joint-space and task-space trajectories. However, for end-effector orientation, a different transformation system must be employed [30, 31].
Fig. 1: (A) Three Elementary Dynamic Actions (EDA). Submovements (orange box) and oscillations (blue box) correspond to kinematic primitives and mechanical impedances manage physical interaction. (B) EDA combined using a Norton equivalent network model. The virtual trajectory \(\mathbf{x}_{0}(t)\) (yellow box) consists of submovements (orange box) and/or oscillations (blue box), and mechanical impedance \(\mathbf{Z}\) regulates interactive dynamics.
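For reference, a minimal Euler-integration rollout of this discrete DMP (Eqs. (4)-(5)) for a single DOF might look as follows; the gains, basis placement, and step size are illustrative assumptions.

```python
import numpy as np

def dmp_rollout(w, y0, g, tau=1.0, dt=0.001, alpha_z=25.0, beta_z=6.25, alpha_s=1.0):
    """Euler integration of the discrete DMP: canonical system, nonlinear
    forcing term (Eq. (4)) and transformation system (Eq. (5)) for one DOF."""
    N = len(w)
    c = np.exp(-alpha_s * np.linspace(0.0, 1.0, N))       # basis centers along s
    h = 1.0 / (np.gradient(c) ** 2 + 1e-12)               # basis widths (illustrative)
    steps = int(round(tau / dt))
    y, z, s = float(y0), 0.0, 1.0
    traj = np.empty(steps)
    for k in range(steps):
        phi = np.exp(-h * (s - c) ** 2)
        f = (phi @ w) / (phi.sum() + 1e-12) * s * (g - y0)    # Eq. (4)
        zdot = (alpha_z * (beta_z * (g - y) - z) + f) / tau   # Eq. (5)
        ydot = z / tau
        sdot = -alpha_s * s / tau                             # canonical system
        y, z, s = y + ydot * dt, z + zdot * dt, s + sdot * dt
        traj[k] = y
    return traj
```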
#### Ii-C2 Imitation Learning
Imitation Learning generates (or mimics) a desired trajectory \(y_{des}(t)\) by learning the best-fit weights of the nonlinear forcing term \(f(s(t))\) (Eq. (4)) Given \(P\) sample points of \((y_{des}(t_{j}),\,\dot{y}_{des}(t_{j}),\,\ddot{y}_{des}(t_{j}))\) for \(j\in[1,2,\cdots,P]\), the best-fit weight \(w_{i}^{*}\) is calculated using Locally Weighted Regression [6, 32, 33]:
\[\begin{split} w_{i}^{*}=\frac{\sum_{j=1}^{P}a_{j}\phi_{ij}f_{j}}{\sum_{j=1}^{P}a_{j}^{2}\phi_{ij}}\\ a_{j}=s(t_{j})(g-y_{0})\qquad\qquad\phi_{ij}=\phi_{i}(s(t_{j}))\\ f_{j}=\tau^{2}\ddot{y}_{des}(t_{j})+\alpha_{z}\tau\dot{y}_{des}(t_{j})+\alpha_{z}\beta_{z}\{y_{des}(t_{j})-g\}\end{split} \tag{6}\]
Along with Locally Weighted Regression, one can also use Linear Least Square Regression to find the best-fit weights [30, 31]. To use Imitation Learning for \(y_{des}(t)\), one must also calculate the velocity \(\dot{y}_{des}(t)\) and acceleration \(\dot{y}_{des}(t)\).
To generalize Imitation Learning to an \(n\)-DOF system, \(n\) transformation systems with \(n\) nonlinear forcing terms may be synchronized with a single canonical system [6]. For a desired trajectory \(\mathbf{y}_{des}(t)\in\mathbb{R}^{n}\), Imitation Learning for each DOF is conducted [30, 31, 34, 35, 36, 37].
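The weight computation of Eq. (6) can be transcribed directly; in the sketch below the demonstration is assumed to be uniformly sampled, and the basis centers and widths are illustrative (in practice they must match those used when rolling out the DMP).

```python
import numpy as np

def learn_dmp_weights(y_des, dy_des, ddy_des, t, N=30,
                      tau=1.0, alpha_z=25.0, beta_z=6.25, alpha_s=1.0):
    """Best-fit weights w_i* of the nonlinear forcing term via Eq. (6)
    (locally weighted regression over P sampled points of the demonstration)."""
    y0, g = y_des[0], y_des[-1]
    s = np.exp(-alpha_s * t / tau)                       # canonical system at the samples
    c = np.exp(-alpha_s * np.linspace(0.0, 1.0, N))      # basis centers (assumed)
    h = 1.0 / (np.gradient(c) ** 2 + 1e-12)              # basis widths (assumed)
    a = s * (g - y0)                                     # a_j in Eq. (6)
    f = tau**2 * ddy_des + alpha_z * tau * dy_des + alpha_z * beta_z * (y_des - g)
    w = np.empty(N)
    for i in range(N):
        phi = np.exp(-h[i] * (s - c[i]) ** 2)            # phi_{ij}
        w[i] = (a * phi * f).sum() / ((a**2 * phi).sum() + 1e-12)
    return w
```

For an \(n\)-DOF demonstration, this is simply repeated per DOF with a shared canonical system; the resulting weights can then be replayed with the rollout sketched earlier.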
## III The Three Control Tasks and Methods
In this section, we introduce three control tasks and the corresponding methods to achieve them.
1. Generating a sequence of discrete movements.
2. Generating a combination of discrete and rhythmic movements.
3. Drawing and erasing a path on a table.
The first two tasks highlight the advantages of EDA's kinematic modularity. The final task, which involves physical contact, provides an illustrative example of how Imitation Learning can be combined with EDA.
The torque command of a robot, \(\mathbf{\tau}_{in}(t)\in\mathbb{R}^{n}\) is defined by superimposing three mechanical impedances (Eq. (3)):
\[\mathbf{\tau}_{in}(t)=\mathbf{Z}_{q}(\mathbf{q})+\mathbf{J}_{p}(\mathbf{q})^{ \mathrm{T}}\mathbf{Z}_{p}(\mathbf{p},\mathbf{p}_{0}(t))+\mathbf{J}_{r}( \mathbf{q})^{\mathrm{T}}\mathbf{Z}_{r}(\mathbf{R},\mathbf{R}_{0}) \tag{7}\]
where:
\[\begin{split}\mathbf{Z}_{q}(\mathbf{q})&=-\mathbf{B} _{q}\dot{\mathbf{q}}\\ \mathbf{Z}_{p}(\mathbf{p},\mathbf{p}_{0}(t))&=\mathbf{K }_{p}\{\mathbf{p}_{0}(t)-\mathbf{p}\}+\mathbf{B}_{p}\{\dot{\mathbf{p}}_{0}(t) -\dot{\mathbf{p}}\}\\ \mathbf{Z}_{r}(\mathbf{R},\mathbf{R}_{0})&=k_{r} \mathbf{RLog}(\mathbf{R}^{\mathrm{T}}\mathbf{R}_{0})-b_{r}\mathbf{\omega}\end{split} \tag{8}\]
In these equations, \(\mathbf{q}\equiv\mathbf{q}(t)\in\mathbb{R}^{n}\) denotes the robot's joint trajectories; \(\mathbf{p}\equiv\mathbf{p}(\mathbf{q}(t))\in\mathbb{R}^{3}\) and \(\mathbf{R}\equiv\mathbf{R}(\mathbf{q}(t))\in\mathrm{SO}(3)\) are the robot's end-effector position and orientation, respectively; \(\mathbf{p}\) and \(\mathbf{R}\) are derived by the Forward Kinematics Map of the robot; \(\mathbf{J}_{p}(\mathbf{q}),\mathbf{J}_{r}(\mathbf{q})\in\mathbb{R}^{3\times n}\) denote the Jacobian matrices for the translational velocity \(\dot{\mathbf{p}}\) and rotational velocity \(\omega\) of the end-effector, respectively, i.e., \(\dot{\mathbf{p}}=\mathbf{J}_{p}(\mathbf{q})\dot{\mathbf{q}}\) and \(\omega=\mathbf{J}_{r}(\mathbf{q})\dot{\mathbf{q}}\); \(\mathbf{R}_{0}\in\mathrm{SO}(3)\) denotes the virtual end-effector orientation which is set to be constant; \(\mathbf{Log}:\mathrm{SO}(3)\rightarrow\mathbb{R}^{3}\) denotes the Matrix Logarithm Map. For more details about Special Orthogonal Group SO(3) and the Matrix Logarithmic Map, readers are referred to [38, 39].
\(\mathbf{Z}_{q}\), \(\mathbf{Z}_{p}\), \(\mathbf{Z}_{r}\) correspond to joint-space impedance, translational task-space impedance, and rotational task-space impedance, respectively; \(\mathbf{B}_{q}\in\mathbb{R}^{n\times n}\) denotes the joint damping matrix; \(\mathbf{K}_{p},\mathbf{B}_{p}\in\mathbb{R}^{3\times 3}\) denote the translational stiffness and damping matrices, respectively; \(k_{r},b_{r}\in\mathbb{R}\) denote the rotational stiffness and damping coefficients, respectively; matrices \(\mathbf{B}_{q},\mathbf{K}_{p},\mathbf{B}_{p}\) (respectively \(k_{r}\), \(b_{r}\)) are chosen to be constant symmetric positive definite matrices (respectively positive values).
The virtual end-effector trajectory is denoted \(\mathbf{p}_{0}(t)\in\mathbb{R}^{3}\), which consists of submovements and/or oscillations (Section II-A). Hence, with the impedance parameters \(\mathbf{Z}_{q}\), \(\mathbf{Z}_{p}\), and \(\mathbf{Z}_{r}\) set to be constant, the control objective is to define trajectory \(\mathbf{p}_{0}(t)\) that achieves the three control tasks.
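For concreteness, a minimal Python/NumPy sketch of the impedance superposition in Eqs. (7)-(8) is shown below. The `robot.*` calls are placeholders for whatever library provides the forward kinematics and Jacobians (in the experiments below, the Exp[licit]-FRI library plays this role); the sketch is illustrative only, and the small SO(3) logarithm helper is included just to keep it self-contained.

```python
import numpy as np

def so3_log(R):
    """Matrix Logarithm Map SO(3) -> R^3 (rotation-vector form)."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    W = (R - R.T) * theta / (2.0 * np.sin(theta))  # skew-symmetric part scaled by theta
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def torque_command(q, dq, p0, dp0, R0, robot, Bq, Kp, Bp, kr, br):
    """Sketch of Eqs. (7)-(8): superposition of three mechanical impedances."""
    p, R = robot.forward_kinematics(q)   # end-effector position and orientation (placeholder API)
    Jp = robot.jacobian_linear(q)        # 3 x n translational Jacobian (placeholder API)
    Jr = robot.jacobian_angular(q)       # 3 x n rotational Jacobian (placeholder API)
    dp, w = Jp @ dq, Jr @ dq             # end-effector translational / rotational velocities

    Zq = -Bq @ dq                                  # joint-space damping
    Zp = Kp @ (p0 - p) + Bp @ (dp0 - dp)           # translational task-space impedance
    Zr = kr * (R @ so3_log(R.T @ R0)) - br * w     # rotational task-space impedance
    return Zq + Jp.T @ Zp + Jr.T @ Zr              # Eq. (7)
```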
### _Generating a Sequence of Discrete Movements_
Given an initial end-effector position \(\mathbf{p}_{1}\in\mathbb{R}^{3}\), let \(\mathbf{p}_{2}\in\mathbb{R}^{3}\) be a goal location which the robot's end-effector aims to reach. This goal-directed discrete movement \(\mathbf{p}(t)\rightarrow\mathbf{p}_{2}\) can be achieved by setting \(\mathbf{p}_{0}(t)\) as \(\mathbf{p}_{0,sub1}(t)\)[40, 41, 42, 43]:
\[\mathbf{p}_{0,sub1}(t)=\begin{cases}\mathbf{p}_{1}&0\leq t<t_{i}\\ \mathbf{p}_{1}+(\mathbf{p}_{2}-\mathbf{p}_{1})f_{1}(t)&t_{i}\leq t<t_{i}+T_{1 }\\ \mathbf{p}_{2}&t_{i}+T_{1}\leq t\end{cases} \tag{9}\]
In this equation, \(f_{1}(t)\) is a submovement starting at time \(t_{i}\) with duration \(T_{1}\) (Eq. (1)). Suppose that, while this movement is underway, the goal changes from \(\mathbf{p}_{2}\) to a new location \(\mathbf{p}_{3}\in\mathbb{R}^{3}\). This can be handled by directly superimposing a second submovement, initiated at time \(t_{g}\), onto \(\mathbf{p}_{0,sub1}(t)\):
\[\begin{split}\mathbf{p}_{0}(t)&=\mathbf{p}_{0,sub1}(t)+ \mathbf{p}_{0,sub2}(t)\\ \mathbf{p}_{0,sub2}(t)&=\begin{cases}\mathbf{0}&0\leq t<t_{g}\\ (\mathbf{p}_{3}-\mathbf{p}_{2})f_{2}(t)&t_{g}\leq t<t_{g}+T_{2}\\ \mathbf{p}_{3}-\mathbf{p}_{2}&t_{g}+T_{2}\leq t\end{cases}\end{split} \tag{10}\]
In these equations, \(f_{2}(t)\) is a submovement starting at time \(t_{g}\) with duration \(T_{2}\). With this new \(\mathbf{p}_{0}(t)\), which combines the two submovements \(\mathbf{p}_{0,sub1}(t)\) and \(\mathbf{p}_{0,sub2}(t)\), convergence of \(\mathbf{p}(t)\) to the new goal \(\mathbf{p}_{3}\) is achieved [40, 41, 42, 43].
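The following short Python/NumPy sketch shows how the virtual trajectory of Eqs. (9)-(10) is assembled by direct superposition of two submovements; a minimum-jerk basis (as in Eq. (13) of Section IV-A) is used here, and the example values are those reported for the experiment of Fig. 2. The function names are placeholders for illustration.

```python
import numpy as np

def min_jerk(t, t0, T):
    """Minimum-jerk basis function, clipped to 0 before t0 and to 1 after t0 + T."""
    s = np.clip((t - t0) / T, 0.0, 1.0)
    return 10 * s**3 - 15 * s**4 + 6 * s**5

def virtual_trajectory(t, p1, p2, p3, t_i, T1, t_g, T2):
    """p0(t) = p0_sub1(t) + p0_sub2(t), Eqs. (9)-(10)."""
    p0_sub1 = p1 + (p2 - p1) * min_jerk(t, t_i, T1)   # first submovement (toward p2)
    p0_sub2 = (p3 - p2) * min_jerk(t, t_g, T2)        # second submovement (toward the new goal p3)
    return p0_sub1 + p0_sub2

# Example with the parameters used in Section IV-A (Fig. 2)
p1 = np.array([0.6735, 0.1396, 0.2048])
p2 = np.array([0.6735, 0.4396, 0.4048])
p3 = np.array([0.6735, 0.4396, 0.3048])
p0 = virtual_trajectory(t=1.8, p1=p1, p2=p2, p3=p3, t_i=0.5, T1=2.0, t_g=1.5, T2=2.0)
```

Because the clipping already encodes the piecewise cases of Eqs. (9) and (10), superimposing further submovements (or an oscillation) amounts to adding more terms to the sum.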
### _Generating a Combination of Discrete and Rhythmic Movements_
Consider a goal-directed discrete movement from starting location \(\mathbf{p}_{1}\in\mathbb{R}^{3}\) to goal location \(\mathbf{p}_{2}\in\mathbb{R}^{3}\). As discussed in Section III-A, this movement can be achieved by a single submovement \(\mathbf{p}_{0,sub1}(t)\) (Eq. (9)). Our goal is to overlap a rhythmic movement onto this goal-directed discrete movement. This can be achieved by direct summation of an oscillation \(\mathbf{p}_{0,osc}(t)\) (Eq. (2)) onto \(\mathbf{p}_{0,sub1}(t)\):
\[\mathbf{p}_{0}(t)=\mathbf{p}_{0,osc}(t)+\mathbf{p}_{0,sub1}(t) \tag{11}\]
### _Drawing and Erasing Task_
Consider a task of teaching a robot to draw trajectory \(\mathbf{p}_{des}(t)\in\mathbb{R}^{2}\) on a table. Without loss of generality, assume that the drawing table resides on a horizontal \(XY\)-plane. After drawing \(\mathbf{p}_{des}(t)\), the robot retraces \(\mathbf{p}_{des}(t)\) with an additional oscillatory movement to thoroughly erase the trajectory.
For this drawing and erasing task, one can combine Imitation Learning with EDA. In detail, Imitation Learning with a two-dimensional DMP was used to learn \(\mathbf{p}_{des}(t)\) and this trajectory was used as the \(X\)- and \(Y\)-coordinates of \(\mathbf{p}_{0}(t)\) to draw \(\mathbf{p}_{des}(t)\).
Once \(\mathbf{p}_{des}(t)\) is drawn on the plane, the drawing can be erased by simply superimposing an oscillation \(\mathbf{p}_{0,osc}(t)\) on a time-reversed trajectory of \(\mathbf{p}_{des}(t)\):
\[\mathbf{p}_{0}(t)=\begin{cases}\mathbf{p}_{des}(T-t)+\mathbf{p}_{0,osc}(t)&0 \leq t<T\\ \mathbf{p}_{0,osc}(t)&T\leq t\end{cases} \tag{12}\]
In this equation, \(T\in\mathbb{R}_{>0}\) is the duration of \(\mathbf{p}_{des}(t)\).
Note that the \(Z\)-coordinate of \(\mathbf{p}_{0}(t)\), denoted \(p_{0,z}(t)\), is not learned with Imitation Learning. Instead, an appropriate value of \(p_{0,z}(t)\) must be chosen to remain in contact with the table. To elaborate, let the \(Z\)-coordinate of the drawing table be \(h_{z}\). The pen extended from the robot's end-effector faces downward, towards the \(-Z\) direction. To ensure that the pen maintains contact with the plane throughout its operation, it is necessary to set \(p_{0,z}(t)\) lower than the drawing plane \(h_{z}\) by an offset \(\epsilon>0\), i.e., \(p_{0,z}(t)=h_{z}-\epsilon\). Moreover, since \(\boldsymbol{\tau}_{in}(t)\) includes \(\mathbf{Z}_{r}\) with constant \(\mathbf{R}_{0}\), the drawing can be achieved while maintaining the end-effector orientation [44, 45].
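A minimal Python/NumPy sketch of the erasing-phase virtual trajectory (Eq. (12)), including the constant pressing offset for the \(Z\)-coordinate, is shown below. Here `t_grid` and `p_des_xy` denote the (uniformly sampled) time stamps and \(XY\)-samples of the learned trajectory; all names are placeholders used only for illustration.

```python
import numpy as np

def erasing_virtual_trajectory(t, t_grid, p_des_xy, h_z, eps, r, w0):
    """Sketch of Eq. (12): time-reversed drawing plus an XY oscillation, pressed onto the table."""
    T = t_grid[-1]                                          # duration of p_des(t)
    osc = r * np.array([np.cos(w0 * t), np.sin(w0 * t)])    # circular oscillation on the XY-plane
    if t < T:
        # sample p_des(T - t), i.e., retrace the drawing backwards in time
        xy = np.array([np.interp(T - t, t_grid, p_des_xy[:, k]) for k in range(2)]) + osc
    else:
        xy = osc
    return np.array([xy[0], xy[1], h_z - eps])              # keep p_{0,z}(t) = h_z - eps
```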
## IV Experimental Results
For the robot experiment, we use KUKA LBR iiwa14, which has seven torque-actuated DOFs (i.e., \(n=7\)). For control, KUKA's Fast Robot Interface (FRI) was employed. For all three tasks, the built-in gravity compensation was activated. For the impedance parameters \(\mathbf{B}_{q}\), \(k_{r}\) and \(b_{r}\) (Eq. (8)), identical values were used for all three tasks: \(\mathbf{B}_{q}=1.0\mathbf{I}_{7}\)\(N\cdot m\cdot s/rad\), where \(\mathbf{I}_{n}\in\mathbb{R}^{n\times n}\) denotes an identity matrix; \(k_{r}=50\)\(N\cdot m/rad\), \(b_{r}=5\)\(N\cdot m\cdot s/rad\). \(\mathbf{q}(t)\) was directly read from the FRI interface; \(\dot{\mathbf{q}}(t)\) was derived by first-order finite difference of \(\mathbf{q}(t)\) with a time step of 3ms. The Forward Kinematics Map to derive \(\mathbf{p}\), \(\mathbf{R}\), and the Jacobian matrices \(\mathbf{J}_{p}(\mathbf{q})\), \(\mathbf{J}_{r}(\mathbf{q})\) were calculated using the Exp[licit]TM-FRI Library.2
Footnote 2: Github repository: [https://github.com/explicit-robotics/Explicit-FRI](https://github.com/explicit-robotics/Explicit-FRI)
### _Generating a Sequence of Discrete Movements_
For the basis function of the two submovements \(\mathbf{p}_{0,sub1}(t)\) (Eq. (9)) and \(\mathbf{p}_{0,sub2}(t)\) (Eq. (10)), a minimum-jerk trajectory was used [46]:
\[\begin{split} f_{1}(t)&=10\Big{(}\frac{t-t_{i}}{T_ {1}}\Big{)}^{3}-15\Big{(}\frac{t-t_{i}}{T_{1}}\Big{)}^{4}+6\Big{(}\frac{t-t_{i} }{T_{1}}\Big{)}^{5}\\ f_{2}(t)&=10\Big{(}\frac{t-t_{g}}{T_{2}}\Big{)}^{ 3}-15\Big{(}\frac{t-t_{g}}{T_{2}}\Big{)}^{4}+6\Big{(}\frac{t-t_{g}}{T_{2}} \Big{)}^{5}\end{split} \tag{13}\]
The values of the impedance parameters \(\mathbf{K}_{p}\), \(\mathbf{B}_{p}\) (Eq. (8)) were \(800\mathbf{I}_{3}\)\(N/m\), \(80\mathbf{I}_{3}\)\(N\cdot s/m\), respectively.
The results are shown in Figure 2. With the proposed approach, a convergence of \(\mathbf{p}(t)\) to the new goal location was achieved (Figure 2A, 2B) by simply superimposing a second submovement onto the first one (Figure 2C, 2D, 2E). It is worth emphasizing that the task was achieved without any modification of the first submovement. The simplicity of this approach is not guaranteed for other methods. For instance, using DMP, the task requires online modification of the initiated movement [47], which may introduce practical difficulties for implementation (Eq. (5)).

Fig. 2: A sequence of discrete movements using a KUKA LBR iiwa14, (A, B) Time-lapse of the robot movement towards the (A) original (old) and (B) new goal location. Start \(\mathbf{p}_{1}\), original goal \(\mathbf{p}_{2}\) and new goal \(\mathbf{p}_{3}\) are depicted as orange markers. The origin of the robot’s coordinate frame is attached at the robot base, depicted as a green marker. (C) The end-effector trajectory \(\mathbf{p}(t)\) (black filled line) and the virtual trajectory (black dashed line) \(\mathbf{p}_{0}(t)\) depicted on the \(YZ\)-plane. (D, E) Time \(t\) vs. end-effector velocity \(\dot{\mathbf{p}}(t)\) along the (D) \(Y\)-coordinate and (E) \(Z\)-coordinate. Black filled lines show the end-effector velocity, which was derived by a first-order finite difference of \(\mathbf{p}(t)\) with a sampling interval of 3ms. The two unimodal speed profiles filled in orange depict the two submovements \(\mathbf{p}_{0,sub1}(t)\) (left) (Eq. (9)) and \(\mathbf{p}_{0,sub2}(t)\) (right) (Eq. (10), (13)). As shown in (D) and (E), the second submovement is directly superimposed, without any modification of the first submovement. Parameters of the submovements (Eq. (9), (10), (13)): \(\mathbf{p}_{1}=[0.6735,0.1396,0.2048]m\), \(\mathbf{p}_{2}=[0.6735,0.4396,0.4048]m\), \(\mathbf{p}_{3}=[0.6735,0.4396,0.3048]m\), \(T_{1}=2.0s\), \(T_{2}=2.0s\), \(t_{i}=0.5s\), \(t_{g}=1.5s\).
### _Generating a Combination of Discrete and Rhythmic Movements_
For the submovement, \(\mathbf{p}_{0,sub1}(t)\) was employed. For the oscillation, a circular trajectory residing on the \(YZ\)-plane was used:
\[\mathbf{p}_{0,osc}(t)=r[0,\cos(\omega_{0}t),\sin(\omega_{0}t)] \tag{14}\]
In this equation, \(r\) and \(\omega_{0}\) are the radius and angular velocity of the circular trajectory, respectively. The values of the impedance parameters were identical to those in Section IV-A, i.e., \(\mathbf{K}_{p}=800\mathbf{I}_{3}\)\(N/m\) and \(\mathbf{B}_{p}=80\mathbf{I}_{3}\)\(N\cdot s/m\), respectively (Eq. (8)). The results are shown in Figure 3. With the proposed approach, a combination of discrete and rhythmic movements of the robot's end-effector was achieved (Figure 3A) by simply superimposing a submovement and an oscillation (Figure 3B, 3C, 3D). Note that this direct combination of both discrete and rhythmic movements may not be obvious for DMP, since DMP accounts for these two movements separately [48, 49, 50, 6] (Section II-B).
### _Drawing and Erasing Task_
For Imitation Learning of \(\mathbf{p}_{des}(t)\), the human-demonstrated data points of \(\mathbf{p}_{des}(t)\) were collected with a sampling rate of 333Hz. The velocity \(\dot{\mathbf{p}}_{des}(t)\) and acceleration \(\ddot{\mathbf{p}}_{des}(t)\) for Imitation Learning were derived by first-order finite difference of \(\mathbf{p}_{des}(t)\) with Gaussian filtering. For the filtering, MATLAB's smoothdata function with a time window size of 165ms was used. With these filtered data, Locally Weighted Regression was used for Imitation Learning (Eq. (6)). For the drawing (respectively erasing), the impedance parameters \(\mathbf{K}_{p}\) and \(\mathbf{B}_{p}\) were chosen to be \(400\mathbf{I}_{3}\)\(N/m\) (respectively \(800\mathbf{I}_{3}\)\(N/m\)) and \(40\mathbf{I}_{3}\)\(N\cdot s/m\) (respectively \(80\mathbf{I}_{3}\)\(N\cdot s/m\)). For the erasing, an oscillation \(\mathbf{p}_{0,osc}(t)\) of Eq. (14) but residing on the \(XY\)-plane (not the \(YZ\)-plane) was used.
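For reference, a rough Python equivalent of this preprocessing step (finite differences followed by Gaussian smoothing) is sketched below; the conversion from the 165 ms window to a Gaussian standard deviation is an assumption and does not reproduce MATLAB's smoothdata exactly.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def preprocess_demonstration(t, p_des, window_s=0.165, fs=333.0):
    """Finite-difference velocity/acceleration of p_des(t), smoothed with a Gaussian filter."""
    sigma = window_s * fs / 6.0                               # assumed window-to-sigma conversion
    dp = gaussian_filter1d(np.gradient(p_des, t, axis=0), sigma, axis=0)
    ddp = gaussian_filter1d(np.gradient(dp, t, axis=0), sigma, axis=0)
    return dp, ddp
```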
Figure 4 shows the whole process of generating \(\mathbf{p}_{0}(t)\) for the drawing and erasing tasks. By merging Imitation Learning with EDA (Figure 4A, 4B, 4C), the drawing (Figure 4D) and erasing tasks (Figure 4E) were successfully achieved. It is worth emphasizing that the key to this approach is the combination of Imitation Learning and the modular property of EDA. The trajectory \(\mathbf{p}_{des}(t)\) was separately learned with Imitation Learning and directly combined with an oscillation \(\mathbf{p}_{0,osc}(t)\). With modest parameter tuning (i.e., changing the angular velocity \(\omega_{0}\)), the trajectory \(\mathbf{p}_{0,osc}(t)\) used in Section IV-B was simply reused.
## V Discussion and Conclusion
Thanks to the kinematic modularity of EDA, the two tasks of (1) sequencing discrete movements and (2) combining discrete and rhythmic movements were greatly simplified. For the former, the subsequent movement was directly superimposed onto the previous movement, without any modification of the first submovement. For the latter, the discrete and rhythmic movements were separately planned and directly superimposed. The authors want to emphasize that the simplicity of this approach is not trivial. For example, using DMP, the former task is achieved by using an additional online modulation method, or by introducing an additional dynamics for the goal \(g\). These additional methods may pose practical challenges for actual robot implementation. For the latter task, DMP accounts for discrete and rhythmic movements separately [6, 48, 49, 50]. Hence, one cannot directly combine the two movements generated (or learned) by DMP.

Fig. 3: A combination of discrete and rhythmic movements using a KUKA LBR iiwa14. (A) Elements of the robot movement. The origin of the robot’s coordinate frame is attached at the robot base, depicted as a green marker. Orange markers depict \(\mathbf{p}_{1}\) (left) and \(\mathbf{p}_{2}\) (right) of \(\mathbf{p}_{0,sub1}(t)\). Blue line depicts \(\mathbf{p}_{0,osc}(t)\) (Eq. (11), (14)). (B) The end-effector trajectory \(\mathbf{p}(t)\) (black filled line) and the virtual trajectory \(\mathbf{p}_{0}(t)\) (black dashed line) depicted on the \(YZ\)-plane. Multiple submovements that move between \(\mathbf{p}_{1}\) and \(\mathbf{p}_{2}\) were generated. (C, D) Time \(t\) vs. end-effector trajectory \(\mathbf{p}(t)\) along (C) \(Y\)-coordinate and (D) \(Z\)-coordinate. Black dashed lines depict \(\mathbf{p}_{0}(t)\). Blue lines highlight the duration of a movement without any discrete movement. Parameters of submovement and oscillation (Eq. (11), (14)): \(\mathbf{p}_{1}=[0.5735,0.0,0.5048]m\), \(\mathbf{p}_{2}=[0.5735,0.35,0.5048]m\), \(T_{1}=1.5s\), \(r=0.03m\), \(\omega_{0}=3\pi rad/s\).
The robot demonstration of the drawing and erasing task illustrated how to get the best of both motor-primitives approaches. DMP and Imitation Learning provided a rigorous mathematical framework to generate (or learn) trajectories of arbitrary complexity. EDA and its Norton equivalent network model provided a modular framework for robot control. Using Imitation Learning to learn the virtual trajectory of EDA, a modular learning strategy for robot control was established. While the presented example focused on modular control of end-effector position, the approach can be generalized to different coordinates. For instance, Imitation Learning of end-effector orientation and end-effector position can be separately conducted and later combined using the modular property of EDA.
Merging the methods of EDA with DMP preserved the kinematic modularity of EDA and combined it with favorable behavior during physical interaction. This facilitated the drawing and erasing task, which involves physical contact. The key to this approach is the compliant robot behavior emerging from mechanical impedances, which may be challenging to generate using position-actuated robots [21]. However, the compliance of mechanical impedances resulted in non-negligible tracking error between the virtual and actual end-effector trajectory. This might be ameliorated to some degree by increasing impedance in the direction of desired motion. Nonetheless, the authors consider that the benefits of the modular approach presented here, which uses torque-actuated robots, outweigh the advantages of utilizing position-actuated robots.
As discussed, a key objective of EDA is to find appropriate choices of virtual trajectory and the corresponding mechanical impedance. The former was addressed in this paper by using the Imitation Learning of DMP, resulting in a modular Imitation Learning strategy. However, choosing appropriate values of mechanical impedance may not be obvious; the presented values were discovered by trial-and-error. A systematic method to choose (or learn) the impedance parameters is an avenue of future research [51].
In conclusion, by integrating EDA with DMP's Imitation Learning, a modular learning strategy for robot control was established. This modular approach has great potential to simplify the generation of a wide range of robot actions.
Fig. 4: The drawing and erasing task using a KUKA LBR iiwa14. A green pen was used for the drawing. (A) Data collection of human-demonstrated \(\mathbf{p}_{des}(t)\) which we aim to draw (Section III-C). The end-effector trajectories along the \(X\)- and \(Y\)-coordinates were collected. (B) Time \(t\) vs. \(X\)-coordinate (black line) and \(Y\)-coordinate (purple line) of \(\mathbf{p}_{des}(t)\) (top row), \(\dot{\mathbf{p}}_{des}(t)\) (middle row), and \(\ddot{\mathbf{p}}_{des}(t)\) (bottom row). With a sampling rate of \(333\)Hz, a first-order finite difference method was used to calculate the velocity and acceleration (left column). These trajectories were Gaussian filtered (right column) using MATLAB’s smoothdata function with a time window size of \(165\)ms. (C) The resulting trajectory \(\mathbf{p}_{des}(t)\) (black dashed line) generated with imitation learning. (D) The drawing task was achieved by setting \(\mathbf{p}_{0}(t)\) as \(\mathbf{p}_{des}(t)\). The black dashed line depicts \(\mathbf{p}_{0}(t)\) (or \(\mathbf{p}_{des}(t)\)), green line depicts \(\mathbf{p}(t)\). (E) The erasing task was achieved by superimposing an oscillation \(\mathbf{p}_{0,osc}(t)\) onto a time-reversed \(\mathbf{p}_{des}(t)\). The green pen was replaced by a rectangular eraser. The green line depicts \(\mathbf{p}(t)\). For (C, D, E), trajectories were plotted in MATLAB and overlapped onto the drawing/erasing table. Parameters of DMP: \(\alpha_{i}=1000\), \(\beta_{i}=250\), \(N=100\), \(P=2331\), \(\tau=7\), \(c_{i}=\exp(-\alpha_{i}(i-1)/(N-1))\), \(h_{i}=1/(c_{i+1}-c_{i})^{2}\) for \(i\in[1,2,\cdots,N-1]\), \(c_{N}=\exp(-\alpha_{i})\), \(h_{N}=h_{N-1}\). Parameters of oscillation: \(r=0.03m\), \(\omega_{0}=2\pi rad/s\).
2309.13166 | Invisible Watermarking for Audio Generation Diffusion Models | Diffusion models have gained prominence in the image domain for their
capabilities in data generation and transformation, achieving state-of-the-art
performance in various tasks in both image and audio domains. In the rapidly
evolving field of audio-based machine learning, safeguarding model integrity
and establishing data copyright are of paramount importance. This paper
presents the first watermarking technique applied to audio diffusion models
trained on mel-spectrograms. This offers a novel approach to the aforementioned
challenges. Our model excels not only in benign audio generation, but also
incorporates an invisible watermarking trigger mechanism for model
verification. This watermark trigger serves as a protective layer, enabling the
identification of model ownership and ensuring its integrity. Through extensive
experiments, we demonstrate that invisible watermark triggers can effectively
protect against unauthorized modifications while maintaining high utility in
benign audio generation tasks. | Xirong Cao, Xiang Li, Divyesh Jadav, Yanzhao Wu, Zhehui Chen, Chen Zeng, Wenqi Wei | 2023-09-22T20:10:46Z | http://arxiv.org/abs/2309.13166v2 | # Invisible Watermarking for Audio Generation Diffusion Models
###### Abstract
Diffusion models have gained prominence in the image domain for their capabilities in data generation and transformation, achieving state-of-the-art performance in various tasks in both image and audio domains. In the rapidly evolving field of audio-based machine learning, safeguarding model integrity and establishing data copyright are of paramount importance. This paper presents the first watermarking technique applied to audio diffusion models trained on mel-spectrograms. This offers a novel approach to the aforementioned challenges. Our model excels not only in benign audio generation, but also incorporates an invisible watermarking trigger mechanism for model verification. This watermark trigger serves as a protective layer, enabling the identification of model ownership and ensuring its integrity. Through extensive experiments, we demonstrate that invisible watermark triggers can effectively protect against unauthorized modifications while maintaining high utility in benign audio generation tasks.
audio diffusion, watermarking, copyright protection
## 1 Introduction
In recent years, diffusion models have risen to prominence in generative tasks, particularly in the domains of image and audio synthesis. In comparison to other generative models like GANs [1] and VAEs [2], diffusion models are capable of delivering superior quality and diversity in the content they generate. This has fueled the creation of advanced diffusion models tailored for controlled generation tasks, including text-to-image [3, 4] and text-to-audio conversion [5]. Nonetheless, the misuse of these potent models may give rise to legal concerns, including:
* **Intellectual Property:** The increasing adoption of pre-trained diffusion models in diverse applications calls for rigorous adherence to copyright laws. Yet, the opaque nature of these applications poses challenges when it comes to model inspection.
* **Content Authenticity:** Diffusion models' ability to generate potentially deceptive or harmful content, such as Deepfakes [6], poses legal and ethical challenges. The sophistication of diffusion models exacerbates the difficulty in monitoring and regulating such content.
While watermarking techniques have been proven effective in neural networks for classification and in GANs [7] for generative tasks, their applicability in diffusion models remains an open question. This is due to diffusion models' unique characteristics, such as stochastic behavior and intricate architectures. While image-based diffusion models have received significant attention in the context of watermarking [8, 9, 10], the domain of audio synthesis models has remained relatively underdeveloped in terms of intellectual property protection. This intriguing gap in research motivates us to delve deeper into the following questions: _How can we effectively watermark audio diffusion models? Are there specific challenges and opportunities unique to audio watermarking in the context of diffusion models?_
In this paper, we investigate how to watermark audio diffusion models. Specifically, we present a novel watermarking strategy for two types of diffusion models, i.e., DDPM [11] and DDIM [12]. Unlike the image domain, audio admits various representations, such as time-frequency representations (mel-spectrogram, MFCC) and time-series representations (the raw audio signal). In our study, we use the mel-spectrogram representation. When given standard Gaussian noise as input, the diffusion model is capable of generating diverse, high-quality mel-spectrograms of different audios. However, when the initial Gaussian noise is blended with the watermark trigger, the model generates the mel-spectrogram of the predefined watermark audio, hence allowing us to verify model ownership while maintaining high utility.
Our work makes three original contributions. _First_, we introduce the first watermarking method for audio diffusion models. _Second_, we demonstrate that the choice of watermark trigger is a critical factor for watermarking audio diffusion models. To address this, we provide two invisible watermark trigger options: Infrasound and environment sound. These watermark triggers options are carefully selected to remain undetectable not only at the audio level but also within the mel-spectrogram, effectively thwarting model-stealing attempts and safeguarding intellectual property. _Third_, we conduct extensive experiments to evaluate invisible triggers in watermarking audio diffusion models. Our findings indicate that the two invisible watermark triggers consistently achieve high water |
2309.14609 | Elemental topological ferroelectrics and polar metals of few-layer
materials | Ferroelectricity can exist in elemental phases as a result of charge
transfers between atoms occupying inequivalent Wyckoff positions. We
investigate the emergence of ferroelectricity in two-dimensional elemental
materials with buckled honeycomb lattices. Various multi-bilayer structures
hosting ferroelectricity are designed by stacking-engineering. Ferroelectric
materials candidates formed by group IV and V elements are predicted
theoretically. Ultrathin Bi films show layer-stacking-dependent physical
properties of ferroelectricity, topology, and metallicity. The two-bilayer Bi
film with a polar stacking sequence is found to be an elemental topological
ferroelectric material. Three and four bilayers Bi films with polar structures
are ferroelectric-like elemental polar metals with topological nontrivial edge
states. For Ge and Sn, trivial elemental polar metals are predicted. Our work
reveals the possibility of design two-dimensional elemental topological
ferroelectrics and polar metals by stacking-engineering. | Hu Zhang, Lulu Zhao, RuiFeng Zhang, Chendong Jin, Ruqian Lian, Peng-Lai Gong, RuiNing Wang, JiangLong Wang, Xing-Qiang Shi | 2023-09-26T01:34:55Z | http://arxiv.org/abs/2309.14609v1 | # Elemental topological ferroelectrics and polar metals of few-layer materials
###### Abstract
Ferroelectricity can exist in elemental phases as a result of charge transfers between atoms occupying inequivalent Wyckoff positions. We investigate the emergence of ferroelectricity in two-dimensional elemental materials with buckled honeycomb lattices. Various multi-bilayer structures hosting ferroelectricity are designed by stacking-engineering. Ferroelectric materials candidates formed by group IV and V elements are predicted theoretically. Ultrathin Bi films show layer-stacking-dependent physical properties of ferroelectricity, topology, and metallicity. The two-bilayer Bi film with a polar stacking sequence is found to be an elemental topological ferroelectric material. Three and four bilayers Bi films with polar structures are ferroelectric-like elemental polar metals with topological nontrivial edge states. For Ge and Sn, trivial elemental polar metals are predicted. Our work reveals the possibility of design two-dimensional elemental topological ferroelectrics and polar metals by stacking-engineering.
## I Introduction
Elemental ferroelectric materials contain only one element and thus may have very simple crystal structures without inversion symmetry. In a work published in |
2308.00555 | Shortcut Partitions in Minor-Free Graphs: Steiner Point Removal,
Distance Oracles, Tree Covers, and More | The notion of shortcut partition, introduced recently by Chang, Conroy, Le,
Milenkovi\'c, Solomon, and Than [CCLMST23], is a new type of graph partition
into low-diameter clusters. Roughly speaking, the shortcut partition guarantees
that for every two vertices $u$ and $v$ in the graph, there exists a path
between $u$ and $v$ that intersects only a few clusters. They proved that any
planar graph admits a shortcut partition and gave several applications,
including a construction of tree cover for arbitrary planar graphs with stretch
$1+\varepsilon$ and $O(1)$ many trees for any fixed $\varepsilon \in (0,1)$.
However, the construction heavily exploits planarity in multiple steps, and is
thus inherently limited to planar graphs.
In this work, we breach the "planarity barrier" to construct a shortcut
partition for $K_r$-minor-free graphs for any $r$. To this end, we take a
completely different approach -- our key contribution is a novel deterministic
variant of the cop decomposition in minor-free graphs [And86, AGG14]. Our
shortcut partition for $K_r$-minor-free graphs yields several direct
applications. Most notably, we construct the first optimal distance oracle for
$K_r$-minor-free graphs, with $1+\varepsilon$ stretch, linear space, and
constant query time for any fixed $\varepsilon \in (0,1)$. The previous best
distance oracle [AG06] uses $O(n\log n)$ space and $O(\log n)$ query time, and
its construction relies on Robertson-Seymour structural theorem and other
sophisticated tools. We also obtain the first tree cover of $O(1)$ size for
minor-free graphs with stretch $1+\varepsilon$, while the previous best
$(1+\varepsilon)$-tree cover has size $O(\log^2 n)$ [BFN19]. | Hsien-Chih Chang, Jonathan Conroy, Hung Le, Lazar Milenkovic, Shay Solomon, Cuong Than | 2023-07-31T17:51:00Z | http://arxiv.org/abs/2308.00555v1 | # Shortcut Partitions in Minor-Free Graphs:
###### Abstract
The notion of _shortcut partition_, introduced recently by Chang, Conroy, Le, Milenkovic, Solomon, and Than [2], is a new type of graph partition into low-diameter clusters. Roughly speaking, the shortcut partition guarantees that for every two vertices \(u\) and \(v\) in the graph, there exists a path between \(u\) and \(v\) that intersects only a few clusters. They proved that any planar graph admits a shortcut partition and gave several applications, including a construction of tree cover for arbitrary planar graphs with stretch \(1+e\) and \(O(1)\) many trees for any fixed \(e\in(0,1)\). However, the construction heavily exploits planarity in multiple steps, and is thus inherently limited to planar graphs.
In this work, we breach the "planarity barrier" to construct a shortcut partition for \(K_{r}\)-minor-free graphs for any \(r\). To this end, we take a completely different approach -- our key contribution is a novel deterministic variant of the _cop decomposition_ in minor-free graphs [1, 2]. Our shortcut partition for \(K_{r}\)-minor-free graphs yields several direct applications. Most notably, we construct the first _optimal_ distance oracle for \(K_{r}\)-minor-free graphs, with \(1+e\) stretch, linear space, and constant query time for any fixed \(e\in(0,1)\). The previous best distance oracle [2] uses \(O(n\log n)\) space and \(O(\log n)\) query time, and its construction relies on Robertson-Seymour structural theorem and other sophisticated tools. We also obtain the first tree cover of \(O(1)\) size for minor-free graphs with stretch \(1+e\), while the previous best \((1+e)\)-tree cover has size \(O(\log^{2}n)\)[1].
As a highlight of our work, we employ our shortcut partition to resolve a major open problem -- the _Steiner point removal (SPR)_ problem: Given any set \(K\) of _terminals_ in an arbitrary edge-weighted planar graph \(G\), is it possible to construct a minor \(M\) of \(G\) whose vertex set is \(K\), which preserves the shortest-path distances between all pairs of terminals in \(G\) up to a _constant_ factor? Positive answers to the SPR problem were only known for very restricted classes of planar graphs: trees [1], outerplanar graphs [1], and series-parallel graphs [1]. We resolve the SPR problem in the affirmative for any planar graph, and more generally for any \(K_{r}\)-minor-free graph for any fixed \(r\). To achieve this result, we prove the following general reduction and combine it with our new shortcut partition: For any graph family closed under taking subgraphs, the existence of a shortcut partition yields a positive solution to the SPR problem.
## 1 Introduction
Partitioning a graph into clusters is a fundamental primitive for designing algorithms. Perhaps the most basic requirement of a partition is that every cluster would have a small diameter. However, to be useful, most partitions require one or more additional constraints, and achieving these constraints is the key to the power of those partitions. For example, _probabilistic partition_[1], a principal tool in the metric embedding literature, guarantees that the probability of any two vertices being placed into different clusters is proportional to their distance. _Sparse partition_[1] guarantees that each cluster has neighbors in only a few other clusters. _Scattering partition_[13] guarantees that each shortest path up to a certain length only intersects a small number of clusters. These partitions have found a plethora of applications in a wide variety of areas, such as metric embeddings, distributed computing, routing, and algorithms for network design problems, to name a few.
Recently, Chang, Conroy, Le, Milenkovic, Solomon, and Than [14] introduced a new notion of partition called _shortcut partition_. Roughly speaking, a shortcut partition guarantees that for every two vertices \(u\) and \(v\) in the graph, there exists a low-hop path in the cluster graph between \(C_{u}\) and \(C_{v}\), where \(C_{u}\) and \(C_{v}\) are the clusters containing \(u\) and \(v\), respectively. More formally, a _clustering_ of a graph \(G\) is a partition of the vertices of \(G\) into connected _clusters_. The _cluster graph_\(\tilde{G}\) of a clustering \(\mathcal{C}\) of \(G\) is the graph where each vertex of \(\tilde{G}\) corresponds to a cluster in \(\mathcal{C}\), and there is an edge between two vertices in \(\tilde{G}\) if there is an edge in \(G\) whose endpoints are in the two corresponding clusters.
**Definition 1.1**.: _An \((\varepsilon,h)\)-shortcut partition is a clustering \(\mathcal{C}=\{C_{1},\ldots,C_{m}\}\) of \(G\) such that:_
* [Diameter.] _the strong_1 _diameter of each cluster_ \(C_{i}\) _is at most_ \(\varepsilon\cdot\operatorname{diam}(G)\)_;_ Footnote 1: The _strong_ diameter of cluster \(C\) is the diameter of the induced subgraph \(G[C]\). In contrast, the _weak_ diameter of \(C\) is \(\max_{u,v\in C}\delta_{G}(u,v)\). Here, and throughout this paper, \(\delta_{G}(u,v)\) denotes the distance between \(u\) and \(v\) in graph \(G\).
* [Low-hop.] _for any vertices_ \(u\) _and_ \(v\) _in_ \(G\)_, there is a path_ \(\hat{\pi}\) _in the cluster graph_ \(\tilde{G}\) _between the clusters containing_ \(u\) _and_ \(v\) _such that:_
* \(\hat{\pi}\) _has hop-length at most_ \(\varepsilon h\cdot\left\lceil\frac{\delta_{G}(u,v)}{\varepsilon\cdot \operatorname{diam}(G)}\right\rceil\)_,_
* \(\hat{\pi}\) _only contains a subset of clusters that have nontrivial intersections with a given shortest path in_ \(G\) _between_ \(u\) _and_ \(v\)_._
_The hop-length of \(\hat{\pi}\) is measured in units of \(\Delta\coloneqq\varepsilon\cdot\operatorname{diam}(G)\); if \(u\) and \(v\) have distance at most \(\alpha\cdot\Delta\), then the hop-length of \(\hat{\pi}\) is \(\alpha\cdot\varepsilon h\), where \(\alpha\) ranges between \(0\) and \(1/\varepsilon\). In particular, the hop-length of \(\hat{\pi}\) is always at most \(O(h)\) as \(\delta_{G}(u,v)\) is at most \(\operatorname{diam}(G)\)._
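To make the definition concrete, below is a small Python sketch (not part of our construction, and using a hypothetical adjacency-list representation) that builds the cluster graph \(\tilde{G}\) from a clustering and measures the hop-length between the clusters of two vertices by breadth-first search. Note that the low-hop property additionally restricts the path to clusters intersecting a shortest \(u\)-\(v\) path, which this sketch does not check.

```python
from collections import deque

def cluster_graph(adj, cluster_of):
    """Contract each cluster to a vertex; join two clusters if some edge of G crosses them."""
    cg = {c: set() for c in set(cluster_of.values())}
    for u, neighbors in adj.items():
        for v in neighbors:
            cu, cv = cluster_of[u], cluster_of[v]
            if cu != cv:
                cg[cu].add(cv)
                cg[cv].add(cu)
    return cg

def hop_length(cg, src, dst):
    """Unweighted BFS distance between two clusters in the cluster graph."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        c = queue.popleft()
        if c == dst:
            return dist[c]
        for nxt in cg[c]:
            if nxt not in dist:
                dist[nxt] = dist[c] + 1
                queue.append(nxt)
    return float("inf")
```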
The shortcut partition is similar to the scattering partition introduced by Filtser [13]. A key difference is that in a scattering partition, _every_ shortest path of length \(\alpha\varepsilon\cdot\operatorname{diam}(G)\) intersects at most \(O(\alpha)\) clusters, while in a shortcut partition, it only requires that there is a low-hop path in the cluster graph between the two clusters containing the path's endpoints. The fact that scattering partition requires a stronger guarantee on shortest paths makes it very difficult to construct; it remains an open problem whether scattering partition for every planar graph exists [13, Conjecture 1]. Although shortcut partition provides a weaker guarantee, it is already sufficient for many applications as shown in previous work [14], including the first tree cover in planar graphs with stretch \(1+\varepsilon\) using \(O(1)\) many trees for any fixed \(\varepsilon\in(0,1)\), a simpler proof to the existence of a \(+\varepsilon\cdot\operatorname{diam}(G)\) additive embedding of planar graph into bounded-treewidth graph, distance oracles, labeling schemes, (hop-)emulators, and more.
For any given \(\varepsilon\in(0,1)\), the authors of [14] constructed an \((\varepsilon,O(\varepsilon^{-2}))\)-shortcut partition for any planar graph. This naturally motivates the question of constructing a shortcut partition for broader classes
of graphs, specifically \(K_{r}\)-minor-free graphs.2 This will open the door to seamlessly extend algorithmic results from planar graphs to \(K_{r}\)-minor-free graphs. However, the construction of [CCL\({}^{+}\)23] heavily exploits planarity in multiple steps. It starts from the outer face of \(G\), and works toward the interior of \(G\) in a recursive manner, similar in spirit to Busch, LaFortune, and Tirthapura [BLT14]. Specifically, the construction first finds a collection of subgraphs of \(G\) called _columns_ such that every vertex near the outer face of \(G\) belongs to one of the columns. The construction then recurs on subgraphs induced by vertices that are not in any of the columns. The overall construction produces a structure called the _grid-tree hierarchy_, which is then used to construct a shortcut partition. The construction relies on the fact that each column contains a shortest path between two vertices on the outer face, which splits the graph into two subgraphs using the Jordan curve theorem. As a result, constructing a shortcut partition for \(K_{r}\)-minor-free graphs requires breaking away from the planarity-exploiting framework of [CCL\({}^{+}\)23].
Footnote 2: We sometimes drop the prefix “\(K_{r}\)-” in \(K_{r}\)-minor-free graphs when the clique minor has constant size \(r\).
In this work, we overcome this barrier and construct a shortcut partition for \(K_{r}\)-minor-free graphs.
**Theorem 1.2**.: _Any edge-weighted \(K_{r}\)-minor-free graph admits an \((\varepsilon,2^{O(r\log r)}/\varepsilon)\)-shortcut partition for any \(\varepsilon\in(0,1)\)._
**Remark.** _Definition 1.1 is slightly stronger than the corresponding definition of shortcut partition for planar graphs in [CCL\({}^{+}\)23] (Definition 2.1 in their paper). Specifically, their definition states that the hop-length of \(\tilde{\pi}\) is at most \(h\), regardless of \(\delta_{G}(u,v)\), while our definition allows smaller hop-lengths for smaller distances. (For example, when \(\delta_{G}(u,v)=\varepsilon\cdot\operatorname{diam}(G)\), the hop-length of \(\tilde{\pi}\) is \(O(\varepsilon h)\) instead of \(h\).) Another difference is that, in the current definition, the nontrivial intersections of clusters contained by \(\tilde{\pi}\) stated in condition (2) of the "low-hop" property are with respect to a shortest path in the graph, whereas in [CCL\({}^{+}\)23] they are with respect to an approximate shortest path; we discuss this point further in Section 5. In particular, the shortcut partition provided by Theorem 1.2 for minor-free graphs subsumes the one in [CCL\({}^{+}\)23] for planar graphs._
_The hop length of \(2^{O(r\log r)}/\varepsilon\) of the shortcut partition in Theorem 1.2 is optimal for every constant \(r\) up to a constant factor: any shortcut partition of a path would have hop length \(1/\varepsilon\) between the two endpoints of the path. Also, in the particular case of planar graphs, our shortcut partition in fact improves over [CCL\({}^{+}\)23]; the hop length of their partition is \(O(\varepsilon^{-2})\)._
Techniques.We base our construction on a modified _cop decomposition_ for minor-free graphs, first introduced by Andreae [And86] in the context of the _cops-and-robbers_ game. Loosely speaking, a cop decomposition is a rooted tree decomposition where vertices of every bag belong to at most \(r-2\) single-source shortest path (SSSP) trees, called _skeletons_. Abraham, Gavoille, Gupta, Neiman, and Talwar [AGG\({}^{+}\)14], in their construction of a padded decomposition for \(K_{r}\)-minor-free graphs, adapted the cop decomposition by allowing each bag to contain up to \(r-2\) clusters3, each of which is the set of vertices within radius \(\sigma\cdot\Delta\) from a skeleton in the bag; here \(\sigma\) is a parameter in \((0,1)\) randomly sampled from a truncated exponential distribution, and \(\Delta\coloneqq\varepsilon\cdot\operatorname{diam}(G)\). One of their goals is to ensure that for every vertex \(v\) in the graph, the process of constructing the cop decomposition guarantees that the number of (random)
Figure 1: A cluster \(X\) “cut off” by \(\eta\) from part of graph \(G\). There is a buffer of width \(\gamma\) between \(X\) and the part of the graph that it is cut off from.
clusters that could contain \(v\) is small _in expectation_; this is the bulk of their analysis, through clever and sophisticated use of potential function and setting up a (sub)martingale. Their result can be interpreted as guaranteeing what we call a _buffer property4_:
Footnote 4: There is a technical difference between our buffer property and that of [1], which we will clarify in Section 3.
If one cluster \(X\) is "cut off" from a piece of the graph by another cluster, then any path from \(X\) to that piece has length at least \(\gamma\), which we call the _buffer width_.
In particular, the construction of [AGG\({}^{+}\)14] intuitively implies that the _expected_ buffer width is about \(\gamma\). (Their end goal is a stochastic partition, and hence they could afford the buffer property in expectation.) Their expected bound on the buffer width is insufficient for our shortcut partition, as well as for all other applications considered in our paper. One either has to guarantee that the buffer property holds with high probability or, ideally, holds deterministically.
In this work, we achieve the buffer property _deterministically_. To do this, we add a layer of recursion on top of the cop decomposition by [AGG\({}^{+}\)14] to directly fix the buffer property whenever it is violated, thereby bypassing the need for the complicated analysis of the potential function. In more detail, we build a cop decomposition by iteratively creating clusters. At each point in the construction, we create an SSSP tree connecting to some existing clusters, and initialize a new cluster with that tree as the cluster's skeleton. Our idea to enforce the buffer property is natural: we (recursively) assign those vertices that violate the property to join previously-created clusters. Specifically, whenever a cluster \(X\) is cut off by some cluster \(\eta\), we assign every vertex within distance \(\gamma\) of \(X\) to be part of some existing cluster. However, enforcing the buffer property directly comes at the cost of increasing the radius of some existing clusters -- recall that we want all points in a cluster to be at most \(O(\Delta)\) distance away from its skeleton. Therefore, our implementation of vertex assignment is very delicate; otherwise, the diameter of a cluster could continue growing, surpassing the diameter bound prescribed by the shortcut partition. Our key insight is to show that during the course of our construction, each cluster can only be expanded a single time for each of the \(O(r)\) clusters that it can "see". This lets us achieve a deterministic buffer width of \(\gamma=O(\Delta/r)\).
### Steiner Point Removal Problem
In the Steiner Point Removal (SPR) problem, we are given an undirected weighted graph \(G=(V,E,w)\) with vertex set \(V\), edge set \(E\), nonnegative weight function \(w\) over the edges, and a subset \(K\) of \(V\). The vertices in \(K\) are called _terminals_ and the vertices in \(V\setminus K\) are called _non-terminal_ or _Steiner_ vertices. The goal in the SPR problem is to find a graph _minor_\(M\) of \(G\) such that \(V(M)=K\), and for every pair \(t_{1},t_{2}\) of terminals in \(K\), \(\delta_{M}(t_{1},t_{2})\leq\alpha\cdot\delta_{G}(t_{1},t_{2})\), for some constant \(\alpha\geq 1\); such a graph minor \(M\) of \(G\) is called a _distance-preserving minor_ of \(G\) with _distortion_\(\alpha\).5
Footnote 5: In the literature [13] the term _distance-preserving minor_ allows the existence of Steiner vertices as well, with the goal to minimize their usage. For our purpose we do not allow any Steiner vertices.
Gupta [14] was the first to consider the problem of removing Steiner points to preserve all terminal distances. He showed that every (weighted) _tree_ can be replaced by another tree on the terminals where the shortest path distances are preserved up to a factor of \(8\); another proof of this result is given by [11]. Chan, Xia, Konjevod, and Richa [15] observed that Gupta's construction in fact produces a distance-preserving minor of the input tree, and showed a matching lower bound: there exists a tree and a set of terminals, such that any distance-preserving minor of that tree must have distortion at least \(8(1-o(1))\). Both Chan _et al._[15] and Basu and Gupta [16] considered the following question:
**Question 1.3**.: _Does every \(K_{r}\)-minor-free graph for any fixed \(r\) admit a distance-preserving minor with constant distortion?_
Question 1.3 has attracted significant research attention over the years, and numerous works have attempted to attack it from different angles. Some introduced new frameworks [11, 12, 13] that simplify known results; others considered the problem for general graphs, establishing the distortion bound of \(O(\log|K|)\) after a sequence of works [11, 12, 13]; there are also variants where Steiner points are allowed, but their number should be minimized [11, 1, 12]; and yet another achieved a constant _expected_ distortion [1].
Nevertheless, Question 1.3 remains widely open: a positive solution for \(K_{r}\)-minor-free graphs is not known for any \(r\geq 5\). Gupta's result for trees [13] can be seen as providing a solution for \(K_{3}\)-minor-free graphs. Basu and Gupta [14] gave a positive answer for outerplanar graphs (which are \((K_{2,3},K_{4})\)-minor-free). Recently, Hershkowitz and Li [12] provided a solution for \(K_{4}\)-minor-free graphs, also known as _series-parallel graphs_. Even for planar graphs, a subclass of \(K_{5}\)-minor-free graphs, the answer is not known. Both outerplanar and series-parallel graphs are very restricted classes of planar graphs: they have treewidth at most \(2\). For slightly larger graph classes, such as treewidth-\(3\) planar graphs or \(k\)-outerplanar graphs for any constant \(k\), the SPR problem has remained open to date.
We resolve Question 1.3 in the affirmative, thus solving the SPR problem for minor-free graphs in its full generality:
**Theorem 1.4**.: _Let \(G=(V,E,w)\) be an arbitrary edge-weighted \(K_{r}\)-minor-free graph and let \(K\subseteq V\) be an arbitrary set of terminals. Then, there is a solution to the SPR problem on \(G\) with distortion \(2^{O(r\log r)}\)._
We prove Theorem 1.4 by devising a general reduction from SPR to shortcut partition. Specifically:
**Theorem 1.5**.: _If every subgraph of \(G\) admits an \((\varepsilon,f(r)/\varepsilon)\)-shortcut partition for every \(\varepsilon\in(0,1)\), then \(G\) admits a solution to the SPR problem with distortion \(O(f(r)^{13})\)._
The proof of Theorem 1.5 builds on a reduction by Filtser [13], from the SPR problem to that of finding _scattering partitions_, which require every shortest path between two vertices to intersect only a small number of clusters. We introduce an inherently relaxed notion which we call the _approximate scattering partition_ (Definition 2.1) -- which among other changes uses _approximate_ shortest paths rather than exact shortest paths -- and adapt Filtser's reduction using the new notion. The first challenge underlying this adaptation is that, unlike shortest paths, an approximate shortest path does not have the optimal substructure property (any subpath of a shortest path is also a shortest path). The second and perhaps more significant challenge stems from the fact that the partition only guarantees the existence of _some_ low-hop path in the cluster graph, and the distortion to its length is _not_ with respect to the distance between the two endpoints. We explain the differences in detail in Section 2. Consequently, we have to make some crucial changes in the reduction, and more so in its analysis.
We observe that Theorem 1.5 together with a shortcut partition in Theorem 1.2 gives us a solution to the SPR problem with \(O(1)\) distortion in \(K_{r}\)-minor-free graphs, since in this case, \(f(r)=r^{O(r)}\) and \(r\) is fixed.
### Other Applications of Our Results
Distance oracle.An _\(\alpha\)-approximate distance oracle_ is a compact data structure for graph \(G\) that given any two vertices \(u\) and \(v\), return the distance between \(u\) and \(v\) in \(G\) up to a factor of \(\alpha\). In constructing a distance oracle, we would like to minimize the _distortion_ parameter \(\alpha\), the _space_ usage, and the _time_ it takes to answer a query; there is often a tradeoff between the three parameters.
Constructing \((1+\varepsilon)\)-approximate distance oracles for planar graphs has been extensively studied. A long line of work [13, 14, 15, 16, 17] recently culminated in an optimal distance oracle with linear space and constant query time by Le and Wulff-Nilsen [14]. On the other hand, the only known \((1+\varepsilon)\)-approximate distance oracle for \(K_{r}\)-minor-free graphs achieving \(O(n\log n)\) space and \(O(\log n)\) query time (for any constant \(\varepsilon\in(0,1)\) and constant \(r\)) was by Abraham and Gavoille [1]. The main reason is that the topology of \(K_{r}\)-minor-free graphs is much more complicated, and many techniques from planar graphs -- such as reduction to additive distance oracles [10, 14] or more sophisticated use of planar shortest path separators [17] -- do not extend to \(K_{r}\)-minor-free graphs. Even the shortest path separator [1] in \(K_{r}\)-minor-free graphs does not behave as well as its planar counterpart [11, 13]: each path in the separator in \(K_{r}\)-minor-free graphs is not a shortest path of the _input graph_, but a shortest path of its _subgraph_ after some previous paths were removed. As a result, despite significant recent progress on approximate distance oracles for planar graphs, the following problem remains open:
**Problem 1.6**.: _Design a \((1+\varepsilon)\)-approximate distance oracle for \(K_{r}\)-minor-free graphs with linear space and constant query time for fixed \(\varepsilon\) and \(r\)._
In this work, we resolve Problem 1.6 affirmatively. Our oracle can also be implemented in the pointer-machine model, matching the best-known results for planar graphs [13].
**Theorem 1.7**.: _Given any parameter \(\varepsilon\in(0,1)\), and any edge-weighted undirected \(K_{r}\)-minor-free graphs with \(n\) vertices, we can design a \((1+\varepsilon)\)-approximate distance oracle with the following guarantees:_
* _Our distance oracle has space_ \(n\cdot 2^{r^{O(r)}/\varepsilon}\) _and query time_ \(2^{r^{O(r)}/\varepsilon}\) _in the word RAM model with word size_ \(\Omega(\log n)\)_. Consequently, for fixed_ \(\varepsilon\) _and_ \(r\)_, the space is_ \(O(n)\) _and query time is_ \(O(1)\)_._
* _Our distance oracle has space_ \(O(n\cdot 2^{r^{O(r)}/\varepsilon})\) _and query time_ \(O(\log\log n\cdot 2^{r^{O(r)}/\varepsilon})\) _in the pointer machine model._
Our oracle is constructed via tree covers, which we will discuss next.
Tree cover.An _\(\alpha\)-tree cover_\(\mathcal{T}\) of a metric space \((X,\delta_{X})\) for some \(\alpha\geq 1\) is a collection of trees such that: (1) every tree \(T\in\mathcal{T}\) has \(X\subseteq V(T)\) and \(d_{T}(x,y)\geq\delta_{X}(x,y)\) for every two points \(x,y\in X\), and (2) for every two points \(x,y\in X\), there exists a tree \(T\in\mathcal{T}\) such that \(d_{T}(x,y)\leq\alpha\cdot\delta_{X}(x,y)\). We call \(\alpha\) the _distortion_ of the tree cover \(\mathcal{T}\). The size of the tree cover is the number of trees in \(\mathcal{T}\).
Tree covers have been extensively studied for many different metric spaces [1, 1, 1, 2, 10, 11, 12, 13]. Gupta, Kumar, and Rastogi [1] showed among other things that planar metrics admit a tree cover of distortion \(3\) and size \(O(\log n)\). Bartal, Fandina, and Neiman [1] reduced the distortion to \(1+\varepsilon\) for any fixed \(\varepsilon\in(0,1)\) at the cost of a higher number of trees, \(O(\log^{2}n)\). Their result also holds for any \(K_{r}\)-minor-free graphs with a fixed \(r\); however, because of the usage of shortest path separator [1], the final tree cover size contains a hidden dependency on \(r\) which is the Robertson-Seymour constant [12], known to be bigger than the tower function of \(r\). Their work left several questions open: (a) Can we construct a \((1+\varepsilon)\) tree cover of \(O(1)\) size for planar graphs, and more generally \(K_{r}\)-minor-free graphs? (b) Can we avoid Robertson-Seymour decomposition and achieve a more practical construction?
The shortcut partition introduced by Chang _et al._[13] partially resolved the first question: they constructed a \((1+\varepsilon)\)-tree cover for _planar graphs_ of \(O(1)\) size. Using our new shortcut partition in Theorem 1.2, we resolve the question of Bartal _et al._ for all \(K_{r}\)-minor-free graphs. As our construction is rooted in the cop decomposition, the construction might behave reasonably well even when the graph is
not strictly \(K_{r}\)-minor-free, as the performance ultimately depends on the width of the buffer and the number of times a cluster can expand. This provides a more practical alternative to the Robertson-Seymour decomposition.
**Theorem 1.8**.: _Let \(G\) be any edge-weighted undirected \(K_{r}\)-minor-free graph with \(n\) vertices. For any parameter \(\varepsilon\in(0,1)\), there is a \((1+\varepsilon)\)-tree cover for the shortest path metric of \(G\) using \(2^{r^{O(r)}/\varepsilon}\) trees._
Given the tree cover \(\mathcal{T}\) from Theorem 1.8, we can obtain the \((1+\varepsilon)\)-approximate distance oracle of Theorem 1.7 as follows. The distance oracle consists of \(\mathcal{T}\) and an LCA data structure for each tree in \(\mathcal{T}\). For each query pair \((u,v)\), we iterate through each tree, compute the distance on the tree using the LCA data structure, and then return \(\min_{T\in\mathcal{T}}d_{T}(u,v)\). The query time and space are as described in Theorem 1.7 because \(|\mathcal{T}|=2^{r^{O(r)}/\varepsilon}\); the distortion is \(1+\varepsilon\) since the distortion of the tree cover is \(1+\varepsilon\).
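The query procedure is simple enough to state in a few lines; the Python sketch below uses a naive parent-pointer LCA purely for illustration, whereas the oracle of Theorem 1.7 uses a constant-time LCA structure on each tree.

```python
class CoverTree:
    """One tree of the cover, rooted, with weighted root distances precomputed."""
    def __init__(self, parent, depth, dist_to_root):
        self.parent = parent          # parent[v], None at the root
        self.depth = depth            # hop depth of v
        self.dist = dist_to_root      # weighted distance from the root to v

    def lca(self, u, v):
        while self.depth[u] > self.depth[v]:
            u = self.parent[u]
        while self.depth[v] > self.depth[u]:
            v = self.parent[v]
        while u != v:
            u, v = self.parent[u], self.parent[v]
        return u

    def distance(self, u, v):
        a = self.lca(u, v)
        return self.dist[u] + self.dist[v] - 2 * self.dist[a]

def oracle_query(cover, u, v):
    """Minimum over all trees of the cover: a (1 + eps)-approximation of the graph distance."""
    return min(T.distance(u, v) for T in cover)
```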
Additive embeddings for apex-minor-free graphs.Graph \(A\) is an _apex graph_ if there exists a vertex \(a\in V(A)\), called the _apex_, such that \(A\setminus\{a\}\) is a planar graph. A graph \(G\) is _apex-minor-free_ if it excludes some apex graph \(A\) of \(O(1)\) size as a minor. We note that apex-minor-free graphs include planar graphs and, more generally, _bounded-genus graphs_ as subclasses. We show that our shortcut partition also gives the first deterministic additive embeddings of apex-minor-free graphs into bounded-treewidth graphs.
Given a weighted graph \(G\) of diameter \(\Delta\), we say that a (deterministic) embedding \(f:V(G)\to H\) of \(G\) into \(H\) has _additive distortion_\(+\varepsilon\Delta\) if \(d_{G}(x,y)\leq d_{H}(f(x),f(y))\leq d_{G}(x,y)+\varepsilon\Delta\) for every \(x,y\in V(G)\). The goal is to construct an embedding \(f\) such that the treewidth of \(H\), denoted by \(\operatorname{tw}(H)\), is minimized. Ideally, we would like \(\operatorname{tw}(H)\) to depend only on \(\varepsilon\) and not on the number of vertices of \(G\).
Additive embeddings have been studied recently for planar graphs [11, 12, 23] and for minor-free graphs [10]. A key result in this line of work is an additive embedding for planar graphs where the treewidth of \(H\) is polynomially dependent on \(1/\varepsilon\)[11]; specifically, they achieved \(\operatorname{tw}(H)=O(1/\varepsilon^{c})\) for some constant \(c\geq 58\), which was recently improved to \(\operatorname{tw}(H)=O(1/\varepsilon^{4})\)[23]. Cohen-Addad _et al._[10] constructed a family of apex graphs and showed that any deterministic embedding with additive distortion \(+\Delta/12\) (\(\varepsilon=1/12\)) for the family must have treewidth \(\Omega(\sqrt{n})\). Their result left an important question regarding additive embeddings of apex-minor-free graphs. Here we use the shortcut partition in Theorem 1.2 to resolve this problem, thereby completing our understanding of deterministic additive embeddings of graphs excluding a fixed minor into bounded-treewidth graphs.
**Theorem 1.9**.: _Let \(G\) be any given edge-weighted graph of \(n\) vertices excluding a fixed apex graph as a minor. Let \(\Delta\) be the diameter of \(G\). For any given parameter \(\varepsilon\in(0,1)\), we can construct in polynomial time a deterministic embedding of \(G\) into a graph \(H\) such that the additive distortion is \(+\varepsilon\Delta\) and \(\operatorname{tw}(H)=2^{O(\varepsilon^{-1})}\)._
In addition to the aforementioned results, we also obtain generalizations to minor-free graphs of results from [23]; we simply use our tree cover from Theorem 1.8 in place of their tree cover theorem for planar graphs. The results include (1) the first \((1+\varepsilon)\)-emulator of linear size for minor-free graphs, (2) low-hop emulators for minor-free metrics, (3) a compact distance labeling scheme for minor-free graphs, and (4) routing in minor-free metrics. We refer readers to [23] for more details.
Organization.In Section 2 we resolve the SPR problem by constructing an approximate scattering partition using the shortcut partition in Theorem 1.2. In Section 3, we introduce and describe in full detail the construction of the buffered cop decomposition, which we will use in Section 4 to construct the shortcut partition. In Section 5, we give the details of the applications of the shortcut partition: constructing a tree cover, a distance oracle, and an additive embedding into bounded-treewidth graphs.
## Reduction to Shortcut Partition
As mentioned, Filtser [14] presented a reduction from the SPR problem to that of finding _scattering partitions_. To prove Theorem 1.5, we introduce a relaxed notion of _approximate_ scattering partition (see Definition 2.1), and adapt the reduction of [14, Theorem 1] to this notion. Because our notion of partition is inherently weaker, we have to make two crucial changes in the reduction and alter various parts of the analysis; we point out the specific changes along the way.
**Definition 2.1** (Approximate Scattering Partition).: _Let \(G=(V,E,w)\) be an edge-weighted graph. A \(\beta\)-approximate \((\tau,\Delta)\)-scattering partition of \(G\) is a partition \(\mathcal{C}\) of \(V\) such that:_
* [Diameter.] _For each cluster_ \(C\) _in_ \(\mathcal{C}\)_, the induced subgraph_ \(G[C]\) _has weak diameter at most_ \(\Delta\)_; that is,_ \(\delta_{G}(u,v)\leq\Delta\) _for any vertices_ \(u\) _and_ \(v\) _in_ \(C\)_._
* [Scattering.] _For any two vertices_ \(u\) _and_ \(v\) _in_ \(V\) _such that_ \(\delta_{G}(u,v)\leq\Delta\)_, there exists a path_ \(\pi\) _in_ \(G\) _between_ \(u\) _and_ \(v\) _where (1)_ \(\pi\) _has length at most_ \(\beta\cdot\Delta\)_, (2) every edge in_ \(\pi\) _has length at most_ \(\Delta\)_, and (3)_ \(\pi\) _intersects at most_ \(\tau\) _clusters in_ \(\mathcal{C}\)_. We say_ \(\pi\) _is a_ \(\beta\)_-approximate_ \((\tau,\Delta)\)_-scattered path_._
We remark that scattering properties (2) and (3) together imply property (1): the length of \(\pi\) is at most \(O(\tau)\cdot\Delta\). Nevertheless, we prefer to keep property (1) separately from properties (2) and (3) in the definition to emphasize the fact that \(\pi\) is an approximate path.
Notice that the notion of approximate scattering partition is more relaxed than the original notion of scattering partition [14]. A scattering partition requires _every_ shortest path with length at most \(\Delta\) to be \(\tau\)-scattered. However in an approximate scattering partition there are three relaxations:
1. we only require that _one_ such path exists;
2. that path may be an approximate shortest path (rather than an exact shortest path);
3. the \(\beta\)-approximation to the length of such path \(\pi\) is _not_ with respect to the distance between the endpoints; rather, the length of \(\pi\) is bounded by \(\beta\) times \(\Delta\), the diameter bound of clusters.
The following lemma, which we prove in SS2.1 and SS2.2, is analogous to Theorem 1 by Filtser [14], except for the key difference that we employ approximate scattering partitions. It implies that, somewhat surprisingly, despite the three aforementioned relaxations introduced by our notion of approximate scattering partitions -- especially the third one that significantly relaxes the meaning of \(\beta\)-approximation -- such partitions still suffice for solving the SPR problem.
**Lemma 2.2**.: _Let \(G\) be a graph such that for every \(\Delta>0\), every induced subgraph of \(G\) admits a \(\beta\)-approximate \((\tau,\Delta)\)-scattering partition, for some constants \(\beta,\tau\geq 1\). Then, there is a solution to the SPR problem on \(G\) with distortion \(O(\tau^{8}\cdot\beta^{5})=O(1)\)._
To construct approximate scattering partitions, we use _shortcut partitions_. Recall that the _cluster graph_ of \(G\) with respect to \(\mathcal{C}\), denoted by \(\breve{G}\), is the graph obtained by contracting each cluster in \(\mathcal{C}\) into a _supernode_. The _hop-length_ of a path is the number of edges in the path.
**Lemma 2.3**.: _Let \(G\) be a graph and let \(\Delta>0\) be a parameter. If any subgraph of \(G\) has an \((\varepsilon,h)\)-shortcut partition for any \(\varepsilon\in(0,1)\) and some number \(h\), then \(G\) has a \(2\varepsilon h\)-approximate \((\varepsilon h,\Delta)\)-scattering partition._
**Proof:** Construct graph \(G^{\prime}\) from \(G\) by removing all edges of length greater than \(\Delta\). Notice that if any pair \(u,v\) of vertices satisfies \(\delta_{G}(u,v)\leq\Delta\), then it also satisfies \(\delta_{G^{\prime}}(u,v)\leq\Delta\). Thus, any partition of vertices that satisfies the approximate scattering property for \(G^{\prime}\) also satisfies that property for \(G\).
Let \(\mathcal{C}\) be an \((\varepsilon,h)\)-shortcut partition for the graph \(G^{\prime}\), with parameter \(\varepsilon\coloneqq\Delta/\operatorname{diam}(G^{\prime})\). Notice that \(\mathcal{C}\) is a clustering of the vertices of \(G\), where for any cluster \(C\) in \(\mathcal{C}\), the induced subgraphs \(G[C]\) and \(G^{\prime}[C]\) have strong diameter at most \(\varepsilon\cdot\operatorname{diam}(G^{\prime})=\Delta\); thus, \(\mathcal{C}\) satisfies the diameter property of an approximate scattering partition.
We now show that \(\mathcal{C}\) satisfies the scattering property. Let \(u\) and \(v\) be two vertices in \(G\) with \(\delta_{G}(u,v)\leq\Delta\). Note that \(\delta_{G^{\prime}}(u,v)\leq\Delta\). By the properties of shortcut partition, there is a path \(\breve{\pi}\) in the cluster graph \(\breve{G}\) between the clusters containing \(u\) and \(v\), such that \(\breve{\pi}\) has hop-length at most \(\varepsilon h\cdot\left\lceil\frac{\delta_{G^{\prime}}(u,v)}{\varepsilon \operatorname{diam}(G^{\prime})}\right\rceil\). In other words, the hop-length of \(\breve{\pi}\) is \(t\), for some \(t\) that is upper-bounded by \(\varepsilon h\). Write \(\breve{\pi}=(C_{1},C_{2},\ldots,C_{t})\) as a sequence of \(t\) adjacent clusters in \(\breve{G}\). Notice that two clusters \(C\) and \(C^{\prime}\) in \(\breve{G}\) are _adjacent_ if and only if there is an edge in \(G^{\prime}\) between a vertex in \(C\) and a vertex in \(C^{\prime}\). For every pair of consecutive clusters \(C_{i}\) and \(C_{i+1}\) in \(\breve{\pi}\), let \(x^{\prime}_{i}\) be a vertex in \(C_{i}\) and \(x_{i+1}\) be a vertex in \(C_{i+1}\) such that there is an edge \(e_{i}\) in \(G^{\prime}\) between \(x^{\prime}_{i}\) and \(x_{i+1}\). To simplify notation, define \(x_{1}\coloneqq u\) and define \(x^{\prime}_{t}\coloneqq v\). With this definition, \(x_{i}\) and \(x^{\prime}_{i}\) are defined for all \(i\) in \(\{1,\ldots,t\}\). Notice that for every \(i\) in \(\{1,\ldots,t\}\), vertices \(x_{i}\) and \(x^{\prime}_{i}\) are both in cluster \(C_{i}\). By the strong diameter property of \(\mathcal{C}\), there is a path \(P_{i}\) in \(G^{\prime}\) between \(x_{i}\) and \(x^{\prime}_{i}\), such that \(P_{i}\) is contained in \(C_{i}\) and has length at most \(\Delta\).
We define the path \(\pi\) in \(G^{\prime}\) (and thus also in \(G\)) between \(u\) and \(v\) to be the concatenation \(P_{1}\circ e_{1}\circ P_{2}\circ e_{2}\circ\ldots\circ P_{t}\). Notice that (1) \(\pi\) has length at most \(2t\cdot\Delta\leq 2\varepsilon h\cdot\Delta\); indeed, each subpath \(P_{i}\) has length at most \(\Delta\) (by the strong diameter property), and each edge \(e_{i}\) has length at most \(\Delta\) (as \(e_{i}\) is in \(G^{\prime}\)). Further, (2) every edge of \(\pi\) has length at most \(\Delta\), and (3) \(\pi\) intersects at most \(\varepsilon h\) clusters (namely, the clusters \(C_{1},\ldots,C_{t}\) along \(\breve{\pi}\)).
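The proof is constructive, and the construction can be phrased as a small procedure: lift the cluster path \(\breve{\pi}=(C_{1},\ldots,C_{t})\) back to a path in \(G^{\prime}\) by choosing one connecting edge per consecutive pair of clusters and stitching in intra-cluster shortest paths. The Python sketch below assumes the shortcut partition and the cluster path are already computed (how to obtain them is not shown) and uses a plain Dijkstra restricted to a vertex set; the encoding and names are ours.

```python
import heapq

def dijkstra(adj, source, allowed):
    """Shortest paths from `source` using only vertices in `allowed`; returns parents."""
    dist, parent = {source: 0.0}, {source: None}
    pq = [(0.0, source)]
    while pq:
        d, x = heapq.heappop(pq)
        if d > dist[x]:
            continue
        for y, w in adj[x].items():
            if y in allowed and d + w < dist.get(y, float("inf")):
                dist[y], parent[y] = d + w, x
                heapq.heappush(pq, (d + w, y))
    return parent

def walk_back(parent, target):
    path = []
    while target is not None:
        path.append(target)
        target = parent[target]
    return path[::-1]

def lift_cluster_path(adj, clusters, cluster_path, u, v):
    """Stitch a u-v path in G' out of a path of clusters, as in the proof of Lemma 2.3.

    `adj`: vertex -> {neighbor: weight} for G' (edges longer than Delta removed);
    `clusters`: cluster id -> set of vertices; `cluster_path`: ids (C_1, ..., C_t)
    with u in C_1 and v in C_t.
    """
    entry = {cluster_path[0]: u}     # x_i : vertex where the path enters C_i
    exit_ = {cluster_path[-1]: v}    # x'_i: vertex where the path leaves C_i
    for ci, cj in zip(cluster_path, cluster_path[1:]):
        # pick an arbitrary connecting edge e_i = (x'_i, x_{i+1}) between adjacent clusters
        x, y = next((a, b) for a in clusters[ci] for b in adj[a] if b in clusters[cj])
        exit_[ci], entry[cj] = x, y
    pi = []
    for c in cluster_path:
        parent = dijkstra(adj, entry[c], clusters[c])   # intra-cluster path P_i
        pi.extend(walk_back(parent, exit_[c]))
    return pi
```

Each stitched segment has length at most \(\Delta\) by the strong diameter property, which is exactly the accounting used in the proof above.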
Theorem 1.5 follows from Lemma 2.2 and Lemma 2.3. As a direct corollary of Theorem 1.2 and Lemma 2.3, we obtain the following.
**Corollary 2.4**.: _There are constants \(\beta\) and \(\tau\) such that, for any \(K_{r}\)-minor-free graph \(G\) and any \(\Delta>0\), there exists a \(\beta\)-approximate \((\tau,\Delta)\)-scattering partition of \(G\). Specifically, \(\beta=\tau=2^{O(r\log r)}\)._
In what follows we prove Lemma 2.2.
### Algorithm
Our construction for proving Lemma 2.2 is similar to that of [14], but deviates from it in several crucial points (see Remark 2.5 for details). For completeness, we next provide the entire construction of [14], adapted appropriately to our purposes.
We will assume without loss of generality that the minimum pairwise distance is \(1\). We shall partition \(V\) into \(|K|\) connected subgraphs, each of which corresponds to a single terminal in \(K\). Each vertex in \(V\) will be _assigned_ to a connected subgraph by the _assignment function_\(f:V\to K\), such that at the end of the process, we can create a graph minor \(M\) of \(G\) by contracting each connected subgraph \(f^{-1}(t)\) into a supernode for every terminal \(t\in K\). By setting \(w_{M}(t,t^{\prime})\coloneqq\delta_{G}(t,t^{\prime})\) for each edge \((t,t^{\prime})\in E(M)\), the edge-weighted graph \(M=(K,E(M),w_{M})\) is our solution to the SPR problem on \(G\). For a path \(P\), we denote by \(||P||\) the length of \(P\).
We compute the assignment function \(f\) in iterations. In iteration \(i\) we shall compute a function \(f_{i}:V\to K\cup\{\bot\}\), where \(\bot\) symbolizes that the vertex remains unassigned. The function \(f\) will be obtained as the function \(f_{i}\) computed at the last iteration of the algorithm. For each iteration \(i\), we will maintain the set of _relevant vertices_\(\mathcal{R}_{i}\coloneqq\left\{v\in V\mid\zeta^{i-1}\leq\delta_{G}(v,K)<\zeta ^{i}\right\}\) and the set \(V_{i}\) of vertices assigned by the function \(f_{i}\) to some terminal, where \(\zeta\coloneqq c\cdot\beta\cdot\tau\), for \(\beta\) and \(\tau\) being the constants provided by Corollary 2.4 and \(c\) being some large constant. Initialize \(f_{0}(t)\coloneqq t\) for each \(t\in K\), and \(f_{0}(v)\coloneqq\bot\) for each \(v\in V\setminus K\). Define both \(\mathcal{R}_{0}\) and \(V_{0}\) to be \(K\). Inductively, we maintain the properties that \(V_{i-1}\subseteq V_{i}\) and \(\bigcup_{j\leq i}\mathcal{R}_{j}\subseteq V_{i}\); hence the algorithm terminates when all vertices have been assigned.
At the \(i\)-th iteration of the algorithm, we compute \(\beta\)-approximate \((\tau,\zeta^{i-1})\)-scattering partition \(\mathcal{P}_{i}\), provided by Corollary 2.4, on the subgraph induced on the unassigned vertices \(G_{i}\coloneqq G[V\setminus V_{i-1}]\). Let \(\mathcal{C}_{i}\) be the set of clusters in \(\mathcal{P}_{i}\) that contain at least one vertex in \(\mathcal{R}_{i}\). All vertices in the clusters of \(\mathcal{C}_{i}\) will be assigned by \(f_{i}\) at iteration \(i\).
We classify the clusters in \(\mathcal{C}_{i}\) into _levels_, starting from level 0, viewing \(V_{i-1}\) as a _level-0_ cluster. We say that a cluster \(C\in\mathcal{C}_{i}\) is at _level \(j\)_ if \(j\) is the minimum index such that there is an edge of weight at most \(\zeta^{i}\) connecting a vertex \(u\) in \(C\) and another vertex \(v\) in some level-\((j-1)\) cluster \(C^{\prime}\). If there are multiple such edges, we fix one of them arbitrarily; we call the vertex \(v\) in \(C^{\prime}\) the _linking vertex_ of \(C\). Let \(lv_{i}(C)\) denote the level of \(C\). Observe that every cluster \(C\in\mathcal{C}_{i}\) contains a vertex in \(\mathcal{R}_{i}\), i.e., there exists a vertex \(v\in C\) such that \(\zeta^{i-1}\leq\delta_{G}(v,K)<\zeta^{i}\). Hence, it is readily verified that every cluster in \(\mathcal{C}_{i}\) has a linking vertex, and thus \(lv_{i}(C)\) is a valid level.
For every vertex \(v\in V_{i-1}\), we set \(f_{i}(v)\coloneqq f_{i-1}(v)\). For every vertex \(v\) not in \(\bigcup\mathcal{C}_{i}\) (or \(V_{i-1}\)), we set \(f_{i}(v)\coloneqq\bot\). Next, we scan all clusters in \(\mathcal{C}_{i}\) in non-decreasing order of level, starting from level 1. For each vertex \(u\) in each cluster \(C\), we set \(f_{i}(u)\) to be \(f_{i}(v_{C})\), where \(v_{C}\) is the linking vertex of \(C\). If some unassigned vertices remain, we proceed to the next iteration; otherwise, the algorithm terminates.
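The level computation and terminal propagation within one iteration can be summarized in code. The sketch below takes the clusters of \(\mathcal{C}_{i}\), the current partial assignment, and the weight threshold \(\zeta^{i}\), and extends the assignment level by level via linking vertices; it assumes the scattering partition itself is computed elsewhere, and the input encoding and function name are ours.

```python
def assignment_round(adj, clusters, f, threshold):
    """One round of the iterative assignment: extend f over the clusters of C_i.

    `adj`: vertex -> {neighbor: weight}; `clusters`: list of vertex sets (the
    clusters of C_i); `f`: dict mapping assigned vertices to terminals (unassigned
    vertices are absent); `threshold` plays the role of zeta^i.  Level 0 consists
    of the already-assigned vertices; a cluster is at level j if some edge of
    weight at most `threshold` joins it to a level-(j-1) cluster, and it inherits
    the terminal of its linking vertex.
    """
    frontier = set(f)                     # vertices of the level-0 "cluster" V_{i-1}
    remaining = set(range(len(clusters)))
    while remaining:
        leveled = []
        for idx in remaining:
            link = next((u for v in clusters[idx] for u, w in adj[v].items()
                         if w <= threshold and u in frontier), None)
            if link is not None:
                leveled.append((idx, link))
        if not leveled:                   # safety guard; every cluster of C_i has a linking vertex
            break
        next_frontier = set()
        for idx, link in leveled:
            for v in clusters[idx]:       # all vertices of the cluster follow the linking vertex
                f[v] = f[link]
            next_frontier |= clusters[idx]
            remaining.discard(idx)
        frontier = next_frontier
    return f
```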
**Remark 2.5**.: The algorithm presented here differs from that of [14] in two crucial points:
* First, as mentioned, we use approximate scattering partitions (as in Definition 2.1) rather than the scattering partitions of [14]. This change poses several technical challenges in the argument.
* To cope with approximate scattering partitions, we do not use the constant 2 as in [14], but rather a larger constant \(\zeta\) (as defined above).
### Distortion Analysis
By construction, any vertex at distance between \(\zeta^{i-1}\) and \(\zeta^{i}\) from \(K\) is assigned at iteration at most \(i\). However, the following claim narrows the possibilities down to two choices. The claim is analogous to Claim 5 in [14], where we use \(\zeta\) instead of 2, and its proof is similar.
**Claim 2.6** ([14, Claim 5]).: _Any vertex \(v\) satisfying \(\zeta^{i-1}\leq\delta_{G}(v,K)<\zeta^{i}\) is assigned during iteration \(i-1\) or \(i\). Consequently, any vertex \(v\) assigned during iteration \(i\) must satisfy \(\zeta^{i-1}\leq\delta_{G}(v,K)<\zeta^{i+1}\)._
Proof.: If \(v\) remains unassigned until iteration \(i\), it will be assigned during iteration \(i\) by construction. Suppose that \(v\) was assigned during iteration \(j\). Then \(v\) belongs to a cluster \(C\in\mathcal{C}_{j}\), and there is a vertex \(u\in C\) with \(\delta_{G}(u,K)\leq\zeta^{j}\). As \(C\) has strong diameter at most \(\zeta^{j-1}\) and \(\zeta>2\), we obtain
\[\zeta^{i-1}\leq\delta_{G}(v,K)\leq\delta_{G}(u,K)+\delta_{G}(u,v)\leq\zeta^{j }+\zeta^{j-1}<\zeta^{2}\cdot\zeta^{j-1},\]

implying that \(i-1<2+(j-1)\), or equivalently \(j\geq i-1\).

Figure 2: An SPR instance with 3 Steiner points. Values of the assignment function \(f\) are shown next to the vertices.
The following claim is analogous to Corollary 1 in [20], but we introduce a few changes in the proof.
**Claim 2.7** ([20, Corollary 1]).: _For every \(v\in V\), \(\delta_{G}(v,f(v))\leq 3\tau\cdot\zeta^{2}\cdot\delta_{G}(v,K)\)._
**Proof:** Let \(i\) be the iteration in which \(v\) is assigned, and let \(C_{v}\) be the cluster in \(\mathcal{C}_{i}\) containing \(v\). We shall prove that
\[\delta_{G}(v,f(v))\leq 3\tau\cdot\zeta^{i+1}. \tag{1}\]
Combining this bound with Claim 2.6 yields
\[\delta(v,f(v))\leq 3\tau\cdot\zeta^{i+1}\leq 3\tau\cdot\zeta^{2}\cdot\delta_{G} (v,K),\]
as required. The proof is by induction on the iteration \(i\) in which \(v\) is assigned. The base case \(i=0\) is trivial, as then \(v\) is a terminal, and we have \(\delta_{G}(v,f(v))=0\leq 3\tau\cdot\zeta^{0+1}\). We henceforth consider the induction step when \(i\geq 1\).
First, we argue that \(lv_{i}(C_{v})\leq\zeta\cdot\tau\). Since cluster \(C_{v}\) is in \(\mathcal{C}_{i}\), there exists a vertex \(u\in C_{v}\) such that \(\delta_{G}(u,K)<\zeta^{i}\). Let \(P_{u}\coloneqq(u_{1},u_{2},\ldots u_{s})\) be a shortest path from \(u=u_{1}\) to \(K\) (with \(\|P_{u}\|<\zeta^{i}\)), let \(\ell\) be the largest index such that \(u_{1},u_{2},\ldots u_{\ell}\in V\setminus V_{i-1}\), and define the prefix \(Q\coloneqq(u_{1},u_{2},\ldots u_{\ell})\) of \(P_{u}\); note that \(\ell<s\) and \(u_{\ell+1}\in V_{i-1}\). Since \(\|Q\|<\|P_{u}\|<\zeta^{i}\), we can greedily partition \(Q\) into \(\zeta^{\prime}\leq\zeta\) sub-paths \(Q_{1},\ldots,Q_{\zeta^{\prime}}\), each of length at most \(\zeta^{i-1}\), connected via edges of weight less than \(\zeta^{i}\); that is, \(Q\) is obtained as the concatenation \(Q_{1}\circ e_{1}\circ Q_{2}\circ e_{2}\ldots\circ e_{\zeta^{\prime}-1}\circ Q_ {\zeta^{\prime}}\), where \(\|Q_{j}\|<\zeta^{i-1}\) and \(\|e_{j}\|<\zeta^{i}\) for each \(j\). Consider the \(\beta\)-approximate \((\tau,\zeta^{i-1})\)-scattering partition \(\mathcal{P}_{i}\) (provided by Corollary 2.4), used in the \(i\)th iteration, on the subgraph \(G_{i}=G[V\setminus V_{i-1}]\) induced on the unassigned vertices. For each \(j\), the sub-path \(Q_{j}\) of \(Q\) is contained in \(G_{i}\) and it satisfies \(\|Q_{j}\|\leq\zeta^{i-1}\), thus there exists a \(\beta\)-approximate path \(Q^{\prime}_{j}\) between the endpoints of \(Q_{j}\) that is scattered by \(\tau^{\prime}\) clusters, with \(\tau^{\prime}\leq\tau\), and each edge of \(Q^{\prime}_{j}\) is of weight at most \(\zeta^{i-1}\). The path \(Q^{\prime}_{1}\circ e_{1}\circ Q^{\prime}_{2}\circ e_{2}\ldots\circ e_{\zeta^ {\prime}-1}\circ Q^{\prime}_{\zeta^{\prime}}\) obtained from \(Q\) by replacing each sub-path \(Q_{j}\) by its scattered path \(Q^{\prime}_{j}\), is a path from \(u_{1}\) to \(u_{\ell}\) intersecting at most \(\zeta\cdot\tau\) clusters in \(\mathcal{C}_{i}\). Since \(u_{\ell}\) is in a cluster of level \(1\) (because \(u_{\ell+1}\) is in \(V_{i-1}\), which is of level \(0\)), \(lv_{i}(C_{v})\leq\zeta\cdot\tau\), as required.
We then show that \(\delta_{G}(v,f(v))\leq lv_{i}(C_{v})\cdot 2\cdot\zeta^{i}+3\tau\cdot\zeta^{i}\) by induction on the (\(i\)th-iteration) level of \(C_{v}\). We employ a double induction, one on the iteration \(i\) and the other on the level of \(C_{v}\); to avoid confusion, we shall refer to the former as the "outer induction" and to the latter as the "inner induction".
Let \(x\) be the linking vertex of \(C_{v}\); in particular, we have \(f(v)=f(x)\). Let \(x_{v}\) be the vertex in \(C_{v}\) such that \((x,x_{v})\in E\) and \(w(x,x_{v})\leq\zeta^{i}\). For the basis \(lv_{i}(C_{v})=1\) of the inner induction, \(x\) is assigned during iteration \(i^{\prime}<i\). By the outer induction hypothesis for iteration \(i^{\prime}\) (i.e., substituting \(i\) with \(i^{\prime}\) in Eq. 1), we obtain \(\delta_{G}(x,f(x))\leq 3\tau\cdot\zeta^{i^{\prime}+1}\leq 3\tau\cdot\zeta^{i}\). By the triangle inequality and since \(\zeta>1\):
\[\begin{split}\delta_{G}(v,f(v))&\leq\delta_{G}(v,x_{ v})+\delta_{G}(x_{v},x)+\delta_{G}(x,f(v))\\ &\leq\zeta^{i-1}+\zeta^{i}+\delta_{G}(x,f(x))\leq\zeta^{i-1}+\zeta^ {i}+3\tau\cdot\zeta^{i}\leq 2\cdot\zeta^{i}+3\tau\cdot\zeta^{i}.\end{split} \tag{2}\]
For the inner induction step, consider the case \(lv_{i}(C_{v})>1\). Let \(C_{x}\) be the cluster in \(\mathcal{P}_{i}\) containing \(x\); in particular, we have \(lv_{i}(C_{x})=lv_{i}(C_{v})-1\). By the inner induction hypothesis on the level of \(C_{x}\), we have \(\delta(x,f(x))\leq lv_{i}(C_{x})\cdot 2\cdot\zeta^{i}+3\tau\cdot\zeta^{i}\). Using the triangle inequality again, we have:
\[\begin{split}\delta_{G}(v,f(v))&\leq\delta_{G}(v,x_{ v})+\delta_{G}(x_{v},x)+\delta_{G}(x,f(v))\\ &\leq\zeta^{i-1}+\zeta^{i}+\delta_{G}(x,f(x))\leq\zeta^{i-1}+\zeta^ {i}+lv_{i}(C_{x})\cdot 2\cdot\zeta^{i}+3\tau\cdot\zeta^{i}\\ &\leq 2\cdot\zeta^{i}+(lv_{i}(C_{v})-1)\cdot 2\cdot\zeta^{i}+3\tau\cdot \zeta^{i}=lv_{i}(C_{v})\cdot 2\cdot\zeta^{i}+3\tau\cdot\zeta^{i},\end{split} \tag{3}\]
which completes the inner induction step.
Since \(lv_{i}(C_{v})\leq\zeta\cdot\tau\) and as \(\zeta>3\), it follows that \(\delta(v,f(v))\leq 3\tau\cdot\zeta^{i+1}\), which completes the outer induction step. The claim follows.
Now we are ready to prove Lemma 2.2.
**Proof (of Lemma 2.2):** We prove that our algorithm returns a minor of \(G\) that satisfies the SPR conditions. By the description of the algorithm, it is immediate that the subgraph induced by the vertex set \(f^{-1}(t)\) is connected, for each \(t\in K\). Thus, it remains to prove that the minor \(M\) induced by \(f\) is a distance preserving minor of \(G\) with distortion \(O(\tau^{8}\cdot\beta^{5})\).
Consider an arbitrary pair of terminals \(t\) and \(t^{\prime}\). Let \(P\coloneqq(v_{1},v_{2},\ldots,v_{|P|})\) be a shortest path between \(v_{1}\coloneqq t\) and \(v_{|P|}\coloneqq t^{\prime}\). For each subpath \(I\coloneqq(v_{\ell},v_{\ell+1},\ldots v_{r})\) of \(P\), let \(I^{+}\) denote the _extended subpath_\((v_{\ell-1},v_{\ell},v_{\ell+1},\ldots v_{r},v_{r+1})\); we define \(v_{0}\coloneqq v_{1}\) and \(v_{|P|+1}\coloneqq v_{|P|}\) for technical convenience. Partition \(P\) into a set \(\mathcal{I}\) of subpaths called _intervals_ such that for each subpath \(I\in\mathcal{I}\) between \(v_{\ell}\) and \(v_{r}\):
\[\|I\|\leq\eta\cdot\delta_{G}(v_{\ell},K)\leq\|I^{+}\|, \tag{4}\]
where \(\eta\coloneqq\frac{1}{4\zeta}\). It is easy to verify that \(\mathcal{I}\) can be constructed greedily from \(P\).
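The greedy construction of \(\mathcal{I}\) is a single scan of \(P\): close the current interval as soon as adding the next edge would exceed the budget \(\eta\cdot\delta_{G}(v_{\ell},K)\) of its first vertex. A Python sketch, with the distances \(\delta_{G}(\cdot,K)\) assumed to be precomputed (the function name and encoding are ours):

```python
def greedy_intervals(path, edge_len, dist_to_K, eta):
    """Partition `path` (a list of vertices) into intervals satisfying Eq. 4.

    `edge_len[i]` is the length of the edge between path[i] and path[i+1], and
    `dist_to_K[v]` is delta_G(v, K).  Each returned pair (l, r) is an interval
    of vertex indices whose total length is at most eta * dist_to_K[path[l]].
    """
    intervals, l, n = [], 0, len(path)
    while l < n:
        budget = eta * dist_to_K[path[l]]
        r, length = l, 0.0
        # extend while the next edge keeps the interval within the budget
        while r + 1 < n and length + edge_len[r] <= budget:
            length += edge_len[r]
            r += 1
        intervals.append((l, r))
        l = r + 1
    return intervals
```

By construction \(\|I\|\) never exceeds the budget, while appending one more vertex would, so the extended subpath \(I^{+}\) meets the lower bound of Eq. 4 (except possibly at the last interval of \(P\)).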
Consider an arbitrary interval \(I=(v_{\ell},v_{\ell+1},\ldots v_{r})\in\mathcal{I}\). Let \(u\in I\) be a vertex that is assigned in iteration \(i\), and assume no vertex of \(I\) was assigned prior to iteration \(i\). Since \(u\) is assigned in iteration \(i\), \(u\) belongs to a cluster \(C\) in \(\mathcal{C}_{i}\), which is the subset of clusters that contain at least one vertex in \(\mathcal{R}_{i}\), among the \(\beta\)-approximate \((\tau,\zeta^{i-1})\)-scattering partition \(\mathcal{P}_{i}\) computed at the \(i\)th iteration. Hence, by definition, \(C\) has strong diameter at most \(\zeta^{i-1}\) and there exists a vertex \(u^{\prime}\in C\) such that \(\delta_{G}(u^{\prime},K)<\zeta^{i}\), implying that
\[\delta_{G}(u,K)\leq\delta_{G}(u,u^{\prime})+\delta_{G}(u^{\prime},K)<\zeta^{i- 1}+\zeta^{i}<2\zeta^{i}. \tag{5}\]
By Eq. 4 and the triangle inequality,
\[\delta_{G}(v_{\ell},K)\leq\delta_{G}(v_{\ell},u)+\delta_{G}(u,K)\leq\|I\|+ \delta_{G}(u,K)\leq\eta\cdot\delta_{G}(v_{\ell},K)+\delta_{G}(u,K),\]
which together with Eq. 5 and the fact that \(\eta<1/2\) yields
\[\delta_{G}(v_{\ell},K)\leq\frac{\delta_{G}(u,K)}{1-\eta}<\frac{2\zeta^{i}}{1- \eta}<4\zeta^{i}. \tag{6}\]
By Eq. 4 and Eq. 6,
\[\delta_{G}(v_{\ell},v_{r})=\|I\|\leq\eta\cdot\delta_{G}(v_{\ell},K)<\eta\cdot 4 \zeta^{i}=\zeta^{i-1}, \tag{7}\]
where the last inequality holds as \(\eta=\frac{1}{4\zeta}\).
At the beginning of iteration \(i\), all vertices of \(I\) are unassigned, i.e., \(I\) is in \(G_{i}=G[V\setminus V_{i-1}]\), and Eq. 7 yields \(\delta_{G_{i}}(v_{\ell},v_{r})=\delta_{G}(v_{\ell},v_{r})<\zeta^{i-1}\). At the \(i\)th iteration a \(\beta\)-approximate \((\tau,\zeta^{i-1})\)-scattering partition \(\mathcal{P}_{i}\) on \(G_{i}\) is computed, thus there exists a \(\beta\)-approximate \((\tau,\zeta^{i-1})\)-scattered path \(I^{\prime}\) in \(G_{i}\) from \(v_{\ell}\) to \(v_{r}\) that is scattered by at most \(\tau\) clusters in \(\mathcal{P}_{i}\), with \(\|I^{\prime}\|\leq\beta\cdot\zeta^{i-1}\). A path is called a _detour_ if its first and last vertices are assigned to the same terminal. Since vertices in the same cluster will be assigned to the same terminal, at the end of iteration \(i\), \(I^{\prime}\) can be greedily partitioned into at most \(\tau\) detours and \(\tau+1\) subpaths that contain only unassigned vertices; in other words, we can write \(I^{\prime}\coloneqq P_{1}\circ Q_{1}\circ\ldots\circ P_{\rho}\circ Q_{\rho} \circ P_{\rho+1}\), where \(\rho\leq\tau\), \(Q_{1},Q_{2},\ldots Q_{\rho}\) are detours, and each of the (possibly empty) sub-paths \(P_{1},P_{2},\ldots P_{\rho+1}\) contains only unassigned vertices at the end of iteration \(i\).
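The decomposition of \(I^{\prime}\) into detours and unassigned stretches used here (and again for \(I^{\prime\prime}\) below) is a left-to-right scan: from the first assigned vertex of the current suffix, jump to the last vertex assigned to the same terminal. A Python rendering of this scan, with the path given as a vertex list and the partial assignment as a dict whose keys are the assigned vertices (the encoding is ours):

```python
def detour_decomposition(path, f):
    """Greedily split `path` into maximal detours and runs of unassigned vertices.

    A detour is a subpath whose first and last vertices are assigned (by `f`) to
    the same terminal.  Returns a list of ('detour', l, r) and ('unassigned', l, r)
    index ranges covering the path; empty unassigned runs are simply omitted.
    """
    pieces, i, n = [], 0, len(path)
    while i < n:
        if path[i] not in f:
            j = i
            while j + 1 < n and path[j + 1] not in f:
                j += 1
            pieces.append(('unassigned', i, j))
        else:
            t = f[path[i]]
            j = max(k for k in range(i, n) if f.get(path[k]) == t)
            pieces.append(('detour', i, j))
        i = j + 1
    return pieces
```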
Fix an arbitrary index \(j\in[1..\rho+1]\). Let \(a_{j}\) and \(b_{j}\) be the first and last vertices of \(P_{j}\); it is possible that \(a_{j}=b_{j}\). Since \(\|I^{\prime}\|\leq\beta\cdot\zeta^{i-1}\) and as \(\beta<\zeta\), we have
\[\delta_{G}(a_{j},b_{j})\leq\|P_{j}\|\leq\|I^{\prime}\|\leq\beta\cdot\zeta^{i-1 }<\zeta^{i}. \tag{8}\]
At the beginning of iteration \(i+1\), all vertices of \(P_{j}\) are unassigned by definition, hence \(P_{j}\) is in \(G_{i+1}=G[V\setminus V_{i}]\) and by Eq. 8, \(\delta_{G_{i+1}}(a_{j},b_{j})\leq\|P_{j}\|<\zeta^{i}\). At the \((i+1)\)th iteration a \(\beta\)-approximate \((\tau,\zeta^{i})\)-scattering partition \(\mathcal{P}_{i+1}\) on \(G_{i+1}\) is computed, thus there exists a \(\beta\)-approximate \((\tau,\zeta^{i})\)-scattered path \(P_{j}^{\prime}\) in \(G_{i+1}\) from \(a_{j}\) to \(b_{j}\) that is scattered by at most \(\tau\) clusters in \(\mathcal{P}_{i+1}\), with \(\|P_{j}^{\prime}\|\leq\beta\cdot\zeta^{i}\).
Next, consider the path \(I^{\prime\prime}\coloneqq P_{1}^{\prime}\circ Q_{1}\circ\ldots\circ P_{\rho}^{ \prime}\circ Q_{\rho}\circ P_{\rho+1}^{\prime}\). Since \(\|I^{\prime}\|\leq\beta\cdot\zeta^{i-1}\) and the detours \(Q_{1},\ldots,Q_{\rho}\) are subpaths of \(I^{\prime}\), we have

\[\|I^{\prime\prime}\|\leq\|I^{\prime}\|+\sum_{j=1}^{\rho+1}\|P_{j}^{\prime}\|\leq\beta\cdot\zeta^{i-1}+(\tau+1)\beta\cdot\zeta^{i}\leq(\tau+2)\beta\cdot\zeta^{i} \tag{9}\]
Since no vertex in \(I\) (in particular, \(v_{\ell}\)) was assigned prior to iteration \(i\), Claim 2.6 yields \(\delta_{G}(v_{\ell},K)\geq\zeta^{i-1}\). Eq. 4 yields \(\|I^{+}\|\geq\eta\cdot\delta_{G}(v_{\ell},K)\geq\eta\cdot\zeta^{i-1}\), and as \(\eta=\frac{1}{4\zeta}\) we obtain
\[\|I^{\prime\prime}\|\leq(\tau+2)\beta\cdot\zeta^{i}\leq 4\zeta^{2}(\tau+2) \beta\cdot\|I^{+}\|. \tag{10}\]
Next, we argue that all vertices in \(I^{\prime\prime}\) are assigned at the end of iteration \(i+1\). Let \(w\) be an arbitrary vertex in \(I^{\prime\prime}\); by Claim 2.6, it suffices to show that \(\delta_{G}(w,K)<\zeta^{i+1}\). Recall that \(u\) is a vertex of \(I\) that is assigned in iteration \(i\). By Eq. 5, Eq. 7, Eq. 9 and the triangle inequality,
\[\begin{split}\delta_{G}(w,K)&\leq\delta_{G}(v_{ \ell},K)+\delta_{G}(v_{\ell},w)\leq\delta_{G}(v_{\ell},u)+\delta_{G}(u,K)+ \delta_{G}(v_{\ell},w)\\ &\leq\|I\|+\delta_{G}(u,K)+\|I^{\prime\prime}\|<\zeta^{i-1}+2 \zeta^{i}+(\tau+2)\beta\cdot\zeta^{i}<\zeta^{i+1},\end{split} \tag{11}\]
where the last inequality holds since \(\zeta=c\cdot\beta\cdot\tau\) for a sufficiently large constant \(c\).
Hence, every vertex in \(P_{j}^{\prime}\) is assigned by iteration \(i+1\), for every \(j\in[1\,..\,\rho+1]\). Then, each \(P_{j}^{\prime}\) can be greedily partitioned into at most \(\tau\) detours, as before with \(I^{\prime}\), but this time \(I^{\prime\prime}\) has no subpaths of unassigned vertices, since every vertex in \(I^{\prime\prime}\) is assigned by the end of iteration \(i+1\). We have thus shown that \(I^{\prime\prime}\) can be partitioned into at most \(O(\tau^{2})\) detours \(D_{1},D_{2},\ldots,D_{g}\), with \(g=O(\tau^{2})\). For each \(j\in[1\,..\,g]\), let \(x_{j}\) and \(y_{j}\) be the first and last vertices in \(D_{j}\). Because \(I^{\prime\prime}\) is partitioned greedily into _maximal_ detours, one has \(f(y_{j})\neq f(x_{j+1})\) for all \(j\). Observe that there exists an edge between \(f(x_{j})\) and \(f(x_{j+1})\) in the SPR minor \(M\) for each \(j\in[1\,..\,g-1]\), since \(f(x_{j})=f(y_{j})\in K\) and \((y_{j},x_{j+1})\in E\). Consequently, by the triangle inequality, Claim 2.7 and Eq. 10,
\[\begin{split}\delta_{M}(f(v_{\ell}),f(v_{r}))&\leq \sum_{j=1}^{g-1}\delta_{M}(f(x_{j}),f(x_{j+1}))=\sum_{j=1}^{g-1}\delta_{G}(f(x _{j}),f(x_{j+1}))\\ &\leq\sum_{j=1}^{g-1}\bigl{[}\delta_{G}(x_{j},f(x_{j}))+\delta_{ G}(x_{j},x_{j+1})+\delta_{G}(x_{j+1},f(x_{j+1}))\bigr{]}\\ &\leq 2\sum_{j=1}^{g}\delta_{G}(x_{j},f(x_{j}))+\sum_{j=1}^{g-1} \delta_{G}(x_{j},x_{j+1})\leq 2\sum_{j=1}^{g}\delta_{G}(x_{j},f(x_{j}))+\|I^{ \prime\prime}\|\\ &\leq 6\tau\zeta^{2}\sum_{j=1}^{g}\delta_{G}(x_{j},K)+4\zeta^{2}(\tau +2)\beta\cdot\|I^{+}\|.\end{split} \tag{12}\]
For every vertex \(v^{\prime\prime}\in I^{\prime\prime}\), we have
\[\begin{split}\delta_{G}(v^{\prime\prime},K)&\leq \delta_{G}(v^{\prime\prime},v_{\ell})+\delta_{G}(v_{\ell},K)\leq\|I^{\prime \prime}\|+\delta_{G}(v_{\ell},K)\\ &\leq 4\zeta^{2}(\tau+2)\beta\cdot\|I^{+}\|+\frac{\|I^{+}\|}{\eta}\leq 4 \zeta^{2}(\tau+3)\beta\cdot\|I^{+}\|,\end{split} \tag{13}\]
where the penultimate inequality holds by Eq. 4 and Eq. 10 and the last inequality holds since \(\eta=\frac{1}{4\zeta}\). We remark that Eq. 13 also holds for any vertex \(v^{\prime}\in I\), which will be used below for deriving Eq. 17. Hence, for every \(j\in[1\,..\,g]\), \(\delta_{G}(x_{j},K)\leq 4\zeta^{2}(\tau+3)\beta\cdot\|I^{+}\|\); plugging this in Eq. 12 yields:
\[\delta_{M}(f(v_{\ell}),f(v_{r}))\leq 24\zeta^{4}\tau g(\tau+3)\beta\cdot\|I^{+}\|+4 \zeta^{2}(\tau+2)\beta\cdot\|I^{+}\|=O(\zeta^{4}\cdot\tau^{4}\cdot\beta)\cdot \|I^{+}\|, \tag{14}\]

where the last equality uses \(g=O(\tau^{2})\).
Next, we bound the distance between \(t\) and \(t^{\prime}\) in \(M\). So far we fixed an arbitrary interval \(I=(v_{\ell},v_{\ell+1},\ldots v_{r})\in\mathcal{I}\). Writing \(\mathcal{I}=\{I_{1},I_{2},\ldots I_{s}\}\), we have \(\sum_{j=1}^{s}\|I_{j}\|=\|P\|=\delta_{G}(t,t^{\prime})\), hence
\[\sum_{j=1}^{s}\|I_{j}^{+}\|\leq 2\|P\|=2\cdot\delta_{G}(t,t^{\prime}). \tag{15}\]
For each \(I_{j}\), let \(v_{\ell}^{j}\) and \(v_{r}^{j}\) be the first and last vertices of \(I_{j}\). For each \(j\in[1..s-1]\), since \((v_{r}^{j},v_{\ell}^{j+1})\in E\), either \(\big{(}f(v_{r}^{j}),f(v_{\ell}^{j+1})\big{)}\in E(M)\) or \(f(v_{r}^{j})=f(v_{\ell}^{j+1})\), thus we have \(\delta_{M}(f(v_{r}^{j}),f(v_{\ell}^{j+1}))=\delta_{G}(f(v_{r}^{j}),f(v_{\ell} ^{j+1}))\). Hence, using the triangle inequality:
\[\begin{split}\delta_{M}(t,t^{\prime})&\leq\sum_{j= 1}^{s-1}\Big{(}\,\delta_{M}(f(v_{\ell}^{j}),f(v_{r}^{j}))+\delta_{M}(f(v_{r}^{j }),f(v_{\ell}^{j+1}))\Big{)}+\delta_{M}(f(v_{\ell}^{s}),f(v_{r}^{s}))\\ &\leq O(\zeta^{4}\cdot\tau^{4}\cdot\beta)\cdot\sum_{j=1}^{s}\|I_ {j}^{+}\|+\sum_{j=1}^{s-1}\delta_{M}(f(v_{r}^{j}),f(v_{\ell}^{j+1}))\qquad \text{(by Eq. 14)}.\end{split} \tag{16}\]

It remains to bound the connecting terms \(\delta_{M}(f(v_{r}^{j}),f(v_{\ell}^{j+1}))=\delta_{G}(f(v_{r}^{j}),f(v_{\ell}^{j+1}))\). By the triangle inequality, Claim 2.7, and Eq. 13 (applied to the interval vertices \(v_{r}^{j}\in I_{j}\) and \(v_{\ell}^{j+1}\in I_{j+1}\)),

\[\delta_{G}(f(v_{r}^{j}),f(v_{\ell}^{j+1}))\leq\delta_{G}(v_{r}^{j},f(v_{r}^{j}))+\delta_{G}(v_{r}^{j},v_{\ell}^{j+1})+\delta_{G}(v_{\ell}^{j+1},f(v_{\ell}^{j+1}))\leq O(\zeta^{4}\cdot\tau^{2}\cdot\beta)\cdot\big{(}\|I_{j}^{+}\|+\|I_{j+1}^{+}\|\big{)}, \tag{17}\]

where we used that the edge \((v_{r}^{j},v_{\ell}^{j+1})\) belongs to \(I_{j}^{+}\). Plugging Eq. 17 into Eq. 16 and using Eq. 15, we conclude

\[\delta_{M}(t,t^{\prime})\leq O(\zeta^{4}\cdot\tau^{4}\cdot\beta)\cdot\sum_{j=1}^{s}\|I_{j}^{+}\|\leq O(\zeta^{4}\cdot\tau^{4}\cdot\beta)\cdot 2\,\delta_{G}(t,t^{\prime})=O(\tau^{8}\cdot\beta^{5})\cdot\delta_{G}(t,t^{\prime}),\]

as \(\zeta=c\cdot\beta\cdot\tau\). On the other hand, every edge \((t,t^{\prime})\) of \(M\) has weight \(w_{M}(t,t^{\prime})=\delta_{G}(t,t^{\prime})\), so by the triangle inequality \(\delta_{M}(t,t^{\prime})\geq\delta_{G}(t,t^{\prime})\) for every pair of terminals. Hence \(M\) is a solution to the SPR problem on \(G\) with distortion \(O(\tau^{8}\cdot\beta^{5})\), which proves Lemma 2.2.

## Buffered Cop Decomposition

In this section we construct the buffered cop decomposition that we will use to obtain the shortcut partition of Theorem 1.2. A buffered cop decomposition of \(G\) is a _partition tree_ \(\mathcal{T}\): its nodes are vertex-disjoint _supernodes_ that together partition \(V(G)\); each supernode \(\eta\) has a tree _skeleton_ \(T_{\eta}\), and its _domain_ \(\operatorname{dom}(\eta)\) is the subgraph of \(G\) induced by the union of all supernodes in the subtree of \(\mathcal{T}\) rooted at \(\eta\) (see Table 1 for a glossary of this terminology).
**Definition 3.1**.: _A \((\Delta,\gamma,w)\)-buffered cop decomposition for \(G\) is a buffered cop decomposition \(\mathcal{T}\) that satisfies the following properties:_
* [Supernode radius.] _Every supernode_ \(\eta\) _has radius at most_ \(\Delta\)_._
* [Shortest-path skeleton.] _For every supernode_ \(\eta\)_, the skeleton_ \(T_{\eta}\) _is an SSSP tree in_ \(\operatorname{dom}(\eta)\)_, with at most_ \(w\) _leaves._
* [Supernode buffer.] _Let_ \(\eta\) _be a supernode, and let_ \(X\) _be another supernode that is an ancestor of_ \(\eta\) _in the partition tree_ \(\mathcal{T}\)_. Then either_ \(\eta\) _and_ \(X\) _are adjacent in_ \(G\)_, or for every vertex_ \(v\) _in_ \(\operatorname{dom}(\eta)\)_, we have_ \(\delta_{\operatorname{dom}(X)}(v,X)\geq\gamma\)_._
Our definition of buffered cop decomposition is different from the cop decomposition in prior work [1]. Recall that the cop decomposition in prior work is a tree decomposition, where each bag of the tree decomposition can be partitioned into \(r-1\) supernodes. Here each node of the partition tree \(\mathcal{T}\) in our definition is exactly one supernode. We find that this alternative definition helps to simplify the presentation of our buffer-creating algorithm significantly.
Given a partition tree \(\mathcal{T}\), we construct another tree \(\hat{\mathcal{T}}\) from (and isomorphic as a graph to) \(\mathcal{T}\) as follows: for each supernode \(\eta\in\mathcal{T}\), we create a corresponding node \(B_{\eta}\), called the _bag of_\(\eta\), containing \(\eta\) and all the ancestor supernodes adjacent to \(\eta\) in \(G\). Intuitively \(B_{\eta}\) corresponds to the existing supernodes that \(\eta\) can "see". Notice that \(B_{\eta}\) is the highest bag that contains \(\eta\). By identifying each supernode with its vertex set, each bag in \(\hat{\mathcal{T}}\) naturally corresponds to a set of vertices in \(G\). We call \(\hat{\mathcal{T}}\) the _expansion_ of \(\mathcal{T}\). The expansion \(\hat{\mathcal{T}}\) of \(\mathcal{T}\) is the cop decomposition in the sense of prior work [1] discussed above, and that's why we call nodes of \(\hat{\mathcal{T}}\)_bags_. While it is not immediately clear that \(\hat{\mathcal{T}}\) is a tree decomposition based on its definition, our construction guarantees that \(\hat{\mathcal{T}}\) indeed satisfies all the properties of a tree decomposition.
* [Tree decomposition.] There exists an expansion \(\hat{\mathcal{T}}\) of \(\mathcal{T}\) such that (1) \(\hat{\mathcal{T}}\) is a tree decomposition of \(G\), and (2) every bag of \(\hat{\mathcal{T}}\) contains at most \(w\) supernodes.
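Computing the expansion is straightforward given the partition tree: the bag of \(\eta\) consists of \(\eta\) together with every ancestor supernode adjacent to \(\eta\) in \(G\). Below is a Python sketch under an ad-hoc encoding of ours (parent pointers for \(\mathcal{T}\), vertex sets for supernodes, an adjacency map for \(G\)); for a \((\Delta,\gamma,w)\)-buffered cop decomposition every returned bag has at most \(w\) supernodes.

```python
def expansion_bags(parent, supernode_vertices, adj):
    """Compute the bag B_eta of every supernode in the partition tree.

    `parent`: supernode id -> parent supernode id (None for the root);
    `supernode_vertices`: supernode id -> set of its vertices;
    `adj`: vertex -> set of neighboring vertices in G.
    B_eta contains eta and every ancestor supernode adjacent to eta in G.
    """
    def adjacent(a, b):
        return any(u in adj[v] for v in supernode_vertices[a]
                   for u in supernode_vertices[b])

    bags = {}
    for eta in parent:
        bag, anc = {eta}, parent[eta]
        while anc is not None:                # walk up the partition tree
            if adjacent(eta, anc):
                bag.add(anc)
            anc = parent[anc]
        bags[eta] = bag
    return bags
```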
| _notation_ | _meaning_ |
|---|---|
| \(\mathcal{T}\) | _partition tree_: nodes of \(\mathcal{T}\) are supernodes |
| \(\hat{\mathcal{T}}\) | _expansion_ of \(\mathcal{T}\) |
| \(\eta\) | _supernode_: induced subgraph on its vertices (one may identify \(\eta\) with these vertices); \(\eta\) contains \(T_{\eta}\) initially and may only grow |
| \(\operatorname{dom}(\eta)\) | _domain of \(\eta\)_: subgraph induced by the union of all supernodes in the subtree of \(\mathcal{T}\) rooted at \(\eta\) |
| \(T_{\eta}\) | _tree skeleton_: SSSP tree in \(\operatorname{dom}(\eta)\) (remains fixed) |
| \(\mathcal{S}\) | _set of supernodes_: \(\mathcal{S}\) starts empty and grows, and each supernode may grow |
| witness \(v_{S}\) | vertex adjacent to some vertex in supernode \(S\) |
| \(H\) sees \(S\) | subgraph \(H\) has a witness vertex \(v_{S}\) to \(S\) |
| \(\mathcal{S}\rvert_{H}\) | set of supernodes in \(\mathcal{S}\) that subgraph \(H\) can see |
| \(\operatorname{dom}_{\mathcal{S}}(\eta)\) | _domain of \(\eta\) with respect to \(\mathcal{S}\)_: may shrink; the final \(\operatorname{dom}_{\mathcal{S}}(\eta)\) is \(\operatorname{dom}(\eta)\) |
| \(\partial H^{\prime}_{\downarrow X}\) | _boundary vertices_: vertices in \(G\setminus H^{\prime}\) that are (1) adjacent to \(H^{\prime}\), and (2) in \(\operatorname{dom}_{\mathcal{S}}(X)\) |
| \(\mathcal{N}H^{\prime}_{X}\) | _buffer vertices_: unassigned vertices in \(H^{\prime}\) within distance (in \(\operatorname{dom}_{\mathcal{S}}(X)\)) \(\Delta/r\) of \(\partial H^{\prime}_{\downarrow X}\) |
| \(\eta_{\mathcal{S}}\) | vertices assigned to \(\eta\) by the current \(\mathcal{S}\) |

Table 1: Glossary for the construction of buffered cop decompositions.
We say that such a buffered cop decomposition \(\mathcal{T}\) has _radius_\(\Delta\), _buffer_\(\gamma\), and _width_\(w\). See Figure 3 for an illustration, and Table 1 for a glossary of terminologies for the buffered cop decompositions.
Given a \(K_{r}\)-minor-free graph and a parameter \(\Delta\), we will construct a buffered cop decomposition with radius \(\Delta\), buffer \(\Delta/r\), and width \(r-1\). We emphasize that the most interesting property is the supernode buffer property, which says that if a supernode \(X\) gets "cut off" from part of the graph, there is a "buffer region" of at least \(\gamma\) between \(X\) and that part of the graph. More precisely, let \(G^{\prime}\) be the subgraph of \(G\) induced by vertices in descendant supernodes of \(X\) that are not adjacent to \(X\). (That is, \(X\) is "cut off" from \(G^{\prime}\) by the descendant supernodes that are adjacent to \(X\).) The supernode buffer property in Definition 3.1 implies that \(\delta_{\text{dom}(X)}(v,X)\geq\gamma\) for every \(v\in V(G^{\prime})\). The construction of [1] produces a cop decomposition with the other three properties; that is, a buffered cop decomposition with radius \(\Delta\) and width \(r-1\). A delicate argument shows that their construction achieves something similar to a supernode buffer of \(\Delta/r\)_in expectation_.
Review of the construction of [1].The construction of [1] iteratively builds a collection \(\mathcal{S}\) of supernodes of a graph \(G\). At each point in the algorithm, they process a subgraph \(H\) of \(G\) by creating a new supernode \(\eta\) in \(H\), and then recursing on the connected components of \(H\setminus\eta\).
To describe how to create each new supernode \(\eta\), we introduce some terminology. A subgraph \(H\)_sees_ a supernode \(S\) if (1) \(S\) is disjoint from \(H\), and (2) there exists some _witness vertex_\(v_{S}\) in \(H\) that is adjacent to a vertex in \(S\). For any subgraph \(H\), let \(\mathcal{S}|_{H}\) be the set of supernodes that \(H\) sees. The algorithm of [1] guarantees that, if \(G\) excludes a \(K_{r}\)-minor, the subgraph \(H\) (at any point in the algorithm) sees at most \(r-2\) previously-created supernodes. Their algorithm has the following steps:
1. _Initialize a new supernode_\(\eta\). Choose an arbitrary vertex \(v\) in \(H\). Build a shortest-path tree \(T_{\eta}\) in \(H\) that connects \(v\) to an arbitrary witness vertex for every supernode seen by \(H\). Initialize supernode \(\eta\gets T_{\eta}\) with skeleton \(T_{\eta}\).
2. _Expand \(\eta\), to guarantee the supernode buffer property in expectation._ Let \(\gamma\) be a random number between \(0\) and \(C\cdot\Delta\) (for some constant \(C\)) drawn from a truncated exponential distribution with rate \(O(r/\Delta)\), meaning that \(\mathbb{E}[\gamma]=O(\Delta/r)\). Assign every vertex within distance \(\gamma\) of \(T_{\eta}\) to be a part of supernode \(\eta\) (where distances are with respect to \(H\)).
3. _Recurse._
Figure 3: Left: A _non-planar_ graph \(G\) with a partition into supernodes. Notice that the purple cluster is connected and goes behind the dark blue supernode. Middle: The partition tree \(\mathcal{T}\) of a buffered cop decomposition for \(G\). The supernode buffer property guarantees that any path between the brown and pink supernodes is of length at least \(\gamma\). Right: The expansion of \(\mathcal{T}\), where each bag contains at most \(5\) supernodes.
The subgraph \(H\) is initially selected to be \(G\). The buffered cop decomposition is implicit from the recursion tree. They show that at any point in the algorithm, the set of supernodes seen by \(H\) forms a model of a complete graph (see Lemma 3.5); this proves the bag width property. The tree decomposition, radius, and shortest-path skeleton properties are all straightforward to verify. The proof of the "expected" supernode buffer property is quite complicated, and requires dealing with the fact that \(\gamma\) is drawn from a truncated exponential distribution rather than a normal exponential distribution.
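For concreteness, the random expansion radius of [1] can be sampled by inverting the CDF of the truncated exponential distribution; in the sketch below the rate and the truncation constant `C` are placeholders for the constants chosen in [1], not their exact values.

```python
import math
import random

def truncated_exponential(rate, cap):
    """Sample Exp(rate) conditioned on the outcome being at most `cap` (inverse CDF)."""
    u = random.random()
    return -math.log(1.0 - u * (1.0 - math.exp(-rate * cap))) / rate

def sample_expansion_radius(Delta, r, C=2.0):
    """Radius gamma used to expand a new supernode; E[gamma] = O(Delta / r)."""
    return truncated_exponential(rate=r / Delta, cap=C * Delta)
```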
Here we remark that, while their buffer guarantee is in expectation, the nature of their buffer property is somewhat stronger than ours: whenever a new skeleton \(T_{\eta}\) is added and cuts off the shortest path from a vertex \(v\) to another skeleton \(X\) (which could still be adjacent to \(T_{\eta}\)), the distance from \(v\) to \(T_{\eta}\) is smaller than the distance from \(v\) to \(X\) by \(O(\Delta/r)\) in expectation. Here we only guarantee the distance reduction from \(v\) to \(X\) when \(X\) and \(T_{\eta}\) are not adjacent.
### Construction
We modify the algorithm of Abraham _et al._[1] to obtain the (deterministic) supernode buffer property. Throughout our algorithm, we maintain the global variables \(\mathcal{S}\), indicating the set of supernodes, and \(\mathcal{T}\), indicating the partition tree. At any moment during the execution of our algorithm, some vertices of graph \(G\) will already be assigned to supernodes, and some vertices will be unassigned. At the end of the execution, all vertices will be assigned by \(\mathcal{S}\). At each stage of the algorithm, we (1) select some unassigned vertices to become a new supernode \(\eta\), (2) assign some unassigned vertices to existing supernodes (_not necessarily_\(\eta\)) to guarantee the supernode buffer property, and (3) recurse on connected components induced by the remaining unassigned vertices.
Our main procedure is \(\textsc{BuildTree}(\mathcal{S},H)\), which takes as input a connected subgraph \(H\) induced by unassigned vertices in \(G\). It assigns vertices in \(H\) to supernodes in \(\mathcal{S}\), and returns a buffered cop decomposition. Figure 4 gives an example; Figure 5 gives the complete pseudocode. The algorithm consists of the following steps:
1. _Initialize a new supernode._ Choose an arbitrary vertex \(v\) in \(H\). Build a shortest path tree \(T_{\eta}\) in \(H\) that connects \(v\) to an arbitrary witness vertex for every supernode seen by \(H\). Initialize supernode \(\eta\) to be the subgraph of \(G\) induced by all vertices of \(T_{\eta}\); set \(T_{\eta}\) to be the skeleton of \(\eta\); and add \(\eta\) to \(\mathcal{S}\). Define the domain of \(\eta\) with respect to \(\mathcal{S}\), \(\operatorname{dom}_{\mathcal{S}}(\eta)\), to be the set of all vertices in \(H\) that are not assigned (by \(\mathcal{S}\)) to any supernode above \(\eta\) in the partition tree \(\mathcal{T}\); initially \(\operatorname{dom}_{\mathcal{S}}(\eta)=H\), and at the end of the algorithm it will hold that \(\operatorname{dom}_{\mathcal{S}}(\eta)=\operatorname{dom}(\eta)\). (Notice that \(\eta\) will grow and \(\operatorname{dom}_{\mathcal{S}}(\eta)\) will shrink over the course of the algorithm as \(\mathcal{S}\) changes, though \(T_{\eta}\) will remain unchanged. See Claim 3.4(3).)
2. _Assign vertices to existing supernodes, to guarantee the supernode buffer property._ For each connected component \(H^{\prime}\) of \(H\setminus\eta\), consider the set of supernodes \(\mathcal{X}\) that _can_ be seen by \(H\) but _cannot_ be seen by \(H^{\prime}\). These supernodes are "cut off" from \(H^{\prime}\). In this step, we identify every _currently unassigned_ vertex that could be close to a cut-off supernode, and assign those vertices to some existing supernode (possibly to the newly-created \(\eta\)). In more detail: For each \(X\) in \(\mathcal{X}\), define the _boundary vertices_ \(\partial H^{\prime}_{\downarrow X}\) to be the set of vertices in \(G\setminus H^{\prime}\) that are (1) adjacent to \(H^{\prime}\), and (2) in \(\operatorname{dom}_{\mathcal{S}}(X)\). Our algorithm will maintain the invariant that all vertices adjacent to \(H^{\prime}\) (in particular, all vertices in \(\partial H^{\prime}_{\downarrow X}\)) have already been assigned to a supernode by \(\mathcal{S}\); see Invariant 3.2 for the formal statement. Define the set of _buffer vertices_ \(\mathcal{N}H^{\prime}_{X}\) to be the set of unassigned vertices in \(H^{\prime}\) within distance \(\Delta/r\) of \(\partial H^{\prime}_{\downarrow X}\), where distance is measured
with respect to \(\operatorname{dom}_{\mathcal{S}}(X)\). Assign each vertex in \(\mathcal{N}H^{\prime}_{X}\) to the same supernode as a closest vertex in \(\partial H^{\prime}_{\downarrow X}\), breaking ties consistently and measuring distance with respect to \(\operatorname{dom}_{\mathcal{S}}(X)\); notice that "the supernode of a vertex in \(\partial H^{\prime}_{\downarrow X}\)" is well-defined because of Invariant 3.2. This procedure may cut off \(H^{\prime}\) from another supernode, even if \(H^{\prime}\) may originally have been able to see that supernode (even \(\eta\) itself could become cut off at this point); and it may break \(H^{\prime}\) into multiple connected components. Repeat this assignment process on each connected component until we have dealt with all supernodes that have been cut off. In Lemma 3.10, we show that this procedure guarantees that the supernode buffer property holds. It will suffice to show that, in this step, we assign every vertex in \(H^{\prime}\) that could become close to some cut-off supernode \(X\), _even if \(X\) grows in the future_. Crucially, in this step we assign every vertex in \(H^{\prime}\) that is within \(\Delta/r\) distance of the boundary \(\partial H^{\prime}_{\downarrow X}\). It would _not_ suffice to just assign vertices within \(\Delta/r\) distance of \(X\) in the current step, because \(X\) could potentially grow in the future, and the distance from a vertex in \(H^{\prime}\) to \(X\) could shrink. We show that even if \(X\) expands in the future, it remains disjoint from \(H^{\prime}\); further, we show that every path in \(\operatorname{dom}(X)\) from a vertex in \(H^{\prime}\) to a vertex outside of \(H\) passes through some boundary vertex in \(\partial H^{\prime}_{\downarrow X}\). Thus, every vertex in \(H^{\prime}\) is closer to \(\partial H^{\prime}_{\downarrow X}\) than to \(X\) (both of which always lie outside \(H^{\prime}\)), even if \(X\) expands in the future. This means that the vertices of \(\mathcal{N}H^{\prime}_{X}\) form a buffer of \(\Delta/r\) between \(X\) and the unassigned vertices of \(H^{\prime}\). Note that we assign each vertex in \(\mathcal{N}H^{\prime}_{X}\) to some supernode that is not \(X\) (as \(H^{\prime}\) does not see \(X\), no vertex in \(\mathcal{N}H^{\prime}_{X}\) is in \(X\)). This procedure is called \(\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H^{\prime})\); see the pseudocode in Figure 6 and the sketch in code following this list. It takes as input a subgraph \(H^{\prime}\) and a list \(\mathcal{X}\) of supernodes that have been cut off from \(H^{\prime}\). It assigns some vertices in \(H^{\prime}\) to existing supernodes in \(\mathcal{S}\). Footnote 6: We assume the weight of every edge in \(G\) is nonzero.
3. _Recurse._ For each connected component \(H^{\prime}\) in the graph induced by unassigned vertices, recursively call \(\textsc{BuildTree}(\mathcal{S},H^{\prime})\).
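As a complement to the pseudocode in Figures 5 and 6, the following Python sketch shows the core of one \(\textsc{GrowBuffer}\) step for a single cut-off supernode \(X\): compute the boundary vertices \(\partial H^{\prime}_{\downarrow X}\), run a multi-source Dijkstra from them inside \(\operatorname{dom}_{\mathcal{S}}(X)\), and assign every unassigned vertex of \(H^{\prime}\) within distance \(\Delta/r\) to the supernode of its nearest boundary vertex. The encoding and names are ours, and the surrounding control flow (iterating over newly cut-off supernodes and recursing on components) is omitted.

```python
import heapq

def grow_buffer_step(adj, H_prime, dom_X, assignment, buffer_width):
    """Assign the buffer vertices of H' for one cut-off supernode X.

    `adj`: vertex -> {neighbor: weight}; `H_prime`: set of unassigned vertices of
    the component H'; `dom_X`: vertex set of dom_S(X); `assignment`: maps every
    already-assigned vertex to its supernode id; `buffer_width` plays the role of
    Delta / r.  `assignment` is extended in place; newly assigned vertices are returned.
    """
    # Boundary vertices: outside H', adjacent to H', and inside dom_S(X).
    boundary = {u for v in H_prime for u in adj[v]
                if u not in H_prime and u in dom_X}
    # Multi-source Dijkstra from the boundary, restricted to dom_S(X).
    dist = {b: 0.0 for b in boundary}
    nearest = {b: b for b in boundary}        # nearest boundary vertex (ties by pop order)
    pq = [(0.0, b) for b in boundary]
    heapq.heapify(pq)
    while pq:
        d, x = heapq.heappop(pq)
        if d > dist.get(x, float("inf")) or d > buffer_width:
            continue
        for y, w in adj[x].items():
            if y in dom_X and d + w < dist.get(y, float("inf")):
                dist[y], nearest[y] = d + w, nearest[x]
                heapq.heappush(pq, (d + w, y))
    newly = set()
    for v in H_prime:
        if v in dist and dist[v] <= buffer_width and v not in assignment:
            assignment[v] = assignment[nearest[v]]   # supernode of the closest boundary vertex
            newly.add(v)
    return newly
```

Crucially, distances are measured to the boundary \(\partial H^{\prime}_{\downarrow X}\) rather than to \(X\) itself, which is what makes the buffer robust to future growth of \(X\).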
To initialize, let \(\mathcal{S}\leftarrow\emptyset\), and call \(\textsc{BuildTree}(\mathcal{S},G)\) to produce a buffered cop decomposition \(\mathcal{T}\) for \(G\). Throughout the algorithm, we maintain the following invariant:
**Invariant 3.2**.: _Suppose that call \(C\), whether it is \(\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\) or \(\textsc{BuildTree}(\mathcal{S},H)\), is made at some point in the algorithm. At the time call \(C\) is made, every vertex in \(H\) is unassigned, and every vertex in \(G\setminus H\) that is adjacent to \(H\) is already assigned to some supernode._
(We say that a vertex \(v\) of graph \(G\) is _adjacent_ to a subgraph \(H\) of \(G\) if (1) \(v\) is in \(G\setminus H\), and (2) there is an edge in \(G\) between \(v\) and some vertex in \(H\).) The invariant is clearly true when we make the initial call \(\textsc{BuildTree}(\emptyset,G)\), and it is preserved throughout the algorithm: a recursive call to \(\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\) or \(\textsc{BuildTree}(\mathcal{S},H)\) is only made if \(H\) is a maximal connected component induced by unassigned vertices. Because we maintain this invariant, the procedure \(\textsc{GrowBuffer}\) is well-defined.
A remark on the global variable.The procedure \(\textsc{BuildTree}(\mathcal{S},H)\) is recursive: it initializes a supernode, calls \(\textsc{GrowBuffer}\), and then recursively calls \(\textsc{BuildTree}(\mathcal{S},H^{\prime}_{i})\) on disjoint subgraphs \(H^{\prime}_{i}\). Each of these recursive calls modifies the same global variable \(\mathcal{S}\). However, the modifications to \(\mathcal{S}\) that are made by each call \(C_{i}\coloneqq\textsc{BuildTree}(\mathcal{S},H^{\prime}_{i})\)_do not_ affect the execution of any _sibling_ calls \(C_{j}\coloneqq\textsc{BuildTree}(\mathcal{S},H^{\prime}_{j})\). Only the ancestors of \(C_{i}\) in the recursion tree affect the execution of \(C_{i}\).
Before proving this observation, we introduce the following important terminology. We say that a call \(C\coloneqq\textsc{BuildTree}(\mathcal{S},H)\) occurs _above_ (resp. _below_) a call \(C^{\prime}\coloneqq\textsc{BuildTree}(\mathcal{S},H^{\prime})\) if \(C\) is an ancestor (resp. descendant) of \(C^{\prime}\) in the recursion tree. If two calls to \(\textsc{BuildTree}\) are in different branches of the recursion tree, then they are neither above nor below each other. Intuitively, "\(C\) is above \(C^{\prime}\)" means that \(C\) was a _relevant_ call that happened _before_ \(C^{\prime}\). Similarly, we say "supernode \(\eta_{1}\) is initialized _above_ supernode \(\eta_{2}\)" if the instance of \(\textsc{BuildTree}\) that initialized \(\eta_{1}\) occurred above the instance of \(\textsc{BuildTree}\) that initialized \(\eta_{2}\). Finally, we say that "supernode \(\eta\) is initialized _above_ a call \(C\coloneqq\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\)" if the call to \(\textsc{BuildTree}\) that initialized \(\eta\) is above _or is the same as_ the call to \(\textsc{BuildTree}\) that caused \(C\) to be called. Note that the algorithm never initializes a new supernode during a \(\textsc{GrowBuffer}\) call.
With this terminology, we can state a stronger version of Invariant 3.2.
**Invariant 3.3**.: _Suppose that call \(C\), whether it is \(\operatorname{\textsc{GrowBuffer}}(\mathcal{S},\mathcal{X},H)\) or \(\operatorname{\textsc{BuildTree}}(\mathcal{S},H)\), is made at some point in the algorithm. At the time call \(C\) is made, every vertex in \(H\) is unassigned, and every vertex in \(G\setminus H\) that is adjacent to \(H\) was already assigned during a call above \(C\) to some supernode initialized above \(C\)._
Indeed, when some call \(\tilde{C}\) (whether it is \(\operatorname{\textsc{GrowBuffer}}\) or BuildTree) makes call \(C\) on some subgraph \(H\), the graph \(H\) is a maximal connected component of unassigned vertices -- and crucially, this connected component is with respect to the assignments \(\mathcal{S}\)_before any calls to \(\operatorname{\textsc{GrowBuffer}}\) or BuildTree are made by \(\tilde{C}\)_. Thus, every vertex adjacent to \(H\) has been assigned to some existing supernode (before any sibling calls of \(C\) are made), meaning it was initialized above \(C\).
We now prove that the execution of a call \(C\), whether it is \(\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\) or \(\textsc{BuildTree}(\mathcal{S},H)\), depends only on \(H\) and the vertices assigned to supernodes above \(C\). Indeed, a call to \(\textsc{BuildTree}(\mathcal{S},H)\) uses \(\mathcal{S}\) to determine the supernodes seen by \(H\), which is determined by the vertices adjacent to \(H\) (and thus, by Invariant 3.3, by calls above \(C\) and not by sibling calls). A call to \(\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\) uses \(\mathcal{S}\) in two places. First, it uses \(\mathcal{S}\) to determine \(\partial H_{\downarrow X}\) (where \(X\) denotes the supernode selected from \(\mathcal{X}\) to be processed during the execution of \(C\)), which is a subset of the vertices adjacent to \(H\) (and thus determined by calls above \(C\)). Second, it uses \(\mathcal{S}\) to determine, for every vertex \(v\) in \(H\), a closest vertex in \(\partial H_{\downarrow X}\) with respect to \(\operatorname{dom}_{\mathcal{S}}(X)\). Notice that a shortest path \(P\) in \(\operatorname{dom}_{\mathcal{S}}(X)\) from \(v\) to \(\partial H_{\downarrow X}\) is contained in \(H\cup\partial H_{\downarrow X}\): the first vertex along \(P\) that leaves \(H\) is in \(\partial H_{\downarrow X}\), and thus is an endpoint of \(P\). This means that \(P\) is determined by the graph \(H\cup\partial H_{\downarrow X}\) (and as argued earlier, Invariant 3.3 implies that \(\partial H_{\downarrow X}\) is determined by calls above \(C\)).
Figure 4: Example of one iteration of \(\textsc{BuildTree}(H)\). From left to right: (1) The graph \(G\) before an iteration, with \(H\) being a connected component of unassigned vertices. (2) Pick an arbitrary vertex \(v\) in \(H\) and compute \(T_{\eta}\) by taking shortest paths from \(v\) to \(X_{1}\) and \(X_{2}\). (3) Supernode \(X_{1}\) is cut off from \(H^{\prime}\), so find the boundary vertices \(\partial H^{\prime}_{\downarrow X_{1}}\) and assign the buffer vertices \(\mathcal{N}H^{\prime}_{X_{1}}\) to existing supernodes.
This shows that calls only affect each other if one is above the other; sibling calls do not affect each other. We will not explicitly use this fact in our proofs, instead depending solely on Invariant 3.3 -- but it is nevertheless important intuition.
### Analysis: Basic properties
Let \(\mathcal{T}\) be the tree produced by \(\textsc{BuildTree}(\emptyset,G)\). We will show that if \(G\) excludes a \(K_{r}\)-minor, then \(\mathcal{T}\) is a \((\Delta,\Delta/r,r-1)\)-buffered cop decomposition for \(G\). In this section, we prove a collection of basic properties about \(\mathcal{T}\), including the shortest-path skeleton and tree decomposition properties. The proofs of the supernode buffer and supernode radius properties are deferred to the next two sections.
Notation for supernodes changing over time.When we write "supernode \(\eta\)" without any subscript or description, we refer to the supernode in \(\mathcal{T}\), at the end of the execution of the entire algorithm. In some proofs, we will need to refer to the global variable \(\mathcal{S}\) at a specific point in the algorithm's execution. We adopt the following convention: If we say a call \(C^{\prime}\coloneqq\textsc{BuildTree}(\mathcal{S}^{\prime},H^{\prime})\) is made during the algorithm, we use the variable \(\mathcal{S}^{\prime}\) to denote the global variable at the _start_ of call \(C^{\prime}\). We use the notation "_supernode \(\eta_{\mathcal{S}}\)_" to refer to the vertices of \(G\) that have already been assigned to \(\eta\) by \(\mathcal{S}^{\prime}\). It does _not_ refer to those vertices that will be assigned to \(\eta\) in the future. The phrase "the set of supernodes _in \(\mathcal{S}\)_" refers to the set of supernodes \(\eta_{\mathcal{S}^{\prime}}\) that are assigned by \(\mathcal{S}^{\prime}\).
Terminology for \(\textsc{GrowBuffer}\).Suppose that some call \(C\coloneqq\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\) occurs during the algorithm. This call begins by selecting an arbitrary supernode \(X\) from \(\mathcal{X}\); we say that \(X\) is the supernode _processed during \(C\)_. The call \(C\) then defines the set \(\mathcal{N}H_{X}\); we say that every point in \(\mathcal{N}H_{X}\) is _assigned during \(C\)_.
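For concreteness, the following Python sketch (ours, not part of the formal construction; the authoritative pseudocode is Figure 6) illustrates the assignment step that \(\textsc{GrowBuffer}\) performs for the processed supernode \(X\): every vertex of \(H\) within distance \(\Delta/r\) of \(\partial H_{|X}\) (with respect to \(\operatorname{dom}_{\mathcal{S}}(X)\)) is assigned to the supernode of its closest boundary vertex. The graph representation, the dictionary `boundary_supernode`, and the simple tie-breaking rule are assumptions made only so the example runs.

```python
import heapq

def grow_buffer_step(H, boundary_supernode, dom_X_adj, buffer_width):
    """Sketch of one assignment step of GrowBuffer for a processed supernode X.

    H                  -- set of currently unassigned vertices (the subgraph H)
    boundary_supernode -- dict: boundary vertex in dH_|X -> supernode it already belongs to
    dom_X_adj          -- adjacency of dom_S(X): u -> list of (v, edge weight)
    buffer_width       -- the value Delta / r
    Returns a dict mapping each newly assigned vertex of H to a supernode.
    """
    # Multi-source Dijkstra from the boundary dH_|X inside dom_S(X).  Each vertex
    # remembers which boundary vertex (hence which supernode) it is closest to; a
    # fixed insertion order stands in for the consistent tie-breaking the
    # construction requires.
    dist, owner, pq = {}, {}, []
    for b in sorted(boundary_supernode):
        dist[b], owner[b] = 0.0, boundary_supernode[b]
        heapq.heappush(pq, (0.0, b))
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u] or d > buffer_width:
            continue                      # vertices beyond Delta/r are not needed
        for v, w in dom_X_adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], owner[v] = nd, owner[u]
                heapq.heappush(pq, (nd, v))
    # NH_X: vertices of H within Delta/r of the boundary, each assigned to the
    # supernode of its closest boundary vertex.
    return {v: owner[v] for v in H if v in dist and dist[v] <= buffer_width}
```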
**Claim 3.4**.: _The following properties hold._
1. _For every_ \(\mathcal{S}\) _that appears in the algorithm, every supernode_ \(\eta_{\mathcal{S}}\) _induces a connected subgraph of_ \(G\)_._
2. _Suppose that call_ \(C\)_, whether it is_ \(\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\) _or_ \(\textsc{BuildTree}(\mathcal{S},H)\)_, is made at some point in the algorithm. Over the course of the algorithm, every vertex in_ \(H\) _is assigned either to a supernode initialized by_ \(C\)_, or to a supernode initialized below_ \(C\)_, or to a supernode in_ \(\mathcal{S}\) _that_ \(H\) _sees (at the time_ \(C\) _is called)._

Figure 5: Pseudocode for procedure \(\textsc{BuildTree}(\mathcal{S},H)\)
3. _Supernode \(\eta_{\mathcal{S}}\) will grow and \(\operatorname{dom}_{\mathcal{S}}(\eta)\) will shrink over the course of the algorithm as \(\mathcal{S}\) changes._ Further, after the algorithm terminates, we have \(\operatorname{dom}_{\mathcal{S}}(\eta)=\operatorname{dom}(\eta)\).
**Proof: (1)** When supernode \(\eta\) is initialized, it is connected (because the skeleton \(T_{\eta}\) is connected). Whenever a vertex \(v\) is assigned to \(\eta\) by a call to \(\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\), we claim that connectivity is preserved. Let \(X\) denote the supernode processed during \(\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\), let \(\partial H_{|X}\) denote the set of boundary vertices, and let \(\mathcal{N}H_{X}\) denote the vertices assigned during \(\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\). Let \(P\) be a shortest path in \(\operatorname{dom}_{\mathcal{S}}(X)\) from \(v\) to the closest point in \(\partial H_{|X}\). Every vertex in \(P\) is closer to \(\partial H_{|X}\) than \(v\). Further, we claim that every vertex in \(P\) (excluding the endpoint, which is a boundary vertex) is in \(H\). Indeed, every vertex in \(P\) is in \(\operatorname{dom}_{\mathcal{S}}(X)\), so the first vertex along \(P\) that leaves \(H\) is in \(\partial H_{|X}\), and thus is the endpoint of \(P\). Thus, every point in \(P\) (excluding the endpoint) is in \(\mathcal{N}H_{X}\). As every vertex in \(\mathcal{N}H_{X}\) is assigned according to the closest vertex in \(\partial H_{|X}\) (and ties are broken consistently), every vertex in path \(P\) is assigned to the same supernode \(\eta\), and the connectivity of \(\eta\) is preserved.
**(2)** Let \(v\) be a vertex in \(H\) assigned to supernode \(\eta\). Suppose that \(\eta\subseteq H\) (and suppose that \(C\) itself did not initialize \(\eta\)). In this case, we claim that \(\eta\) was initialized below \(C\). This follows from the fact that, for any call to \(\textsc{BuildTree}\), the children calls to \(\textsc{BuildTree}\) in the recursion tree operate on pairwise disjoint subgraphs. This implies that \(\eta\) is initialized either above or below \(C\); as \(H\) consists of unassigned nodes at the time \(C\) is called, \(\eta\) is initialized below \(C\).
Now suppose that \(\eta\) is not contained in \(H\). As \(\eta\) is connected (Item 1), there is a path \(P\) in \(\eta\) between \(v\) and a vertex outside of \(H\). By Invariant 3.2, the first vertex along \(P\) that leaves \(H\) has already been assigned in \(\mathcal{S}\), at the time \(C\) is called. Thus, \(H\) sees \(\eta_{\mathcal{S}}\).
**(3)** The fact that supernodes only grow over time is immediate from the algorithm. Let \(H\) denote the domain of \(\eta\) at the time \(\eta\) is initialized. By definition, \(\operatorname{dom}_{\mathcal{S}}(\eta)\) is the set of vertices in \(H\) that are _not_ assigned to supernodes above \(\eta\) (in \(\mathcal{S}\)); as supernodes only grow over time, \(\operatorname{dom}_{\mathcal{S}}(\eta)\) only shrinks. It remains to show that the final \(\operatorname{dom}_{\mathcal{S}}(\eta)\) is equal to \(\operatorname{dom}(\eta)\). Indeed, by Item 2 (applied to the call to \(\textsc{BuildTree}\) that initialized \(\eta\)), every vertex in \(H\) is assigned to a supernode initialized either above or below \(\eta\) (or is assigned to \(\eta\) itself). A supernode is below \(\eta\) in the partition tree \(\mathcal{T}\) if and only if it is initialized below \(\eta\). Thus, after the algorithm terminates, \(\operatorname{dom}_{\mathcal{S}}(\eta)\) is the set of vertices that are in \(\eta\) or in supernodes below \(\eta\), which is precisely \(\operatorname{dom}(\eta)\). \(\Box\)

Figure 6: Pseudocode for procedure \(\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\)
**Lemma 3.5**.: _Suppose that BuildTree\((\mathcal{S},H)\) is called during the algorithm. Let \(\mathcal{S}|_{H}\) be the set of supernodes in \(\mathcal{S}\) seen by \(H\). Then \(\mathcal{S}|_{H}\) contains at most \(r-2\) supernodes; furthermore, the supernodes in \(\mathcal{S}|_{H}\) are pairwise adjacent._
**Proof:** We first prove that the supernodes in \(\mathcal{S}|_{H}\) are pairwise adjacent. Consider a pair \((X,Y)\) of supernodes in \(\mathcal{S}|_{H}\), and assume without loss of generality\({}^{7}\) that \(Y\) is initialized below \(X\). Let \(x\) and \(y\) be the vertices chosen to be the roots of the skeletons of \(X\) and \(Y\), respectively. Since \(H\) sees both \(X_{\mathcal{S}}\) and \(Y_{\mathcal{S}}\), and as \(X_{\mathcal{S}}\) and \(Y_{\mathcal{S}}\) are connected individually, there exists a path \(P\) from \(x\) to \(y\) containing only vertices in \(H\), \(X_{\mathcal{S}}\), and \(Y_{\mathcal{S}}\).
Footnote 7: By Invariant 3.3, both \(X\) and \(Y\) are initialized above BuildTree\((\mathcal{S},H)\), so one of \(X\) or \(Y\) was initialized below the other.
Consider the time just before \(Y\) is initialized. Let \(\tilde{\mathcal{S}}\) denote the assignments of vertices at that time, and let \(\tilde{H}\) be the connected component of the subgraph induced by the unassigned vertices that contains \(y\) at that time. Observe that, although \(X_{\tilde{\mathcal{S}}}\) may expand later, all vertices in \(P\) are either unassigned or belong to \(X_{\tilde{\mathcal{S}}}\). Hence, the prefix of \(P\) from \(y\) up to the first vertex of \(X_{\tilde{\mathcal{S}}}\) is a path from \(y\) to \(X_{\tilde{\mathcal{S}}}\) whose internal vertices are all unassigned (and thus belong to \(\tilde{H}\)), meaning that \(\tilde{H}\) sees \(X_{\tilde{\mathcal{S}}}\). Thus, when \(Y\) is initialized, it must be adjacent to \(X_{\tilde{\mathcal{S}}}\) by construction. By Claim 3.4(3), supernodes only grow, and thus \(X\) and \(Y\) must be adjacent.
By Claim 3.4(1), every supernode is connected. Thus, the above claim implies that the supernodes in \(\mathcal{S}|_{H}\), together with the connected subgraph \(H\), form a model of a \(K_{k+1}\)-minor, where \(k=|\mathcal{S}|_{H}|\) (the supernodes are pairwise adjacent, and each is adjacent to \(H\) because \(H\) sees it). As \(G\) excludes \(K_{r}\)-minors, we have \(|\mathcal{S}|_{H}|\leq r-2\). \(\Box\)
**Lemma 3.6** (Shortest-path skeleton property).: _Every supernode \(\eta\) has a skeleton that is an SSSP tree in \(\operatorname{dom}(\eta)\) with at most \(r-2\) leaves._
**Proof:** Notice that, throughout the course of the algorithm, \(\operatorname{dom}_{\mathcal{S}}(\eta)\) may shrink but it never expands by Claim 3.4(3). As \(T_{\eta}\) is an SSSP tree in its original domain \(\operatorname{dom}_{\mathcal{S}}(\eta)\), it is also an SSSP tree in \(\operatorname{dom}(\eta)\). (We remark that \(\eta\) only grows over the course of the algorithm, so \(T_{\eta}\) is a subgraph of \(\eta\), and thus is a subgraph of \(\operatorname{dom}(\eta)\)). By Lemma 3.5, tree \(T_{\eta}\) has at most \(r-2\) leaves. \(\Box\)
**Claim 3.7**.: _Let \(C\) be a call, whether to \(\textsc{BuildTree}(\mathcal{S},H)\) or to \(\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\), and let \(\eta\) be a supernode initialized above \(C\). Let \(H^{\prime}\) be a subgraph of \(H\). If \(H^{\prime}\) is adjacent to a vertex in \(\eta\) (after the algorithm terminates), then \(H\) sees \(\eta_{\mathcal{S}}\)._
**Proof:** Let \(v\) be a vertex in \(\eta\setminus H^{\prime}\) adjacent to \(H^{\prime}\). As \(\eta\) is connected, there is a path \(P\) from \(v\) to \(T_{\eta}\) contained in \(\{v\}\cup\eta\). As \(\eta\) is initialized above \(C\) (and, at the time \(C\) is called, subgraph \(H\) contains only unassigned vertices), the skeleton \(T_{\eta}\) is disjoint from \(H\). Thus, \(P\) starts at a vertex in \(H\) and ends at a vertex outside of \(H\). Consider the first vertex along \(P\) that leaves \(H\). By Invariant 3.2, this vertex had already been assigned (to \(\eta\)) at the time \(C\) was called. We conclude that \(H\) sees \(\eta_{\mathcal{S}}\). \(\Box\)
**Lemma 3.8** (Tree decomposition property).: \(\hat{\mathcal{T}}\) _satisfies the tree decomposition property._
**Proof:** First, note that Lemma 3.5 directly implies that each supernode sees at most \(r-2\) ancestor supernodes, meaning that each bag in \(\hat{\mathcal{T}}\) contains at most \(r-1\) supernodes. Next, we prove that \(\hat{\mathcal{T}}\) is a tree decomposition.
**(1)** The union of all vertices in all bags of \(\hat{\mathcal{T}}\) is \(V\) by construction.
**(2)** Let \((x,y)\) be an edge in \(G\). We need to show that there is a bag in \(\hat{\mathcal{T}}\) that contains both \(x\) and \(y\). Let \(X\) be the supernode containing \(x\), and let \(Y\) be the supernode containing \(y\). We will prove that either \(X=Y\) or one of them is an ancestor of the other (recall that, by definition, the bag of \(X\) contains all supernodes above \(X\) that are adjacent to \(X\)).
Assume that \(X\neq Y\). We claim that \(X\) and \(Y\) are in an ancestor-descendant relationship in \(\mathcal{T}\). Otherwise, consider the lowest common ancestor \(\eta\) of \(X\) and \(Y\), initialized by a call \(C\coloneqq\textsc{BuildTree}(\mathcal{S},H)\). As \(X\) and \(Y\) are in different subtrees of \(\eta\), vertices \(x\) and \(y\) are both unassigned and belong to different connected components of unassigned vertices, at the time when \(C\) begins to recursively make calls to BuildTree. But this is impossible, as there is an edge between \(x\) and \(y\).
**(3)** We prove that for any supernode \(\eta\), if there are two bags \(B_{X}\) and \(B_{Y}\) containing \(\eta\), every bag in the path between them in \(\hat{\mathcal{T}}\) contains \(\eta\).
Let \(P\) be the path between \(B_{X}\) and \(B_{Y}\) in \(\hat{\mathcal{T}}\). Assume that there exists some bag in \(P\) not containing \(\eta\). Observe that the bag \(B_{\eta}\) is a common ancestor of both \(B_{X}\) and \(B_{Y}\). Consider two paths: \(P_{X}\) from \(B_{\eta}\) to \(B_{X}\) and \(P_{Y}\) from \(B_{\eta}\) to \(B_{Y}\). One of them, say \(P_{X}\), must have a bag that does not contain \(\eta\). Let \(B_{\eta^{\prime}}\) be the lowest bag in \(P_{X}\) such that \(B_{\eta^{\prime}}\) does not contain \(\eta\), and let \(B_{\eta^{\prime\prime}}\) be the child of \(B_{\eta^{\prime}}\) in \(P_{X}\). Notice that \(B_{\eta^{\prime\prime}}\) contains \(\eta\). We remark that \(B_{\eta^{\prime}}\) is a descendant of \(B_{\eta}\). From the construction of \(\hat{\mathcal{T}}\), we get that supernode \(\eta^{\prime\prime}\) is adjacent to \(\eta\) but supernode \(\eta^{\prime}\) is not. Suppose that \(\eta^{\prime}\) is initialized during the call \(C^{\prime}\coloneqq\textsc{BuildTree}(\mathcal{S}^{\prime},H^{\prime})\), and \(\eta^{\prime\prime}\) is initialized during the call \(C^{\prime\prime}\coloneqq\textsc{BuildTree}(\mathcal{S}^{\prime\prime},H^{\prime\prime})\). As \(\eta\) is initialized above \(C^{\prime}\), and \(H^{\prime\prime}\subseteq H^{\prime}\), and \(H^{\prime\prime}\) is adjacent to \(\eta\), Claim 3.7 implies that \(H^{\prime}\) sees \(\eta_{\mathcal{S}^{\prime}}\) at the time \(C^{\prime}\) is called. Thus, by construction \(\eta^{\prime}\) is adjacent to \(\eta\), a contradiction. \(\Box\)
### Analysis: Supernode buffer property
The following observation is almost immediate from the construction. It says that, if some subgraph \(H^{\prime}\) is cut off from an old supernode \(X\), there was some call to GrowBuffer that processed \(X\).
**Observation 3.9**.: _Suppose that call \(C^{\prime}\coloneqq\textsc{BuildTree}(\mathcal{S}^{\prime},H^{\prime})\) is made during the algorithm. If \(X\) is a supernode initialized above \(C^{\prime}\), and if \(H^{\prime}\) does not see \(X_{\mathcal{S}^{\prime}}\) at the time \(C^{\prime}\) is called, then there is some call \(C\coloneqq\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\) such that (1) \(H\supseteq H^{\prime}\), and in particular \(C\) is above \(C^{\prime}\), (2) \(H\) does not see \(X_{\mathcal{S}}\), and (3) \(X\) was processed during \(C\)._
To see why the observation holds, denote by \(\tilde{C}\coloneqq\textsc{BuildTree}(\tilde{\mathcal{S}},\tilde{H})\) the lowest call above \(C^{\prime}\) such that \(\tilde{H}\) sees \(X_{\tilde{\mathcal{S}}}\) (or, if no such call exists, let \(\tilde{C}\) be the call that initializes \(X\)). After making some calls to GrowBuffer, the call \(\tilde{C}\) must recurse on some subgraph that does not see \(X\). Since the algorithm calls GrowBuffer whenever a supernode gets cut off, there must be some (recursive) call to GrowBuffer caused by \(\tilde{C}\) that processed \(X\), as claimed by the observation. We shall use this observation to prove the supernode buffer property.
**Lemma 3.10** (Supernode buffer property).: _Let \(X\) and \(\eta\) be supernodes, with \(\eta\) initialized below \(X\). If \(\eta\) is not adjacent to \(X\) in \(G\), then for every vertex \(v\) in \(\mathrm{dom}(\eta)\), we have \(\delta_{\mathrm{dom}(X)}(v,X)>\Delta/r\)._
**Proof:** We prove the following claim by induction (starting immediately below the call that initialized \(X\), and working downward in the recursion tree):
Let \(C^{\prime}\coloneqq\textsc{BuildTree}(\mathcal{S}^{\prime},H^{\prime})\) be a call that is below the call that initialized \(X\). Either \(H^{\prime}\) sees \(X_{\mathcal{S}^{\prime}}\), or \(\delta_{\mathrm{dom}(X)}(v,X)>\Delta/r\) for every vertex \(v\) in \(H^{\prime}\).
We emphasize that the guarantee \(\delta_{\operatorname{dom}(X)}(v,X)>\Delta/r\) refers to the _final_\(X\), after all expansions are made. This suffices to prove the lemma. Indeed, the call to \(\textsc{BuildTree}(\mathcal{S}^{\prime},H^{\prime})\) that initialized \(\eta\) comes below the call that initialized \(X\), and \(\operatorname{dom}(\eta)\subseteq H^{\prime}\) by Claim 3.4(3); thus, either \(H^{\prime}\) sees \(X_{\mathcal{S}^{\prime}}\) (in which case \(\eta\) is adjacent to \(X\) by definition of \(T_{\eta}\)), or every point \(v\) in \(\operatorname{dom}(\eta)\) satisfies \(\delta_{\operatorname{dom}(X)}(v,X)>\Delta/r\).
Inductive step.Suppose that \(H^{\prime}\) does not see \(X_{\mathcal{S}^{\prime}}\). As we are in the inductive step, we may assume that the parent of \(C^{\prime}\) in the recursion tree, \(\tilde{C}\coloneqq\textsc{BuildTree}(\tilde{\mathcal{S}},\tilde{H})\), is below the call that initialized \(X\). If \(\tilde{H}\) does not see \(X_{\tilde{\mathcal{S}}}\), then we are done: since graph \(\tilde{H}\) is a supergraph of \(H^{\prime}\), the inductive hypothesis implies that \(\delta_{\operatorname{dom}(X)}(v,X)>\Delta/r\) for every vertex \(v\) in \(H^{\prime}\).
The interesting case occurs when \(\tilde{H}\) sees \(X_{\tilde{\mathcal{S}}}\), but \(H^{\prime}\) does not see \(X_{\mathcal{S}^{\prime}}\): that is, \(X\) becomes "cut off" from \(H^{\prime}\) some time in between. In this case, by Observation 3.9 there is some call \(C\coloneqq\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\) above \(C^{\prime}\) that processes \(X\), with \(H\supseteq H^{\prime}\) and \(H\) does not see \(X_{\mathcal{S}}\). Consider any vertex \(v\) in \(H\) such that \(\delta_{\operatorname{dom}(X)}(v,X)\leq\Delta/r\) (where, again, we emphasize that \(X\) refers to the _final_\(X\), after all expansions). If no such vertex exists, we are done.
We argue that _vertex \(v\) is at most \(\Delta/r\) away from \(\partial H_{|X}\) with respect to \(\operatorname{dom}_{\mathcal{S}}(X)\)_; see Figure 7. Let \(P\) be a shortest path from \(v\) to \(X\) in \(\operatorname{dom}(X)\), where by assumption \(\|P\|\leq\Delta/r\). As the domain of \(X\) only shrinks over time (Claim 3.4(3)), path \(P\) is in \(\operatorname{dom}_{\mathcal{S}}(X)\).\({}^{8}\) By Claim 3.4(2) applied to the call \(C\), every vertex in \(H\) is assigned either to a supernode initialized below \(C\), or to a supernode in \(\mathcal{S}\) that \(H\) sees. Because \(X\) already existed in \(\mathcal{S}\) (and thus \(X\) is not initialized below \(C\)) and \(H\) does not see \(X_{\mathcal{S}}\), the other endpoint of \(P\), which is eventually assigned to \(X\), cannot be in \(H\). So \(P\) passes through some vertex \(x\) outside of \(H\) that is adjacent to \(H\). As \(P\) is contained in \(\operatorname{dom}_{\mathcal{S}}(X)\), vertex \(x\) is in \(\partial H_{|X}\). Thus, \(\delta_{\operatorname{dom}_{\mathcal{S}}(X)}(v,\partial H_{|X})\leq\|P\|\leq\Delta/r\).
Footnote 8: However, notice that at the time when call \(C\) was made, \(X\) might not have grown into its final shape and \(X_{\mathcal{S}}\) could be much smaller; in particular, \(P\) may not be a path from \(v\) to \(X_{\mathcal{S}}\) and the distance from \(v\) to \(X_{\mathcal{S}}\) can be larger than \(\Delta/r\).
This means that \(v\) is assigned to some supernode in \(\mathcal{S}\) by the \(\textsc{GrowBuffer}\) algorithm. Recall that call \(C^{\prime}\) is below call \(C\) by Observation 3.9; thus, as calls are only made on connected components of unassigned vertices, we conclude that \(H^{\prime}\) is a subgraph of \(H\) that includes only unassigned vertices. Thus, vertex \(v\) is not in \(H^{\prime}\). This completes the proof of the induction step.
Base case.In the base case, the parent of \(C^{\prime}\) in the recursion tree is the call \(\tilde{C}\coloneqq\textsc{BuildTree}(\tilde{\mathcal{S}},\tilde{H})\) that initialized \(X\). If \(H^{\prime}\) sees \(X_{\mathcal{S}^{\prime}}\), then we are done. If \(H^{\prime}\) does not see \(X_{\mathcal{S}^{\prime}}\), then we are in the "interesting case" described above (except that here \(\tilde{H}\) doesn't see \(X_{\tilde{\mathcal{S}}}\)): by Observation 3.9, there is some call \(C\coloneqq\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\) above \(C^{\prime}\) and below \(\tilde{C}\), during which \(X\) is processed. The argument for this case is identical to the one in the inductive case.
Figure 7: From left to right: (1) During call \(\tilde{C}\), subgraph \(\tilde{H}\) sees supernode \(X\). The grey supernode is _above_\(X\), and is not in \(\operatorname{dom}_{\tilde{\mathcal{S}}}(X)\). (2–3) During call \(C\), supernode \(X\) is cut off from \(H\), and every point in \(\mathcal{N}H_{|X}\) (i.e. every point close to \(\partial H_{|X}\)) is assigned. (4) For every subgraph \(H^{\prime}\) of \(H\), every path in \(\operatorname{dom}(X)\) from \(H^{\prime}\) to \(X\) passes through \(\partial H^{\prime}_{|X}\).
### Analysis: Supernode radius property
We now prove that every supernode \(\eta\) satisfies the radius property. To this end we prove three claims:
* Every time a supernode is cut off from a subgraph, the radius of \(\eta\) expands by at most \(\Delta/r\) (Claim 3.11).
* There are at most \(r-2\) supernodes that can cause \(\eta\) to expand (Claim 3.13).
* Each of the \(r-2\) supernodes can cause \(\eta\) to expand at most once (Claim 3.12).
Combining these three claims in an inductive argument shows that the total expansion of \(\eta\) is bounded by \(\Delta\) (Lemma 3.14).
**Claim 3.11**.: _Suppose that \(\nu\) is assigned to a supernode \(\eta\) during a call \(C\coloneqq\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\). Let \(X\) denote the supernode processed during \(C\), and let \(\partial H_{\mid X}\) denote the boundary vertices. Let \(\tilde{\nu}\) be the closest vertex in \(\partial H_{\mid X}\) to \(\nu\) (with respect to \(\mathrm{dom}_{\mathcal{S}}(X)\)). Then \(\delta_{\eta}(\nu,\tilde{\nu})\leq\Delta/r\) (with respect to the final \(\eta\))._
**Proof:** Let \(\mathcal{N}H_{X}\) denote the set of points assigned during \(C\). Let \(P\) be a shortest path between \(\nu\) and \(\tilde{\nu}\) in \(\mathrm{dom}_{\mathcal{S}}(X)\). Every vertex in \(P\) (other than \(\tilde{\nu}\)) is in \(\mathcal{N}H_{X}\). Because we assign every vertex in \(\mathcal{N}H_{X}\) according to the closest vertex in \(\partial H_{\mid X}\), every vertex in \(P\) is assigned to \(\eta\). Further, \(P\) has length at most \(\Delta/r\), because every vertex in \(\mathcal{N}H_{X}\) is within distance \(\Delta/r\) of some vertex in \(\partial H_{\mid X}\) (with respect to \(\mathrm{dom}_{\mathcal{S}}(X)\)), and \(\tilde{\nu}\) is the closest vertex in \(\partial H_{\mid X}\) to \(\nu\). Thus, \(\delta_{\eta}(\nu,\tilde{\nu})\leq\Delta/r\). \(\Box\)
We next show that each supernode seen by supernode \(\eta\) may cause \(\eta\) to be expanded at most once: if supernode \(\tilde{X}\) causes \(\eta\) to expand because \(\tilde{X}\) is cut off, supernode \(\tilde{X}\) cannot be cut off _again_ later on in the recursion. Later (in Claim 3.13) we will show that only supernodes seen by \(\eta\) may cause it to expand. Let \(\tilde{X}\) be a supernode, and let \(H\) be a subgraph. We say that \(\tilde{X}\) is _spent_ with respect to \(H\) if there exists some call \(\textsc{GrowBuffer}(\tilde{\mathcal{S}},\tilde{\mathcal{X}},\tilde{H})\) where \(\tilde{H}\supseteq H\), and \(\tilde{X}\) is processed during the call. In other words, \(\tilde{X}\) is cut off from \(\tilde{H}\) and \(H\) (even as \(\tilde{X}\) grows), and it has already been "dealt with" during that previous call to \(\textsc{GrowBuffer}\).
**Claim 3.12**.: _Suppose that call \(\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\) is made during the algorithm. If supernode \(\tilde{X}\) is spent with respect to \(H\), then \(\tilde{X}\) is not in \(\mathcal{X}\)._
**Proof:** By definition of "spent", there is some call \(\tilde{C}\coloneqq\textsc{GrowBuffer}(\tilde{\mathcal{S}},\tilde{\mathcal{X}}, \tilde{H})\) where \(\tilde{H}\supseteq H\), and \(\tilde{X}\) is processed during \(\tilde{C}\). Notice that, because \(\tilde{X}\) is in \(\tilde{\mathcal{X}}\), subgraph \(\tilde{H}\) does not see \(\tilde{X}_{\tilde{\mathcal{S}}}\). Observe that:
* Call \(\tilde{C}\) makes some calls to \(\textsc{GrowBuffer}(\mathcal{S}^{\prime},\mathcal{X}^{\prime},H^{\prime})\). For each of these calls made by \(\tilde{C}\), notice that the set \(\mathcal{X}^{\prime}\) contains only supernodes in \(\tilde{\mathcal{X}}\setminus\{\tilde{X}\}\) (the "leftover" ones from \(\tilde{C}\)) or supernodes in \(\tilde{\mathcal{S}}\) that can be seen by \(\tilde{H}\) but not by \(H^{\prime}\) (those newly added ones). In particular, \(\mathcal{X}^{\prime}\) does not contain \(\tilde{X}\). Further, \(H^{\prime}\) _does not see \(\tilde{X}_{\mathcal{S}^{\prime}}\)_. This follows from Claim 3.7: if \(H^{\prime}\) could see \(\tilde{X}_{\mathcal{S}^{\prime}}\), then (because \(\tilde{X}\) was initialized above \(\tilde{C}\), and \(H^{\prime}\subseteq\tilde{H}\)), Claim 3.7 would imply that \(\tilde{H}\) could see \(\tilde{X}_{\tilde{\mathcal{S}}}\), a contradiction. An inductive argument shows that, for every call to \(\textsc{GrowBuffer}(\mathcal{S}^{\prime},\mathcal{X}^{\prime},H^{\prime})\) made recursively as a result of \(\tilde{C}\), the set \(\mathcal{X}^{\prime}\) does not contain \(\tilde{X}\).
* After the recursion from \(\tilde{C}\) terminates, the algorithm calls BuildTree on subgraphs of \(\tilde{H}\), which may recursively result in more calls to BuildTree. Let \(\textsc{BuildTree}(\mathcal{S}^{\prime},H^{\prime})\) be one of these calls, where \(H^{\prime}\subseteq\tilde{H}\). We claim that \(H^{\prime}\)_does not see \(\tilde{X}_{\mathcal{S}^{\prime}}\)_. As in the previous bullet point, this follows from Claim 3.7: if \(H^{\prime}\) could see \(\tilde{X}_{\mathcal{S}^{\prime}}\), then Claim 3.7 would imply that \(\tilde{H}\) could see \(\tilde{X}_{\tilde{\mathcal{S}}}\), a contradiction. This means that whenever \(\textsc{BuildTree}(\mathcal{S}^{\prime},H^{\prime})\) makes a call \(\textsc{GrowBuffer}(\mathcal{S}^{\prime\prime},\mathcal{X}^{\prime\prime},H^{ \prime\prime})\), the set \(\mathcal{X}^{\prime\prime}\) does not include \(\tilde{X}\); indeed, the set \(\mathcal{X}^{\prime\prime}\) only includes supernodes that are seen by \(H^{\prime}\).
It follows from these two cases that, for every call to \(\textsc{GrowBuffer}(\mathcal{S}^{\prime},\mathcal{X}^{\prime},H^{\prime})\) with \(H^{\prime}\subseteq\tilde{H}\), the supernode \(\tilde{X}\) is not in \(\mathcal{X}^{\prime}\). In particular, the call \(\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\) satisfies \(H\subseteq\tilde{H}\), and so \(\tilde{X}\) is not in \(\mathcal{X}\). \(\Box\)
The following claim, in conjunction with Lemma 3.5, implies that for any supernode \(\eta\), at most \(r-2\) supernodes can cause it to expand. We crucially rely on the fact that when supernode \(X\) is cut off, we only expand supernodes initialized _below_\(X\); we do this because we only need to guarantee the supernode buffer property with respect to \(\mathrm{dom}(X)\).
**Claim 3.13**.: _Suppose that \(v\) is assigned to supernode \(\eta\) during a call \(C\coloneqq\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\), and let \(X\) be the supernode in \(\mathcal{X}\) processed during \(C\). Suppose that \(\eta\) was initialized by a call \(\hat{C}\coloneqq\textsc{BuildTree}(\hat{\mathcal{S}},\hat{H})\). Then \(\hat{H}\) sees \(X_{\hat{\mathcal{S}}}\)._
**Proof:** We first show that \(\eta\) is initialized below \(X\). As \(v\) is assigned to supernode \(\eta\) during \(C\), there is some vertex in \(\partial H_{|X}\subseteq\operatorname{dom}_{\mathcal{S}}(X)\) that was assigned to \(\eta\) (in \(\mathcal{S}\)). Suppose that \(X\) was initialized during the call \(\tilde{C}\coloneqq\textsc{BuildTree}(\tilde{\mathcal{S}},\tilde{H})\), and notice that \(\operatorname{dom}_{\mathcal{S}}(X)\subseteq\tilde{H}\). Applying Claim 3.4(2) to call \(\tilde{C}\) shows that every vertex in \(\tilde{H}\) is (eventually) assigned to a supernode above \(X\), or to \(X\), or to a supernode below \(X\). By definition, \(\operatorname{dom}_{\mathcal{S}}(X)\) contains the vertices in \(\tilde{H}\) that are _not_ assigned to supernodes above \(X\) (in \(\mathcal{S}\)). Thus, every vertex in \(\operatorname{dom}_{\mathcal{S}}(X)\) is assigned to \(X\) or to a supernode below \(X\), and so either \(\eta=X\) or \(\eta\) is below \(X\). It cannot be that \(\eta=X\): indeed, \(H\) sees \(\eta_{\mathcal{S}}\) because some vertex of \(\partial H_{|X}\) is assigned to \(\eta\), while \(H\) does not see \(X_{\mathcal{S}}\) because \(X\) is in \(\mathcal{X}\). So \(\eta\) is below \(X\). We also observe that \(C\) is below \(\hat{C}\): because \(H\) is adjacent to a vertex in \(\eta\), Invariant 3.3 implies that \(\eta\) was initialized above \(C\). Thus, \(\hat{H}\supseteq H\). Now, for the sake of contradiction suppose that \(\hat{H}\) does not see \(X_{\hat{\mathcal{S}}}\). As \(\eta\) is below \(X\) and \(\hat{H}\) does not see \(X_{\hat{\mathcal{S}}}\), Observation 3.9 implies that there is some call \(\bar{C}\coloneqq\textsc{GrowBuffer}(\bar{\mathcal{S}},\bar{\mathcal{X}},\bar{H})\) such that (1) \(\bar{H}\supseteq\hat{H}\), and (2) \(X\) is processed by \(\bar{C}\). As \(\hat{H}\supseteq H\), this means that \(X\) is spent with respect to \(H\), and Claim 3.12 implies that \(X\) is not in \(\mathcal{X}\), a contradiction. \(\Box\)
**Lemma 3.14**.: _Every supernode \(\eta\) has radius at most \(\Delta\) with respect to its skeleton \(T_{\eta}\)._
**Proof:** Let \(\textsc{BuildTree}(\hat{\mathcal{S}},\hat{H})\) be the call that initialized \(\eta\), and let \(\hat{\mathcal{S}}|_{\hat{H}}\) denote the set of supernodes in \(\hat{\mathcal{S}}\) that can be seen by \(\hat{H}\). In other words, by Claim 3.13, \(\hat{\mathcal{S}}|_{\hat{H}}\) is the set of supernodes that can cause \(\eta\) to expand. We prove the following statement by induction on \(k\): _Let \(v\) be a vertex assigned to \(\eta\) during a call \(C\coloneqq\textsc{GrowBuffer}(\mathcal{S},\mathcal{X},H)\). If there are at most \(k\) supernodes in \(\hat{\mathcal{S}}|_{\hat{H}}\) that are spent with respect to \(H\), then \(\delta_{\eta}(v,T_{\eta})\leq(k+1)\cdot\Delta/r\)._ Let \(X\) be the supernode processed during the call \(C\), and let \(\tilde{v}\) be the closest vertex to \(v\) in \(\partial H_{|X}\) (with respect to \(\operatorname{dom}_{\mathcal{S}}(X)\)), as defined in Claim 3.11; note that \(\tilde{v}\) was assigned to \(\eta\) in \(\mathcal{S}\), since \(v\) is assigned according to \(\tilde{v}\).

**Inductive step (\(k>0\)).** If \(\tilde{v}\) is in \(T_{\eta}\), then Claim 3.11 implies that \(v\) is within distance \(\Delta/r\) of \(T_{\eta}\) (in the final subgraph \(\eta\)), satisfying the claim. Otherwise, \(\tilde{v}\) was assigned to \(\eta\) by some call \(\tilde{C}\coloneqq\textsc{GrowBuffer}(\tilde{\mathcal{S}},\tilde{\mathcal{X}},\tilde{H})\). We now show that the number of supernodes in \(\hat{\mathcal{S}}|_{\hat{H}}\) spent with respect to \(H\) is strictly greater than the number spent with respect to \(\tilde{H}\), aiming to apply the induction hypothesis to \(\tilde{v}\) and the call \(\tilde{C}\).
* _Every supernode in \(\hat{\mathcal{S}}|_{\hat{H}}\) that is spent with respect to \(\tilde{H}\) is also spent with respect to \(H\)._ Indeed, for every such supernode \(Y\), there is some call to \(\textsc{GrowBuffer}\) on a subgraph containing \(\tilde{H}\) during which \(Y\) is processed; as \(\tilde{H}\supseteq H\) (argued below), that same call also serves as a witness that \(Y\) is spent with respect to \(H\).

Now, consider the supernode \(\tilde{X}\) that was processed during \(\tilde{C}\). Observe that:
* _\(\tilde{X}\) is not spent with respect to \(\tilde{H}\)._ This follows from Claim 3.12 and the fact that \(\tilde{X}\in\tilde{\mathcal{X}}\).
* _\(\tilde{X}\) is spent with respect to \(H\)._ First we argue that \(\tilde{H}\supsetneq H\). Observe that call \(\tilde{C}\) is above call \(C\), because \(\tilde{v}\) was already assigned when \(C\) is called. Thus, vertices that are unassigned when \(C\) is called are also unassigned when \(\tilde{C}\) was called. In particular, when \(\tilde{C}\) was called, every vertex in \(H\cup\{\tilde{v}\}\) is unassigned. As \(\tilde{H}\) is a maximal connected component of unassigned vertices and \(\tilde{v}\) is adjacent to \(H\) by definition, \(\tilde{H}\supsetneq H\). The existence of call \(\tilde{C}\) (which processes \(\tilde{X}\)), together with the fact that \(\tilde{H}\supsetneq H\), implies that \(\tilde{X}\) is spent with respect to \(H\).
Moreover, Claim 3.13 implies that \(\tilde{X}\) is in \(\hat{\mathcal{S}}|_{\hat{H}}\), because a vertex was assigned to \(\eta\) during a call to \(\textsc{GrowBuffer}\) in which \(\tilde{X}\) is processed. We conclude that at least one more supernode in \(\hat{\mathcal{S}}|_{\hat{H}}\) is spent with respect to \(H\) than with respect to \(\tilde{H}\). Thus, we can apply the inductive hypothesis to conclude that \(\tilde{v}\) is within distance \(k\cdot\Delta/r\) of \(T_{\eta}\). By Claim 3.11, \(v\) is within distance \(\Delta/r\) of \(\tilde{v}\), and so \(v\) is within distance \((k+1)\cdot\Delta/r\) of \(T_{\eta}\).
**Base case (\(k=0\))**: In this case, we claim \(\tilde{v}\) must be in \(T_{\eta}\), and so \(\delta_{\eta}(v,T_{\eta})\leq\Delta/r\). Suppose otherwise. Then \(\tilde{v}\) is assigned by a call to \(\textsc{GrowBuffer}\), and the argument above implies that there is at least one supernode in \(\hat{\mathcal{S}}|_{\hat{H}}\) that is spent with respect to \(H\). This contradicts our assumption that \(k=0\).
By Lemma 3.5, there are at most \(r-2\) supernodes in \(\hat{\mathcal{S}}|_{\hat{H}}\); applying the claim with \(k\leq r-2\) (and noting that vertices of \(T_{\eta}\) are at distance \(0\)) shows that every vertex in \(\eta\) is within distance \((r-1)\cdot\Delta/r\leq\Delta\) of \(T_{\eta}\). \(\Box\)
We conclude:
**Theorem 3.15**.: _Let \(G\) be a \(K_{r}\)-minor-free graph, and let \(\Delta\) be a positive number. Then \(G\) admits a \((\Delta,\Delta/r,r-1)\)-buffered cop decomposition._
## 4 Shortcut partition from buffered cop decomposition
We first rephrase the definition of shortcut partition. Let \(G\) be a graph, let \(\varepsilon\) be a number in \((0,1)\), and let \(\mathcal{C}\) be a partition of the vertices of \(G\) into clusters of strong diameter \(\varepsilon\cdot\operatorname{diam}(G)\). Recall that the cluster graph \(\tilde{G}\) is obtained from the original graph \(G\) by contracting each cluster in \(\mathcal{C}\) into a single supernode. Let \(P\) be an arbitrary path in \(G\). We define \(\operatorname{cost}_{\mathcal{C}}(P)\) to be the minimum hop-length of a path \(\tilde{P}\) in \(\tilde{G}\), where (1) \(\tilde{P}\) is a path between the clusters containing the endpoints of \(P\), and (2) \(\tilde{P}\) only contains clusters with nontrivial intersection with \(P\). When \(\mathcal{C}\) is clear from context, we omit the subscript and simply write \(\operatorname{cost}(P)\). Notice that \(\mathcal{C}\) is an \((\varepsilon,h)\)-shortcut partition if, for every path \(P\) in \(G\), we have \(\operatorname{cost}(P)\leq\varepsilon h\cdot\left\lceil\frac{\|P\|}{\varepsilon\cdot\operatorname{diam}(G)}\right\rceil\); indeed, for any pair \(u,v\) of vertices in \(G\), applying this condition for any _shortest_ path \(P\) between \(u\) and \(v\) yields \(\operatorname{cost}(P)\leq\varepsilon h\cdot\left\lceil\frac{\|P\|}{\varepsilon\cdot\operatorname{diam}(G)}\right\rceil=\varepsilon h\cdot\left\lceil\frac{\delta_{G}(u,v)}{\varepsilon\cdot\operatorname{diam}(G)}\right\rceil\), as required.
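For concreteness, the quantity \(\operatorname{cost}_{\mathcal{C}}(P)\) can be computed by a breadth-first search in the cluster graph restricted to the clusters that \(P\) intersects. The following Python sketch is ours and purely illustrative; the representations of \(G\) (an adjacency dictionary) and of the partition \(\mathcal{C}\) (a vertex-to-cluster map) are assumptions.

```python
from collections import deque

def cost(P, cluster_of, adj):
    """cost_C(P): minimum hop-length of a path in the cluster graph between the
    clusters of the endpoints of P, using only clusters that intersect P."""
    allowed = {cluster_of[v] for v in P}              # clusters intersecting P
    src, dst = cluster_of[P[0]], cluster_of[P[-1]]
    # Build the cluster graph restricted to `allowed` by scanning the edges of G.
    nbrs = {c: set() for c in allowed}
    for u in adj:
        for v in adj[u]:
            cu, cv = cluster_of[u], cluster_of[v]
            if cu != cv and cu in allowed and cv in allowed:
                nbrs[cu].add(cv)
                nbrs[cv].add(cu)
    # BFS over the allowed clusters.
    dist, queue = {src: 0}, deque([src])
    while queue:
        c = queue.popleft()
        if c == dst:
            return dist[c]
        for d in nbrs[c]:
            if d not in dist:
                dist[d] = dist[c] + 1
                queue.append(d)
    return float("inf")                               # no admissible cluster path
```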
In the rest of this section, we prove the following lemma:
**Lemma 4.1**.: _Let \(G\) be a \(K_{r}\)-minor-free graph, and let \(\Delta\) be a positive number. Then there is a partition \(\mathcal{C}\) of \(G\) into connected clusters, such that (1) each cluster has strong diameter at most \(4\Delta\), and (2) every path \(P\) in \(G\) with \(\|P\|<\Delta/r\) has \(\operatorname{cost}(P)\leq r^{O(r)}\)._
We now show that Lemma 4.1 implies Theorem 1.2, which we restate below.
**Theorem 1.2**.: _Any edge-weighted \(K_{r}\)-minor-free graph admits an \((\varepsilon,2^{O(r\log r)}/\varepsilon)\)-shortcut partition for any \(\varepsilon\in(0,1)\)._
**Proof:** Let \(\mathcal{C}\) be the partition guaranteed by Lemma 4.1 for \(\Delta\coloneqq\frac{\varepsilon\cdot\mathrm{diam}(G)}{4}\). Every cluster of \(\mathcal{C}\) has strong diameter at most \(4\Delta=\varepsilon\cdot\mathrm{diam}(G)\). To prove that \(\mathcal{C}\) is an \((\varepsilon,h)\)-shortcut partition with \(h=r^{O(r)}/\varepsilon=2^{O(r\log r)}/\varepsilon\), it suffices to show that \(\mathrm{cost}(P)\leq r^{O(r)}\cdot\left\lceil\frac{\|P\|}{\varepsilon\cdot\mathrm{diam}(G)}\right\rceil\), for an arbitrary path \(P\) in \(G\).
We greedily partition \(P\) into a sequence of \(O\left(\left\lceil\frac{r\|P\|}{\Delta}\right\rceil\right)\) vertex-disjoint subpaths, where each subpath has length at most \(\Delta/r\). That is, we can write \(P\) as the concatenation \(P_{1}\circ e_{1}\circ P_{2}\circ e_{2}\circ\cdots\circ e_{\tau-1}\circ P_{\tau}\) for some \(\tau=O\left(\left\lceil\frac{r\|P\|}{\Delta}\right\rceil\right)\), such that each \(P_{i}\) has length at most \(\Delta/r\). We can upper-bound the cost of \(P\):
\[\mathrm{cost}(P)\leq\sum_{i=1}^{\tau}\mathrm{cost}(P_{i})+\sum_{i=1}^{\tau-1}\mathrm{cost}(e_{i}).\]
Each edge has cost at most \(1\), and (by Lemma 4.1) each subpath \(P_{i}\) has cost at most \(r^{O(r)}\). It follows that \(\mathrm{cost}(P)\leq r^{O(r)}\cdot\left\lceil\frac{\|P\|}{\varepsilon\cdot \mathrm{diam}(G)}\right\rceil\), which concludes the proof. \(\Box\)
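The greedy decomposition used in the proof above is elementary; the following Python sketch (ours, for illustration only, with an assumed edge-weight dictionary) splits a path into maximal subpaths of length at most a given limit, with consecutive subpaths separated by single edges of the path.

```python
def greedy_split(P, weight, limit):
    """Split path P (a list of vertices) into subpaths of length <= limit,
    consecutive subpaths being separated by one edge of P (the e_i's)."""
    pieces, current, length = [], [P[0]], 0.0
    for u, v in zip(P, P[1:]):
        w = weight[(u, v)]
        if length + w <= limit:
            current.append(v)             # extend the current subpath P_i
            length += w
        else:                             # the edge (u, v) becomes a separator e_i
            pieces.append(current)
            current, length = [v], 0.0
    pieces.append(current)
    return pieces
```

Since each subpath together with the edge that follows it has length more than the limit (except possibly the last), the number of subpaths is \(O\left(\left\lceil\frac{\|P\|}{\text{limit}}\right\rceil\right)\), matching the bound used above with limit \(\Delta/r\).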
### Construction
Let \(\mathcal{T}\) be a \((\Delta,\Delta/r,r-1)\)-buffered cop decomposition for \(G\). We partition each supernode \(\eta\) into clusters as follows. Fix an arbitrary supernode \(\eta\).
Let \(N\) be a _\(\Delta\)-net_ of the skeleton \(T_{\eta}\) of \(\eta\), which is an SSSP tree in \(\mathrm{dom}(\eta)\); that is, \(N\) is a subset of vertices in \(T_{\eta}\), such that (1) every vertex \(v\) in \(T_{\eta}\) satisfies \(\delta_{T_{\eta}}(v,N)=\delta_{\mathrm{dom}(\eta)}(v,N)\leq\Delta\), and (2) for every pair of vertices \(x_{1}\) and \(x_{2}\) in \(N\), we have \(\delta_{T_{\eta}}(x_{1},x_{2})=\delta_{\mathrm{dom}(\eta)}(x_{1},x_{2})>\Delta\). (The net \(N\) can be constructed greedily.) For each net point in \(N\), we initialize a separate cluster.
We partition the rest of the vertices in \(\eta\) based on their closest point in the net \(N\). In more detail, consider each vertex \(v\) in \(\eta\) in increasing order of their distance to \(N\). Find the shortest path \(P_{v}\) from \(v\) to the closest point in \(N\) (if there are multiple such paths, we fix \(P_{v}\) arbitrarily). Let \(v^{\prime}\) be the vertex adjacent to \(v\) in \(P_{v}\). Set the cluster of \(v\) to be the same as the cluster of \(v^{\prime}\). Observe that each cluster has a single net point in \(N\), which we refer to as the _center_ of the cluster; the centers of clusters constitute \(N\).
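The per-supernode clustering just described can be summarized in the following Python sketch (ours, purely illustrative). It assumes the supernode \(\eta\) and its skeleton \(T_{\eta}\) are given as weighted adjacency dictionaries; net distances are measured along the skeleton (which, for an SSSP tree, the construction equates with distances in \(\operatorname{dom}(\eta)\)), and cluster growth follows shortest-path predecessors inside \(\eta\).

```python
import heapq

def dijkstra(adj, sources):
    """Multi-source Dijkstra. adj: u -> list of (v, w). Returns (dist, parent)."""
    dist = {s: 0.0 for s in sources}
    parent = {s: None for s in sources}
    pq = [(0.0, s) for s in sources]
    heapq.heapify(pq)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, parent

def cluster_supernode(eta_adj, skel_adj, delta):
    """Partition a supernode eta into clusters around a Delta-net of its skeleton."""
    # 1. Greedy Delta-net N on the skeleton: add a skeleton vertex whenever it is
    #    farther than delta (along the skeleton) from all current net points.
    net = []
    for v in skel_adj:
        d_to_net, _ = dijkstra(skel_adj, net) if net else ({}, {})
        if d_to_net.get(v, float("inf")) > delta:
            net.append(v)
    # 2. Multi-source shortest-path forest from N inside eta.
    dist, parent = dijkstra(eta_adj, net)
    # 3. Each vertex joins the cluster of its predecessor on a shortest path to N;
    #    processing vertices by increasing distance keeps predecessors assigned first.
    center = {}
    for v in sorted(dist, key=dist.get):
        center[v] = v if parent[v] is None else center[parent[v]]
    return center           # vertex -> its cluster center (a net point)
```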
**Lemma 4.2**.: _For each supernode, each of its clusters has strong diameter at most \(4\Delta\)._
**Proof:** Let \(\eta\) be an arbitrary supernode and \(N\) be the set of cluster centers of \(\eta\). First, we claim by induction on \(\delta_{\eta}(v,N)\) that for every \(v\in\eta\), the cluster \(C_{v}\) that contains \(v\) also contains a shortest path from \(v\) to \(N\). For the basis, the claim clearly holds if \(v\in N\). For the induction step, suppose that \(v\not\in N\). Let \(P_{v}\) be the shortest path from \(v\) to \(N\) that is fixed in our construction, and let \(v^{\prime}\) be the vertex after \(v\) in \(P_{v}\). Hence, \(v^{\prime}\) is assigned to \(C_{v}\). Since \(\delta_{\eta}(v^{\prime},N)<\delta_{\eta}(v,N)\), cluster \(C_{v}\) contains a shortest path from \(v^{\prime}\) to \(N\), denoted by \(Q_{v^{\prime}}\), by our induction hypothesis. Hence, the path \((v,v^{\prime})\circ Q_{v^{\prime}}\) is a shortest path from \(v\) to \(N\), which is contained in \(C_{v}\). This completes the induction step.
Consider an arbitrary cluster \(C\) in \(\eta\) and any vertex \(v\in C\). By the supernode radius property, there is a vertex \(v_{T}\) in \(T_{\eta}\) such that \(\delta_{\eta}(v,v_{T})\leq\Delta\); as \(N\) is a \(\Delta\)-net, we have \(\delta_{\eta}(v_{T},N)\leq\Delta\). By the triangle inequality, \(\delta_{\eta}(v,N)\leq 2\Delta\). By the above claim, \(C\) contains a shortest path from \(v\) to \(N\), which thus has length at most \(2\Delta\). As \(C\) contains a single cluster center, the diameter of \(C\) must be at most \(4\Delta\). \(\Box\)
We now bound the cost of a path \(P\). We first prove that "the highest supernode \(\eta\) that \(P\) intersects" is well-defined, then show that \(P\) intersects few clusters in \(\eta\), and finally give an inductive argument to bound \(\mathrm{cost}(P)\).
**Claim 4.3**.: _Let \(P\) be a path in \(G\). \(P\) is contained in \(\mathrm{dom}(\eta)\) for some supernode \(\eta\) that \(P\) intersects._
**Proof:** Let \(\hat{\mathcal{T}}\) denote the expansion of the partition tree \(\mathcal{T}\). For every supernode \(\eta_{1}\), the tree decomposition property implies that the set of bags in \(\hat{\mathcal{T}}\) containing \(\eta_{1}\) induces a connected subtree of \(\hat{\mathcal{T}}\), which we denote \(\hat{\mathcal{T}}[\eta_{1}]\). Further, for every pair of supernodes \(\eta_{1}\) and \(\eta_{2}\) that are adjacent in \(G\), there is some bag shared by both \(\hat{\mathcal{T}}[\eta_{1}]\) and \(\hat{\mathcal{T}}[\eta_{2}]\); it follows that \(\hat{\mathcal{T}}[\eta_{1}]\cup\hat{\mathcal{T}}[\eta_{2}]\) is a connected subtree of \(\hat{\mathcal{T}}\). As \(P\) is connected, the set of bags containing _any_ supernode that \(P\) intersects induces a connected subtree of \(\hat{\mathcal{T}}\), which we denote \(\hat{\mathcal{T}}[P]\).
Let \(B_{\eta}\) denote the root bag of the subtree \(\hat{\mathcal{T}}[P]\), where \(B_{\eta}\) is in one-to-one correspondence with the supernode \(\eta\) in \(\mathcal{T}\). Bag \(B_{\eta}\) contains the supernode \(\eta\), as well as some supernodes above \(\eta\) in \(\mathcal{T}\). Path \(P\) does not intersect any supernode above \(\eta\), as otherwise \(\hat{\mathcal{T}}[P]\) would include a bag above \(B_{\eta}\). Thus, \(P\) is in \(\operatorname{dom}(\eta)\). Further, \(P\) intersects some supernode in \(B_{\eta}\), so \(P\) intersects \(\eta\). \(\Box\)
**Claim 4.4**.: _If \(P\) is a path of length less than \(\Delta\), and \(\eta\) is a supernode such that \(P\) is contained in \(\operatorname{dom}(\eta)\), then \(P\) intersects at most \(9r\) clusters in \(\eta\)._
**Proof:** Suppose for contradiction that \(P\) satisfies the conditions of the claim, yet it intersects at least \(9r+1\) clusters in \(\eta\). By the shortest-path skeleton property, the skeleton \(T_{\eta}\) of \(\eta\) is an SSSP tree in \(\operatorname{dom}(\eta)\) with at most \(r-1\) leaves, thus the vertices of \(T_{\eta}\) can be partitioned into at most \(r-1\leq r\) shortest paths. Since each cluster in \(\eta\) has its center chosen from one of at most \(r\) shortest paths, \(P\) intersects at least \(10\) clusters with centers in the same shortest path, denoted by \(Q\). Among the clusters with centers in \(Q\) that \(P\) intersects, let \(C_{u}\) (respectively, \(C_{v}\)) be the one whose center appears first (resp., last) along \(Q\); let \(u\) (resp., \(v\)) be an intersection point of \(P\) and \(C_{u}\) (resp., \(C_{v}\)), and let \(c_{u}\) (resp., \(c_{v}\)) be the center of \(C_{u}\) (resp., \(C_{v}\)). Since \(Q\) is a shortest path in \(\operatorname{dom}(\eta)\) that intersects at least \(10\) of these centers and as the distance between any two cluster centers is at least \(\Delta\), we have \(\delta_{\operatorname{dom}(\eta)}(c_{u},c_{v})\geq 9\Delta\). By the triangle inequality, we have:
\[||P||\ \geq\ \delta_{\operatorname{dom}(\eta)}(u,v)\ \geq\ \delta_{ \operatorname{dom}(\eta)}(c_{u},c_{v})-\underbrace{(\delta_{\operatorname{dom }(\eta)}(c_{u},u)+\delta_{\operatorname{dom}(\eta)}(c_{v},v))}_{\leq 4\Delta+4 \Delta}\ \geq\ 9\Delta-8\Delta\ \geq\ \Delta,\]
yielding a contradiction. \(\Box\)
We say that a path \(P\) is _\(k\)-constrained_ if, for every supernode \(\eta\) that \(P\) intersects, there are at most \(k\) supernodes in the bag \(B_{\eta}\) corresponding to \(\eta\) that \(P\) intersects.
**Lemma 4.5**.: _Let \(P\) be a \(k\)-constrained path with \(\|P\|<\Delta/r\). Then \(\operatorname{cost}(P)\leq(54r)^{k}\)._
**Proof:** The proof is by induction on \(k\). The basis is trivially true, as only the empty path is \(0\)-constrained. We next prove the induction step. By Claim 4.3, there is some supernode \(\eta\) that \(P\) intersects, such that \(P\) is in \(\operatorname{dom}(\eta)\). Choose an arbitrary vertex \(v_{\eta}\in P\cap\eta\) and split \(P\) at \(v_{\eta}\) into two subpaths, \(P^{\prime}\) and \(P^{\prime\prime}\); \(v_{\eta}\) is an endpoint of both \(P^{\prime}\) and \(P^{\prime\prime}\). We will show \(\operatorname{cost}(P^{\prime})\leq 27r\cdot(54r)^{k-1}=\frac{1}{2}(54r)^{k}\), so by symmetry, \(\operatorname{cost}(P)\leq\operatorname{cost}(P^{\prime})+\operatorname{cost}(P^{\prime\prime})\leq(54r)^{k}\).
We partition \(P^{\prime}\) into a sequence of vertex-disjoint subpaths \(P_{1},P_{1:2},P_{2},P_{2:3},\ldots\) as follows. Let \(\mathcal{C}[\eta]\) denote the set of clusters in \(\eta\) that \(P^{\prime}\) intersects. Define \(C_{1}\) to be the cluster (in \(\mathcal{C}[\eta]\)) that contains \(v_{\eta}\), define \(P_{1}\) to be the maximal prefix of \(P^{\prime}\) that ends in a vertex in \(C_{1}\), and define \(P[C_{1}:]\coloneqq P^{\prime}\setminus P_{1}\) to be the suffix of \(P^{\prime}\) starting after \(P_{1}\). For all \(i\geq 1\) with \(C_{i}\neq\varnothing\), we recursively define (see Figure 8):
* Define \(C_{i+1}\) to be the first cluster in \(\mathcal{C}[\eta]\) that \(P[C_{i}:]\) intersects. If \(P[C_{i}:]\) intersects no clusters in \(\mathcal{C}[\eta]\), then define \(C_{i+1}\coloneqq\varnothing\).
* Define \(P_{i:i+1}\) to be the maximal prefix of \(P[C_{i}:]\) that contains no vertex in \(C_{i+1}\). If \(C_{i+1}=\varnothing\), then set \(P_{i:i+1}\coloneqq P[C_{i}:]\). Notice that \(P_{i:i+1}\) may be the empty path if the first vertex on \(P[C_{i}:]\) is in \(C_{i+1}\).
* Define \(P_{i+1}\) to be the maximal subpath of \(P[C_{i}:]\) with both endpoints in \(C_{i+1}\). Notice that \(P_{i+1}\) starts immediately after \(P_{i:i+1}\).
* Define \(P[C_{i+1}:]\) to be the suffix of \(P[C_{i}:]\) that starts after \(P_{i+1}\). Notice that \(P[C_{i+1}:]\) contains no vertices in \(C_{i+1}\).
By Claim 4.4, \(\mathcal{C}[\eta]\) contains at most \(9r\) clusters9; thus, there are at most \(9r\) subpaths \(P_{i}\) and \(9r\) subpaths \(P_{i:i+1}\) defined by the above procedure. There are at most \(18r\) edges in \(P^{\prime}\) that connect the subpaths. The cost of \(P^{\prime}\) is bounded by the sum of costs of the subpaths as well as the edges between the subpaths.
Footnote 9: Claim 4.4 is stronger than what we need here.
* Each edge has cost at most \(1\).
* Each subpath \(P_{i}\) has cost \(0\), as either \(P_{i}\) is empty or its endpoints are in the same cluster.
* As we argue next, _each subpath \(P_{i:i+1}\) has cost at most \((54r)^{k-1}\)_. Observe that every supernode \(\eta^{\prime}\) that \(P^{\prime}\) intersects has \(\eta\) in its bag \(B_{\eta^{\prime}}\). Indeed, if \(B_{\eta^{\prime}}\) did not contain \(\eta\), then \(\eta^{\prime}\) and \(\eta\) would not be adjacent by definition; as \(\eta\) is above \(\eta^{\prime}\) in \(\mathcal{T}\) (by definition of \(\eta\)), the supernode buffer property implies that \(\|P\|\geq\delta_{\mathrm{dom}(\eta)}(\eta^{\prime},\eta)\geq\Delta/r\), a contradiction. Further, notice that \(P_{i:i+1}\) does not intersect \(\eta\). Thus, as \(P^{\prime}\) is \(k\)-constrained, each subpath \(P_{i:i+1}\) is \((k-1)\)-constrained: for every supernode \(\eta^{\prime\prime}\) that \(P_{i:i+1}\) intersects, the bag \(B_{\eta^{\prime\prime}}\) contains \(\eta\), which is intersected by \(P^{\prime}\) but not by \(P_{i:i+1}\). The inductive hypothesis implies that \(\mathrm{cost}(P_{i:i+1})\leq(54r)^{k-1}\), as argued.
Since \(k\geq 1\), we conclude that
\[\mathrm{cost}(P^{\prime})\;\leq\;18r\cdot 1+9r\cdot 0+9r\cdot(54r)^{k-1}\;\leq \;27r\cdot(54r)^{k-1}.\]
This proves the lemma.
Noting that every path is \((r-1)\)-constrained (as every bag contains at most \(r-1\) supernodes), Lemma 4.2 and Lemma 4.5 establish Lemma 4.1, and hence Theorem 1.2.
## 5 Other Applications
In this section, we describe several applications of our shortcut partition mentioned in the introduction. We will start with two direct applications (Section 5.1) and then proceed to the application on the embedding of apex-minor-free graphs (Section 5.2).
### Direct Applications
Tree cover.The authors of [CCL\({}^{+}\)23] provided a construction of \((1+\varepsilon)\)-tree cover for minor-free graphs via shortcut partition. Their definition of \((\varepsilon,h)\)-shortcut partition is slightly different than our Definition 1.1; it is strictly weaker. Recall that the low-hop property of Definition 1.1 guarantees:
For any pair of vertices \(u\) and \(v\) in \(G\), there exists some shortest path \(\pi\) in \(G\) between \(u\) and \(v\), and a path \(\tilde{\pi}\) in the cluster graph \(\tilde{G}\) such that (in addition to other properties) \(\tilde{\pi}\) only contains clusters that intersect \(\pi\).
The definition of [CCL\({}^{+}\)23] (Definition 2.1) instead only guarantees:
For any pair of vertices \(u\) and \(v\) in \(G\), there exists some path \(\pi^{\prime}\) in \(G\) between \(u\) and \(v\) with \(\|\pi^{\prime}\|\leq(1+\varepsilon)\delta_{G}(u,v)\), and a path \(\tilde{\pi}\) in the cluster graph \(\tilde{G}\) such that (in addition to other properties) \(\tilde{\pi}\) only contains clusters that intersect \(\pi^{\prime}\).
This is a weaker guarantee. Also, as mentioned already, the definition of [CCL\({}^{+}\)23] states that the hop-length of \(\tilde{\pi}\) is at most \(h\), regardless of \(\delta_{G}(u,v)\), while our definition takes \(\delta_{G}(u,v)\) into account, allowing smaller hop-lengths for smaller distances. Consequently, the shortcut partition we construct in Theorem 1.2 can be used in their construction of tree cover.
The tree cover construction of [CCL\({}^{+}\)23] consists of two steps.\({}^{10}\) The first step is a reduction from a tree cover with multiplicative distortion \((1+\varepsilon)\) to a tree cover with additive distortion \(+\varepsilon\Delta\), where \(\Delta\) is the diameter, with a loss of an \(O(\log(1/\varepsilon))\) factor in the cover size. In the second step, it is shown that an \((\varepsilon,h)\)-shortcut partition for minor-free graphs implies a tree cover of size \(2^{O(h)}\) with additive distortion \(+\varepsilon\Delta\). Their result can be summarized as follows.
Footnote 10: The more efficient tree cover construction for _planar graphs_ in [CCL\({}^{+}\)23] does not follow the two-step framework; instead, the authors exploited planarity to get a better (and indeed polynomial) dependency on \(\varepsilon\) in the size of the cover.
**Lemma 5.1** (Lemma 1.7 and Theorem 1.8 in [CCL\({}^{+}\)23]).: _Let \(G\) be a minor-free graph, and \(\varepsilon\in(0,1)\). If every subgraph of \(G\) admits an \((\varepsilon,h)\)-shortcut partition, then \(G\) has a \((1+\varepsilon)\)-tree cover of size \(2^{O(h)}\)._
By Theorem 1.2, any \(K_{r}\)-minor-free graph has an \((\varepsilon,r^{O(r)}/\varepsilon)\)-shortcut partition. This proves Theorem 1.8.
Distance oracle.As discussed in Section 1.2, we construct our distance oracle from our tree cover (Theorem 1.8): given a query pair \((u,v)\), query the distance \(d_{T}(u,v)\) in \(T\), for each tree \(T\) in the tree cover \(\mathcal{T}\), and return \(\min_{T\in\mathcal{T}}d_{T}(u,v)\). Distance query in a tree is reduced to a lowest common ancestor (LCA) query. In the RAM model, there are LCA data structures [HT84, BFC00] with \(O(n)\) space and \(O(1)\) query time. In the pointer machine model, this can be carried out with \(O(n)\) space and \(O(\log\log n)\) query time [vL76]. Theorem 1.7 now follows.
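The query procedure just described is simple enough to sketch. The following Python code is ours and purely illustrative: it assumes each member of the tree cover is a single rooted spanning tree over integer vertex ids, and it uses binary-lifting LCA (an \(O(\log n)\)-time stand-in for the constant-time structures cited above) to answer tree distance queries.

```python
import math

class TreeOracle:
    """Distance queries on one weighted rooted tree via binary-lifting LCA."""
    def __init__(self, n, root, children, weight):
        # children: u -> list of child vertices; weight: (parent, child) -> edge weight
        LOG = max(1, math.ceil(math.log2(n)) + 1)
        self.LOG = LOG
        self.up = [[root] * n for _ in range(LOG)]
        self.depth = [0] * n          # hop depth, used by the LCA routine
        self.dist = [0.0] * n         # weighted distance from the root
        stack = [root]
        while stack:                  # iterative DFS over the tree
            u = stack.pop()
            for v in children.get(u, ()):
                self.up[0][v] = u
                self.depth[v] = self.depth[u] + 1
                self.dist[v] = self.dist[u] + weight[(u, v)]
                stack.append(v)
        for k in range(1, LOG):
            for v in range(n):
                self.up[k][v] = self.up[k - 1][self.up[k - 1][v]]

    def lca(self, u, v):
        if self.depth[u] < self.depth[v]:
            u, v = v, u
        diff = self.depth[u] - self.depth[v]
        for k in range(self.LOG):
            if (diff >> k) & 1:
                u = self.up[k][u]
        if u == v:
            return u
        for k in reversed(range(self.LOG)):
            if self.up[k][u] != self.up[k][v]:
                u, v = self.up[k][u], self.up[k][v]
        return self.up[0][u]

    def distance(self, u, v):
        return self.dist[u] + self.dist[v] - 2.0 * self.dist[self.lca(u, v)]

def oracle_query(tree_oracles, u, v):
    """Approximate d_G(u, v): the minimum distance over all trees of the cover."""
    return min(t.distance(u, v) for t in tree_oracles)
```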
### Embedding of apex-minor-free graphs
The authors of [CCL\({}^{+}\)23] show that any _planar_ graph \(G\) with diameter \(\Delta\) can be embedded into a graph of treewidth \(\mathrm{poly}(1/\varepsilon)\) with distortion \(+\varepsilon\Delta\). Their argument uses three properties of planar graphs, and it carries over to any minor-free graph with these properties (Lemma 5.5). Loosely speaking, they show that if (P1) \(G\) has an \((\varepsilon,h)\)-shortcut partition, (P2) \(G\) has a \(+\varepsilon\Delta\) forest cover \(\mathcal{F}\) that "interacts nicely" with the shortcut partition (Lemma 5.4), and (P3) \(G\) has the _local-treewidth_ property, then \(G\) can be embedded into a graph of treewidth \(O(h\cdot|\mathcal{F}|)\) with distortion \(+\varepsilon\Delta\).
Theorem 1.2 gives us a shortcut partition for minor-free graphs and hence (P1). Apex-minor-free graphs have the local treewidth property [Epp00, DH04a, DH04b] (see Lemma 5.2 below) and hence
satisfy (P3). The main goal of this section is to show (P2) (by proving Lemma 5.4), and we do so by applying the framework of [23] to our shortcut partition to construct an appropriate forest cover.
**Lemma 5.2** (Diameter-treewidth property [Epp00, DH04a, DH04b]).: _Let \(G\) be a graph excluding a fixed apex graph as a minor. Let \(D\) be its (unweighted) diameter. Then \(\operatorname{tw}(G)=O(D)\)._
We note that the big-O in Lemma 5.2 hides the dependency on the size of the minor; it is the Robertson-Seymour constant. As a result, the dependency on the minor of our Theorem 1.9 also has a Robertson-Seymour constant.
We recall the construction of the [CCL\({}^{+}\)23] tree cover for completeness. A forest \(F\) is _dominating_ if \(d_{F}(u,v)\geq d_{G}(u,v)\) for every two vertices \(u,v\in V(G)\). In the definition of a tree cover for \(G\), we allow _Steiner_ vertices, i.e., vertices that do not belong to \(G\), in a tree. Here the forest we construct will contain no Steiner vertices, meaning that \(V(F)\subseteq V(G)\). In this case, we say that \(F\) is _Steiner-free_. Let \(\mathcal{C}\) be a clustering of \(G\). Let \(\tilde{G}\) be the _cluster graph_ obtained from \(G\) by contracting clusters in \(\mathcal{C}\); \(\tilde{G}\) is a simple unweighted graph. Let \(\tilde{F}\) be a forest subgraph of \(\tilde{G}\) such that every tree in \(\tilde{F}\) is rooted at some node. We define the _star expansion_ of \(\tilde{F}\) to be a forest \(F\) of \(G\) obtained by applying the following to each tree \(\tilde{T}\in\tilde{F}\) (see Figure 9):
Let \(V_{T}\) denote the set of vertices (of \(G\)) that belong to clusters in \(\tilde{T}\). Choose an arbitrary vertex \(r\) in the cluster that is the root of \(\tilde{T}\). Let \(T\) be a star rooted at \(r\) connected to every vertex in \(V_{T}\). We assign each edge \((r,u)\) of \(T\) a weight \(d_{G}(r,u)\). We then add \(T\) to \(F\).
Let \(\tilde{\mathcal{F}}\) be a collection of rooted forests of \(\tilde{G}\), each of which is a subgraph of \(\tilde{G}\), called a _spanning forest cover_. The _star expansion_ of \(\tilde{\mathcal{F}}\) is a collection of rooted forests, denoted by \(\mathcal{F}\), obtained by taking the star expansion of each rooted forest \(\tilde{F}\) in \(\tilde{\mathcal{F}}\). We note that a forest in \(\mathcal{F}\) might not be a subgraph of \(G\), but it is Steiner-free. (That is, each forest might contain Steiner edges--edges not in \(G\)--but not Steiner vertices.) The following lemma was proven in [CCL\({}^{+}\)23].
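The star expansion itself is a one-pass construction; the following Python sketch (ours, illustrative only) expands a single rooted tree of the cluster graph, assuming the cluster membership lists and a distance function \(d_{G}\) (e.g., computed by Dijkstra) are available.

```python
def star_expansion(tree_clusters, root_cluster, members, d_G):
    """Star expansion of one rooted tree of the cluster graph.

    tree_clusters -- the clusters that are vertices of the tree
    root_cluster  -- the cluster at the root of the tree
    members       -- dict: cluster -> list of vertices of G in that cluster
    d_G           -- function (u, v) -> shortest-path distance in G
    Returns (root vertex r, dict u -> weight of the star edge (r, u)).
    """
    r = members[root_cluster][0]           # an arbitrary vertex of the root cluster
    star = {}
    for cluster in tree_clusters:
        for u in members[cluster]:
            if u != r:
                star[u] = d_G(r, u)        # edge (r, u) of weight d_G(r, u)
    return r, star
```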
**Lemma 5.3** (Adapted from Theorem 2.2 and Theorem 2.5 in [23]).: _Let \(G\) be an edge-weighted minor-free graph with diameter \(\Delta\), and let \(\varepsilon\in(0,1)\). Suppose that \(G\) has an \((\varepsilon,h(\varepsilon))\)-shortcut partition \(\mathcal{C}\). Let \(\tilde{G}\) be the cluster graph obtained from \(G\) by contracting clusters in \(\mathcal{C}\). Then there exists a spanning forest cover \(\tilde{\mathcal{F}}\) of \(\tilde{G}\) such that its star expansion \(\mathcal{F}\) satisfies the following properties:_
* [Root preservation.] _For every pair of vertices_ \(u\) _and_ \(v\) _in_ \(G\)_, there is a tree_ \(T\) _in some forest of_ \(\mathcal{F}\) _such that (1)_ \(\delta_{T}(u,v)\leq\delta_{G}(u,v)+\varepsilon\Delta\)_, and (2) a shortest path from_ \(u\) _to_ \(v\) _in_ \(T\) _passes through the root of_ \(T\)_._
* [Size.] \(\mathcal{F}\) _contains_ \(2^{O(h(\varepsilon))}\) _forests._
We remark that the second half of the root preservation property was not explicitly stated in [CCL\({}^{+}\)23]; however, it is immediate from the fact that \(\mathcal{F}\) is a set of star forests.
We now show the following structural result for apex-minor-free graphs (see Figure 9 for illustration).
**Lemma 5.4**.: _Let \(G\) be an edge-weighted graph with diameter \(\Delta\) excluding a fixed apex graph as a minor, and let \(\varepsilon\in(0,1)\). Then there is a partition \(\mathcal{C}\) of the vertices of \(G\) into clusters with strong diameter \(\varepsilon\Delta\), and a set \(\mathcal{F}\) of \(2^{O(1/\varepsilon)}\) forests with the same vertex set as \(G\), that satisfy the following properties:_
* [Low-hop.] _For every pair of vertices_ \(u\) _and_ \(v\)_, there is a path between_ \(u\) _and_ \(v\) _that intersects at most_ \(h\) _clusters, for some_ \(h=O(1/\varepsilon)\)_._
* [Root preservation.] _For every pair of vertices_ \(u\) _and_ \(v\) _in_ \(G\)_, there is a tree_ \(T\) _in some forest of_ \(\mathcal{F}\) _such that (1)_ \(\delta_{T}(u,v)\leq\delta_{G}(u,v)+\varepsilon\Delta\)_, and (2) a shortest path from_ \(u\) _to_ \(v\) _in_ \(T\) _passes through the root of_ \(T\)_._
_For each cluster \(C\) in \(\mathcal{C}\), choose an arbitrary vertex \(v_{C}\) to be the center vertex, and define the star \(S_{C}\) to be a star connecting \(v_{C}\) to every other point in \(C\). Define \(G^{\prime}\) to be the graph obtained by replacing every supernode \(C\) in cluster graph \(\check{G}\) with the star \(S_{C}\): every edge between two clusters in \(\check{G}\) is replaced with an edge in \(G^{\prime}\) between the centers of the two clusters. Notice \(G^{\prime}\) has the same vertex set as \(G\)._
* [Contracted treewidth.] _Graph_ \(G^{\prime}\) _has treewidth_ \(O(h)\)_._
* [Forest correspondence.] _For every forest_ \(F\) _in_ \(\mathcal{F}\)_, there is a corresponding spanning forest_ \(F^{\prime}\) _(a subgraph of_ \(G^{\prime}\)_) such that: For every tree_ \(T\) _in_ \(F\)_, there is a tree_ \(T^{\prime}\) _in_ \(F^{\prime}\) _such that_ \(V(T)=V(T^{\prime})\) _and_ \(\operatorname{root}(T)=\operatorname{root}(T^{\prime})\)_._
**Proof:** By Theorem 1.2, there is a partition \(\mathcal{C}\) of \(G\) that is an \((\varepsilon,h)\)-shortcut partition, for \(h=O(1/\varepsilon)\). By Lemma 5.3, we can construct a spanning forest cover \(\check{\mathcal{F}}\) of \(\check{G}\) and its star expansion \(\mathcal{F}\) satisfying the root preservation property. Furthermore, \(\mathcal{F}\) has \(2^{O(1/\varepsilon)}\) forests. We now show the other three properties of the theorem.
**[Low-hop.]** The low-hop property follows directly from the statement of Theorem 1.2.
**[Contracted treewidth.]** Let \(\check{G}\) denote the graph obtained by contracting every cluster in \(\mathcal{C}\) into a supernode. By the low-hop property, the (unweighted) diameter of \(\check{G}\) is at most \(h\). Graph \(\check{G}\) excludes the same minors as \(G\), so Lemma 5.2 implies that \(\check{G}\) has treewidth \(O(h)\). Now notice that we can obtain \(G^{\prime}\) from \(\check{G}\) by creating new (degree-1) vertices and adding an edge between each new vertex and a supernode in \(\check{G}\). We construct a tree decomposition for \(G^{\prime}\), starting from the tree decomposition for \(\check{G}\): For each new vertex \(v\) attached to an existing supernode \(u\), create a bag containing \(u\) and \(v\), and add it as a child to an arbitrary bag containing \(u\) in the tree decomposition of \(\check{G}\). This procedure does not change the treewidth; it is still \(O(h)\).
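The bag-attachment step used in this argument is itself a small algorithm; the sketch below, with a hypothetical dictionary-based representation of the tree decomposition (ours, not from the paper), illustrates why the width does not increase.

```python
def extend_decomposition(bags, children, new_edges):
    """Attach each new degree-1 vertex v (hanging off supernode u) by adding
    the bag {u, v} as a child of an arbitrary existing bag containing u.

    `bags` maps bag ids to vertex sets, `children` maps bag ids to child bag
    ids, and `new_edges` is a list of (v, u) pairs.  Every new bag has size 2,
    so the width of the decomposition is unchanged.
    """
    next_id = max(bags) + 1                       # assumes integer bag ids
    for v, u in new_edges:
        parent = next(b for b, verts in bags.items() if u in verts)
        bags[next_id] = frozenset({u, v})
        children.setdefault(parent, []).append(next_id)
        children[next_id] = []
        next_id += 1
    return bags, children
```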
**[Forest correspondence.]** By definition, each forest \(F\in\mathcal{F}\) is a star expansion of a forest \(\check{F}\in\check{\mathcal{F}}\). To get the forest correspondence property, we simply transform \(\check{F}\) into a forest \(F^{\prime}\) on \(G^{\prime}\) in the natural way: For each tree \(\check{T}\) in \(\check{F}\), replace every vertex \(C\) in \(\check{T}\) with the corresponding star \(S_{C}\) in \(G^{\prime}\), and replace every edge in \(\check{T}\) with the corresponding edge between star centers in \(G^{\prime}\). We claim that \(F^{\prime}\) is a forest. Indeed,
this transformation maps each tree \(\check{T}\) to a tree \(T^{\prime}\) in \(G^{\prime}\), because \(T^{\prime}\) is connected and has one more vertex than it has edges. Recall that for each tree \(\check{T}\in\check{F}\), there is a corresponding tree \(T\in F\), which is obtained by star expansion. By construction, \(T\) and \(T^{\prime}\) have the same vertex set. We then can set the root of \(T^{\prime}\) to be the same as the root of \(T\).
Further, the trees \(T^{\prime}\) are vertex disjoint because the clusters \(\mathcal{C}\) are vertex disjoint, and there are no edges between the trees \(T^{\prime}\). Thus, \(F^{\prime}\) is a spanning forest of \(G^{\prime}\).
The following reduction is implicit in [12].11
Footnote 11: There are small differences in the phrasing of [12]. (1) They do not explicitly state the _contracted treewidth_ property; rather, they prove it as a lemma using properties of planar graphs. (2) They do not explicitly state the _forest correspondence_ property; rather, they state a different condition (the “disjoint cluster” condition), that they then use to prove the forest correspondence property in a lemma. The “disjoint cluster” condition is used only to prove the forest correspondence property.
**Lemma 5.5** ([12], Section 7).: _Let \(G\) be an edge-weighted minor-free graph with diameter \(\Delta\), and let \(\varepsilon\) be a number in \((0,1)\). Suppose there is a partition \(\mathcal{C}\) of \(G\) into clusters of strong diameter \(\varepsilon\Delta\), together with a set of forests \(\mathcal{F}\), such that \(\mathcal{C}\) and \(\mathcal{F}\) satisfy the low-hop property with parameter \(h\), the root preservation property, the contracted treewidth property, and the forest correspondence property. Then \(G\) can be embedded deterministically into a graph with treewidth \(O(h\cdot|\mathcal{F}|)\) and distortion \(+O(\varepsilon\Delta)\)._
Combining Lemma 5.4 and Lemma 5.5 proves Theorem 1.9.
**Acknowledgement.** Hung Le and Cuong Than are supported by the NSF CAREER Award No. CCF-2237288 and an NSF Grant No. CCF-2121952.
|
2309.07113 | Contrastive Deep Encoding Enables Uncertainty-aware
Machine-learning-assisted Histopathology | Deep neural network models can learn clinically relevant features from
millions of histopathology images. However generating high-quality annotations
to train such models for each hospital, each cancer type, and each diagnostic
task is prohibitively laborious. On the other hand, terabytes of training data
-- while lacking reliable annotations -- are readily available in the public
domain in some cases. In this work, we explore how these large datasets can be
consciously utilized to pre-train deep networks to encode informative
representations. We then fine-tune our pre-trained models on a fraction of
annotated training data to perform specific downstream tasks. We show that our
approach can reach the state-of-the-art (SOTA) for patch-level classification
with only 1-10% randomly selected annotations compared to other SOTA
approaches. Moreover, we propose an uncertainty-aware loss function, to
quantify the model confidence during inference. Quantified uncertainty helps
experts select the best instances to label for further training. Our
uncertainty-aware labeling reaches the SOTA with significantly fewer
annotations compared to random labeling. Last, we demonstrate how our
pre-trained encoders can surpass current SOTA for whole-slide image
classification with weak supervision. Our work lays the foundation for data and
task-agnostic pre-trained deep networks with quantified uncertainty. | Nirhoshan Sivaroopan, Chamuditha Jayanga, Chalani Ekanayake, Hasindri Watawana, Jathurshan Pradeepkumar, Mithunjha Anandakumar, Ranga Rodrigo, Chamira U. S. Edussooriya, Dushan N. Wadduwage | 2023-09-13T17:37:19Z | http://arxiv.org/abs/2309.07113v1 | # Contrastive Deep Encoding Enables Uncertainty-Aware Machine-Learning-Assisted histopathology
###### Abstract
Deep neural network models can learn clinically relevant features from millions of histopathology images. However, generating high-quality annotations to train such models for each hospital, each cancer type, and each diagnostic task is prohibitively laborious. On the other hand, terabytes of training data --while lacking reliable annotations-- are readily available in the public domain in some cases. In this work, we explore how these large datasets can be consciously utilized to pre-train deep networks to encode informative representations. We then fine-tune our pre-trained models on a fraction of annotated training data to perform specific downstream tasks. We show that our approach can reach the state-of-the-art (SOTA) for patch-level classification with only 1-10% randomly selected annotations compared to other SOTA approaches. Moreover, we propose an uncertainty-aware loss function to quantify the model confidence during inference. Quantified uncertainty helps experts select the best instances to label for further training. Our uncertainty-aware labeling reaches the SOTA with significantly fewer annotations compared to random labeling. Last, we demonstrate how our pre-trained encoders can surpass current SOTA for whole-slide image classification with weak supervision. Our work lays the foundation for data and task-agnostic pre-trained deep networks with quantified uncertainty.
Whole Slide Image (WSI), Uncertainty Awareness (UA), knowledge distillation, self-supervised learning.
## 1 Introduction
Computer-assisted histopathological diagnostics using deep learning is an emerging field. Especially for cancer, deep models have demonstrated their potential in the clinic by effectively streamlining labor-intensive and error-prone image-based detection procedures, thereby complementing pathologists in the diagnostic process. For instance, in the detection of breast cancer metastases, a prevalent form of cancer, the early work by Wang et al. [1] has made significant strides in reducing human error rates by integrating deep learning models with human pathologists' assessments. Similarly, during the last decade, deep learning has demonstrated its capability as a promising tool in digital pathology, with numerous studies highlighting its diverse clinical applications. These include clinical diagnosis [2, 3, 4, 5], prognosis/survival prediction [6, 7, 8, 9], treatment response forecasting [10, 11], and the identification of regions of interest (RoIs) that exhibit substantial diagnostic value [12, 13, 14]. Thus, machine-learning-driven digital pathology can potentially enhance multiple facets of the clinical process including accuracy, analysis speed, and reproducibility [15, 16, 17].
Despite notable achievements of deep learning in histopathology, several challenges persist [18]. First, unlike conventional images where the target object occupies a substantial portion of the image, cancer-positive histopathology images typically feature small regions of positive activations over a large area of normal tissue. One should therefore image many slides of biopsy specimens to collect a sufficiently large training dataset. Some cancer types are rare and access to such samples is limited. Furthermore, patient data cannot be readily shared due to privacy concerns. This lack of access to training data discourages the community from developing better models. Second, to train a fully supervised model, images should be finely annotated by expert pathologists identifying visually intricate patterns critical for accurate diagnosis. Such careful annotations are time-consuming. Return on expert time invested in annotating is also not guaranteed in terms of the final performance of the model. This uncertainty discourages
expert pathologists from spending time annotating large datasets to a standard of accuracy needed to train a reliable deep model in a fully supervised fashion. Third, most deep-learning models for computational pathology are not interpretable. The end user is not made aware of the uncertainty in model predictions. This lack of transparency makes it challenging to integrate these models into the existing clinical decision process. In this work we attempt to tackle all three of these challenges by: using publicly available large datasets; self-supervised pre-training followed by supervised fine-tuning with only a few annotations; and explicitly quantifying the level of uncertainty.
To this end, based on the seminal SimCLRv2 self-supervised framework [19], we introduce an uncertainty-aware learning method for digital pathology. Our work results in three significant advancements over the state-of-the-art (SOTA). First, we overcome annotation limitations by training an accurate model with minimal labeled training samples. Second, we establish our model as a versatile framework capable of adapting to various clinical tasks. Third, we quantify the uncertainty in our model predictions, empowering pathologists to make more informed decisions in the context of model confidence. With these contributions, our approach outperformed SOTA in both patch and slide-level predictions on multiple benchmark datasets while uniquely quantifying risks associated with uncertain model predictions.
## 2 Related Work
### Deep learning for digital pathology
Some seminal work that laid foundation for machine-learning-assisted histopathology include: [20] that applied logistic regression to investigate the correlation between histology of the surrounding connective tissue of tumor cells and prognosis in breast cancer; [21] that studied survival prediction for lung cancers using regularized machine learning and automatic feature extraction; and [22] that visually examined individual node responses in the last hidden layer of a convolutional neural network to uncover biologically insightful interpretable characteristics in histopathology images.
### Self-supervised representation learning
Self-supervised learning (SSL) [23, 24, 25, 26, 27, 28, 29, 30] has proven successful in computer vision to pre-train large models with unlabelled data. SimCLR [31], a contrastive learning framework, is an example of this approach to learn visual representations of images. SimCLRv2 [19] enhanced SimCLR in multiple ways: utilizing larger backbone networks during pre-training, increasing the capacity of the projection head, incorporating a memory mechanism from MoCo [32], and leveraging knowledge distillation [33] using unlabeled examples. Masked auto-encoding (MAE) [34] is another SSL approach that reconstructs signals from partial observations. It consists of an encoder that maps a partially masked image to a latent representation and a decoder that reconstructs the original image from it. By masking \(75\%\) of the input patches of the image, MAE enabled scalable training of large models using a reconstruction loss on masked patches. The efficacy of MAE has been demonstrated in various downstream tasks, including image classification.
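For illustration, the random patch masking at the core of MAE can be sketched in a few lines of PyTorch; the function below is a simplified generic version (not the authors' implementation) that keeps a random 25% of patch tokens and returns the indices needed to restore the original order for the decoder.

```python
import torch

def random_patch_mask(patch_tokens, mask_ratio=0.75):
    # patch_tokens: (batch, num_patches, dim) patch embeddings
    b, n, d = patch_tokens.shape
    n_keep = int(n * (1.0 - mask_ratio))
    noise = torch.rand(b, n, device=patch_tokens.device)  # one random score per patch
    ids_shuffle = noise.argsort(dim=1)                     # random permutation
    ids_restore = ids_shuffle.argsort(dim=1)               # indices to undo the shuffle
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(
        patch_tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    return visible, ids_restore
```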
Both SimCLR and MAE allow learning of rich representations from large amounts of unlabeled data and have also been effectively used in digital pathology. [35] adapted SimCLR for digital pathology. [36] used MAE in digital pathology and introduced a self-distillation loss on top of the masked patch reconstruction loss from original MAE. Nevertheless, more advanced SimCLRv2 has not been adopted for digital pathology. Moreover, none of these SSL models have been investigated in relation to the uncertainty of their predictions.
### Uncertainty quantification
Estimating uncertainty of deep learning models is an active area of research in machine learning. A straightforward approach is to use the model to output a distribution of predictions for each input rather than a single prediction. The variability of the predicted distribution can then be used to quantify uncertainty. Monte Carlo dropout method is one such approach, where each input is forward passed multiple times through the model (during both training and inference), while randomly dropping out different network nodes. Here, the standard deviation of the generated distribution of outputs is an estimator for uncertainty [37, 38]. Deep ensembles is another method to quantify uncertainty. It generates a distribution of predictions by forwarding each input through a set of deep learning models that are trained with different initializations and hyperparameters [39, 40]. Entropy of the model output is then used to estimate the uncertainty. Test-time-augmentation [41] is another uncertainty quantification method. It applies random transformations to each image and obtains predictions for each. Prediction variance among the perturbed versions of the same image is used as the uncertainty estimation.
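As a concrete illustration of the Monte Carlo dropout procedure, a minimal PyTorch sketch (ours, not taken from [37, 38, 41]) keeps only the dropout layers stochastic at inference and uses the spread of the predictions over repeated passes as the uncertainty score.

```python
import torch

def enable_dropout(model):
    # re-activate dropout layers while the rest of the model stays in eval mode
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()

def mc_dropout_predict(model, x, n_passes=30):
    model.eval()
    enable_dropout(model)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_passes)])
    mean_prob = probs.mean(dim=0)                 # averaged class probabilities
    uncertainty = probs.std(dim=0).mean(dim=-1)   # spread across the passes
    return mean_prob, uncertainty
```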
Despite the critical clinical importance, most deep learning frameworks for histopathology do not quantify the confidence of model predictions. A notable exception is [41] that quantified histologic ambiguity of images using an uncertainty score. The uncertainty here is estimated during inference by sending each image tile through 30 forward passes in a dropout-enabled network, resulting in a distribution of predictions. The standard deviation of predictions for a single image patch
represents patch-level uncertainty. These uncertainty quantification methods rely on a parameter with inherent ambiguity, rather than providing a precise mathematical quantification of the uncertainty level.
In our work we augmented SSL frameworks to explicitly include uncertainty estimation. We adopted a Bayesian approach first proposed by [42] using the theory of evidence. [42] compared this Bayesian approach with others to show its utility for out-domain training. As we focus on task-agnostic out-domain learning from public datasets, we leverage this method in our work.
## 3 Results
### _SimCLRv2 and uncertainty-aware (UA-) SimCLRv2_
We experimented with two state-of-the-art self supervised pre-training approaches on two publicly available patch-level datasets. We tested masked auto-encoding (MAE) [34] using a transformer backbone and contrastive learning using SimCLR frameworks (v1 [31] and v2 [43]). Out of these SimCLR-v2 performed best and hence was selected for this work. The training procedure of SimCLRv2 involves three stages (see Fig. 1.A1-3). First, a ResNet backbone undergoes self-supervised pre-training, where it learns representations through a stack of projection layers (see Fig. 1.A1). This process utilizes a contrastive loss function to compare the embeddings generated by the projected head. Second, a classification head is appended to the pre-trained encoder, and the model is fine-tuned fully-supervised using a small fraction of annotated patches (Ref. Fig. 1.A2). Last, the learned classifier, i.e. the teacher model, transfers its knowledge to an architecturally identical student model through a process known as knowledge distillation (see Fig. 1.A3). Here the teacher's pseudo labels (provisional labels assigned to unlabeled data based on model predictions) on the entire large unlabelled dataset are used as ground-truths for the student model. Note that the models use annotations only in the fine-tuning stage (see Fig. 1.A2).
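For concreteness, the contrastive objective used in the pre-training stage (Fig. 1.A1) is of the standard NT-Xent form; the sketch below is a generic PyTorch version for two augmented views of a batch of patches and is not taken from the SimCLRv2 codebase.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: projection-head embeddings of two augmented views, shape (N, dim)
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                     # (2N, dim)
    sim = z @ z.t() / temperature                      # scaled cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))         # exclude self-similarity
    # the positive of sample i is the other augmented view of the same patch
    targets = (torch.arange(2 * n, device=z.device) + n) % (2 * n)
    return F.cross_entropy(sim, targets)
```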
While the SimCLRv2 model with a ResNet-50 backbone demonstrates superior performance in various clinical tasks (see our results in the next section), it lacks interpretability. Medical artificial intelligence (AI) applications, however, require transparent and explainable results due to the potential risks associated with predictions. To address this issue, we propose a modified version of SimCLRv2 called Uncertainty-aware SimCLRv2 (UA-SimCLRv2) shown in Fig. 1.B. UA-SimCLRv2 follows a similar pre-training process as SimCLRv2; however, it undergoes fine-tuning and distillation using MSE loss in the Bayesian view. Notably, UA-SimCLRv2 not only provides class labels but also provides an associated uncertainty score for each patch prediction. We discuss UA-SimCLRv2 in detail in the methods section 5.
In the next section, we present SimCLRv2 and UA-SimCLRv2 results for patch-level classification tasks in the digital pathology domain.
### _Patch classification results using SimCLRv2 and UA-SimCLRv2_
Histopathology studies analyze data at multiple levels of hierarchy including patch-level, slide-level, and patient-level. We first employ SimCLRv2 and UA-SimCLRv2 as patch-level classifiers (later we adapt patch-level trained models to slide-level). In our study, we utilized two datasets: PatchCamelyon (PCam) and NCT-CRC-HE-100k (NCT100k). The experimental setup for evaluating model performances on these datasets included two variables: (1) the percentage of annotations used for fine-tuning from the training set, and (2) whether pre-training was performed on in-distribution data or out-distribution data. For example, in PCam experiments, out-distribution-pre-training means that pre-training was done on the NCT-CRC-HE-100k dataset (and vice versa in NCT-CRC-HE-100k experiments).
**Binary classification results on PCAM.** The PCam dataset [44] comprised a collection of 327,680 patches extracted from histopathology scans of lymph node sections in the CAMELYON16 dataset [45]. Each patch had a corresponding binary label indicating the presence of metastatic tissue. According to [44], a positive patch contained at least one pixel of tumor tissue in its center 32x32 pixel region. Patches were 96x96 pixels in size, but for consistency across experiments, we resized all patches to 224x224 pixels (see the methods section for more details). We divided the dataset into training (75%), validation (12.5%), and test (12.5%) splits, ensuring a balanced distribution of positive and negative examples with no overlap between the splits.
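A minimal preprocessing sketch consistent with this setup (illustrative only; the variable names are ours) is:

```python
from torchvision import transforms

# 96x96 PCam patches are resized to 224x224 so all experiments share one input size
pcam_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# 75% / 12.5% / 12.5% train / validation / test split of the 327,680 patches
n_total = 327_680
n_train, n_val = int(0.75 * n_total), int(0.125 * n_total)
n_test = n_total - n_train - n_val                  # 245760 / 40960 / 40960
```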
Fig. 2 and Table 1 show the accuracy, F1 score, and AUC score (Area Under the ROC Curve) for the PCam binary classification. To establish a baseline, we first fine-tuned our models with all training labels (i.e. the 100% setting). Here, our models outperformed the state-of-the-art (SOTA) approach, i.e., MAE [34]. Notably, out-distribution-pre-trained UA-SimCLRv2 performed best with 2.89% increase in accuracy, 2.74% in F1 score and 0.77% in AUC score compared to the SOTA.
Next, we fine-tuned our models on 10% training labels. 10%-fine-tuned models performed slightly worse than the 100% baseline. Nevertheless, the 10%-fine-tuned SimCLRv2 and UA-SimCLRv2 still performed on par with or better than the SOTA (see Fig. 2 and Table 1). Last, we fine-tuned our models on 1% training labels. Interestingly, the SimCLRv2 and UA-SimCLRv2 models still performed comparably to the SOTA (see the 1% setting in Fig. 2 and Table 1). However, at the 1% setting UA-SimCLRv2 consistently underperformed compared to SimCLRv2, perhaps due to the limited evidence available for uncertainty awareness (see method section 5 for more details).
**Multi class classification on NCT100k.** The NCT100k dataset [51] comprised 100,000 non-overlapping image patches extracted from hematoxylin and eosin (H&E) stained histological images of human colorectal cancer and normal tissues. Each patch was 224x224 pixels in size, and annotated with one of nine classes based on the type of tissue: Adipose (ADI), background (BACK), debris (DEB), lymphocytes (LYM), mucus (MUC), smooth muscle (MUS), normal colon mucosa (NORM), cancer-associated stroma (STR), and colorectal adenocarcinoma epithelium (TUM). According to NCT100k dataset's guidelines, we trained our models with NCT100k and tested with CRC-VAL-HE-7K (CRC7k) dataset [51], which consisted of samples from the same data distribution.
Fig. 3 and Table 3 show multi-class classification results for the NCT100k dataset. Similar to the binary case, we experimented at 100%, 10%, and 1% fine-tuning settings. First, at the 100% setting our SimCLRv2 and UA-SimCLRv2 performed on par with the SOTA. Interestingly, out-distribution-pre-trained SimCLRv2 was the best-performing model and surpassed the SOTA by a small margin. At the 10% setting, our models still performed comparable to the 100% baseline and SOTA. But at the 1% setting, we observed a clear degradation of performance by a few percentage points.
We further investigated the model behaviors at 100% and 1% settings using t-distributed stochastic neighbor embedding (T-SNE) on the learned feature representations. Fig. 3.C&E show the T-SNE maps for SimCLRv2 at 100% & 1% settings respectively. Fig. 3.D1&F1 show the same for UA-SimCLRv2. Compared to the 100% setting, the 1% setting of UA-SimCLRv2 showed more overlapping clusters. For instance, the NORM class heavily overlapped with the TUM class (see Fig. 3.F1). Fig. 3.D2&F2 show the same T-SNE plots from UA-SimCLRv2 in D1&F1 but color-coded with the associated uncertainty of the predictions. Interestingly, the overlapping regions showed high uncertainty. In Fig. 3.D3&F3 we further color-coded only the incorrect predictions. We observed that in the 100% setting most incorrect predictions were associated with higher uncertainty (Fig. 3.D3). But in the 1% setting some incorrect predictions were associated with lower uncertainty (Fig. 3.F3). We also plotted the histograms of uncertainty values of correct and incorrect predictions (see Fig. 3.G&H). For both 100% and 1% settings, correct predictions showed a left-skewed distribution; incorrect predictions showed a right-skewed distribution. Thus uncertainty in predictions allows us to identify the data classes that lead to inaccurate predictions. This insight enabled us to develop a sample
Figure 1: The SimCLRv2 framework. (A1) The pre-training step. Contrastive Learning is used to pre-train a deep neural encoder using a large set of unlabelled images. (A2) The supervised fine-tuning step. A classification head is added to the pre-trained encoder. The model is then fine-tuned using a small fraction of labeled images. (A3) The knowledge distillation step. The model from ‘A2’ is used as a teacher network to generate pseudo labels for all unlabeled training images. Then the pseudo labels are used to train a student network (with the same architecture). (B) The proposed Uncertainty-aware(UA) SimCLRv2 model with an additional output for the uncertainty score.
Figure 2: Binary classification results for PCam dataset for an ensemble of models. (A) Representative images from the tumor class. (B) Representative images from the non-tumor class. (C) Classification accuracy for competing models (shown in black) vs. the proposed models: SimCLRv2 and UA-SimCLRv2 (i.e. Uncertainty Aware SimCLRv2). The proposed models were first trained on 100% of annotations to generate the baselines. Models were pretrained using in-distribution data (i.e. training data from NCT100k) as well as out-of-distribution data (i.e. training data from PCam). Then the same experiments were repeated with 10% of annotations, and 1% of annotations. (D) F1-scores for the same experiments in ‘C’. (E) Area under the curve (AUC) for the same classification experiments in ‘C’.
| Labels | Training | Model | Regular Model Acc (%) | Regular Model F1 (%) | Regular Model AUC (%) | Uncertainty-aware Model Acc (%) | Uncertainty-aware Model F1 (%) | Uncertainty-aware Model AUC (%) |
|---|---|---|---|---|---|---|---|---|
| 100% | Indomain | TransPath* [46] | 81.20 | 81.00 | 91.70 | - | - | - |
| 100% | Indomain | Mocov3* [47] | 86.30 | 86.20 | 95.00 | - | - | - |
| 100% | Indomain | DINO* [48] | 85.80 | 85.60 | 95.70 | - | - | - |
| 100% | Indomain | SD-MAE* [36] | 88.20 | 87.80 | 96.20 | - | - | - |
| 100% | Indomain | MAE [34] | 88.41 | 86.23 | 95.81 | - | - | - |
| 100% | Indomain | SimCLRv1 [31] | 83.21 | 84.40 | 88.67 | - | - | - |
| 100% | Indomain | **SimCLRv2† [43]** | 90.57 | 90.20 | 96.47 | 90.29 | 89.95 | 96.49 |
| 100% | Outdomain | **SimCLRv2† [43]** | 89.30 | 88.97 | 96.58 | **91.30** | **91.09** | **96.83** |
| 10% | Indomain | **SimCLRv2† [43]** | 89.73 | 89.07 | 96.19 | 88.27 | 88.94 | 94.69 |
| 10% | Outdomain | **SimCLRv2† [43]** | 89.60 | 88.84 | 96.73 | **90.41** | **89.97** | **96.87** |
| 1% | Indomain | MAE [34] | 86.10 | 94.45 | 95.81 | 85.81 | 86.10 | 94.45 |
| 1% | Indomain | SimCLRv1 [31] | 88.67 | 81.52 | 83.45 | 87.77 | 88.67 | 81.52 |
| 1% | Indomain | **SimCLRv2† [43]** | **90.27** | **89.99** | **95.34** | 88.96 | 88.54 | 94.24 |
| 1% | Outdomain | **SimCLRv2† [43]** | 89.21 | 88.88 | 95.57 | 87.43 | 86.96 | 92.33 |

Table 1: Binary classification results for PCam dataset for an ensemble of models under 100%, 10%, and 1% labels settings. For selected cases, MAE, SimCLRv1, and SimCLRv2 models were modified with uncertainty-aware loss; the corresponding results are shown in the “Uncertainty-aware Model” columns. Results marked by \(*\) are quoted from [36]. The rest are from our experiments. Results marked by \(\dagger\) are from our selected SimCLRv2 approach. The original references for the model architectures are shown next to each model.
| Labels | Training | Model | Regular Model Acc (%) | Regular Model F1 (%) | Uncertainty-aware Model Acc (%) | Uncertainty-aware Model F1 (%) |
|---|---|---|---|---|---|---|
| 100% | Indomain | TransPath* [46] | 92.80 | 89.90 | - | - |
| 100% | Indomain | Mocov3* [47] | 94.40 | 92.60 | - | - |
| 100% | Indomain | DINO* [48] | 94.40 | 91.60 | - | - |
| 100% | Indomain | BYOL* [50] | 93.93 | - | - | - |
| 100% | Indomain | HistoSSL-Res** [49] | 96.55 | - | - | - |
| 100% | Indomain | HistoSSL-ViT* [49] | 96.18 | - | - | - |
| 100% | Indomain | SD-MAE* [36] | 95.30 | 93.50 | - | - |
| 100% | Indomain | MAE [34] | 94.70 | 94.20 | - | - |
| 100% | Indomain | SimCLRv1 [31] | 92.10 | 92.20 | - | - |
| 100% | Indomain | **SimCLRv2† [43]** | 96.28 | 96.25 | 96.44 | 96.39 |
| 100% | Outdomain | **SimCLRv2† [43]** | **96.85** | **96.82** | 95.88 | 95.82 |
| 10% | Indomain | **SimCLRv2† [43]** | **96.28** | **96.25** | 95.82 | 95.73 |
| 10% | Outdomain | **SimCLRv2† [43]** | 94.62 | 94.56 | 94.98 | 94.87 |
| 1% | Indomain | MAE [34] | 93.40 | 92.68 | - | - |
| 1% | Indomain | **SimCLRv2† [43]** | 94.27 | 94.12 | 91.70 | 91.65 |
| 1% | Outdomain | **SimCLRv2† [43]** | **94.34** | **94.23** | 92.34 | 92.85 |

Table 2: Multi-class classification results for NCT100k dataset for an ensemble of models under 100%, 10% and 1% labels settings. For selected cases, MAE, SimCLRv1, and SimCLRv2 models were modified with uncertainty-aware loss; the corresponding results are shown in the “Uncertainty-aware Model” columns. Results marked by \(*\) are quoted from [36]; Results marked by \(**\) are quoted from [49]. The rest are from our experiments. Results marked by \(\dagger\) are from our selected SimCLRv2 approach. The original references for the model architectures are shown next to each model.
Figure 3: Multi-class classification results for NCT100k dataset for an ensemble of models. (A) Representative images from nine classes in the dataset. (B1) Classification accuracy for competing models (shown in black) vs. the proposed models: SimCLRv2 and UA-SimCLRv2 (i.e. Uncertainty Aware SimCLRv2). The proposed models were first trained on 100% of annotations to generate the baselines. Models were pre-trained using in-distribution data (i.e. training data from NCT100k) as well as out-domain data (i.e. training data from PCam). Then the same experiments were repeated with 10% of annotations, and 1% of annotations. (B2) F1-scores for the same experiments in ‘A’. (C) T-SNE plot for SimCLRv2 trained in distribution with 100% annotations. (D1) T-SNE plot for UA-SimCLRv2 trained in distribution with 100% annotations. Note that there are four clusters that were hard for the model to separate. (D2) T-SNE plot color coded with the uncertainty values. Note that mixed cluster regions show high uncertainty. (D3) T-SNE plot where only the Incorrect predictions are color coded. Note that most incorrect predictions show high uncertainty. (E, F1, F2, F3) Corresponding versions of ‘C, D1, D2, D3’ with 1% of annotations. Note that in ‘F3’ there are more incorrect predictions with low uncertainty values than in ‘D3’. (G) Histogram of uncertainty values with correct vs. incorrect predictions for 100% annotations (H) Same as in ‘G’ with 1% annotations. Note that incorrect prediction histograms correspond to ‘D3’ and ‘F3’.
selection procedure we call _uncertainty-aware training_ (UA-training) to fine-tune UA-SimCLRv2. Uncertainty-aware training deviates from the random selection of 1% annotations employed while keeping the fine-tuning process unchanged.
### Uncertainty-aware Fine-tuning
The proposed uncertainty-aware training approach is presented in Fig. 4.A. We started fine-tuning using 1% randomly selected labeled patches (from the training set of the target task). The resulting model was then run on the remaining training set, and uncertainty scores were computed. We then included annotations of the top 1% patches with the highest uncertainty scores. The annotated patches, totaling 2% of the training set, were then used for the subsequent fine-tuning step. This iterative process emulates a scenario where only highly uncertain patches are selectively labeled by an expert pathologist. The process was repeated until 10% of the training set was annotated for fine-tuning. At each step, the fine-tuned UA SimCLRv2 model was saved and evaluated on the CRC7k dataset (i.e. the test set for multi-class classification) to generate test performance metrics.
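The labeling loop can be summarized by the following sketch, where `finetune`, `predict_uncertainty`, and `oracle_label` are hypothetical callables standing in for the fine-tuning routine, the uncertainty inference of UA-SimCLRv2, and the expert annotator, respectively.

```python
import random

def ua_finetuning_loop(pool_ids, oracle_label, finetune, predict_uncertainty,
                       rounds=9, frac_per_round=0.01, seed=0):
    rng = random.Random(seed)
    pool = list(pool_ids)
    k = max(1, int(frac_per_round * len(pool)))
    labeled = set(rng.sample(pool, k))                 # initial 1% chosen at random
    model = finetune({i: oracle_label(i) for i in labeled}, model=None)
    for _ in range(rounds):                            # grow by 1% per round, up to 10%
        unlabeled = [i for i in pool if i not in labeled]
        u = predict_uncertainty(model, unlabeled)      # {patch id: uncertainty score}
        top = sorted(unlabeled, key=lambda i: u[i], reverse=True)[:k]
        labeled.update(top)                            # the expert labels these patches
        model = finetune({i: oracle_label(i) for i in labeled}, model=model)
    return model, labeled
```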
Fig. 4 and Table 3 demonstrate the accuracy and F1 score for uncertainty-aware training. In the in-domain training settings, the accuracy and F1-scores immediately reached just below the 100%-setting baseline with only 2% labels, but the performance saturated afterwards. Nevertheless, in both in- and out-domain pre-training settings, UA-training outperformed fine-tuning with random selections (compare 2-10% bars with their corresponding 10% rnd bars in Fig. 4 [A-B]). The best-performing case was achieved in the out-domain pre-training setting (i.e. UA SimCLRv2 pre-trained on PCam) at 9% of the labels. Interestingly, this model outperformed both the SOTA, i.e. HistoSSL-Res [49], and the 100% baseline model.
These results establish the proposed UA-SimCLRv2 as the SOTA patch classifier on NCT100k benchmarks. We next present how the SimCLRv2 and UA-SimCLRv2 models trained on patch-level data can be used as encoders in slide-level classification.
### Whole Slide Image (WSI) classification using Multiple Instance Learning (MIL)
In addition to establishing SimCLRv2 as the SOTA patch classifier, we demonstrate its versatility in adapting to whole slide image (WSI) classification at the slide-level. WSI classification is typically performed using multiple instance learning (MIL) under weakly-supervised settings. Here only slide-level annotations are available. We adapted SimCLRv2 and UA-SimCLRv2 for WSI classification on the CAMELYON16 dataset [45]. CAMELYON16 consisted of 400 H&E WSIs of lymph nodes, with metastatic regions labeled. The dataset consists of two sets: a training set consisting of 270 WSIs and a testing set consisting of 130 WSIs.
We adapted DTFD-MIL [52], the SOTA WSI classifier on CAMELYON16, as our WSI classifier (Fig. 1[D]). DTFD-MIL leverages a ResNet-50 backbone to extract features from patches obtained from WSIs. Features extracted from all the patches (from a particular slide) are then combined using an attention mechanism to predict the slide-level label. In our adaptation, we replaced the ResNet-50 feature extractor (which is pre-trained on ImageNet [58] in a fully supervised setting) with a ResNet-50 that is pre-trained, fine-tuned, and distilled within our proposed SimCLRv2 and UA-SimCLRv2 frameworks using either the CAMELYON16, NCT100k, or PCam datasets.
Fig. 5 shows the variation of test accuracy, F1 score and AUC scores comparing DTFD-MIL with and without our encoder. The exact values are reported in Table 4. In all 100%, 10%, and 1% settings, some version of the SimCLRv2 and UA-SimCLRv2 models outperformed the SOTA by a few percentage points in all three performance metrics. This result shows the value of introducing a few patch-level annotations to train an encoder for slide-level classification. We further investigated the effect of patch-level contrastive pre-training alone on the slide-level encoder. In this setting, no patch-level annotations were used to pre-train the encoder. To this end, we pre-trained the SimCLRv2 encoder using patches obtained by splitting the WSIs in CAMELYON16. This model too outperformed the SOTA by a few percentage points, in all three performance metrics (see the Camelyon16 case in Fig. 5 and Table 4).
These results highlight the capability of our proposed approach to achieve accurate and interpretable machine learning with minimal supervision.
| Labels | In-domain Acc (%) | In-domain F1 (%) | Out-domain Acc (%) | Out-domain F1 (%) |
|---|---|---|---|---|
| 1% | 91.70 | 91.65 | 92.34 | 92.25 |
| **2%** | 96.26 | 96.15 | 94.38 | 94.34 |
| **3%** | 96.29 | 96.23 | 96.41 | 96.31 |
| **4%** | 95.35 | 95.35 | 93.33 | 93.21 |
| **5%** | 96.03 | 96.02 | 95.69 | 95.68 |
| **6%** | 95.93 | 95.91 | 96.28 | 96.25 |
| **7%** | 95.76 | 95.76 | 96.29 | 96.25 |
| **8%** | 96.50 | 96.45 | 96.25 | 96.12 |
| **9%** | 96.32 | 96.28 | 97.01 | 96.90 |
| **10%** | 96.51 | 96.49 | 96.40 | 96.33 |

Table 3: Results from uncertainty-informed training of UA-SimCLRv2 on the NCT100K dataset.
Figure 4: (A) Uncertainty-aware fine-tuning of the UA-SimCLRv2 model for NCT100K. UA-SimCLRv2 was pretrained using in-distribution data (i.e. training data from NCT100k) as well as out-domain data (i.e. training data from PCam). Then the pretrained models were finetuned using a randomly selected 1% of annotations. Next, the uncertainty values were calculated using the 1%-finetuned model for all remaining training data and another 1% of labels with high uncertainty score were annotated to have 2% of annotations for further fine-tuning. This procedure was repeated until the 10% of annotations were used for fine-tuning. (B) Classification accuracy for Uncertainty-aware fine-tuning vs. randomly selected fine-tuning. (C) F1 score for Uncertainty-aware fine-tuning vs. randomly selected fine-tuning. (D) T-SNE map of the learned features from the final distilled model for the best performing case (i.e. out-domain pretrained model with Uncertainty-aware fine-tuning up to 9% annotations – also marked using the green arrow in ‘B’). (E) The same T-SNE map in ‘D’ color coded using the uncertainty score. (F) The same T-SNE map in ‘E’ with only wrong classifications color coded.
[MISSING_PAGE_POST]
## Appendix A Results
Figure 6: Interpreting the pre-training process through T-SNE maps. The first two columns show T-SNE maps for the NCT100K test dataset’s features after pre-training SimCLRv2 using different datasets. Note that the clusters are partially formed for in-domain as well as out-domain pre-training. The remaining two columns show T-SNE maps for the PCam test dataset’s features from the same pre-trained SimCLRv2 models. Both non-color-coded and color-coded T-SNE map pairs are provided (based on ground truth labels mentioned in Fig. 4D). (A1) NCT 100K dataset features extracted from SimCLRv2 pre-trained on the NCT100K dataset itself. (A2) NCT 100K dataset features extracted from SimCLRv2 pre-trained on the Pcam dataset. (A3) NCT 100K dataset features extracted from SimCLRv2 pre-trained on the Camelyon-16 dataset. (B1) PCam dataset features extracted from SimCLRv2 pre-trained on the NCT100K dataset. (B2) PCam dataset features extracted from SimCLRv2 pre-trained on the Pcam dataset itself. (B3) PCam dataset features extracted from SimCLRv2 pre-trained on the Camelyon-16 dataset.
## 4 Discussion and Conclusion
In summary, based on the seminal SimCLRv2 self-supervised framework [19], we introduce an uncertainty-aware contrastive learning method for digital pathology. Through a series of experiments, we showcase the performance of our framework across various histopathology tasks, even when confronted with limited annotated data. Furthermore, we address a critical limitation in histopathology models by incorporating uncertainty awareness, which has significantly enhanced the interpretability of our novel model. This newfound interpretability may empower clinicians to make well-informed decisions, facilitating the seamless integration of AI into the manual clinical decision-making process.
Our findings indicate that vision models with CNN backbones outperform transformer-based models in histopathology visual learning. Specifically, SimCLRv2, equipped with a simple ResNet-50 backbone, surpasses current SOTA models. It has the potential to achieve even better performance with a deeper ResNet backbone. The superiority of SimCLRv2 may be due to contrastive learning. As depicted in Fig. 6, the feature clusters formed by the SSL pre-trained model reveal its ability to differentiate distinct classes and their corresponding features, even before any annotations are introduced. The learning procedure of SimCLRv2, which effectively utilizes a large unlabelled dataset in two steps of the pipeline, confers the advantage of learning a highly effective encoder for patch-level data.
When a sufficient number of annotations are available for UA training, UA SimCLRv2 tends to outperform SimCLRv2. The transition from the feature space clusters of the SimCLRv2 model to those of UA SimCLRv2 results in more well-defined and tightly shaped clusters, indicating an improved ability to classify accurately (see Fig. 3). Moreover, the alignment of high uncertainty with incorrect predictions demonstrates the model's capability to identify challenging cases and exhibit lower confidence in predicting them.
Our results also suggest that when enough labels are available for fine-tuning, pre-training on the out-distribution data results in higher performance. The superiority of this out-domain trained model can be attributed to the advantage gained from a large number of classes available for contrastive pre-training. With an increased diversity of classes, the encoder can effectively compare and cluster features, leading to a better understanding of distinctive tissue characteristics and the establishment of clear boundaries within the feature space. Last, the seamless integration of our models
| Method | ImgNet weights | Pretrained | Finetuned | Acc (%) | F1 (%) | AUC (%) |
|---|---|---|---|---|---|---|
| Mean Pooling* | ✓ | - | - | 62.6 | 35.5 | 52.8 |
| Max Pooling* | ✓ | - | - | 82.6 | 75.4 | 85.4 |
| RNN-MIL* [53] | ✓ | - | - | 84.4 | 79.8 | 87.5 |
| Classic AB-MIL* [54] | ✓ | - | - | 84.5 | 78.0 | 85.4 |
| DS-MIL* [55] | ✓ | - | - | 85.6 | 81.5 | 89.9 |
| CLAM-SB* [56] | ✓ | - | - | 83.7 | 77.5 | 87.1 |
| CLAM-MB* [56] | ✓ | - | - | 82.3 | 77.4 | 87.8 |
| Trans-MIL* [57] | ✓ | - | - | 85.8 | 79.7 | 90.6 |
| DTFD-MIL* [52] | ✓ | - | - | 89.9 | 86.6 | **93.3** |
| DTFD-MIL + Our encoder | ✗ | PCAM | 100% | 90.7 | 87.5 | **95.5** |
| DTFD-MIL + Our encoder | ✗ | NCT-CRC | 100% | 92.2 | 89.6 | 94.0 |
| DTFD-MIL + Our encoder (UA) | ✗ | PCAM | 100% | 87.6 | 82.2 | 93.6 |
| DTFD-MIL + Our encoder (UA) | ✗ | NCT-CRC | 100% | 92.2 | 89.1 | 94.4 |
| DTFD-MIL + Our encoder | ✗ | PCAM | 10% | 86.8 | 81.9 | 89.7 |
| DTFD-MIL + Our encoder | ✗ | NCT-CRC | 10% | 89.1 | 86.5 | 93.6 |
| DTFD-MIL + Our encoder (UA) | ✗ | PCAM | 10% | 89.6 | 89.6 | **95.9** |
| DTFD-MIL + Our encoder (UA) | ✗ | NCT-CRC | 10% | 92.2 | 89.6 | 94.3 |
| DTFD-MIL + Our encoder | ✗ | PCAM | 1% | 85.2 | 80.0 | 88.4 |
| DTFD-MIL + Our encoder | ✗ | NCT-CRC | 1% | 88.4 | 84.0 | 92.1 |
| DTFD-MIL + Our encoder (UA) | ✗ | PCAM | 1% | 92.2 | 88.8 | **95.3** |
| DTFD-MIL + Our encoder (UA) | ✗ | NCT-CRC | 1% | 95.9 | 94.9 | 92.7 |
| DTFD-MIL + Our encoder | ✗ | Camelyon-16 | - | 91.5 | 88.8 | **95.8** |

Table 4: Whole slide classification results for CAMELYON16 dataset. The column “ImgNet weights” shows the models that used ResNet50 encoders trained on ImageNet in fully supervised settings. The column “Finetuned” shows the percentage of data from the same pre-trained dataset used to finetune our encoders. Results marked by \(*\) are quoted from [52].
into the WSI classification demonstrates the versatility and adaptability of our approach.
However, in scenarios with fewer annotations, such as our 1% setting, UA-SimCLRv2 exhibits significant underperformance, often providing incorrect predictions with high confidence, particularly when multiple data classes are present. This poses a substantial risk in the context of medical AI, as the model's confidence does not necessarily reflect the correctness of its decisions. Thus we recommend using approaches such as uncertainty-aware fine-tuning (proposed in the section 3.3) to mitigate such risks when fine-tuning with few annotations. We also highly recommend rigorous testing with independent test datasets that aren't used during any stage of the training process including the hyper-parameter estimation.
In conclusion, our extensive patch- and slide-level experiments with contrastive learning and uncertainty quantification set new benchmarks in digital pathology classification. Our results consistently suggest the efficient use of large datasets -despite having no annotations- helps build better models for digital pathology. We believe that our work sets the foundation upon which multiple clinical tasks can use large digital pathology datasets efficiently, accurately, and interpretably.
## Acknowledgements
This work was supported by the Center for Advanced Imaging and the John Harvard Distinguished Science Fellowship Program within the FAS Division of Science of Harvard University.
## 5 Methodology
### UA SimCLRv2
To address the critical need for interpretability, we introduce UA-SimCLRv2. The primary objective of UA-SimCLRv2 is to enhance the interpretability of the model's predictions in the context of histopathology analysis. This is achieved by incorporating the theory of uncertainty estimation, which serves as the basis for uncertainty awareness in UA SimCLRv2.
In [42], the uncertainty estimation is approached from the Dempster-Shafer theory of evidence (DST) perspective [59], assigning belief masses to subsets of a frame of discernment, which denotes the set of exclusive possible states. Subjective logic (SL) formalizes DST's notion of belief assignments over a frame of discernment as a Dirichlet distribution. The term evidence refers to a measure of the amount of support collected from data in favor of a sample to be classified into a certain class. Through model training, evidence \(e_{k}\) (\(k=1,2,\ldots,K\)) is collected and belief masses \(b_{k}\) (\(k=1,2,\ldots,K\)) are assigned to each class based on the evidence collected, and the remainder is marked as uncertainty \(u\). For \(K\) mutually exclusive classes,
\[u+\sum_{k=1}^{K}b_{k}=1 \tag{1}\]
Here \(u\geq 0\) and \(b_{k}\geq 0\) and they are calculated by,
\[b_{k}=\frac{e_{k}}{S}\quad\text{and}\quad u=\frac{K}{S},\quad\text{where}\quad S =\sum_{i=1}^{K}(e_{i}+1) \tag{2}\]
Observe that when there is no evidence, the belief for each class is zero and the uncertainty is one. A belief mass assignment, i.e., subjective opinion, corresponds to a Dirichlet distribution with parameters \(\alpha_{k}=e_{k}+1\). A Dirichlet distribution parameterized over evidence represents the density of each such probability assignment; hence it models second-order probabilities and uncertainty [60]. It is characterized by \(K\) parameters \(\alpha=[\alpha_{1},\alpha_{2},....,\alpha_{K}]\) and is given as follows.
\[D(p\|\alpha)=\begin{cases}\frac{1}{B(\alpha)}\prod_{i=1}^{K}p_{i}^{\alpha_{i }-1},&\text{if }p\in S_{K}\\ 0,&\text{otherwise}\end{cases}\]
where \(S_{K}=\{p\|\sum_{i=1}^{K}p_{i}=1\quad\text{and}\quad 0\leq p_{1},\ldots,p_{K}\leq 1\}\) and \(B(\alpha)\) is the \(K\)-dimensional multinomial beta function [61].
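In code, the mapping from network outputs to belief masses and uncertainty in Eqs. (1)-(2) amounts to a few tensor operations; the PyTorch sketch below is ours and simply mirrors the formulas above.

```python
import torch

def evidence_to_opinion(outputs):
    # outputs: raw network outputs of shape (batch, K)
    evidence = torch.relu(outputs)             # non-negative evidence e_k
    alpha = evidence + 1.0                     # Dirichlet parameters alpha_k
    S = alpha.sum(dim=-1, keepdim=True)        # Dirichlet strength
    belief = evidence / S                      # b_k = e_k / S
    uncertainty = outputs.shape[-1] / S        # u = K / S
    prob = alpha / S                           # mean of the Dirichlet
    return belief, uncertainty, prob
```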
Model training follows the classical neural network architecture with the softmax layer replaced by a ReLU activation layer to ensure non-negative outputs, which are taken as the evidence vector for the predicted Dirichlet distribution. For network parameters \(\theta\), let \(f(x_{i}\|\theta)\) be the evidence vector predicted by the network for the classification. The corresponding Dirichlet distribution's parameters \(\alpha_{i}=f(x_{i}\|\theta)+1\) are calculated and its mean \((\frac{\alpha_{i}}{S})\) is considered as the class probabilities. Let \(y_{i}\) be the one-hot vector encoding the ground-truth class label of a sample \(x_{i}\). Treating \(D(p_{i}\|\alpha_{i})\) as a prior on the sum of squares loss \(\|y_{i}-p_{i}\|_{2}^{2}\), we obtain the loss function
\[L_{i}(\theta)=\int\|y_{i}-p_{i}\|_{2}^{2}\frac{1}{B(\alpha_{i})}\prod_{j=1}^{K}p_{ij}^{\alpha_{ij}-1}dp_{i} \tag{3}\]
By decomposing the first and second moments, the above loss function achieves minimization of both the prediction error and the variance of the Dirichlet experiment for each sample. Further, some evidence collected might strengthen the belief for multiple classes. To avoid situations where evidence with more ambiguity assigns more belief to an incorrect class, a Kullback-Leibler (KL) divergence term is appended to the loss function. The following is the total loss used for UA fine-tuning.
\[L(\theta)= \sum_{i=1}^{N}L_{i}(\theta)\] \[+\lambda_{t}\sum_{i=1}^{N}KL[D(p_{i}\|\tilde{\alpha_{i}})\|D(p_{i}\| <1,...,1>)] \tag{4}\]
where \(\lambda_{t}=\min(1.0,t/10)\in[0,1]\) is the annealing coefficient, \(t\) is the index of the current training epoch, \(D(p_{i}\|<1,...,1>)\) is the uniform Dirichlet distribution, and \(\tilde{\alpha_{i}}=y_{i}+(1-y_{i})*\alpha_{i}\) is the Dirichlet parameters after removal of the non-misleading evidence from predicted parameters \(\alpha_{i}\) for sample \(i\). The KL divergence term in the loss can be calculated as
\[KL[D(p_{i}\|\tilde{\alpha_{i}})\|D(p_{i}\|1)]\\ =\log\left(\frac{\Gamma(\sum_{k=1}^{K}\tilde{\alpha_{ik}})}{ \Gamma(K)\prod_{k=1}^{K}\Gamma(\tilde{\alpha_{ik}})}\right)\\ +\sum_{k=1}^{K}(\tilde{\alpha_{ik}}-1)\bigg{[}\psi(\tilde{\alpha_{ ik}})-\psi\bigg{(}\sum_{j=1}^{K}\tilde{\alpha_{ij}}\bigg{)}\bigg{]}\]
where 1 represents the parameter vector of \(K\) ones, \(\Gamma\) is the gamma function, and \(\psi\) is the digamma function. By gradually increasing the effect of the KL divergence in the loss through the annealing coefficient, the neural network is allowed to explore the parameter space and avoid premature convergence to the uniform distribution for the misclassified samples, which may be correctly classified in future epochs.
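A direct PyTorch transcription of Eqs. (3)-(4), using the closed-form expectation of the squared error under the Dirichlet together with the KL term written out above, might look as follows (a sketch, not the authors' released code).

```python
import math
import torch

def kl_dirichlet_uniform(alpha):
    # KL( Dir(p | alpha) || Dir(p | 1,...,1) ), as written out above
    S = alpha.sum(dim=-1, keepdim=True)
    K = alpha.shape[-1]
    t1 = (torch.lgamma(S.squeeze(-1))
          - math.lgamma(K)
          - torch.lgamma(alpha).sum(dim=-1))
    t2 = ((alpha - 1.0)
          * (torch.digamma(alpha) - torch.digamma(S))).sum(dim=-1)
    return t1 + t2

def ua_loss(alpha, y_onehot, epoch):
    # alpha: predicted Dirichlet parameters, y_onehot: one-hot ground-truth labels
    S = alpha.sum(dim=-1, keepdim=True)
    p = alpha / S
    err = ((y_onehot - p) ** 2).sum(dim=-1)            # squared prediction error
    var = (p * (1.0 - p) / (S + 1.0)).sum(dim=-1)      # Dirichlet variance term
    alpha_tilde = y_onehot + (1.0 - y_onehot) * alpha  # remove non-misleading evidence
    lam = min(1.0, epoch / 10.0)                       # annealing coefficient
    return (err + var + lam * kl_dirichlet_uniform(alpha_tilde)).mean()
```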
### DTFD-MIL framework for MIL
DTFD-MIL [52] uses a pseudo bag concept to virtually increase the number of bags and uses a double-tier approach for WSI classification. One WSI bag is randomly divided into multiple pseudo bags, each with a relatively small number of patches. A pseudo bag is given the label of the parent bag. DTFD-MIL is applied on top of the pseudo bags to predict a bag label. It uses the commonly used attention-based MIL approach in each tier. First, patch features are extracted from each pseudo bag using a ResNet backbone. These features are forwarded to the attention-based tier-1 model which computes attention scores and instance probabilities for each patch. The tier-1 model aggregates patch features into an embedding that represents the pseudo bag. A feature distillation is performed on top of patch embeddings, using tier-1 instance probabilities, to extract a distilled feature vector. Distilled feature vectors from all pseudo bags are forwarded to the tier-2 attention-based model, which aggregates them using attention to learn the final bag embedding for the parent bag. Bag labels from all of the tier-1 models and the tier-2 model are compared with the ground-truth parent bag label to compute the cross-entropy loss for training.
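The two DTFD-MIL ingredients we rely on, pseudo-bag splitting and attention-based aggregation, can be sketched as follows; the module below is a simplified stand-in for the attention pooling used in each tier, not the original implementation of [52].

```python
import torch
import torch.nn as nn

def split_into_pseudo_bags(patch_feats, n_pseudo=5):
    # Randomly split one slide's patch features (N, dim) into pseudo bags,
    # each of which inherits the parent slide label.
    perm = torch.randperm(patch_feats.shape[0])
    return [patch_feats[idx] for idx in perm.chunk(n_pseudo)]

class AttentionPool(nn.Module):
    # Attention scores over instances produce a single bag embedding.
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1))

    def forward(self, feats):                          # feats: (N, dim)
        a = torch.softmax(self.score(feats), dim=0)    # (N, 1) attention weights
        return (a * feats).sum(dim=0), a               # bag embedding, attention
```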
|
2301.13744 | Elastic solids with strain-gradient elastic boundary surfaces | Recent works have shown that in contrast to classical linear elastic fracture
mechanics, endowing crack fronts in a brittle Green-elastic solid with
Steigmann-Ogden surface elasticity yields a model that predicts bounded
stresses and strains at the crack tips for plane-strain problems. However,
singularities persist for anti-plane shear (mode-III fracture) under far-field
loading, even when Steigmann-Ogden surface elasticity is incorporated.
This work is motivated by obtaining a model of brittle fracture capable of
predicting bounded stresses and strains for all modes of loading. We formulate
an exact general theory of a three-dimensional solid containing a boundary
surface with strain-gradient surface elasticity. For planar reference surfaces
parameterized by flat coordinates, the form of surface elasticity reduces to
that introduced by Hilgers and Pipkin, and when the surface energy is
independent of the surface covariant derivative of the stretching, the theory
reduces to that of Steigmann and Ogden. We discuss material symmetry using
Murdoch and Cohen's extension of Noll's theory. We present a model small-strain
surface energy that incorporates resistance to geodesic distortion, satisfies
strong ellipticity, and requires the same material constants found in the
Steigmann-Ogden theory.
Finally, we derive and apply the linearized theory to mode-III fracture in an
infinite plate under far-field loading. We prove that there always exists a
unique classical solution to the governing integro-differential equation, and
in contrast to using Steigmann-Ogden surface elasticity, our model is
consistent with the linearization assumption in predicting finite stresses and
strains at the crack tips. | Casey Rodriguez | 2023-01-31T16:26:02Z | http://arxiv.org/abs/2301.13744v3 | # Elastic solids with strain-gradient elastic boundary surfaces
###### Abstract.
Recent works have shown that in contrast to classical linear elastic fracture mechanics, endowing crack fronts in a brittle solid with Steigmann-Ogden surface elasticity yields a model that predicts bounded strains at the crack tips for plane-strain problems. However, a logarithmic singularity is still present in general for anti-plane shear (mode-III fracture) even when Steigmann-Ogden surface elasticity is incorporated.
Motivated by obtaining a model of brittle fracture capable of predicting bounded strains for all modes of loading, we formulate an exact general theory of a bulk solid containing a boundary surface with strain-gradient surface elasticity. For planar reference surfaces parameterized by flat coordinates, the form of surface elasticity reduces to that introduced by Hilgers and Pipkin, and when the surface energy is independent of the surface gradient of the stretching, the theory reduces to that of Steigmann and Ogden. We give a full discussion of material symmetry using Murdoch and Cohen's extension of Noll's theory. We present a model quadratic surface energy that incorporates resistance to geodesic distortion, satisfies strong ellipticity, and requires the same material constants found in the Steigmann-Ogden theory.
Finally, we derive and apply the linearized theory to mode-III fracture in an infinite plate. We prove that there always exists a unique classical solution to the governing integro-differential equation, and in contrast to using Steigmann-Ogden surface elasticity, our model is consistent with the linearization assumption in predicting finite strains at the crack tips.
## 1. Introduction
### Surface stressed solid bodies
The study of surface tension for solids was initiated by Gibbs in 1857, and it is now widely accepted that surface tension and more general surfaces stresses must be accounted for when modeling mechanical structures at small length scales. In particular, interfaces1 between a bulk solid and its environment can form due to various mechanisms including coating or atomic rearrangement during fracture. A way to mathematically model such an interface is by endowing part of the three-dimensional body's two-dimensional boundary with it's own thermodynamic properties that are distinct from the bulk (such as energy).
Footnote 1: By an interface, we mean a thin region separating either two distinct materials or two distinct phases of a material.
Motivated by modeling the apparent compressive surface stresses in certain cleaved crystals, Gurtin and Murdoch [4, 5] developed a rigorous general theory of material surface stresses that accounts for the surface's resistance to stretching (but not flexure). Their celebrated theory has been used to model a wide range of phenomena over the past 45 years, especially recently due to advances in nanoscience and nanotechnology. In the special case of a hyperelastic, three-dimensional bulk
solid with reference configuration \(\mathcal{B}\) and material surface \(\mathcal{S}\subseteq\partial\mathcal{B}\), the field equations for the current configuration \(\boldsymbol{\chi}:\mathcal{B}\to\boldsymbol{\chi}(\mathcal{B})\) are the Euler-Lagrange equations associated to the total strain energy
\[\boldsymbol{\Phi}[\boldsymbol{\chi}]=\int_{\mathcal{B}}W(\boldsymbol{C})\,dV+\int_{\mathcal{S}}U(\boldsymbol{\mathsf{C}})\,dA,\]
where we omit the possible explicit dependence of the functions \(W\) and \(U\) on points in \(\mathcal{B}\) and \(\mathcal{S}\) (see Figure 1). Here \(\boldsymbol{C}\) is the left Cauchy-Green stretch tensor and \(\boldsymbol{\mathsf{C}}\) is the left Cauchy-Green surface stretch tensor, the pullbacks by \(\boldsymbol{\chi}\) of the metric tensors on \(\boldsymbol{\chi}(\mathcal{B})\) and \(\boldsymbol{\chi}(\mathcal{S})\), respectively. In particular, the material surface stress tensor is derived from the surface energy density \(U(\boldsymbol{\mathsf{C}})\) in analogy with the bulk's Piola stress being derived from the bulk energy density \(W(\boldsymbol{C})\).
However, Steigmann and Ogden showed in their seminal works [22, 23] that equilibrium states under compressive surface stresses obtained from the Gurtin-Murdoch theory do not satisfy an associated Legendre-Hadamard condition, and thus, these equilibrium states cannot be local energy minimizers.2 Steigmann and Ogden [22, 23] rectified this inadequacy and incorporated the material surface's resistance to flexure by including curvature dependence in the surface energy:
Footnote 2: The fact that a pure membrane, a special type of Gurtin-Murdoch material surface, in equilibrium and under compressive surface stresses cannot be locally energy minimizing is due to Pipkin [20].
\[\boldsymbol{\Phi}[\boldsymbol{\chi}]=\int_{\mathcal{B}}W(\boldsymbol{C})\,dV+\int_{\mathcal{S}}U(\boldsymbol{\mathsf{C}},\boldsymbol{\kappa})\,dA,\]
where \(\boldsymbol{\kappa}\) is the pullback of the second fundamental form on \(\boldsymbol{\chi}(\mathcal{S})\). Since \(\boldsymbol{\kappa}\) depends on the second derivatives of \(\boldsymbol{\chi}\), the surface energy is of _strain-gradient_ type. In recent years, the Steigmann-Ogden theory has attracted considerable interest from various perspectives including the study of contact problems [10, 11, 12, 13, 31, 32, 37] and inclusion problems [2, 6, 14, 15, 27, 35]. The theory has also been used to model fracture in brittle materials [30, 33, 34], the main phenomenon motivating this work.
One of the most successful and practical theories for modeling fracture in brittle materials is classical linear elastic fracture mechanics. The governing linear partial differential equations are derived from finite elasticity under the assumption of _infinitesimal_ strains (linearized elasticity), but the theory predicts _unbounded_ singular strains at the crack tips, a striking inconsistency and physically impossible prediction. There have been a vast number of suggestions for supplementing classical linear elastic fracture mechanics to correct this defect in the theory (see, e.g., [1]).

Figure 1. The reference configuration of a bulk solid \(\mathcal{B}\) with strain-gradient elastic surface \(\mathcal{S}\subseteq\partial\mathcal{B}\).
A more recent approach aimed at eliminating these crack tip singularities is to modify the boundary conditions of classical linear elastic fracture mechanics by endowing the crack fronts with material surface stresses. Beginning with [19] and developed further by Sendova and Walton [21], one line of thought has been to prescribe the crack front's surface stresses as a curvature dependent surface tension. Although able to remove the singularities completely in a diverse range of settings (see [3, 26, 28, 29, 36]), it is unclear if the surface stresses from the Sendova-Walton theory can be derived from a surface energy density,3 a reasonable definition of "elastic-like" behavior. Deriving the material stresses for the crack fronts from a Gurtin-Murdoch surface energy _does not_ completely remove the crack tip singularities (see [9, 25]); however, endowing the crack fronts with Steigmann-Ogden surface energy _does_ completely remove the singularities for plane-strain problems [30, 33] and axisymmetric penny-shaped cracks [34]. Unfortunately, for anti-plane shear (mode-III loading), Steigmann-Ogden surface energy reduces to Gurtin-Murdoch surface energy and crack tip singularities _persist_. The fact that the Steigmann-Ogden theory reduces to the Gurtin-Murdoch theory for mode-III loading is due to the fact that the linearized curvature vanishes for anti-plane shear, i.e., for displacement fields tangent to the material surface.
Footnote 3: Equivalently, it is unclear if the governing field equations from the Sendova-Walton model can be derived from a Lagrangian energy functional.
### Main results and outline
Inspired by [3] and motivated by obtaining a model of brittle fracture capable of predicting bounded crack tip strains for _all modes_ of loading, this work proposes an augmentation of the Steigmann-Ogden theory that is of strain-gradient type and includes the derivative of stretching in the surface energy:
\[\boldsymbol{\Phi}[\boldsymbol{\chi}]=\int_{\mathcal{B}}W(\boldsymbol{C})\,dV+\int_{\mathcal{S}}U(\boldsymbol{\mathsf{C}},\boldsymbol{\kappa},\nabla\boldsymbol{\mathsf{C}})\,dA, \tag{1.1}\]
where \(\nabla\) is the Levi-Civita connection on \(\mathcal{S}\). For \(\mathcal{S}\) contained in a plane and parameterized by flat coordinates, the surface energy is equivalent to that introduced by Hilgers and Pipkin [7].
In Section 2, the relevant kinematics of the boundary surface \(\mathcal{S}\) convected by a deformation of the bulk solid \(\mathcal{B}\) are first summarized. In particular, we introduce a third order tensor \(\boldsymbol{\mathsf{L}}\) that is obtained from certain transposes of \(\nabla\boldsymbol{\mathsf{C}}\) and has components in terms of the difference of Christoffel symbols on \(\mathcal{S}\) and \(\boldsymbol{\chi}(\mathcal{S})\). This tensor is used in a model surface energy proposed in Section 3. Physically, the tensor \(\boldsymbol{\mathsf{L}}\) furnishes the rate of stretching of convected geodesics and locally characterizes _geodesic distortion_, i.e., how convected geodesics fail to be geodesics on \(\boldsymbol{\chi}(\mathcal{S})\) (see Proposition 2.1). The general form of (1.1) that we consider is then introduced (see (2.4)), and inspired by Murdoch and Cohen's extension to surfaces of Noll's classical theory of material symmetry, we introduce material symmetry for the surface energy density \(U\) (see Section 2.2). In the final subsection of Section 2, we derive the field equations (2.13) governing equilibrium states of the solid \(\mathcal{B}\) with strain-gradient elastic surface \(\mathcal{S}\subseteq\partial\mathcal{B}\) from a Lagrangian energy functional (2.10), including the boundary relations between the boundary tractions and the surface stresses.
In Section 3, we present a model hemitropic, quadratic surface energy that requires the same number of material constants (with the same physical interpretations) as found in the Steigmann-Ogden theory (see (3.1)). In contradistinction, however, the surface energy we propose satisfies the strong ellipticity condition (see (3.6), (3.7)). We then derive the linearized field equations governing infinitesimal displacements of \(\mathcal{B}\) and \(\mathcal{S}\), including the boundary relations connecting the linearized boundary tractions and the linearized surface stresses.
Finally, in Section 4, we apply the linearized theory (3.11) to the problem of a brittle infinite solid with a straight, non-interfacial crack of finite length, under mode-III loading. Using the explicit Dirichlet-to-Neumann map, the problem is reduced to solving a fourth order integro-differential equation for the crack profile along the boundary crack front (see (4.5)). Analytically, it is the surface energy satisfying the strong ellipticity condition that implies that this integro-differential equation is fourth order in the surface derivative of the displacement.4 The dimensionless form of the equation implies that the behavior of the displacement depends on the size of the crack, and for macro cracks, we expect the displacement to be well-approximated by the solution given by classical linear elastic fracture mechanics except in small regions near the crack tips (boundary layers). Using the Lax-Milgram theorem and regularity afforded by the presence of the fourth order derivative, we prove that there always exists a unique classical solution to the governing integro-differential equation, and in contrast to using the Steigmann-Ogden theory, our model predicts bounded strains up to the crack tips (see Theorem 4.4).
Footnote 4: Physically, it is the model incorporating resistance to geodesic distortion that implies that this integro-differential equation is fourth order in the surface derivative of the displacement.
### Acknowledgments
The author would like to thank Jay R. Walton for several fruitful discussions related to fracture, surface elasticity and interfacial physics. He would also like to thank Jeremy Marzuola for his help generating numerical solutions to the mode-III fracture problem discussed in Section 4.
## 2. Kinematics and field equations
In this section we formulate a general mathematical model for a hyperelastic bulk solid containing a boundary surface with strain-gradient surface elasticity. For planar reference surfaces, the form of surface elasticity reduces to that introduced by Hilgers and Pipkin [7], and when the surface energy is independent of the surface gradient of the stretching, the theory reduces to that of Steigmann and Ogden [22, 23].
### Kinematics
Let \(\mathbb{E}^{3}\) be three-dimensional Euclidean space with Cartesian coordinates \((X^{1},X^{2},X^{3})=\boldsymbol{X}\in\mathbb{E}^{3}\) and coordinate vector fields given by a fixed orthonormal basis \(\{\boldsymbol{e}_{i}\}_{i=1}^{3}\) of \(\mathbb{R}^{3}\). We define the following operations for elementary
tensor products of vectors in \(\mathbb{R}^{3}\),
\[(\boldsymbol{a}_{1}\otimes\boldsymbol{a}_{2})\boldsymbol{b}=( \boldsymbol{a}_{2}\cdot\boldsymbol{b})\boldsymbol{a}_{1},\quad(\boldsymbol{a}_{1 }\otimes\boldsymbol{a}_{2}\otimes\boldsymbol{a}_{3})\boldsymbol{b}=( \boldsymbol{a}_{3}\cdot\boldsymbol{b})\boldsymbol{a}_{1}\otimes\boldsymbol{a}_ {2},\] \[(\boldsymbol{a}_{1}\otimes\boldsymbol{a}_{2}\otimes\boldsymbol{a} _{3})(\boldsymbol{b}_{1}\otimes\boldsymbol{b}_{2})=(\boldsymbol{a}_{3}\cdot \boldsymbol{b}_{1})\boldsymbol{a}_{1}\otimes\boldsymbol{a}_{2}\otimes \boldsymbol{b}_{2},\] \[(\boldsymbol{b}_{1}\otimes\boldsymbol{b}_{2})(\boldsymbol{a}_{1 }\otimes\boldsymbol{a}_{2}\otimes\boldsymbol{a}_{3})=(\boldsymbol{b}_{2}\cdot \boldsymbol{a}_{1})\boldsymbol{b}_{1}\otimes\boldsymbol{a}_{2}\otimes \boldsymbol{a}_{3},\] \[(\boldsymbol{a}_{1}\otimes\boldsymbol{a}_{2}\otimes\boldsymbol{a }_{3})[\boldsymbol{b}_{1}\otimes\boldsymbol{b}_{2}]=(\boldsymbol{a}_{2}\cdot \boldsymbol{b}_{1})(\boldsymbol{a}_{3}\cdot\boldsymbol{b}_{2})\boldsymbol{a }_{1},\] \[(\boldsymbol{a}_{1}\otimes\boldsymbol{a}_{2})[\boldsymbol{b}_{1} \otimes\boldsymbol{b}_{2}]=(\boldsymbol{a}_{1}\cdot\boldsymbol{b}_{1})( \boldsymbol{a}_{2}\cdot\boldsymbol{b}_{2}),\] \[(\boldsymbol{a}_{1}\otimes\boldsymbol{a}_{2}\otimes\boldsymbol{a }_{3})[\boldsymbol{b}_{1}\otimes\boldsymbol{b}_{2}\otimes\boldsymbol{b}_{3}]=( \boldsymbol{a}_{1}\cdot\boldsymbol{b}_{1})(\boldsymbol{a}_{2}\cdot \boldsymbol{b}_{2})(\boldsymbol{a}_{3}\cdot\boldsymbol{b}_{3}),\] \[(\boldsymbol{a}_{1}\otimes\boldsymbol{a}_{2})^{T}=\boldsymbol{a }_{2}\otimes\boldsymbol{a}_{1},\quad(\boldsymbol{a}_{1}\otimes\boldsymbol{a}_ {2}\otimes\boldsymbol{a}_{3})^{T}=\boldsymbol{a}_{1}\otimes\boldsymbol{a}_{3} \otimes\boldsymbol{a}_{2},\] \[(\boldsymbol{a}_{1}\otimes\boldsymbol{a}_{2}\otimes\boldsymbol{a }_{3})^{\sim}=\boldsymbol{a}_{2}\otimes\boldsymbol{a}_{1}\otimes\boldsymbol{a }_{3}.\]
These operations are extended to general second and third order tensors on \(\mathbb{R}^{3}\) by linearity.
Let \(\mathcal{B}\subseteq\mathbb{E}^{3}\) be a domain with smooth boundary \(\partial\mathcal{B}\), the reference configuration of a three-dimensional body, and let \(\mathcal{S}\subseteq\partial\mathcal{B}\) be a closed surface with smooth boundary. Let \(\boldsymbol{\chi}:\mathcal{B}\to\boldsymbol{\chi}(\mathcal{B})\subseteq \mathbb{E}^{3}\) be a smooth, invertible deformation of \(\mathcal{B}\). We denote the current position of the reference particle \(\boldsymbol{X}\in\mathcal{B}\) by
\[\boldsymbol{x}=\boldsymbol{\chi}(\boldsymbol{X})=(\chi^{1}( \boldsymbol{X}),\chi^{2}(\boldsymbol{X}),\chi^{3}(\boldsymbol{X})).\]
The deformation gradient \(\boldsymbol{F}:\mathbb{R}^{3}\to\mathbb{R}^{3}\) is the second order tensor field
\[\boldsymbol{F}=\frac{\partial\chi^{i}}{\partial X^{a}}(\boldsymbol{X}) \boldsymbol{e}_{i}\otimes\boldsymbol{e}^{a}\]
where \(\boldsymbol{e}^{a}:=\boldsymbol{e}_{a}\), \(a=1,2,3\). We denote the left Cauchy-Green stretch tensor by \(\boldsymbol{C}=\boldsymbol{F}^{T}\boldsymbol{F}\).
Let \(\boldsymbol{Y}=\hat{\boldsymbol{Y}}(\theta^{1},\theta^{2})\) be a local parameterization of the reference surface \(\mathcal{S}\subseteq\partial\mathcal{B}\). Then
\[\boldsymbol{y}=\hat{\boldsymbol{y}}(\theta^{1},\theta^{2})= \boldsymbol{\chi}(\hat{\boldsymbol{Y}}(\theta^{1},\theta^{2}))\]
is a local parameterization of the current surface \(\boldsymbol{\chi}(\mathcal{S})\subseteq\boldsymbol{\chi}(\partial\mathcal{B})\). The (local) tangent vector fields on the reference and current surfaces are then given by
\[\boldsymbol{Y}_{,\alpha}\in T_{\boldsymbol{Y}}\mathcal{S},\quad \boldsymbol{y}_{,\alpha}=\boldsymbol{F}\boldsymbol{Y}_{,\alpha}\in T_{ \boldsymbol{y}}\boldsymbol{\chi}(\mathcal{S}),\]
where \(\cdot_{,\alpha}:=\frac{\partial}{\partial\theta^{\alpha}}\). The dual tangent vector fields on the reference and current surfaces are denoted by \(\boldsymbol{Y}^{,\beta}\) and \(\boldsymbol{y}^{,\beta}\) respectively and satisfy
\[\boldsymbol{Y}^{,\beta}\cdot\boldsymbol{Y}_{,\alpha}=\delta^{ \beta}_{\ \alpha},\quad\boldsymbol{y}^{,\beta}\cdot\boldsymbol{y}_{,\alpha}=\delta^{ \beta}_{\ \alpha}.\]
We remark that we may also write
\[\boldsymbol{y}_{,\alpha}=\boldsymbol{\mathsf{F}}\boldsymbol{Y}_{,\alpha}\]
where \(\boldsymbol{\mathsf{F}}=F^{k}_{\ \beta}\,\boldsymbol{e}_{k}\otimes\boldsymbol{Y}^{,\beta}:=y^{k}_{, \beta}\boldsymbol{e}_{k}\otimes\boldsymbol{Y}^{,\beta}=\boldsymbol{y}_{,\beta }\otimes\boldsymbol{Y}^{,\beta}\) is the _surface deformation gradient_. The first fundamental forms for the reference and current surfaces are then given by
\[\boldsymbol{\mathsf{G}}=\mathsf{G}_{\alpha\beta}\boldsymbol{Y}^{,\alpha}\otimes\boldsymbol{Y}^{,\beta},\quad\mathsf{G}_{\alpha\beta}= \boldsymbol{Y}_{,\alpha}\cdot\boldsymbol{Y}_{,\beta},\] \[\quad\boldsymbol{\mathsf{g}}=\mathsf{g}_{\alpha\beta}\boldsymbol{y}^ {,\alpha}\otimes\boldsymbol{y}^{,\beta},\quad\mathsf{g}_{\alpha\beta}= \boldsymbol{y}_{,\alpha}\cdot\boldsymbol{y}_{,\beta},\]
and we note that
\[\boldsymbol{Y}^{,\alpha}=(\mathsf{G}^{-1})^{\alpha\beta}\boldsymbol{Y}_{,\beta },\quad\boldsymbol{y}^{,\alpha}=(\mathsf{g}^{-1})^{\alpha\beta}\boldsymbol{y}_{,\beta}\]
The Christoffel symbols of the second kind for the reference and current surfaces are denoted by
\[\Gamma^{\alpha}{}_{\beta\delta}=\boldsymbol{Y}^{,\alpha}\cdot\boldsymbol{Y}_{, \beta\delta},\quad\gamma^{\alpha}{}_{\beta\delta}=\boldsymbol{y}^{,\alpha} \cdot\boldsymbol{y}_{,\beta\delta}.\]
The _left Cauchy-Green surface stretch tensor_ on \(\mathcal{S}\) is the second order tensor
\[\boldsymbol{\mathsf{C}}:=\boldsymbol{\mathsf{F}}^{T}\boldsymbol{\mathsf{F}}= \mathsf{g}_{\alpha\beta}\boldsymbol{Y}^{,\alpha}\otimes\boldsymbol{Y}^{, \beta},\]
and the surface Green-St. Venant tensor is the second order tensor \(\boldsymbol{\mathsf{E}}=\frac{1}{2}(\boldsymbol{\mathsf{C}}-\boldsymbol{\mathsf{G}})\). We assume that the reference and current surfaces are orientable, with unit normals (locally) given by
\[\boldsymbol{N}=|\boldsymbol{Y}_{,1}\times\boldsymbol{Y}_{,2}|^{-1} \boldsymbol{Y}_{,1}\times\boldsymbol{Y}_{,2},\quad\boldsymbol{n}=|\boldsymbol{ y}_{,1}\times\boldsymbol{y}_{,2}|^{-1}\boldsymbol{y}_{,1}\times\boldsymbol{y}_{,2}\]
respectively. The second fundamental forms on the reference and current surfaces are given by
\[\boldsymbol{\mathsf{B}}=\mathsf{B}_{\alpha\beta}\boldsymbol{Y}^{, \alpha}\otimes\boldsymbol{Y}^{,\beta},\quad\mathsf{B}_{\alpha\beta}:= \boldsymbol{Y}_{,\alpha\beta}\cdot\boldsymbol{N},\] \[\boldsymbol{\mathsf{b}}=\mathsf{b}_{\alpha\beta}\boldsymbol{y}^{,\alpha}\otimes\boldsymbol{y}^{,\beta},\quad\mathsf{b}_{\alpha\beta}:= \boldsymbol{y}_{,\alpha\beta}\cdot\boldsymbol{n}.\]
The _relative curvature tensor_ on \(\mathcal{S}\) is the second order tensor \(\boldsymbol{\mathsf{K}}=\mathsf{K}_{\alpha\beta}\boldsymbol{Y}^{,\alpha} \otimes\boldsymbol{Y}^{,\beta}\) given by
\[\boldsymbol{\mathsf{K}}=\boldsymbol{\mathsf{F}}^{T}\boldsymbol{\mathsf{b}} \boldsymbol{\mathsf{F}}-\boldsymbol{\mathsf{B}}=[\mathsf{b}_{\alpha\beta}- \mathsf{B}_{\alpha\beta}]\boldsymbol{Y}^{,\alpha}\otimes\boldsymbol{Y}^{, \beta}.\]
As discussed by Steigmann and Ogden [23], \(\boldsymbol{\mathsf{E}}\) and \(\boldsymbol{\mathsf{K}}\) furnish local differences in length and scaled extrinsic normal curvature between a given curve on \(\mathcal{S}\) and the convected curve on \(\boldsymbol{\chi}(\mathcal{S})\). Indeed, if \(\boldsymbol{Z}(s)\) is a curve on \(\mathcal{S}\) with unit tangent vector field \(\boldsymbol{\mathsf{T}}\) and \(\boldsymbol{z}(s)=\boldsymbol{\chi}(\boldsymbol{Z}(s))\) is the convected curve, then the tangent to the convected curve is given by \(\boldsymbol{\mathsf{t}}=\boldsymbol{\dot{z}}=\boldsymbol{F}\boldsymbol{\mathsf{T}}=\boldsymbol{\mathsf{F}}\boldsymbol{\mathsf{T}}\), where \(\dot{}:=\frac{d}{ds}\). The _stretch_ \(\nu\) of the convected curve is
\[\nu=|\boldsymbol{\mathsf{t}}|^{2}-1=\boldsymbol{\mathsf{F}} \boldsymbol{\mathsf{T}}\cdot\boldsymbol{\mathsf{F}}\boldsymbol{\mathsf{T}}-1= \boldsymbol{\mathsf{C}}\boldsymbol{\mathsf{T}}\cdot\boldsymbol{\mathsf{T}}-1=2 \boldsymbol{\mathsf{E}}\boldsymbol{\mathsf{T}}\cdot\boldsymbol{\mathsf{T}}.\]
The extrinsic normal curvature of the convected curve is
\[\kappa=|\boldsymbol{\mathsf{t}}|^{-2}\boldsymbol{\mathsf{b}}\boldsymbol{\mathsf{t}}\cdot\boldsymbol{\mathsf{t}}=|\boldsymbol{\mathsf{t}}|^{-2}\boldsymbol{\mathsf{b}}\boldsymbol{\mathsf{F}}\boldsymbol{\mathsf{T}}\cdot\boldsymbol{\mathsf{F}}\boldsymbol{\mathsf{T}}=(\nu+1)^{-1}(\boldsymbol{\mathsf{F}}^{T}\boldsymbol{\mathsf{b}}\boldsymbol{\mathsf{F}})\boldsymbol{\mathsf{T}}\cdot\boldsymbol{\mathsf{T}},\]
and thus, the difference in scaled extrinsic normal curvature of the convected curve and the original curve is given by
\[|\boldsymbol{\mathsf{t}}|^{2}\kappa-|\boldsymbol{\mathsf{T}}|^{2} \boldsymbol{\mathsf{B}}\boldsymbol{\mathsf{T}}\cdot\boldsymbol{\mathsf{T}}= \boldsymbol{\mathsf{K}}\boldsymbol{\mathsf{T}}\cdot\boldsymbol{\mathsf{T}}.\]
The Levi-Civita connection on \(\mathcal{S}\) is denoted by \(\nabla\), so
\[\nabla\boldsymbol{\mathsf{E}}=\frac{1}{2}\nabla\boldsymbol{\mathsf{C}}=\frac{ 1}{2}\nabla_{\delta}\mathsf{g}_{\alpha\beta}\boldsymbol{Y}^{,\alpha}\otimes \boldsymbol{Y}^{,\beta}\otimes\boldsymbol{Y}^{,\delta},\]
where
\[\nabla_{\delta}\mathsf{g}_{\alpha\beta}:=\mathsf{g}_{\alpha\beta, \delta}-\Gamma^{\mu}{}_{\alpha\delta}\mathsf{g}_{\mu\beta}-\Gamma^{\mu}{}_{ \beta\delta}\mathsf{g}_{\mu\alpha}.\]
For later use, we define the following third order tensor on \(\mathcal{S}\),
\[\boldsymbol{\mathsf{L}}=\nabla\boldsymbol{\mathsf{E}}+(\nabla \boldsymbol{\mathsf{E}})^{T}-(\nabla\boldsymbol{\mathsf{E}}^{\sim})^{T},\]
with components
\[\mathsf{L}_{\alpha\beta\delta}=\frac{1}{2}(\nabla_{\delta} \mathsf{g}_{\alpha\beta}+\nabla_{\beta}\mathsf{g}_{\alpha\delta}-\nabla_{ \alpha}\mathsf{g}_{\beta\delta})=(\gamma^{\mu}{}_{\beta\delta}-\Gamma^{\mu}{}_{ \beta\delta})\mathsf{g}_{\mu\alpha}.\]
Physically, the tensor \(\mathsf{L}\) (and thus \(\nabla\mathsf{E}\)) furnishes the rate of stretching of convected geodesics, and more generally, it quantifies _geodesic distortion_, i.e. how convected geodesics fail to be geodesics on \(\boldsymbol{\chi}(\mathcal{S})\). More precisely, we have the following.
**Proposition 2.1**.: _Let \(\boldsymbol{Z}(s):I\to\mathcal{S}\) be a geodesic on \(\mathcal{S}\) with unit tangent vector field \(\mathsf{T}=\dot{\boldsymbol{Z}}\). Then \(\mathsf{L}\) yields the rate of stretching of the convected curve \(\boldsymbol{z}(s)=\boldsymbol{\chi}(\boldsymbol{Z}(s))\),_
\[2\mathsf{L}[\mathsf{T}\otimes\mathsf{T}\otimes\mathsf{T}]=\frac{d}{ds}|\dot{ \boldsymbol{z}}|^{2}. \tag{2.1}\]
_Moreover, the convected curve \(\boldsymbol{z}(s)\) is a geodesic on \(\boldsymbol{\chi}(\mathcal{S})\) if and only if for each \(s_{0}\in I\),_
\[\forall\mathsf{U}\in T_{\boldsymbol{Z}(s_{0})}\mathcal{S},\quad\mathsf{L}[ \mathsf{U}\otimes\mathsf{T}\otimes\mathsf{T}]\big{|}_{s=s_{0}}=0. \tag{2.2}\]
Proof.: To prove (2.1), we simply note that since \(\boldsymbol{Z}\) is a geodesic, \(\nabla_{\mathsf{T}}\mathsf{T}=\mathbf{0}\) so
\[\frac{d}{ds}|\dot{\boldsymbol{z}}|^{2} =\frac{d}{ds}[\mathsf{C}\mathsf{T}\cdot\mathsf{T}]\] \[=\nabla\mathsf{C}[\mathsf{T}\otimes\mathsf{T}\otimes\mathsf{T}]+ 2\mathsf{C}\nabla_{\mathsf{T}}\mathsf{T}\cdot\mathsf{T}\] \[=\nabla\mathsf{C}[\mathsf{T}\otimes\mathsf{T}\otimes\mathsf{T}] =2\mathsf{L}[\mathsf{T}\otimes\mathsf{T}\otimes\mathsf{T}].\]
We now prove (2.2). The convected curve \(\boldsymbol{z}\) on \(\boldsymbol{\chi}(\mathcal{S})\) has tangent vector field \(\mathbf{t}=\mathsf{F}\mathsf{T}=\mathrm{t}^{\mu}\boldsymbol{y}_{,\mu}\) and acceleration vector field
\[\mathbf{a}=\big(\dot{\mathrm{t}}^{\mu}+\gamma^{\mu}{}_{\beta\delta}\mathrm{t}^{\beta}\mathrm{t}^{\delta}\big)\boldsymbol{y}_{,\mu}.\]
The acceleration is zero and \(\boldsymbol{z}\) is a geodesic if and only if for each \(s_{0}\in I\),
\[\forall\mathbf{u}\in T_{\boldsymbol{z}(s_{0})}\boldsymbol{\chi}(\mathcal{S}),\quad\mathbf{u}\cdot\mathbf{a}|_{s=s_{0}}=0. \tag{2.3}\]
We now show that (2.3) is equivalent to (2.2). We assume without loss of generality that \(s_{0}=0\), and we choose normal coordinates \((\theta^{1},\theta^{2})\) centered at \(\boldsymbol{Z}(0)\). Then for all \(\alpha,\beta,\delta=1,2\),
\[\mathsf{G}_{\alpha\beta}|_{s=0}=\delta_{\alpha\beta},\quad\Gamma^{\mu}_{\ \beta\delta}|_{s=0}=0,\]
and there exist \(\mathsf{T}^{1},\mathsf{T}^{2}\in\mathbb{R}\) with \(\delta_{\alpha\beta}\mathsf{T}^{\alpha}\mathsf{T}^{\beta}=1\) such that \(\boldsymbol{Z}(s)=\hat{\boldsymbol{Y}}(s\mathsf{T}^{1},s\mathsf{T}^{2})\). In particular, we conclude that
\[\mathbf{a}=\gamma^{\mu}_{\ \beta\delta}\mathsf{T}^{\beta}\mathsf{T}^{\delta} \boldsymbol{y}_{,\mu}.\]
Since \(\mathsf{F}|_{\boldsymbol{Z}(0)}:T_{\boldsymbol{Z}(0)}\mathcal{S}\to T_{ \boldsymbol{z}(0)}\boldsymbol{\chi}(\mathcal{S})\) is an isomorphism, (2.3) is equivalent to
\[\forall\mathsf{U}=\mathsf{U}^{\mu}\boldsymbol{Y}_{,\mu}|_{\boldsymbol {Z}(0)}\in T_{\boldsymbol{Z}(0)}\mathcal{S},\quad\mathsf{F}\mathsf{U}\cdot \mathbf{a}\big{|}_{s=0}=0\] \[\iff\forall\mathsf{U}=\mathsf{U}^{\mu}\boldsymbol{Y}_{,\mu}|_{ \boldsymbol{Z}(0)}\in T_{\boldsymbol{Z}(0)}\mathcal{S},\quad\mathsf{g}_{\mu \alpha}\mathsf{U}^{\alpha}\gamma^{\mu}_{\ \beta\delta}\mathsf{T}^{\beta}\mathsf{T}^{\delta}\big{|}_{s=0}=0\] \[\iff\forall\mathsf{U}=\mathsf{U}^{\mu}\boldsymbol{Y}_{,\mu}|_{ \boldsymbol{Z}(0)}\in T_{\boldsymbol{Z}(0)}\mathcal{S},\quad\mathsf{L}[ \mathsf{U}\otimes\mathsf{T}\otimes\mathsf{T}]\big{|}_{s=0}=0.\]
As an illustrative example, consider
\[\mathcal{S}=\{(X^{1},X^{2},0)\mid X^{1}\in[a,b],X^{2}\in[0,\pi]\}\subseteq \mathcal{B}=[a,b]\times[0,\pi]\times[0,\infty),\]
and the deformation
\[\boldsymbol{\chi}(X^{1},X^{2},X^{3})=(e^{X^{1}}\cos X^{2},e^{X^{1}}\sin X^{2}, X^{3}).\]
Then
\[\mathsf{E}=\frac{1}{2}(e^{2X^{1}}-1)\big{(}\boldsymbol{e}_{1}\otimes\boldsymbol{e }_{1}+\boldsymbol{e}_{2}\otimes\boldsymbol{e}_{2}\big{)},\quad\mathsf{K}= \boldsymbol{0},\]
\[\mathsf{L}=e^{2X^{1}}\big{(}\boldsymbol{e}_{1}\otimes\boldsymbol{e}_{1} \otimes\boldsymbol{e}_{1}+\boldsymbol{e}_{2}\otimes\boldsymbol{e}_{1}\otimes \boldsymbol{e}_{2}+\boldsymbol{e}_{2}\otimes\boldsymbol{e}_{2}\otimes \boldsymbol{e}_{1}-\boldsymbol{e}_{1}\otimes\boldsymbol{e}_{2}\otimes \boldsymbol{e}_{2}\big{)}.\]
The image of the coordinate curve \(X^{2}=d\) is a straight, radially outward traveling curve in the \(x^{1}x^{2}\)-plane parameterized by \(X^{1}\in[a,b]\) with
\[\mathsf{L}[\boldsymbol{e}_{1}\otimes\boldsymbol{e}_{1}\otimes\boldsymbol{e}_{ 1}]=e^{2X^{1}},\quad\mathsf{L}[\boldsymbol{e}_{2}\otimes\boldsymbol{e}_{1} \otimes\boldsymbol{e}_{1}]=0.\]
In particular, the stretch is not constant along the convected curve. The image of the coordinate curve \(X^{1}=c\) is the upper-half of the circle in the \(x^{1}x^{2}\)-plane centered at \((0,0)\) of radius \(e^{c}\). The convected curve is parameterized by \(X^{2}\in[0,\pi]\), has constant stretch but nonzero curvature relative to the convected surface (i.e., the \(x^{1}x^{2}\)-plane), and it satisfies
\[\mathsf{L}[\boldsymbol{e}_{2}\otimes\boldsymbol{e}_{2}\otimes\boldsymbol{e}_ {2}]=0,\quad\mathsf{L}[\boldsymbol{e}_{1}\otimes\boldsymbol{e}_{2}\otimes \boldsymbol{e}_{2}]=-e^{2c}.\]
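These values, together with the geodesic-stretching identity (2.1), can be verified by direct computation. The following is a minimal symbolic sketch of such a check (an illustration added for convenience, not part of the original development): it builds the convected metric for the deformation above in the flat coordinates \((X^{1},X^{2})\), forms \(\mathsf{E}\) and the components of \(\mathsf{L}\) from the Christoffel symbols as in their definitions, and confirms (2.1) along the convected radial coordinate curve.

```python
# Minimal sympy sketch (illustration only) checking E, L and identity (2.1) for the
# example deformation chi(X) = (exp(X1)*cos(X2), exp(X1)*sin(X2), X3) on the plane X3 = 0.
import sympy as sp

X1, X2 = sp.symbols('X1 X2', real=True)
theta = [X1, X2]                                       # flat coordinates on S
y = sp.Matrix([sp.exp(X1)*sp.cos(X2), sp.exp(X1)*sp.sin(X2), 0])   # convected surface
y_d = [y.diff(t) for t in theta]                       # tangent vectors y_{,alpha}

g = sp.Matrix(2, 2, lambda a, b: y_d[a].dot(y_d[b]))   # current metric g_{alpha beta}
G = sp.eye(2)                                          # reference metric in flat coordinates
E = sp.simplify((g - G) / 2)                           # surface strain E_{alpha beta}
print(E)                                               # diagonal with entries (exp(2*X1)-1)/2

ginv = g.inv()
def gamma(m, b, d):                                    # Christoffel symbols of g (those of G vanish)
    return sp.simplify(sum(ginv[m, n]*(g[n, b].diff(theta[d]) + g[n, d].diff(theta[b])
                                       - g[b, d].diff(theta[n]))/2 for n in range(2)))
def L(a, b, d):                                        # L_{abd} = (gamma^m_{bd} - Gamma^m_{bd}) g_{ma}
    return sp.simplify(sum(gamma(m, b, d)*g[m, a] for m in range(2)))

print(L(0, 0, 0), L(1, 0, 1), L(0, 1, 1))              # exp(2*X1), exp(2*X1), -exp(2*X1)

# Identity (2.1) along the geodesic Z(s) = (s, d) with unit tangent T = e_1:
# d/ds |z'(s)|^2 should equal 2*L[T (x) T (x) T].
s, d = sp.symbols('s d', real=True)
z = y.subs({X1: s, X2: d})
lhs = sp.diff(z.diff(s).dot(z.diff(s)), s)
rhs = 2*L(0, 0, 0).subs(X1, s)
print(sp.simplify(lhs - rhs))                          # 0
```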
### Strain energy and material symmetry
In our mathematical model, a hyperelastic elastic body \(\mathcal{B}\) with strain-gradient elastic surface \(\mathcal{S}\subseteq\partial\mathcal{B}\) is prescribed a strain energy of the form
\[\Phi[\boldsymbol{\chi}]=\int_{\mathcal{B}}W(\boldsymbol{C})\,dV+\int_{ \mathcal{S}}U(\mathsf{E},\mathsf{K},\nabla\mathsf{E})\,dA, \tag{2.4}\]
where we omit listing the possible dependence of \(W\) on \(\boldsymbol{X}\in\mathcal{B}\) and of \(U\) on \(\boldsymbol{Y}\in\mathcal{S}\). We note that the strain energy is frame indifferent, i.e., it is invariant with respect to super-imposed rigid motions.5
Footnote 5: In the case of \(\mathcal{S}\) contained in the plane, Hilgers and Pipkin showed in Section 7 of [7] that \(U(\mathsf{E},\mathsf{K},\nabla\mathsf{E})\) is the most general form of a surface energy density that is frame indifferent.
Although our main motivation for considering a strain-gradient elastic surface is when it forms part of the boundary of a bulk solid \(\mathcal{B}\), we will treat material symmetry of \(\mathcal{S}\) independently of that of \(\mathcal{B}\). The notion of material symmetry for the energy density \(W\) of the bulk three-dimensional solid \(\mathcal{B}\) is well-known, see [18, 24], so, we will limit our discussion to that of \(\mathcal{S}\).
Consider a material point with reference position \(\boldsymbol{Y}_{0}\in\mathcal{S}\). Our discussion of material symmetry for the surface energy per unit reference area \(U\) at \(\boldsymbol{Y}_{0}\) is inspired by the framework introduced by Murdoch and Cohen [16, 17] which was later advocated for and summarized by Steigmann and Ogden [23] using local coordinate parameterizations. We will follow Steigmann and Ogden's style of exposition, and unless specified otherwise, all quantities in what follows are evaluated at \(\boldsymbol{Y}_{0}\).
Let \(\boldsymbol{\lambda}:\mathbb{E}^{3}\to\mathbb{E}^{3}\) be a rigid motion with deformation gradient \(\boldsymbol{R}\in SO(3)\) satisfying \(\boldsymbol{\lambda}(\boldsymbol{Y}_{0})=\boldsymbol{Y}_{0}\), and \(\boldsymbol{R}\boldsymbol{N}=\boldsymbol{N}\). We define a second reference surface \(\mathcal{S}^{*}=\big{\{}\boldsymbol{Y}^{*}=\boldsymbol{\lambda}^{-1}( \boldsymbol{Y})\mid\boldsymbol{Y}\in\mathcal{S}\big{\}}\). It follows that \(T_{\boldsymbol{Y}_{0}}\mathcal{S}^{*}=T_{\boldsymbol{Y}_{0}}\mathcal{S}\), at \(\boldsymbol{Y}_{0}\in\mathcal{S}^{*}\) the unit normal \(\boldsymbol{N}^{*}\) to \(\mathcal{S}^{*}\) satisfies \(\boldsymbol{N}^{*}=\boldsymbol{N}=\boldsymbol{R}\boldsymbol{N}\), and
\[\mathsf{R}:=\boldsymbol{R}|_{T_{\boldsymbol{Y}_{0}}\mathcal{S}}:T_{\boldsymbol{Y}_{0}}\mathcal{S}\to T_{\boldsymbol{Y}_{0}}\mathcal{S}\]
is a rotation. A local parameterization on \(\mathcal{S}^{*}\) is given by \(\boldsymbol{Y}^{*}=\hat{\boldsymbol{Y}}^{*}(\theta^{1},\theta^{2}):= \boldsymbol{\lambda}^{-1}(\hat{\boldsymbol{Y}}(\theta^{1},\theta^{2}))\), so then
\[\boldsymbol{Y}_{,\alpha}=\boldsymbol{R}\boldsymbol{Y}^{*}_{,\alpha},\quad \boldsymbol{Y}^{,\alpha}=\boldsymbol{R}\boldsymbol{Y}^{*,\alpha},\quad\alpha=1,2,\]
and at \(\boldsymbol{Y}_{0}\), \(\boldsymbol{Y}_{,\alpha}=\mathsf{R}\boldsymbol{Y}^{*}_{,\alpha}\), \(\boldsymbol{Y}^{,\alpha}=\mathsf{R}\boldsymbol{Y}^{*,\alpha}\), for \(\alpha=1,2\).
Let \(\mathbf{\chi}:\mathbb{E}^{3}\to\mathbb{E}^{3}\) be a smooth invertible deformation of Euclidean space. Since a superimposed rigid motion does not affect the value of the surface energy, we will assume without loss of generality that
\[\mathbf{\chi}(\mathbf{Y}_{0})=\mathbf{Y}_{0}\text{ and at }\mathbf{Y}_{0},\,\mathbf{\mathsf{F}}:T_{\mathbf{Y} _{0}}\mathcal{S}\to T_{\mathbf{Y}_{0}}\mathcal{S}.\]
Following [17] we will impose the stronger requirement that
\[\mathbf{\chi}(\mathbf{Y}_{0})=\mathbf{Y}_{0}\text{ and at }\mathbf{Y}_{0},\,\mathbf{\mathsf{F}}= \mathsf{F}^{\alpha}{}_{\beta}\mathbf{Y}_{,\alpha}\otimes\mathbf{Y}^{,\beta}+\mathbf{N} \otimes\mathbf{N}. \tag{2.5}\]
In particular, it follows that at \(\mathbf{Y}_{0}\), \(\mathbf{\mathsf{F}}=\mathsf{F}^{\alpha}{}_{\beta}\mathbf{Y}_{,\alpha}\otimes\mathbf{Y}^{, \beta}\) and \(\mathbf{n}=\mathbf{N}=\mathbf{FN}\). The deformation \(\mathbf{\chi}^{*}(\cdot):=\mathbf{\chi}(\mathbf{\lambda}(\cdot))\) when restricted to \(\mathcal{S}^{*}\) has the same image as \(\mathbf{\chi}|_{\mathcal{S}}\), and in particular, the convected tangent vector fields are the same,
\[\mathbf{y}^{*}_{,\alpha}:=\frac{\partial}{\partial\theta^{\alpha}}\mathbf{\chi}^{*}( \mathbf{Y}^{*})=\frac{\partial}{\partial\theta^{\alpha}}\mathbf{\chi}(\mathbf{Y})=\mathbf{y}_ {,\alpha}.\]
Then the surface deformation gradients of \(\mathbf{\chi}\) and \(\mathbf{\chi}^{*}\) at \(\mathbf{Y}_{0}\) are related by
\[\mathbf{\mathsf{F}}=\mathbf{y}_{\alpha}\otimes\mathbf{Y}^{,\alpha}=\mathbf{y}^{*}_{\alpha} \otimes\mathbf{\mathsf{R}}\mathbf{Y}^{*,\alpha}=\mathbf{\mathsf{F}}^{*}\mathbf{\mathsf{R}}^{T} \implies\,\mathbf{\mathsf{C}}=\mathbf{\mathsf{R}}\mathbf{\mathsf{C}}^{*}\mathbf{\mathsf{R}}^{T}.\]
The components of the first and second fundamental forms associated to \(\mathcal{S}\) and \(\mathcal{S}^{*}\) satisfy
\[\mathsf{G}_{\alpha\beta}=\mathbf{Y}_{,\alpha}\cdot\mathbf{Y}_{,\beta}=\bm {RY}^{*}_{,\alpha}\cdot\mathbf{RY}^{*}_{\beta}=\mathbf{Y}^{*}_{,\alpha}\cdot\mathbf{Y}^{*}_ {\beta}=\mathsf{G}^{*}_{\alpha\beta}\implies\,\mathbf{\mathsf{G}}=\mathbf{\mathsf{R}} \mathbf{\mathsf{G}}^{*}\mathbf{\mathsf{R}}^{T},\] \[\mathsf{B}_{\alpha\beta}=\mathbf{Y}_{,\alpha\beta}\cdot\mathbf{N}=\mathbf{RY} ^{*}_{,\alpha\beta}\cdot\mathbf{RN}^{*}=\mathbf{Y}^{*}_{,\alpha\beta}\cdot\mathbf{N}^{*}= \mathsf{B}^{*}_{\alpha\beta}\implies\,\mathbf{\mathsf{B}}=\mathbf{\mathsf{R}}\mathbf{ \mathsf{B}}^{*}\mathbf{\mathsf{R}}^{T},\]
and thus, \(\mathbf{\mathsf{E}}=\mathbf{\mathsf{R}}\mathbf{\mathsf{E}}^{*}\mathbf{\mathsf{R}}^{T}\). Since \(\mathbf{\mathsf{b}}=\mathsf{b}_{\alpha\beta}\mathbf{y}^{,\alpha}\otimes\mathbf{y}^{\beta}\) is the same for both deformations, we conclude that the relative curvature tensors \(\mathbf{\mathsf{K}}\) and \(\mathbf{\mathsf{K}}^{*}\) associated to \(\mathbf{\chi}\) and \(\mathbf{\chi}^{*}\) satisfy at \(\mathbf{Y}_{0}\),
\[\mathbf{\mathsf{K}}=\mathbf{\mathsf{R}}\mathbf{\mathsf{K}}^{*}\mathbf{\mathsf{R}}^{T}.\]
Finally, since the components of the first fundamental forms associated to \(\mathbf{\chi}(\mathcal{S})\) and \(\mathbf{\chi}^{*}(\mathcal{S}^{*})\) satisfy
\[\mathsf{g}_{\alpha\beta}=\mathbf{y}_{\alpha}\cdot\mathbf{y}_{\beta}=\mathbf{y}^{*}_{\alpha }\cdot\mathbf{y}^{*}_{\beta}=:\mathsf{g}^{*}_{\alpha\beta},\]
we conclude that \(\nabla\mathbf{\mathsf{E}}\) and \(\nabla^{*}\mathbf{\mathsf{E}}^{*}\) at \(\mathbf{Y}_{0}\) satisfy
\[\nabla\mathbf{\mathsf{E}} =\frac{1}{2}\nabla_{\delta}\mathsf{g}_{\alpha\beta}\mathbf{Y}^{, \alpha}\otimes\mathbf{Y}^{,\beta}\otimes\mathbf{Y}^{,\delta}\] \[=\frac{1}{2}\nabla^{*}_{\delta}\mathsf{g}^{*}_{\alpha\beta}\mathbf{ \mathsf{R}}\mathbf{Y}^{*,\alpha}\otimes\mathbf{\mathsf{R}}\mathbf{Y}^{*,\beta}\otimes\mathbf{ \mathsf{R}}\mathbf{Y}^{*,\delta}\] \[=\mathbf{\mathsf{R}}[(\nabla^{*}\mathbf{\mathsf{E}}^{*})^{T}\mathbf{\mathsf{R }}^{T}]^{T}\mathbf{\mathsf{R}}^{T}.\]
We denote the surface energy per unit reference area relative to \(\mathcal{S}^{*}\) by \(U^{*}\). For the surface energy per unit mass to be independent of the reference surface used, we must have
\[U^{*}(\mathbf{\mathsf{E}}^{*},\mathbf{\mathsf{K}}^{*},\nabla^{*}\mathbf{\mathsf{E}}^{*})=U(\mathbf{\mathsf{E}},\mathbf{\mathsf{K}},\nabla\mathbf{\mathsf{E}})\] \[=U\Big(\mathbf{\mathsf{R}}\mathbf{\mathsf{E}}^{*}\mathbf{\mathsf{R}}^{T},\mathbf{\mathsf{R}}\mathbf{\mathsf{K}}^{*}\mathbf{\mathsf{R}}^{T},\mathbf{\mathsf{R}}\big[(\nabla^{*}\mathbf{\mathsf{E}}^{*})^{T}\mathbf{\mathsf{R}}^{T}\big]^{T}\mathbf{\mathsf{R}}^{T}\Big). \tag{2.6}\]
We now view \(\mathbf{\chi}\) also as a deformation of \(\mathcal{S}^{*}\), but we denote its values by \(\bar{\mathbf{\chi}}\),
\[\bar{\mathbf{\chi}}(\mathbf{X})=\mathbf{\chi}(\mathbf{X}),\quad\mathbf{X}\in\mathbb{E}^{3},\]
and the associated kinematic variables relative to \(\mathcal{S}^{*}\) are denoted with an over-bar. We will now derive relationships between \(\bar{\mathbf{\mathsf{E}}}\), \(\bar{\mathbf{\mathsf{K}}}\), \(\nabla^{*}\bar{\mathbf{\mathsf{E}}}\) and \(\mathbf{\mathsf{E}}\), \(\mathbf{\mathsf{K}}\), \(\nabla\mathbf{\mathsf{E}}\).
We first note that since \(\mathbf{C}=\bar{\mathbf{C}}\), we immediately conclude that
\[\mathbf{\mathsf{E}} =\frac{1}{2}[(\mathbf{C}-\mathbf{I})\mathbf{Y}_{,\alpha}\cdot\mathbf{Y}_{,\beta}] \mathbf{Y}^{,\alpha}\otimes\mathbf{Y}^{,\beta}\] \[=\frac{1}{2}[(\bar{\mathbf{C}}-\mathbf{I})\mathbf{Y}_{,\alpha}^{*}\cdot\mathbf{Y}_ {,\beta}^{*}]\mathbf{Y}^{*,\alpha}\otimes\mathbf{Y}^{*,\beta}=\bar{\mathbf{\mathsf{E}}}. \tag{2.7}\]
Let
\[\operatorname{Grad}\mathbf{C}=\frac{\partial C_{ij}}{\partial X^{a}}\mathbf{e}^{i} \otimes\mathbf{e}^{j}\otimes\mathbf{e}^{a},\]
a third order tensor on \(\mathbb{R}^{3}\). The identity \(\mathsf{g}_{\alpha\beta}=\mathbf{C}\mathbf{Y}_{,\alpha}\cdot\mathbf{Y}_{,\beta}\), the Gauss equations \(\mathbf{Y}_{,\alpha\delta}=\Gamma^{\mu}{}_{\alpha\delta}\mathbf{Y}_{,\mu}+\mathsf{B}_ {\alpha\delta}\mathbf{N}\) on \(\mathcal{S}\), and the symmetry of \(\mathbf{C}\) imply that
\[\mathsf{g}_{\alpha\beta,\delta} =\operatorname{Grad}\mathbf{C}[\mathbf{Y}_{,\alpha}\otimes\mathbf{Y}_{, \beta}\otimes\mathbf{Y}_{,\delta}]+\mathbf{C}\mathbf{Y}_{,\alpha\delta}\cdot\mathbf{Y}_{,\beta }+\mathbf{C}\mathbf{Y}_{,\alpha}\cdot\mathbf{Y}_{,\beta\delta}\] \[=\operatorname{Grad}\mathbf{C}[\mathbf{Y}_{,\alpha}\otimes\mathbf{Y}_{,\beta }\otimes\mathbf{Y}_{,\delta}]+\Gamma^{\mu}{}_{\alpha\delta}\mathsf{g}_{\mu\beta}+ \mathsf{B}_{\alpha\delta}\mathbf{C}\mathbf{N}\cdot\mathbf{Y}_{,\beta}\] \[\quad+\Gamma^{\mu}{}_{\beta\delta}\mathsf{g}_{\mu\alpha}+\mathsf{ B}_{\beta\delta}\mathbf{C}\mathbf{N}\cdot\mathbf{Y}_{,\alpha}.\]
By (2.5), we have \(\mathbf{C}\mathbf{N}=\mathbf{N}\), and thus, at \(\mathbf{Y}_{0}\),
\[\nabla\mathbf{\mathsf{E}}=\frac{1}{2}\operatorname{Grad}\mathbf{C}[\mathbf{Y}_{,\alpha} \otimes\mathbf{Y}_{,\beta}\otimes\mathbf{Y}_{,\delta}]\mathbf{Y}^{,\alpha}\otimes\mathbf{Y}^{,\beta}\otimes\mathbf{Y}^{,\delta}.\]
In particular, since \(\bar{\mathbf{C}}=\mathbf{C}\) we conclude that
\[\nabla^{*}\bar{\mathbf{\mathsf{E}}}=\nabla\mathbf{\mathsf{E}}. \tag{2.8}\]
Finally, as shown in Section 6 of [23], we have
\[\bar{\mathbf{\mathsf{K}}}=\mathbf{\mathsf{K}}. \tag{2.9}\]
Inspired by Murdoch and Cohen's extension of Noll's theory of material symmetry, we say that \(\mathbf{Y}_{0}\in\mathcal{S}\) is symmetry related to \(\mathbf{Y}_{0}\in\mathcal{S}^{*}\) if the mechanical responses to the arbitrary deformation \(\mathbf{\chi}\) are identical, i.e.,
\[U(\mathbf{\mathsf{E}},\mathbf{\mathsf{K}},\nabla\mathbf{\mathsf{E}})=U^{*}(\bar{\mathbf{ \mathsf{E}}},\bar{\mathbf{\mathsf{K}}},\nabla^{*}\bar{\mathbf{\mathsf{E}}}).\]
By (2.6), (2.7), (2.8), (2.9), this requirement leads to the following definition: the rotation \(\mathbf{\mathsf{R}}\) is in the _symmetry set_ of \(\mathbf{Y}_{0}\) relative to \(\mathcal{S}\) if for every smooth, invertible deformation \(\mathbf{\chi}:\mathbb{E}^{3}\to\mathbb{E}^{3}\) satisfying (2.5), we have
\[U(\mathbf{\mathsf{E}},\mathbf{\mathsf{K}},\nabla\mathbf{\mathsf{E}})=U\Big(\mathbf{\mathsf{R}}\mathbf{\mathsf{E}}\mathbf{\mathsf{R}}^{T},\mathbf{\mathsf{R}}\mathbf{\mathsf{K}}\mathbf{\mathsf{R}}^{T},\mathbf{\mathsf{R}}\big[(\nabla\mathbf{\mathsf{E}})^{T}\mathbf{\mathsf{R}}^{T}\big]^{T}\mathbf{\mathsf{R}}^{T}\Big).\]
As in the case of the standard theory of Noll [18], one can verify that the symmetry set of \(\mathbf{Y}_{0}\) relative to \(\mathcal{S}\) is a subgroup of the group of proper rotations of \(T_{\mathbf{Y}_{0}}\mathcal{S}\). In the case that the symmetry set of \(\mathbf{Y}_{0}\) relative to \(\mathcal{S}\) equals the group of proper rotations of \(T_{\mathbf{Y}_{0}}\mathcal{S}\), we say that the surface energy density \(U\) is _hemitropic_ at \(\mathbf{Y}_{0}\).
### Field equations
The field equations for the body \(\mathcal{B}\) with strain-gradient elastic surface \(\mathcal{S}\subseteq\partial\mathcal{B}\) are defined to be the Euler-Lagrange equations for the Lagrangian energy functional
\[\mathcal{A}[\mathbf{\chi}]=\Phi[\mathbf{\chi}]+V[\mathbf{\chi}] \tag{2.10}\]
where \(V[\mathbf{\chi}]\) is the potential energy associated to the applied forces. In this work, we assume that the Gateaux derivative of the load potential takes the form
\[\dot{V}=-\int_{\mathcal{B}}\mathbf{f}\cdot\mathbf{u}\,dV-\int_{\mathcal{S}}\mathbf{t}\cdot \mathbf{u}\,dA,\]
where \(\mathbf{f}\) is a prescribed external body force on \(\mathcal{B}\) and \(\mathbf{t}\) is a prescribed boundary traction on \(\mathcal{S}\).
Let \(\mathbf{\chi}(\cdot;\epsilon)\) be a one-parameter family of deformations of \(\mathcal{B}\) such that \(\mathbf{\chi}(\cdot;\epsilon)|_{\partial\mathcal{B}\setminus\mathcal{S}}=\mathbf{\chi} _{0}(\cdot)\), and denote
\[\dot{\cdot}:=\frac{d}{d\epsilon}\Big{|}_{\epsilon=0},\quad\mathbf{u}:=\dot{\mathbf{ \chi}}(\cdot;\epsilon).\]
Then \(\mathbf{u}|_{\partial\mathcal{B}}\) vanishes to first order on \(\partial\mathcal{B}\backslash\mathcal{S}\). Using the chain rule and integration by parts we have the classical identity
\[\int_{\mathcal{B}}\dot{W}\,dV=\int_{\partial\mathcal{B}}\mathbf{PN}\cdot\mathbf{u}\,dA -\int_{\mathcal{B}}\operatorname{Div}\mathbf{P}\cdot\mathbf{u}\,dV \tag{2.11}\]
where \(\mathbf{P}=P_{i}{}^{a}\mathbf{e}^{i}\otimes\mathbf{e}_{a}\) is the Piola stress with \(P_{i}{}^{a}=\frac{\partial W}{\partial F^{i}{}_{a}}\) and \(\operatorname{Div}\mathbf{P}=\big(\partial_{X^{a}}P_{i}{}^{a}\big)\mathbf{e}^{i}\). Using the chain rule we have that
\[\int_{\mathcal{S}}\dot{U}\,dA=\int_{\mathcal{S}}\Bigl(\mathbf{\mathsf{T}}^{\alpha}\cdot\mathbf{u}_{,\alpha}+\mathbf{\mathsf{M}}^{\alpha\beta}\cdot\mathbf{u}_{,\alpha\beta}\Bigr)dA, \tag{2.12}\] \[\mathbf{\mathsf{T}}^{\alpha}:=\frac{\partial U}{\partial y_{,\alpha}^{k}}\mathbf{e}^{k},\quad\mathbf{\mathsf{M}}^{\alpha\beta}:=\frac{\partial U}{\partial y_{,\alpha\beta}^{k}}\mathbf{e}^{k}.\]
We define surface stress vectors \(\mathbf{\mathsf{P}}^{\alpha}\) by
\[\mathbf{\mathsf{P}}^{\alpha}:=\mathbf{\mathsf{T}}^{\alpha}-G^{-1/2}(G^{1/2}\mathbf{\mathsf{M}}^{\alpha\beta})_{,\beta},\quad G:=\det(\mathsf{G}_{\alpha\beta}),\]
and use (2.11), (2.12) and integration by parts to obtain the Euler-Lagrange equations associated to the Lagrangian energy functional (2.10),
\[\operatorname{Div}\mathbf{P}+\mathbf{f}=\mathbf{0},\quad\text{on } \mathcal{B}, \tag{2.13}\] \[\mathbf{PN}=G^{-1/2}(G^{1/2}\mathbf{\mathsf{P}}^{\alpha})_{,\alpha}+\mathbf{ t},\quad\text{on }\mathcal{S},\] \[\mathbf{\chi}(\mathbf{X})=\mathbf{\chi}_{0}(\mathbf{X}),\quad\text{on } \partial\mathcal{B}\backslash\mathcal{S}.\]
## 3. Small strain models
Our principle motivation for modeling an elastic solid with strain-gradient elastic boundary surface is the study of brittle fracture. In this setting (to be discussed more in the following section), the surface \(\mathcal{S}\) possessing strain-gradient surface elasticity will be the crack front and strains will be linearized, motivating the introduction of a quadratic surface energy density. In this section, we present a model uniform, hemitropic, quadratic surface energy density that requires the same material constants (with the same physical interpretations) as found in the narrower Steigmann-Ogden theory. In contradistinction, the surface energy incorporates the surface's resistance to geodesic distortion and satisfies the strong ellipticity condition. Moreover, the surface energy density may be viewed as a geometric generalization of that introduced and advocated for by Hilgers and Pipkin in [7, 8].
### Hilgers-Pipkin surface energy
For the surface \(\mathcal{S}\), we propose the uniform, quadratic, hemitropic surface energy density
\[U =\frac{\lambda_{s}}{2}(\mathsf{E}^{\alpha}_{\ \ \alpha})^{2}+\mu_{s}\mathsf{E}_{ \alpha\beta}\mathsf{E}^{\alpha\beta}+\frac{\zeta}{2}\Bigl{[}(\mathsf{K}^{\alpha }_{\ \alpha})^{2}+(\mathsf{g}^{-1})^{\mu\nu}\mathsf{L}_{\mu\alpha}{}^{\alpha} \mathsf{L}_{\nu\beta}{}^{\beta}\Bigr{]}\] \[\quad+\eta\Bigl{[}\mathsf{K}_{\alpha\beta}\mathsf{K}^{\alpha \beta}+(\mathsf{g}^{-1})^{\mu\nu}\mathsf{L}_{\mu\alpha\beta}\mathsf{L}_{\nu}{}^ {\alpha\beta}\Bigr{]}. \tag{3.1}\]
Here \(\lambda_{s}\), \(\mu_{s}\), \(\zeta\) and \(\eta\) are positive numbers that can be interpreted as the surface Lame constants and pure bending moduli. Indices are raised and lowered using the
reference metric \(\mathsf{G}\), but note that \(\boldsymbol{y}^{,\alpha}\) are the dual vector fields to \(\boldsymbol{y}_{,\alpha}\) and are not given by \(\boldsymbol{y}_{,\beta}(\mathsf{G}^{-1})^{\alpha\beta}\). In the case that \(\mathcal{S}\) is contained in a plane with flat coordinates \((\theta^{1},\theta^{2})\), we have
\[\mathsf{E}_{\alpha\beta}=\frac{1}{2}(\mathsf{g}_{\alpha\beta}- \delta_{\alpha\beta}),\quad\mathsf{L}_{\mu\alpha\beta}=\boldsymbol{y}_{,\mu} \cdot\boldsymbol{y}_{,\alpha\beta},\] \[\boldsymbol{y}_{,\alpha\beta}=\mathsf{L}_{\mu\alpha\beta} \boldsymbol{y}^{,\mu}+\mathsf{K}_{\alpha\beta}\boldsymbol{n},\quad\sum_{\alpha =1}^{2}\boldsymbol{y}_{,\alpha\alpha}=\mathsf{L}_{\mu\alpha}^{\phantom{\mu \alpha}\alpha}\boldsymbol{y}^{,\mu}+\mathsf{K}^{\alpha}_{\phantom{\alpha} \alpha}\boldsymbol{n},\] \[\qquad\Bigl{|}\sum_{\alpha=1}^{2}\boldsymbol{y}_{,\alpha\alpha} \Bigr{|}^{2}=(\mathsf{K}^{\alpha}_{\phantom{\alpha}\alpha})^{2}+(\mathsf{g}^{ -1})^{\mu\nu}\mathsf{L}_{\mu\alpha}^{\phantom{\mu\alpha}\alpha}\mathsf{L}_{ \nu\beta}^{\phantom{\nu\beta}\beta},\] \[\qquad\sum_{\alpha,\beta=1}^{2}\lvert\boldsymbol{y}_{,\alpha \beta}\rvert^{2}=\mathsf{K}_{\alpha\beta}\mathsf{K}^{\alpha\beta}+(\mathsf{g}^ {-1})^{\mu\nu}\mathsf{L}_{\mu\alpha\beta}\mathsf{L}_{\nu}^{\phantom{\mu\alpha \beta}\alpha\beta},\]
and (3.1) becomes
\[U=\frac{\lambda_{s}}{2}(\mathsf{E}^{\alpha}_{\phantom{\alpha}\alpha})^{2}+\mu _{s}\mathsf{E}_{\alpha\beta}\mathsf{E}^{\alpha\beta}+\frac{\zeta}{2}\Bigl{|} \sum_{\alpha=1}^{2}\boldsymbol{y}_{,\alpha\alpha}\Bigr{|}^{2}+\eta\sum_{ \alpha,\beta=1}^{2}\lvert\boldsymbol{y}_{,\alpha\beta}\rvert^{2}. \tag{3.2}\]
Up to a choice of constants, the surface energy density (3.2) is precisely that introduced by Hilgers and Pipkin in [7], and therefore, we refer to (3.1) as a _quadratic Hilgers-Pipkin surface energy_. In [7, 8], Hilgers and Pipkin advocated for the use of (3.2) over the classical surface energy
\[U =\frac{\lambda_{s}}{2}(\mathsf{E}^{\alpha}_{\phantom{\alpha}\alpha })^{2}+\mu_{s}\mathsf{E}_{\alpha\beta}\mathsf{E}^{\alpha\beta}+\frac{\zeta}{2 }\bigl{|}\sum_{\alpha=1}^{2}(\boldsymbol{n}\cdot\boldsymbol{y}_{,\alpha\alpha })\Bigr{|}^{2}+\eta\sum_{\alpha,\beta=1}^{2}(\boldsymbol{n}\cdot\boldsymbol{y} _{,\alpha\beta})^{2} \tag{3.3}\]
on the basis of (3.2) being analytically simpler than (3.3). Indeed, with little effort one sees that for (3.2),
\[\boldsymbol{\mathsf{T}}^{\alpha} =(\lambda_{s}\mathsf{E}^{\gamma}_{\phantom{\gamma}\gamma}\delta^ {\alpha\beta}+2\mu_{s}\mathsf{E}^{\alpha\beta})\boldsymbol{y}_{,\beta}, \tag{3.4}\] \[\mathsf{M}^{\alpha\beta} =\zeta\Bigl{(}\sum_{\gamma}\boldsymbol{y}_{,\gamma\gamma}\Bigr{)} \delta^{\alpha\beta}+2\eta\boldsymbol{y}_{,\alpha\beta}. \tag{3.5}\]
Moreover, it is simple to see that (3.2) satisfies the strong ellipticity condition,
\[\forall(\mathsf{a}_{1},\mathsf{a}_{2})\in\mathbb{R}^{2}\backslash\{(0,0)\},\ \boldsymbol{b}\in\mathbb{R}^{3}\backslash\{\boldsymbol{0}\},\quad\mathsf{a}_{\alpha}\mathsf{a}_{\beta}\boldsymbol{b}\cdot\Bigl(\boldsymbol{C}^{\alpha\beta\delta\gamma}\mathsf{a}_{\delta}\mathsf{a}_{\gamma}\boldsymbol{b}\Bigr)>0, \tag{3.6}\]
\[\boldsymbol{C}^{\alpha\beta\gamma\delta}:=\frac{\partial^{2}U}{\partial y^{i}_ {,\alpha\beta}\partial y^{j}_{,\delta\gamma}}\boldsymbol{e}^{i}\otimes \boldsymbol{e}^{j},\]
while (3.3) does not (see (3.7) and (3.8) below). Physically, this may be viewed as a consequence of the surface energy (3.1) incorporating the surface's resistance to geodesic distortion via also including dependence on the tensor \(\mathsf{L}\).
In general, for (3.1) we have
\[\boldsymbol{\mathsf{T}}^{\alpha}=\frac{\partial U}{\partial\boldsymbol{y}_{, \alpha}}=\frac{\partial U}{\partial\mathsf{E}_{\beta\gamma}}\frac{\partial \mathsf{E}_{\beta\gamma}}{\partial\boldsymbol{y}_{,\alpha}}+\frac{\partial U}{ \partial\mathsf{K}_{\delta\nu}}\frac{\partial\mathsf{K}_{\delta\nu}}{\partial \boldsymbol{y}_{,\alpha}}+\frac{\partial U}{\partial\mathsf{L}_{\beta\gamma \delta}}\frac{\partial\mathsf{L}_{\beta\gamma\delta}}{\partial\boldsymbol{y}_{, \alpha}}.\]
Using
\[\frac{\partial\mathsf{E}_{\beta\gamma}}{\partial\boldsymbol{y}_{,\alpha}}=\frac{1}{2}\Big(\delta^{\alpha}{}_{\beta}\boldsymbol{y}_{,\gamma}+\delta^{\alpha}{}_{\gamma}\boldsymbol{y}_{,\beta}\Big),\] \[\frac{\partial\mathsf{K}_{\mu\nu}}{\partial\boldsymbol{y}_{,\alpha}}=-\gamma^{\alpha}{}_{\mu\nu}\boldsymbol{n},\quad\frac{\partial\mathsf{K}_{\mu\nu}}{\partial\boldsymbol{y}_{,\alpha\beta}}=\frac{1}{2}(\delta^{\alpha}{}_{\mu}\delta^{\beta}{}_{\nu}+\delta^{\alpha}{}_{\nu}\delta^{\beta}{}_{\mu})\boldsymbol{n},\] \[\frac{\partial\mathsf{L}_{\beta\gamma\mu}}{\partial\boldsymbol{y}_{,\alpha}}=\delta^{\alpha}{}_{\beta}\boldsymbol{y}_{,\gamma\mu}-(\delta^{\alpha}{}_{\nu}\boldsymbol{y}_{,\beta}+\delta^{\alpha}{}_{\beta}\boldsymbol{y}_{,\nu})\Gamma^{\nu}{}_{\gamma\mu},\] \[\frac{\partial\mathsf{L}_{\gamma\mu\sigma}}{\partial\boldsymbol{y}_{,\alpha\beta}}=\frac{1}{2}(\delta^{\alpha}{}_{\mu}\delta^{\beta}{}_{\sigma}+\delta^{\alpha}{}_{\sigma}\delta^{\beta}{}_{\mu})\boldsymbol{y}_{,\gamma},\]
we readily compute that
\[\boldsymbol{\mathsf{T}}^{\alpha}= \Big{[}\lambda_{s}\mathsf{E}^{\mu}_{\ \mu}(\mathsf{G}^{-1})^{\alpha\gamma}+2\mu_{s}\mathsf{E}^{\alpha\gamma}\] \[\quad-(\mathsf{g}^{-1})^{\mu\alpha}(\mathsf{g}^{-1})^{\nu\gamma}( \zeta\mathsf{L}_{\mu\delta}{}^{\delta}\mathsf{L}_{\nu\sigma}{}^{\sigma}+2 \eta\mathsf{L}_{\mu\delta\sigma}\mathsf{L}_{\nu}{}^{\delta\sigma})\Big{]} \boldsymbol{y}_{,\gamma},\] \[-\Big{[}\zeta\mathsf{K}^{\mu}{}_{\mu}(\mathsf{G}^{-1})^{\delta \nu}\gamma^{\alpha}{}_{\delta\nu}+2\eta\mathsf{K}^{\delta\nu}\gamma^{\alpha}{ }_{\delta\nu}\Big{]}\boldsymbol{n},\] \[+\Big{[}\zeta(\mathsf{g}^{-1})^{\mu\alpha}\mathsf{L}_{\mu\nu}{}^ {\nu}(G^{-1})^{\gamma\delta}+2\eta(g^{-1})^{\mu\alpha}\mathsf{L}_{\mu}{}^{ \gamma\delta}\Big{]}\boldsymbol{y}_{,\gamma\delta}\] \[-\zeta\Big{[}(\mathsf{g}^{-1})^{\mu\alpha}\mathsf{L}_{\mu\nu}{}^ {\nu}\Gamma^{\beta}{}_{\delta}{}^{\delta}+(\mathsf{g}^{-1})^{\mu\beta}\mathsf{ L}_{\mu\nu}{}^{\nu}\Gamma^{\alpha}{}_{\delta}{}^{\delta}\Big{]}\boldsymbol{y}_{,\beta}\] \[-2\eta\Big{[}(\mathsf{g}^{-1})^{\mu\alpha}\mathsf{L}_{\mu}{}^{ \delta\sigma}\Gamma^{\beta}{}_{\delta\sigma}+(\mathsf{g}^{-1})^{\mu\beta} \mathsf{L}_{\mu}{}^{\delta\sigma}\Gamma^{\alpha}{}_{\delta\sigma}\Big{]} \boldsymbol{y}_{,\beta}.\]
and
\[\boldsymbol{\mathsf{M}}^{\alpha\beta}=\Big{[}\zeta\mathsf{K}^{ \mu}{}_{\mu}(\mathsf{G}^{-1})^{\alpha\beta}+2\eta\mathsf{K}^{\alpha\beta}\Big{]} \boldsymbol{n}\] \[\quad+\Big{[}\zeta(\mathsf{g}^{-1})^{\mu\gamma}\mathsf{L}_{\mu \nu}{}^{\nu}(\mathsf{G}^{-1})^{\alpha\beta}+2\eta(\mathsf{g}^{-1})^{\mu\gamma} \mathsf{L}_{\mu}{}^{\alpha\beta}\Big{]}\boldsymbol{y}_{,\gamma}.\]
To see that the strong ellipticity condition is satisfied for (3.1), we compute
\[\boldsymbol{C}^{\alpha\beta\mu\nu}=\Big{(}\zeta(\mathsf{G}^{-1})^{\alpha\beta} (\mathsf{G}^{-1})^{\mu\nu}+2\eta[(\mathsf{G}^{-1})^{\alpha\mu}(\mathsf{G}^{-1 })^{\beta\nu}+(\mathsf{G}^{-1})^{\alpha\nu}(\mathsf{G}^{-1})^{\beta\mu}] \Big{)}\boldsymbol{I},\]
and thus, for all \((\mathsf{a}_{1},\mathsf{a}_{2})\in\mathbb{R}^{2}\backslash\{(0,0)\}\) and \(\boldsymbol{b}\in\mathbb{R}^{3}\backslash\{\boldsymbol{0}\}\),
\[\mathsf{a}_{\alpha}\mathsf{a}_{\beta}\boldsymbol{b}\cdot\Big{(}\boldsymbol{C}^ {\alpha\beta\delta\gamma}\mathsf{a}_{\delta}\mathsf{a}_{\gamma}\boldsymbol{b} \Big{)}=(\zeta+2\eta)[(\mathsf{G}^{-1})^{\alpha\beta}\mathsf{a}_{\alpha} \mathsf{a}_{\beta}]^{2}|\boldsymbol{b}|^{2}>0. \tag{3.7}\]
For the classical surface energy (3.3) of Steigmann-Ogden type, we have
\[\boldsymbol{C}^{\alpha\beta\mu\nu}=\Big{(}\zeta(\mathsf{G}^{-1})^{\alpha\beta}( \mathsf{G}^{-1})^{\mu\nu}+2\eta[(\mathsf{G}^{-1})^{\alpha\mu}(\mathsf{G}^{-1})^ {\beta\nu}+(\mathsf{G}^{-1})^{\alpha\nu}(\mathsf{G}^{-1})^{\beta\mu}]\Big{)} \boldsymbol{n}\otimes\boldsymbol{n},\]
and thus, for all \((\mathsf{a}_{1},\mathsf{a}_{2})\in\mathbb{R}^{2}\) and \(\boldsymbol{b}\in\mathbb{R}^{3}\),
\[\mathsf{a}_{\alpha}\mathsf{a}_{\beta}\boldsymbol{b}\cdot\Big{(}\boldsymbol{C}^ {\alpha\beta\delta\gamma}\mathsf{a}_{\delta}\mathsf{a}_{\gamma}\boldsymbol{b} \Big{)}=(\zeta+2\eta)[(\mathsf{G}^{-1})^{\alpha\beta}\mathsf{a}_{\alpha} \mathsf{a}_{\beta}]^{2}(\boldsymbol{n}\cdot\boldsymbol{b})^{2}. \tag{3.8}\]
In particular, (3.8) shows that the surface energy (3.3) satisfies the Legendre-Hadamard condition but not the strong ellipticity condition since the right side of (3.8) is \(0\) for \(\boldsymbol{b}\neq\boldsymbol{0}\) and orthogonal to \(\boldsymbol{n}\).
### Linearized equations
We now compute the linearization of (2.13) about the reference configuration. For the bulk solid, we adopt a classical quadratic, isotropic energy density
\[W=\frac{\lambda}{2}(E^{i}_{\,\,i})^{2}+\mu E_{ij}E^{ij}.\]
Here indices are raised using the flat metric on \(\mathbb{R}^{3}\), and \(\lambda\) and \(\mu\) are the Lame constants for the bulk solid. For the surface \(\mathcal{S}\), the surface energy density is given by (3.1).
Let \(\boldsymbol{u}:\mathcal{B}\to\mathbb{R}^{3}\) be a displacement field such that \(\boldsymbol{u}|_{\partial\mathcal{B}\setminus\mathcal{S}}=\boldsymbol{0}\) and
\[\sup_{\boldsymbol{X}\in\mathcal{B}}\Bigl{[}|\boldsymbol{u}(X)|+|\text{Grad} \,\boldsymbol{u}(\boldsymbol{X})|\Bigr{]}+\sum_{\alpha,\beta=1}^{2}\sup_{ \boldsymbol{Y}\in\mathcal{S}}|\boldsymbol{u}_{,\alpha\beta}(\boldsymbol{Y})| \leq\delta_{0}. \tag{3.9}\]
Assume that the body force \(\boldsymbol{f}\), boundary traction \(\boldsymbol{t}\), and Dirichlet condition \(\boldsymbol{\chi}_{0}\) satisfy
\[|\boldsymbol{f}|=O(\delta_{0}),\quad|\boldsymbol{t}|=O(\delta_{0}),\quad| \boldsymbol{\chi}_{0}-\text{Id}|=O(\delta_{0}).\]
If \(\boldsymbol{\chi}(\boldsymbol{X})=\boldsymbol{X}+\boldsymbol{u}(\boldsymbol{X})\), then \(E_{ij}=\varepsilon_{ij}+O(\delta_{0}^{2})\), \(\mathsf{E}_{\alpha\beta}=\epsilon_{\alpha\beta}+O(\delta_{0}^{2})\), and \(\mathsf{K}_{\alpha\beta}=\mathsf{k}_{\alpha\beta}+O(\delta_{0}^{2})\) where
\[\varepsilon_{ij}=\frac{1}{2}\Bigl{(}\boldsymbol{e}_{i}\cdot\frac{\partial \boldsymbol{u}}{\partial X^{j}}+\boldsymbol{e}_{j}\cdot\frac{\partial \boldsymbol{u}}{\partial X^{i}}\Bigr{)}=O(\delta_{0}),\]
and on \(\mathcal{S}\),
\[\epsilon_{\alpha\beta}=\frac{1}{2}\bigl(\boldsymbol{Y}_{,\alpha}\cdot\boldsymbol{u}_{,\beta}+\boldsymbol{Y}_{,\beta}\cdot\boldsymbol{u}_{,\alpha}\bigr)=O(\delta_{0}),\quad\mathsf{k}_{\alpha\beta}=\boldsymbol{N}\cdot\boldsymbol{u}_{;\alpha\beta}=O(\delta_{0}),\]
see (3.12) in [23]. Now we observe that \(\mathsf{L}_{\alpha\beta\delta}=\mathsf{l}_{\alpha\beta\delta}+O(\delta_{0}^{2})\), with
\[\mathsf{l}_{\alpha\beta\delta} =\boldsymbol{Y}_{,\alpha}\cdot\boldsymbol{u}_{,\beta\delta}+ \boldsymbol{Y}_{,\beta\delta}\cdot\boldsymbol{u}_{,\alpha}-\Gamma^{\mu}_{\,\, \,\beta\delta}\bigl{(}\boldsymbol{Y}_{,\alpha}\cdot\boldsymbol{u}_{,\mu}+ \boldsymbol{Y}_{,\mu}\cdot\boldsymbol{u}_{,\alpha}\bigr{)}\] \[=\boldsymbol{Y}_{,\alpha}\cdot\boldsymbol{u}_{,\beta\delta}+ \boldsymbol{Y}_{,\beta\delta}\cdot\boldsymbol{u}_{,\alpha}\] \[=\boldsymbol{Y}_{,\alpha}\cdot\boldsymbol{u}_{,\beta\delta}+( \boldsymbol{N}\cdot\boldsymbol{u}_{,\alpha})\mathsf{B}_{\beta\delta}=O( \delta_{0}).\]
Then \(\boldsymbol{\mathsf{T}}^{\alpha}=\boldsymbol{\mathsf{t}}^{\alpha}+O(\delta_{0}^ {2})\) and \(\boldsymbol{\mathsf{M}}^{\alpha\beta}=\boldsymbol{\mathsf{m}}^{\alpha\beta}+O( \delta_{0}^{2})\) where
\[\boldsymbol{\mathsf{t}}^{\alpha} :=\Bigl[\lambda_{s}\epsilon^{\mu}_{\ \mu}(\mathsf{G}^{-1})^{\alpha\gamma}+2\mu_{s}\epsilon^{\alpha\gamma}\Bigr]\boldsymbol{Y}_{,\gamma}-\Bigl[\zeta\mathsf{k}^{\mu}_{\ \mu}(\mathsf{G}^{-1})^{\delta\nu}\Gamma^{\alpha}{}_{\delta\nu}+2\eta\mathsf{k}^{\delta\nu}\Gamma^{\alpha}{}_{\delta\nu}\Bigr]\boldsymbol{N}\] \[\quad+\Bigl[\zeta\,\mathsf{l}^{\alpha}{}_{\nu}{}^{\nu}\mathsf{B}^{\delta}_{\ \delta}+2\eta\,\mathsf{l}^{\alpha\delta\sigma}\mathsf{B}_{\delta\sigma}\Bigr]\boldsymbol{N}-\Bigl[\zeta\,\mathsf{l}^{\beta}{}_{\nu}{}^{\nu}\Gamma^{\alpha}{}_{\delta}{}^{\delta}+2\eta\,\mathsf{l}^{\beta\delta\sigma}\Gamma^{\alpha}{}_{\delta\sigma}\Bigr]\boldsymbol{Y}_{,\beta},\] \[\boldsymbol{\mathsf{m}}^{\alpha\beta} :=\Bigl[\zeta\mathsf{k}^{\mu}_{\ \mu}(\mathsf{G}^{-1})^{\alpha\beta}+2\eta\mathsf{k}^{\alpha\beta}\Bigr]\boldsymbol{N}+\Bigl[\zeta\,\mathsf{l}^{\gamma}{}_{\nu}{}^{\nu}(\mathsf{G}^{-1})^{\alpha\beta}+2\eta\,\mathsf{l}^{\gamma\alpha\beta}\Bigr]\boldsymbol{Y}_{,\gamma}.\]
The linearization of (2.13) about the reference configuration is obtained by omitting the \(O(\delta_{0}^{2})\) terms from \(\boldsymbol{P}\), \(\boldsymbol{\mathsf{T}}^{\alpha}\) and \(\boldsymbol{\mathsf{M}}^{\alpha\beta}\), yielding
\[\text{Div}\,\boldsymbol{\sigma}+\boldsymbol{f}=\boldsymbol{0},\quad\text{on }\mathcal{B}, \tag{3.10}\] \[\boldsymbol{\sigma}\boldsymbol{N}=G^{-1/2}(G^{1/2}\boldsymbol{\mathsf{p}}^{\alpha})_{,\alpha}+\boldsymbol{t},\quad\text{on }\mathcal{S},\] \[\boldsymbol{u}=\boldsymbol{0},\quad\text{on }\partial\mathcal{B}\backslash\mathcal{S},\]
where \(\boldsymbol{\sigma}=\lambda(\operatorname{tr}\!\boldsymbol{\varepsilon})\boldsymbol{I}+2 \mu\boldsymbol{\varepsilon}\) and \(\boldsymbol{\mathsf{p}}^{\alpha}=\boldsymbol{\mathsf{t}}^{\alpha}-G^{-1/2}(G^{1/ 2}\boldsymbol{\mathsf{m}}^{\alpha\beta})_{,\beta}\). We observe that solutions to the linearized equations (3.10) are critical points of the energy functional
\[\mathcal{A}_{L}[\boldsymbol{u}] =\int_{\mathcal{B}}\Bigl{[}\frac{\lambda}{2}(\varepsilon^{i}_{\ \ i})^{2}+\mu \varepsilon_{ij}\varepsilon^{ij}\Bigr{]}dV-\int_{\mathcal{B}}\boldsymbol{f} \cdot\boldsymbol{u}\,dV+\int_{\mathcal{S}}\Bigl{[}\frac{\lambda_{s}}{2}( \epsilon^{\alpha}_{\ \ \alpha})^{2}+\mu_{s}\epsilon_{\alpha\beta}\epsilon^{\alpha\beta} \Bigr{]}dS\] \[\quad+\int_{\mathcal{S}}\Bigl{[}\frac{\zeta}{2}\Bigl{(}( \mathsf{k}^{\alpha}_{\ \ \alpha})^{2}+\mathsf{l}_{\mu\alpha}{}^{\alpha}\mathsf{l}^{\mu}_{\ \beta}{}^{\beta}\Bigr{)}+\eta\Bigl{(}\mathsf{k}_{\alpha\beta}\mathsf{k}^{ \alpha\beta}+\mathsf{l}_{\mu\alpha\beta}\mathsf{l}^{\mu\alpha\beta}\Bigr{)} \Bigr{]}dA-\int_{\mathcal{S}}\boldsymbol{t}\cdot\boldsymbol{u}\,dA\]
over the set of \(\boldsymbol{u}\) satisfying \(\boldsymbol{u}|_{\partial\mathcal{B}\setminus\mathcal{S}}=\boldsymbol{0}\).
In the case of the classical Hilgers-Pipkin surface energy (3.2), we see from (3.4) and (3.5) that
\[\boldsymbol{\mathsf{t}}^{\alpha}=(\lambda_{s}\epsilon^{\gamma}_{\ \gamma}\delta^{\alpha\beta}+2\mu_{s}\epsilon^{\alpha\beta})\boldsymbol{Y}_{,\beta},\quad\boldsymbol{\mathsf{m}}^{\alpha\beta}=\zeta\delta^{\alpha\beta}\boldsymbol{u}_{,\gamma}{}^{\gamma}+2\eta\,\boldsymbol{u}^{,\alpha\beta}.\]
Writing \(\boldsymbol{u}=\boldsymbol{\mathsf{u}}+\mathsf{u}^{3}\boldsymbol{N}=\mathsf{u}^{\gamma}\boldsymbol{Y}_{,\gamma}+\mathsf{u}^{3}\boldsymbol{N}\), it follows that (3.10) becomes
\[\operatorname{Div}\boldsymbol{\sigma}+\boldsymbol{f}=\boldsymbol{0},\quad\text{on }\mathcal{B}, \tag{3.11}\] \[\boldsymbol{\sigma}\boldsymbol{N}=\mu_{s}\boldsymbol{\mathsf{u}}_{,\alpha}{}^{\alpha}+(\lambda_{s}+\mu_{s})\mathsf{u}^{\gamma}{}_{,\alpha\gamma}\boldsymbol{Y}^{,\alpha}-(\zeta+2\eta)\partial_{\alpha}\partial_{\beta}\boldsymbol{u}^{,\alpha\beta}+\boldsymbol{t},\quad\text{on }\mathcal{S},\] \[\boldsymbol{u}=\boldsymbol{0},\quad\text{on }\partial\mathcal{B}\setminus\mathcal{S}.\]
## 4. Mode-III Fracture Problem
In this section, we apply the linearized theory (3.11) to the problem of a brittle infinite plate, with a straight crack \(\mathcal{C}\) of length \(2\ell\), under far-field anti-plane shear loading \(\sigma\). As discussed in the Introduction and in contrast to ascribing either a quadratic Gurtin-Murdoch or Steigmann-Ogden surface energy (3.3) to the crack fronts, the use of (3.1) yields a model that predicts bounded strains and stresses up to the crack tips (see Theorem 4.4).
### Formulation and governing equations
We consider a brittle, infinite plate under anti-plane shear loading, \(\lim_{x^{2}\to\pm\infty}\sigma_{13}=0\) and \(\lim_{x^{2}\to\pm\infty}\sigma_{23}=\sigma\), with a straight crack \(\mathcal{C}=\{(x^{1},0,x^{3})\mid x^{1}\in[-\ell,\ell]\}\) of length \(2\ell\) (see Figure 2). For anti-plane shear, the displacement field takes the form
\[\boldsymbol{u}(x^{1},x^{2},x^{3})=u(x^{1},x^{2})\boldsymbol{e}_{3}.\]
Then the only nonzero components of the stress are
\[\sigma_{13}=\mu u_{,1},\quad\sigma_{23}=\mu u_{,2}.\]
By the symmetry of the problem, \(u\) can be taken to be even in \(x^{1}\) and odd in \(x^{2}\), so we will focus only on the strain and stress fields for \(x^{2}\geq 0\). The governing field equations are (3.11) on \(\mathcal{B}=\{(x^{1},x^{2},x^{3})\mid x^{2}\geq 0\}\) with \(\mathcal{S}=\{(x^{1},0,x^{3})\mid x^{1}\in[-\ell,\ell]\}\), \(\boldsymbol{t}=\boldsymbol{0}\) and \(\boldsymbol{f}=\boldsymbol{0}\).
We define dimensionless variables
\[x=\frac{x^{1}}{\ell},\quad y=\frac{x^{2}}{\ell},\quad z=\frac{x^{3}}{\ell}, \quad w(x,y,z)=\frac{1}{\ell}\Bigl{(}u(x^{1},x^{2},x^{3})-\frac{\sigma}{\mu} x^{2}\Bigr{)}.\]
Then the field equations take the dimensionless form
\[\Delta w(x,y)=0,\quad y>0, \tag{4.1}\] \[-w_{y}(x,0)=\alpha w_{xx}(x,0)-\beta w_{xxxx}(x,0)+\gamma,\quad x\in(-1,1),\] \[w(x,0)=0,\quad|x|\geq 1,\] \[w_{x}(\pm 1,0)=0,\]
with the decay condition \(\lim_{y\to\infty}|\nabla w(x,y)|=0\). We note that the boundary conditions \(w_{x}(\pm 1,0)=0\) imply that the crack opening is cusp-shaped rather than blunted (see also Figure 3 and Figure 4). The dimensionless parameters \(\alpha,\beta\) and \(\gamma\) are given by
\[\alpha=\frac{\mu_{s}}{\mu\ell}>0,\quad\beta=\frac{\zeta+2\eta}{\mu\ell^{3}}>0, \quad\gamma=\frac{\sigma}{\mu}, \tag{4.2}\]
and in particular, we see from (4.2) that the behavior of the displacement \(w\) depends on the length of the crack, \(\ell\). For macro cracks satisfying \(\beta\ll\alpha\ll 1\), we expect \(w(x,0)\) to be well-approximated by the singular, rounded opening profile of classical linear elastic fracture mechanics except in small regions near the crack tips (boundary layers). See Figure 4.
We remark that in using the Steigmann-Ogden surface energy (3.3) rather than (3.1), the boundary conditions at \(y=0\) are replaced by
\[-w_{y}(x,0)=\alpha w_{xx}(x,0)+\gamma,\quad x\in(-1,1), \tag{4.3}\] \[w(x,0)=0,\quad|x|\geq 1.\]
One may view this loss of higher order derivatives in the boundary conditions as a consequence of the fact that the Steigmann-Ogden surface energy does not satisfy the strong-ellipticity condition (3.6): for anti-plane shear, \(\mathbf{b}=\mathbf{u}(x^{1},0,x^{3})\) is orthogonal to the surface's normal \(\mathbf{n}=-\mathbf{e}_{2}\) (see (3.8)). As discussed in [9, 25], the boundary conditions (4.3) _do not_ lead to a model predicting bounded strains up to the crack tips \(x=\pm 1\), i.e., the displacement field satisfies
\[\sup_{y>0}|\nabla w(\pm 1,y)|=\infty.\]
We see that (4.1) is the system of Euler-Lagrange equations for the energy functional
\[\mathcal{A}_{L}[w] =\frac{1}{2}\int_{0}^{\infty}\int_{-\infty}^{\infty}|\nabla w(x,y )|^{2}dxdy+\int_{-\infty}^{\infty}\Bigl{(}\frac{\alpha}{2}|w_{x}(x,0)|^{2}+ \frac{\beta}{2}|w_{xx}(x,0)|^{2}\Bigr{)}dx\] \[\quad-\gamma\int_{-\infty}^{\infty}w(x,0)dx\]
defined for \(w\) with \(\nabla w\in L^{2}(\{y>0\})\), \(w(\cdot,0)\in H^{2}(\mathbb{R})\) and \(w(x,0)=0\) for all \(|x|\geq 1\). Motivated by this observation, we define the Hilbert space \(H\) to be the completion of \(C^{\infty}_{c}((-1,1))\) under the norm
\[\|f\|_{H}^{2}:=\int_{-\infty}^{\infty}\bigl{(}\alpha|f^{\prime}(x)|^{2}+\beta|f^{\prime\prime}(x)|^{2}\bigr{)}dx.\]
Figure 2. Schematic of the mode-III problem with the crack \(\mathcal{C}\) appearing in blue.
It is straightforward to verify the following facts using the fundamental theorem of calculus and Cauchy-Schwarz inequality:
* (Sobolev embedding) If \(f\in H\) then \(f\in C^{1,1/2-}_{c}(\mathbb{R})\) and \(f(x)=0\) for all \(|x|\geq 1\), and for all \(\delta\in[0,1/2)\), there exists a constant \(A>0\) depending on \(\alpha,\beta\) and \(\delta\), such that for all \(f\in H\), \[\|f\|_{C^{1,\delta}(\mathbb{R})}\leq A\|f\|_{H}.\]
* \(f\in H\) if and only if \(f\in H^{2}(\mathbb{R})\) and \(f(x)=0\) for all \(|x|\geq 1\). Moreover, there exist \(b,B>0\) depending on \(\alpha\) and \(\beta\) such that for all \(f\in H\), \[b\|f\|_{H}\leq\|f\|_{H^{2}(\mathbb{R})}\leq B\|f\|_{H}.\] (4.4)
The problem (4.1) can be reduced completely to a problem on the boundary by using the Dirichlet-to-Neumann map \(-w_{y}(x,0)=\mathcal{H}w_{x}(x,0)\) where \(\mathcal{H}\) is the Hilbert transform
\[\mathcal{H}f(x)=\frac{1}{\pi}\,\text{p.v.}\,\int_{-\infty}^{\infty}\frac{f(s) }{x-s}ds,\quad f\in H.\]
Then finding \(w\) with \(\nabla w\in L^{2}(\{y>0\})\) and \(w(\cdot,0)\in H\) satisfying (4.1) is equivalent to determining \(w(\cdot,0)=:f\in H\) satisfying6
Footnote 6: Once \(f\) is found, \(w\) is determined on the upper half plane using the standard Poisson kernel for the upper half plane.
\[\beta f^{\prime\prime\prime\prime}(x)-\alpha f^{\prime\prime}(x)+\mathcal{H}f ^{\prime}(x)=\gamma,\quad x\in(-1,1). \tag{4.5}\]
By using the Plancherel theorem, the Fourier representation of the Hilbert transform (see (4.13)), and (4.4), we have for all \(f\in H\),
\[\|\mathcal{H}f^{\prime}\|_{H^{1}(\mathbb{R})}\leq\|f^{\prime}\|_{H^{1}( \mathbb{R})}\leq\|f\|_{H^{2}(\mathbb{R})}\leq B\|f\|_{H}. \tag{4.6}\]
_Definition 4.1_.: A function \(f\in H\) is a _weak solution_ to the integro-differential equation (4.5) if for all \(g\in H\)
\[\int_{-\infty}^{\infty}\Bigl{[}\beta f^{\prime\prime}(x)g^{\prime\prime}(x)+\alpha f^{\prime}(x)g^{\prime}(x)+\mathcal{H}f^{\prime}(x)g(x)\Bigr{]}dx=\int_{-\infty}^{\infty}\gamma g(x)dx. \tag{4.7}\]
We remark that since \(f,g\in H\), the integrals appearing in (4.7) are in fact over the interval \((-1,1)\).
A function \(f\in H\) is a _classical solution_ to (4.5) if \(f\in C^{4}((-1,1))\cap H\) and \(f\) satisfies (4.5) pointwise.
We note that by (4.6) and Cauchy-Schwarz, (4.7) is well-defined for each \(f,g\in H\).
### Solution of the integro-differential equation
We now establish that there exists a unique classical solution to (4.5), and that the solution's behavior is consistent with the linearization assumption (3.9). We introduce the Green function
\[G(x,\tau)=\begin{cases}\frac{1}{24}(x-1)^{2}(\tau+1)^{2}(1+2x-2\tau-x\tau)&\tau \in[-1,x],\\ \frac{1}{24}(\tau-1)^{2}(x+1)^{2}(1+2\tau-2x-x\tau)&\tau\in[x,1],\end{cases}\]
satisfying \(G_{xxxx}(x,\tau)=\delta(x-\tau)\), \(G(\pm 1,\tau)=0\), \(G_{x}(\pm 1,\tau)=0\). We note that \(G(x,\tau)=G(\tau,x)\) for all \(\tau,x\in[-1,1]\), \(G\in C^{2}([-1,1]\times[-1,1])\) and \(\int_{-1}^{1}G(x,\tau)d\tau=\frac{1}{24}(1-x^{2})^{2}\). In particular, we have for all \(f\in H\),
\[\int_{-1}^{1}G_{\tau\tau}(x,\tau)f^{\prime\prime}(\tau)d\tau=f(x). \tag{4.8}\]
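The stated properties of \(G\) are elementary but easy to get wrong; the following is a minimal symbolic sanity check of the branch formulas given above (our own sketch using SymPy, not part of the original analysis).

```python
import sympy as sp

x, tau = sp.symbols('x tau', real=True)

# The two branches of G(x, tau) exactly as written above.
G_lower = sp.Rational(1, 24)*(x - 1)**2*(tau + 1)**2*(1 + 2*x - 2*tau - x*tau)  # tau in [-1, x]
G_upper = sp.Rational(1, 24)*(tau - 1)**2*(x + 1)**2*(1 + 2*tau - 2*x - x*tau)  # tau in [x, 1]

# Each branch is cubic in x, so G_xxxx vanishes away from the diagonal x = tau.
assert sp.expand(sp.diff(G_lower, x, 4)) == 0
assert sp.expand(sp.diff(G_upper, x, 4)) == 0

# Clamped conditions G(+-1, tau) = 0 and G_x(+-1, tau) = 0.
assert sp.simplify(G_lower.subs(x, 1)) == 0 and sp.simplify(G_upper.subs(x, -1)) == 0
assert sp.simplify(sp.diff(G_lower, x).subs(x, 1)) == 0
assert sp.simplify(sp.diff(G_upper, x).subs(x, -1)) == 0

# Symmetry G(x, tau) = G(tau, x): swapping the arguments exchanges the branches.
assert sp.simplify(G_lower - G_upper.subs({x: tau, tau: x}, simultaneous=True)) == 0

# Unit jump of G_xxx across x = tau, i.e. G_xxxx = delta(x - tau).
assert sp.simplify(sp.diff(G_lower, x, 3) - sp.diff(G_upper, x, 3)) == 1

# Row sum: int_{-1}^{1} G(x, tau) dtau = (1 - x^2)^2 / 24.
row_sum = sp.integrate(G_lower, (tau, -1, x)) + sp.integrate(G_upper, (tau, x, 1))
assert sp.simplify(row_sum - (1 - x**2)**2/24) == 0
```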
**Lemma 4.2**.: _A function \(f\in H\) is a weak solution to (4.5) if and only if \(f\) satisfies_
\[\beta f(x)+\int_{-1}^{1}G(x,\tau)(-\alpha f^{\prime\prime}(\tau)+\mathcal{H}f^ {\prime}(\tau))d\tau=\frac{\gamma}{24}(1-x^{2})^{2},\quad x\in[-1,1]. \tag{4.9}\]
Proof.: Let \(h\in L^{2}(\mathbb{R})\) with \(h(x)=0\) for all \(|x|>1\), and set
\[g(x)=\begin{cases}\int_{-1}^{1}G(x,\tau)h(\tau)d\tau&\text{ if }|x|\leq 1,\\ 0&\text{ if }|x|>1.\end{cases}\]
Since \(G\in C^{2}([-1,1]\times[-1,1])\), \(G(\pm 1,\tau)=0\), and \(G_{x}(\pm 1,\tau)=0\), \(g\) is twice continuously differentiable on \(\mathbb{R}\backslash\{\pm 1\}\) and continuously differentiable on \(\mathbb{R}\) with
\[g^{\prime}(x) =\chi_{\{|x|\leq 1\}}(x)\int_{-1}^{1}G_{x}(x,\tau)h(\tau)d\tau,\] \[g^{\prime\prime}(x) =\chi_{\{|x|\leq 1\}}(x)\int_{-1}^{1}G_{xx}(x,\tau)h(\tau)d\tau,\]
where \(\chi_{E}\) is the indicator function of a subset \(E\subseteq\mathbb{R}\). In particular, we conclude that \(g\in H^{2}(\mathbb{R})\) and thus \(g\in H\).
Figure 3. Numerical solutions for the equivalent formulation of (4.5) as a Fredholm problem (4.10). The parameters \((\beta,\alpha,\gamma)\) range over \((1,1,1)\), \((5,1,5)\) and \((10,1,10)\). For \(\gamma=\beta\) and \(\beta\gg 1\simeq\alpha\), the opening profile is well approximated by the limiting opening profile \(f_{\infty}(x)=\frac{1}{24}(1-x^{2})^{2}\) on \([-1,1]\).
Inserting \(g\) into (4.7), integrating by parts in the second term and using that \(g(\pm 1)=0\) yield
\[\beta\int_{-1}^{1}\int_{-1}^{1}G_{xx}(x,\tau)f^{\prime\prime}(x)h(\tau)d\tau dx\]
\[+\int_{-1}^{1}\int_{-1}^{1}G(x,\tau)(-\alpha f^{\prime\prime}(x)+\mathcal{H}f ^{\prime}(x))h(\tau)d\tau dx\]
\[=\gamma\int_{-1}^{1}\int_{-1}^{1}G(x,\tau)h(\tau)d\tau dx.\]
Interchanging the order of integration, using the symmetry of \(G\) and relabeling the integration variables lead to
\[\beta\int_{-1}^{1}\int_{-1}^{1}G_{\tau\tau}(x,\tau)f^{\prime\prime}(\tau)d\tau \,h(x)dx\]
\[+\int_{-1}^{1}\int_{-1}^{1}G(x,\tau)(-\alpha f^{\prime\prime}(\tau)+\mathcal{ H}f^{\prime}(\tau))d\tau\,h(x)dx\]
\[=\int_{-1}^{1}\frac{\gamma}{24}(1-x^{2})^{2}h(x)dx.\]
Finally, by (4.8) we conclude that
\[\int_{-1}^{1}\Bigl{[}\beta f(x)+\int_{-1}^{1}G(x,\tau)(-\alpha f^{\prime \prime}(\tau)+\mathcal{H}f^{\prime}(\tau))d\tau\Bigr{]}\,h(x)dx\]
\[=\int_{-1}^{1}\frac{\gamma}{24}(1-x^{2})^{2}h(x)dx,\]
for all \(h(x)\in L^{2}(\mathbb{R})\) with \(h(x)=0\) for all \(|x|>1\), proving (4.9).
Conversely, if \(f\in H\) and (4.9) holds, then for all \(g\in H\), we have
\[\beta\int_{-\infty}^{\infty}f^{\prime\prime}(x)g^{\prime\prime}(x)dx =\int_{-1}^{1}\int_{-1}^{1}G_{xx}(x,\tau)(\alpha f^{\prime\prime}(\tau)-\mathcal{H}f^{\prime}(\tau))g^{\prime\prime}(x)d\tau dx\] \[\quad+\int_{-1}^{1}\frac{\gamma}{6}(3x^{2}-1)g^{\prime\prime}(x)dx.\]
We again interchange the order of integration and use integration by parts and \(\int_{-1}^{1}G_{xx}(x,\tau)g^{\prime\prime}(x)dx=g(\tau)\) to conclude that
\[\int_{-1}^{1}\int_{-1}^{1}G_{xx}(x,\tau)(\alpha f^{\prime\prime}(\tau)-\mathcal{H}f^{\prime}(\tau))g^{\prime\prime}(x)d\tau dx+\int_{-1}^{1}\frac{\gamma}{6}(3x^{2}-1)g^{\prime\prime}(x)dx\]
\[=\int_{-1}^{1}(\alpha f^{\prime\prime}(\tau)-\mathcal{H}f^{\prime}(\tau))g(\tau)d\tau+\int_{-1}^{1}\gamma g(x)dx\]
\[=-\int_{-1}^{1}(\alpha f^{\prime}(x)g^{\prime}(x)+\mathcal{H}f^{\prime}(x)g(x))dx+\int_{-1}^{1}\gamma g(x)dx.\]
This proves \(f\in H\) satisfies (4.7) and concludes the proof of the lemma.
Via integration by parts and straightforward computations, we conclude from Lemma 4.2 that the crack opening profile \(f\) must satisfy the following Fredholm integral equation of the second kind (4.10). We will show in Theorem 4.4 that this
equation is uniquely solvable for _arbitrary_ \(\alpha,\beta>0\) and \(\gamma\). We remark that since the kernel extends to a continuous function on \([-1,1]\times[-1,1]\) (the singularities are removable), the numerical computation of solutions is relatively straightforward via the Nyström method with the trapezoidal rule to approximate the integral (see Figure 3 and Figure 4).
**Corollary 4.3**.: _A function \(f\in H\) is a weak solution to (4.5) if and only if \(f\) satisfies the Fredholm equation_
\[\beta f(x)+\int_{-1}^{1}K(x,s)f(s)ds=\frac{\gamma}{24}(1-x^{2})^{2},\quad x\in [-1,1], \tag{4.10}\]
_where \(K(x,s)=-\alpha G_{ss}(x,s)+\frac{1}{\pi}\)p.v. \(\int_{-1}^{1}\frac{G_{\tau}(x,\tau)}{s-\tau}d\tau\),_
\[G_{ss}(x,s)=\begin{cases}-\frac{1}{4}(x-1)^{2}(2s+xs+1)&\quad s\in[-1,x],\\ -\frac{1}{4}(x+1)^{2}(-2s+xs+1)&\quad s\in[x,1],\end{cases}\]
_and_
\[\frac{1}{\pi}p.v.\int_{-1}^{1}\frac{G_{\tau}(x,\tau)}{s-\tau}d\tau =\frac{1}{4\pi}(sx-1)(x^{2}-1)+\frac{1}{2\pi}(s-x)^{2}\log|x-s|\] \[\quad-\frac{1}{8\pi}(x-1)^{2}(-x+2s+sx)(1+s)\log(1+s)\] \[\quad-\frac{1}{8\pi}(x+1)^{2}(x-2s+sx)(1-s)\log(1-s).\]
Figure 4. Numerical solutions for the macro-crack regime \(\beta\ll\alpha\ll 1\). The parameters \((\beta,\alpha,\gamma)\) range over \((10^{-1},10^{-1},1)\), \((10^{-2},10^{-1},1)\), \((10^{-5},10^{-2},1)\) and \((10^{-6},10^{-3},1)\). For \(\beta\ll\alpha\ll 1\), we expect the crack opening \(f(x)\) to be well-approximated by the singular, rounded opening profile predicted by classical linear elastic fracture mechanics away from the crack tips where \(f^{\prime}(\pm 1)=0\) (and the profile is cusped).
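As a complement to the remark preceding Corollary 4.3, the following Python sketch assembles a Nyström/trapezoidal discretization of (4.10) from the closed-form kernel above; the grid size, function names and the treatment of the removable logarithmic singularities (assigned their limiting value zero) are our own illustrative choices.

```python
import numpy as np

def _xlogy(a, b):
    """a * log(b) with the removable-singularity convention a*log(b) -> 0 as b -> 0."""
    b_safe = np.where(b > 1e-300, b, 1.0)
    return np.where(b > 1e-300, a * np.log(b_safe), 0.0)

def G_ss(x, s):
    """Second s-derivative of the Green function (piecewise formula in Corollary 4.3)."""
    return np.where(s <= x,
                    -0.25 * (x - 1)**2 * (2*s + x*s + 1),
                    -0.25 * (x + 1)**2 * (-2*s + x*s + 1))

def pv_part(x, s):
    """Closed form of (1/pi) p.v. int G_tau(x,tau)/(s-tau) dtau from Corollary 4.3."""
    out = (s*x - 1)*(x**2 - 1)/(4*np.pi)
    out += _xlogy((s - x)**2, np.abs(x - s)) / (2*np.pi)
    out -= _xlogy((x - 1)**2 * (-x + 2*s + s*x) * (1 + s), 1 + s) / (8*np.pi)
    out -= _xlogy((x + 1)**2 * (x - 2*s + s*x) * (1 - s), 1 - s) / (8*np.pi)
    return out

def solve_fredholm(alpha, beta, gamma, n=401):
    """Nystrom discretization of (4.10) with the trapezoidal rule on [-1, 1]."""
    x = np.linspace(-1.0, 1.0, n)
    h = x[1] - x[0]
    w = np.full(n, h); w[0] = w[-1] = h/2            # trapezoidal weights
    X, S = np.meshgrid(x, x, indexing='ij')          # X[i,j] = x_i, S[i,j] = s_j
    K = -alpha*G_ss(X, S) + pv_part(X, S)            # continuous kernel K(x_i, s_j)
    A = beta*np.eye(n) + K*w[None, :]                # (beta I + K W) f = rhs
    rhs = gamma/24.0 * (1 - x**2)**2
    return x, np.linalg.solve(A, rhs)

# Example run in the macro-crack regime of Figure 4 (parameters are illustrative).
x, f = solve_fredholm(alpha=1e-1, beta=1e-2, gamma=1.0)
print(f.max())   # maximum crack-opening displacement
```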
**Theorem 4.4**.: _There exists \(C>0\) depending on \(\alpha\) and \(\beta\) such that the following hold. There exists a unique classical solution \(f\) to (4.5), and \(f\) satisfies_
\[\|f\|_{C^{4}([-1,1])}\leq C|\gamma|. \tag{4.11}\]
_Moreover, the displacement field \(w(x,y)=\int_{-\infty}^{\infty}P_{y}(x-s)f(s)ds\), where \(P_{y}(\cdot)\) is the Poisson kernel for the upper half plane, has bounded strains up to the crack tips:_
\[\|w\|_{C^{1}(\{y\geq 0\})}\leq C|\gamma|. \tag{4.12}\]
Proof.: In what follows, \(C\) will denote a positive constant depending only on \(\alpha\) and \(\beta\) that may change from line to line, and we denote the Fourier transform and inverse Fourier transform by
\[\hat{f}(\xi)=\int_{-\infty}^{\infty}f(x)e^{-2\pi ix\xi}dx,\quad\check{f}(x)= \int_{-\infty}^{\infty}f(\xi)e^{2\pi ix\xi}\,d\xi.\]
We recall that
\[\widehat{f}^{\prime}(\xi)=2\pi i\xi\hat{f}(\xi),\quad\widehat{\mathcal{H}f}( \xi)=-i\text{sgn}(\xi)\hat{f}(\xi), \tag{4.13}\]
the latter relation following from the Fourier representation of \(w\) on the upper half plane (see (4.15)).
We define a bilinear form \(B(\cdot,\cdot):H\times H\to\mathbb{R}\) by
\[B(f,g)=\int_{-\infty}^{\infty}[\beta f^{\prime\prime}(x)g^{\prime\prime}(x)+ \alpha f^{\prime}(x)g^{\prime}(x)+\mathcal{H}f^{\prime}(x)g(x)\Big{]}dx,\quad f,g\in H.\]
By Cauchy-Schwarz, (4.6), and (4.4) we conclude that for all \(f,g\in H\),
\[|B(f,g)|\leq(1+B^{2})\|f\|_{H}\|g\|_{H},\]
so that \(B(\cdot,\cdot)\) is a bounded bilinear form on \(H\). Moreover, by the Plancherel theorem and (4.13), we have for all \(f\in H\),
\[\int_{-\infty}^{\infty}\mathcal{H}f^{\prime}(x)f(x)dx=\int_{-\infty}^{\infty }2\pi|\xi||\hat{f}(\xi)|^{2}d\xi\geq 0.\]
Thus, the bilinear form is coercive. Since for all \(g\in H\),
\[\Big{|}\int_{-\infty}^{\infty}\gamma g(x)dx\Big{|}\leq|\gamma|\|g\|_{C([-1,1] )}\leq|\gamma|A\|g\|_{H},\]
the classical Lax-Milgram theorem implies that there exists a unique weak solution \(f\in H\) to (4.5), and moreover,
\[\|f\|_{H}\leq A|\gamma|. \tag{4.14}\]
To prove (4.12), we express \(w\) via the Fourier transform,
\[w(x,y)=\int_{-\infty}^{\infty}e^{-2\pi y|\xi|}e^{2\pi ix\xi}\hat{f}(\xi)d\xi. \tag{4.15}\]
Since \(f\in H\subset H^{2}(\mathbb{R})\), we have
\[\int(1+|\xi|)^{4}|\hat{f}(\xi)|^{2}d\xi\leq C_{0}\|f\|_{H^{2}(\mathbb{R})}^{2}\leq C \|f\|_{H}^{2}.\]
Thus, by Cauchy-Schwarz
\[|w(x,y)| +|\nabla w(x,y)|\] \[\leq\int_{-\infty}^{\infty}(1+2\pi|\xi|\sqrt{2})|\hat{f}(\xi)|\,d\xi\] \[\leq 2\pi\sqrt{2}\Big{(}\!\int_{-\infty}^{\infty}(1+|\xi|)^{-2}d\xi\Big{)}^{1/2}\Big{(}\!\int_{-\infty}^{\infty}(1+|\xi|)^{4}|\hat{f}(\xi)|^{2}d\xi\Big{)}^{1/2}\] \[\leq C\|f\|_{H}\leq C|\gamma|.\]
We now show that the weak solution \(f\) is a classical solution, and (4.11) holds. By a density argument in \(H\), we have that \(\int_{-1}^{1}G(\cdot,\tau)(\alpha f^{\prime\prime}(\tau)-\mathcal{H}f^{\prime} (\tau))d\tau\in H^{3}([-1,1])\) with
\[\frac{d^{k}}{dx^{k}}\int_{-1}^{1}\!G(x,\tau)(\alpha f^{\prime \prime}(\tau)-\mathcal{H}f^{\prime}(\tau))d\tau\] \[\qquad\qquad=\int_{-1}^{1}\partial_{x}^{k}G(x,\tau)(\alpha f^{ \prime\prime}(\tau)-\mathcal{H}f^{\prime}(\tau))d\tau,\quad k=1,2,\] \[\frac{d^{3}}{dx^{3}}\int_{-1}^{1}\!G(x,\tau)(\alpha f^{\prime \prime}(\tau)-\mathcal{H}f^{\prime}(\tau))d\tau\] \[\qquad\qquad=\int_{-1}^{x}\frac{1}{4}(2-\tau)(\tau+1)^{2}(\alpha f ^{\prime\prime}(\tau)-\mathcal{H}f^{\prime}(\tau))d\tau\] \[\qquad\qquad\qquad-\int_{x}^{1}\frac{1}{4}(2+\tau)(\tau-1)^{2}( \alpha f^{\prime\prime}(\tau)-\mathcal{H}f^{\prime}(\tau))d\tau. \tag{4.16}\]
By Lemma 4.2, we conclude that \(f\in H^{3}([-1,1])\), and by (4.9), (4.16), Cauchy-Schwarz, and (4.14) we have
\[\|f^{\prime\prime\prime}\|_{L^{2}([-1,1])} \leq C(\|f^{\prime\prime}\|_{L^{2}(\mathbb{R})}+\|\mathcal{H}f^{ \prime}\|_{L^{2}(\mathbb{R})}+|\gamma|)\] \[\leq C(\|f\|_{H}+|\gamma|)\leq C|\gamma|. \tag{4.17}\]
Moreover, by (4.14), (4.17) and the fundamental theorem of calculus, \(f\in C^{2}([-1,1])\) with
\[\|f\|_{C^{2}([-1,1])}\leq C|\gamma|. \tag{4.18}\]
By (4.7) and integration by parts, it follows that for all \(g\in C_{c}^{\infty}((-1,1))\subset H\),
\[\int_{-1}^{1}f^{\prime\prime\prime}(x)g^{\prime}(x)dx=\frac{1}{\beta}\int_{-1 }^{1}[-\alpha f^{\prime\prime}(x)+\mathcal{H}f^{\prime}(x)-\gamma]g(x)dx,\]
and, thus, \(f\in H^{4}([-1,1])\) and
\[\beta f^{\prime\prime\prime\prime}(x)-\alpha f^{\prime\prime}(x)+\mathcal{H} f^{\prime}(x)=\gamma,\quad\text{for a.e. }x\in(-1,1). \tag{4.19}\]
Then by (4.19), (4.18), and the fact that \(\mathcal{H}f^{\prime}\in H^{1}(\mathbb{R})\hookrightarrow C(\mathbb{R})\), we conclude that \(f^{\prime\prime\prime\prime}\in C([-1,1])\) and
\[\|f^{\prime\prime\prime\prime}\|_{C([-1,1])} \leq C\Big{(}\|f\|_{C^{2}([-1,1])}+\|\mathcal{H}f^{\prime}\|_{C([ -1,1])}+|\gamma|\Big{)}\] \[\leq C\Big{(}\|\mathcal{H}f^{\prime}\|_{H^{1}(\mathbb{R})}+| \gamma|\Big{)}\] \[\leq C\Big{(}\|f\|_{H}+|\gamma|\Big{)}\leq C|\gamma|. \tag{4.20}\]
By (4.17), (4.18) and (4.20), it follows that \(f\in C^{4}([-1,1])\) is a classical solution to (4.5) and (4.11) holds.
|
2309.08220 | UniST: Towards Unifying Saliency Transformer for Video Saliency
Prediction and Detection | Video saliency prediction and detection are thriving research domains that
enable computers to simulate the distribution of visual attention akin to how
humans perceiving dynamic scenes. While many approaches have crafted
task-specific training paradigms for either video saliency prediction or video
salient object detection tasks, few attention has been devoted to devising a
generalized saliency modeling framework that seamlessly bridges both these
distinct tasks. In this study, we introduce the Unified Saliency Transformer
(UniST) framework, which comprehensively utilizes the essential attributes of
video saliency prediction and video salient object detection. In addition to
extracting representations of frame sequences, a saliency-aware transformer is
designed to learn the spatio-temporal representations at progressively
increased resolutions, while incorporating effective cross-scale saliency
information to produce a robust representation. Furthermore, a task-specific
decoder is proposed to perform the final prediction for each task. To the best
of our knowledge, this is the first work that explores designing a transformer
structure for both saliency modeling tasks. Convincible experiments demonstrate
that the proposed UniST achieves superior performance across seven challenging
benchmarks for two tasks, and significantly outperforms the other
state-of-the-art methods. | Junwen Xiong, Peng Zhang, Chuanyue Li, Wei Huang, Yufei Zha, Tao You | 2023-09-15T07:39:53Z | http://arxiv.org/abs/2309.08220v1 | # UniST: Towards Unifying Saliency Transformer for Video Saliency Prediction and Detection
###### Abstract
Video saliency prediction and detection are thriving research domains that enable computers to simulate the distribution of visual attention akin to how humans perceive dynamic scenes. While many approaches have crafted task-specific training paradigms for either video saliency prediction or video salient object detection tasks, little attention has been devoted to devising a generalized saliency modeling framework that seamlessly bridges both these distinct tasks. In this study, we introduce the **Un**ified **S**aliency **T**ransformer (**UniST**) framework, which comprehensively utilizes the essential attributes of video saliency prediction and video salient object detection. In addition to extracting representations of frame sequences, a saliency-aware transformer is designed to learn the spatio-temporal representations at progressively increased resolutions, while incorporating effective cross-scale saliency information to produce a robust representation. Furthermore, a task-specific decoder is proposed to perform the final prediction for each task. To the best of our knowledge, this is the first work that explores designing a transformer structure for both saliency modeling tasks. Convincing experiments demonstrate that the proposed **UniST** achieves superior performance across seven challenging benchmarks for two tasks, and significantly outperforms the other state-of-the-art methods.
## Introduction
As the continuous emergence of massive dynamic data propels deep learning further towards human vision, a growing attention of research has focused on video saliency prediction (VSP) [1, 22, 13] and video salient object detection (VSOD) [14, 15, 16], which are regarded as the fundamental tasks in computer vision. Both the tasks essentially aim to simulate the visual attention distribution of humans perceiving dynamic scenes by modeling spatio-temporal cues in video content. However, the modeling paradigms of prior works are tailored for specialized tasks, and thereby lack the capacity for generalization to address broader tasks.
Previous VSP approaches have made impressive progress in predicting the most salient visual regions within frame sequences based on video encoder and decoder [11, 16, 15, 17], as depicted in Figure 1(a). To capture spatio-temporal information from video sequences, these methods uniformly rely on video models pre-trained on extensive video datasets, e.g., the lightweight _S3D_[14] and the effective _Video Swin Transformer_[13]. Subsequently, various kinds of decoding strategies have been successively proposed for processing the obtained spatio-temporal features. [13, 12] proposed a 3D fully convolutional decoder with U-Net-like structure to progressively concatenate features along the temporal dimension. In addition, [15]
Figure 1: Comparisons of the traditional different modeling paradigm for VSP and VSOD tasks, as well as our proposed unified saliency transformer framework. The VSP and VSOD adopt video encoder and image encoder respectively as feature extractors, followed by corresponding decoders. Differently, a unified saliency transformer directly applies an image encoder for feature processing, and follows a transformer structure for spatio-temporal modeling, and finally uses different decoders for different tasks.
proposed a 3D convolutional decoder based on the feature pyramid structure to build high-level semantic features at all scales.
Compared to VSP, the video dataset used in the current VSOD task is usually limited-sized, making it difficult to converge the video encoder-based method to the optimum [18, 19]. As a result, many VSOD approaches adopt an image encoder and decoder-based training paradigm, where pre-training is initially performed on the image dataset, and then the pre-trained weights are transferred to the video dataset to further improve the model's generalization capabilities [10, 11, 12], as shown in the Figure 1(b). To extract the frame-wise features from video sequences, [10, 11, 12] employed networks of the ResNet family [13] and the MobileNet family [14] as feature extractors. For the temporal cues between image feature sequences, [10] proposed to use the modified non-local blocks [12] to obtain the temporal information between feature sequences. For temporal features extraction in a different way, [11, 12] proposed to construct another temporal branch from optical flow and fuse with spatial features.
Unfortunately, except for the above achievements on each independent saliency task, there has not been much prior effort to bridge both tasks for the construction of a generalized saliency modeling paradigm. Thus, several questions naturally arise: 1) _why it is difficult to unify modeling for video saliency prediction and detection tasks?_ and 2) _is it possible to build a unified saliency model generalized to these two different tasks?_
As an answer to the questions above, a novel **Un**ified **S**aliency **T**ransformer model (**UniST**) is proposed, which comprehensively utilizes the essential attributes of video saliency prediction and video salient object detection tasks. **UniST** is composed of an image encoder, a saliency-aware transformer and a task-specific decoder, as shown in Figure 1(c); the incorporated image encoder obtains a generic representation for each image within video sequences. In addition, a saliency-aware transformer is introduced to model spatio-temporal representations of image feature sequences by stacking multiple sal-transformer blocks, as well as augmenting the feature scale progressively. Subsequently, a task-specific decoder is proposed to leverage the transformer's output to make the final prediction for each task. We train **UniST** and achieve superior performance to well-established prior works for both tasks. The main contributions of this work can be summarized as follows:
1. A novel unified saliency transformer **UniST** framework is proposed to intrinsically associate both video saliency prediction and video salient object detection tasks.
2. An efficient saliency-aware transformer is designed to learn the spatio-temporal representations at gradually increased resolutions, and incorporate effective cross-scale saliency information in the meantime.
3. Convincing experiments have been conducted on different challenging benchmarks across two tasks, demonstrating the superior performance of the proposed **UniST** in comparison to other state-of-the-art works.
## Related Work
### Video Saliency Prediction
Deep learning has led to the emergence of numerous video saliency prediction methods that focus on modeling continuous motion information across frames. [1] presented a two-stream deep model based on element-wise or convolutional fusion strategies to learn spatio-temporal features. [11] designed an object-to-motion CNN network to extract features and two-layer ConvLSTM with a Bayesian dropout to learn dynamic saliency. [12] combined the ConvLSTM network with an attention mechanism to improve training efficiency and performance. However, existing LSTM-based methods fall short of effectively integrating spatial and temporal information [13]. For this challenge, different studies have tried to incorporate 3D convolutional models or video transformer models into the VSP task. [13] introduced a TASED-Net to leverage the S3D model [20] to simultaneously handle spatial and temporal cues in VSP. Since then, the training paradigm of 3D convolution has been widely used in VSP task. [11] adopted a 3D encoder-decoder structure, resembling U-Net, which allows the constant concatenation of decoding features from various layers(with the corresponding encoder features in the temporal dimension). [12] proposed a video swin transformer-based framework, which can eliminate the reliance on pre-trained S3D model and enhance the performance upper bound in VSP task. In a different way, the proposed UniST directly applies an image encoder for feature processing and follows a saliency-aware transformer for spatio-temporal modeling.
### Video Salient Object Detection
In VSOD, since the content of consecutive frames is highly correlated, the task can be viewed as capturing long-range feature information across adjacent frames. Traditional methods [15, 16] often rely on conventional heuristics drawn from the domain of image salient object detection. Recent works [18, 12] strive to acquire highly semantic representations and usually perform spatial-temporal detection end-to-end.
To take temporal information into account in network modeling, different approaches have been proposed, e.g., ConvLSTM [13], taking optical flow as input [13], or 3D convolution [14]. But in real-world scenarios, the temporal cost incurred by introducing three-dimensional convolution operations is noteworthy, and incorporating optical flow information may not fully align with the ideal concept of an end-to-end network. More recently, attention-based mechanisms have gained traction for refining pairwise relationships between regions across consecutive frames.
For instance, the works in [19, 18] (Gu et al. 2020) proposed a visual-attention-consistent module and a pyramid constrained self-attention block to better capture the temporal dynamics cues, respectively. Despite these advancements, the existing methods remain tailored to specific tasks. Therefore, this paper proposes a unified framework to address VSOD and VSP tasks more comprehensively.
## Proposed Method
An overview of the proposed UniST is presented in Figure 2. To tackle the challenges of video saliency prediction and video salient object detection, the UniST is constructed based on the encoder-decoder architecture which consists of a visual feature encoder, a saliency-aware transformer and a task-specific decoder.
To elaborate, video clips are initially fed into the visual feature encoder, yielding multi-level spatial features for each image. Then, a saliency-aware transformer is introduced to intricately capture spatio-temporal representations of image feature sequences. This is achieved through the stacking of multiple effective sal-transformer stages, which increases the scale of the feature maps progressively. Finally, a task-specific decoder is devised to leverage the transformer's output features, and facilitate the output predictions for each individual task.
### Visual Feature Encoder
Let \(X\in\mathbb{R}^{T\times H\times W\times 3}\) denote an RGB video clip of length \(T\). This is input to a 2D backbone network which produces frame-wise feature maps. The backbone consists of 4 encoder stages, and outputs 4 hierarchical visual feature maps, illustrated in Figure 2(a). The generated feature maps are denoted as \(\{f_{X_{i}}\}\in\mathbb{R}^{T\times h_{i}\times w_{i}\times C_{i}}\), where \((h_{i},w_{i})=(H,W)/2^{i+1},i=1,...,4\). In practical implementation, we employ the off-the-shelf MViT[11] as a visual encoder to encode the spatial information of image sequences, which can also be replaced with other general-purpose encoders, e.g., PVT[12], Swin[13].
\[\{f_{X_{1}},f_{X_{2}},f_{X_{3}},f_{X_{4}}\}=VisualEncoder(X) \tag{1}\]
### Saliency-aware Transformer
For each input video clip, the encoder generates feature maps at four distinct scales. Nevertheless, the extracted multi-scale feature maps solely capture information within the spatial domain of the image sequences, neglecting any temporal domain modeling. Another issue to be noted is that the low spatial resolution of the visual encoder's output feature makes it unsuitable for VSP and VSOD tasks. With these considerations, a saliency-aware transformer is designed to conduct spatio-temporal modeling, as well as enhance the resolution of the feature maps progressively.
As shown in Figure 2(b), there are four stages in the saliency-aware transformer, and each one is a designed sal-transformer stage. The primary stage of the saliency-aware transformer focuses on learning the spatio-temporal sal-attention of the feature with the lowest resolution, and the
Figure 2: An overview of the proposed UniST. The visual encoder learns frame-wise visual representations from the video clips. The multi-scale visual features are then used as inputs to the saliency-aware transformer for global spatio-temporal modeling, generating refined and scaled-up spatio-temporal features for final prediction. (c) denotes the channel-wise concatenation.
subsequent three stages augment the spatial resolution of the feature maps, and calculate spatio-temporal sal-attention at higher resolutions. Similarly, the semantic-guided block and sal-transformer block are used together in the first stage, but the up embedding block and sal-transformer block are used in the following three stages.
**Semantic-Guided Block** Considering the most substantial semantic cues encapsulated by high-level featuresChang and Zhu (2021); Sun et al. (2022), a semantic-guided block is introduced to effectively guide the saliency modeling process. Specifically, as in Figure 2(d), \(f_{X_{4}}\) is initially fed into a Conv3d-BN-ReLU block denoted as \(F_{S}\). It consists of a 3D convolutional layer with kernel size of \(3\times 3\times 3\), a 3D batch normalization layer and a ReLU activation function. The output of this block is a semantic feature \(f_{X_{4}}^{S}\). This semantic feature is processed by a linear projection layer \(F_{P}\) to reduce the channel dimension to 1, and generate a saliency feature map \(f_{X_{4}}^{P}\). Finally, the semantic and saliency feature maps are concatenated along the channel dimension, and we use another Conv3d-BN-ReLU block \(F_{C}\) to adjust the channel numbers to the original dimension \(C_{4}\) to obtain the combined feature \(f_{X_{4}}^{C}\in\mathbb{R}^{T\times h_{x}\times w_{4}\times C_{4}}\).
\[f_{X_{4}}^{C}=F_{C}(Cat(F_{P}(F_{S}(f_{X_{4}})),F_{S}(f_{X_{4}}))) \tag{2}\]
where \(Cat(\cdot,\cdot)\) represents the concatenation operation. To feed the semantic-guided feature into the subsequent sal-transformer block, we flatten the feature in the spatio-temporal dimension to obtain a 2D feature-token sequence \(f_{X_{4}}^{C}\in\mathbb{R}^{Th_{4}w_{4}\times C_{4}}\).
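For concreteness, a minimal PyTorch sketch of how the semantic-guided block of Eq. (2) could be realized is given below; the module names mirror \(F_{S}\), \(F_{P}\) and \(F_{C}\) from the text, while the kernel sizes and the use of a \(1\times 1\times 1\) convolution for the linear projection are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SemanticGuidedBlock(nn.Module):
    """Sketch of Eq. (2): f^C = F_C(Cat(F_P(F_S(f)), F_S(f)))."""
    def __init__(self, channels):
        super().__init__()
        # F_S: Conv3d-BN-ReLU with a 3x3x3 kernel, as described in the text.
        self.F_S = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True))
        # F_P: projection to a single-channel saliency map (1x1x1 conv as a linear layer).
        self.F_P = nn.Conv3d(channels, 1, kernel_size=1)
        # F_C: restores the original channel dimension after concatenation.
        self.F_C = nn.Sequential(
            nn.Conv3d(channels + 1, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True))

    def forward(self, f):                      # f: (B, C4, T, h4, w4)
        semantic = self.F_S(f)                 # f^S
        saliency = self.F_P(semantic)          # f^P, one channel
        combined = self.F_C(torch.cat([saliency, semantic], dim=1))   # f^C
        # Flatten to a 2D token sequence (B, T*h4*w4, C4) for the sal-transformer block.
        B, C, T, H, W = combined.shape
        return combined.permute(0, 2, 3, 4, 1).reshape(B, T*H*W, C)
```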
**Up Embedding** Since the transformer module typically operates on 2D feature-token sequences, the original spatial layout of the feature map may be corrupted. To reconstruct the spatial details within the feature map, an up embedding block has been specifically designed for the sal-transformer block. The input to each up embedding block comes from the 2D feature-token sequence, which is generated by the previous sal-transformer stage. In the beginning, the up embedding block reshapes the feature sequence \(f_{X_{i}}\in\mathbb{R}^{Th_{i}w_{i}\times C_{i}}\) into spatio-temporal feature maps of dimensions \(\mathbb{R}^{T\times h_{i}\times w_{i}\times C_{i}}\). Subsequently, a bilinear interpolation is performed along the temporal dimension to amplify the height and width of each spatial feature by \(2\times\), and a Conv-BN-ReLU block \(F_{Conv}\) is used to reduce the channel dimension to \(C_{i-1}\) at the same time. The obtained features of size \(\mathbb{R}^{T\times h_{i-1}\times w_{i-1}\times C_{i-1}}\) are fused with the feature maps of the previous hierarchical level \(f_{X_{i-1}}\) via an addition operation.
\[f_{X_{i-1}}=f_{X_{i-1}}+F_{Conv}(Interpolation(f_{X_{i}})) \tag{3}\]
The fused features are reshaped back to feature sequence \(f_{X_{i-1}}\in\mathbb{R}^{Th_{i-1}w_{i-1}\times C_{i-1}}\) as an upsampled token sequence.
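The up embedding step can be sketched as follows (again an illustrative PyTorch snippet, not the released implementation); it reshapes the token sequence of stage \(i\), doubles the spatial resolution frame by frame, reduces the channels to \(C_{i-1}\), and adds the encoder feature of level \(i-1\) as in Eq. (3).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpEmbedding(nn.Module):
    """Sketch of Eq. (3): tokens from stage i are reshaped, upsampled 2x spatially,
    channel-reduced, and added to the encoder feature of level i-1."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.reduce = nn.Sequential(                     # the Conv-BN-ReLU block F_Conv
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels), nn.ReLU(inplace=True))

    def forward(self, tokens, skip, T, h, w):
        # tokens: (B, T*h*w, C_i) from the previous sal-transformer stage
        # skip:   (B*T, C_{i-1}, 2h, 2w) frame-wise encoder feature of level i-1
        B = tokens.shape[0]
        x = tokens.reshape(B, T, h, w, -1).permute(0, 1, 4, 2, 3)   # (B, T, C_i, h, w)
        x = x.reshape(B*T, -1, h, w)                                # frame-wise maps
        x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)
        x = self.reduce(x) + skip                                   # Eq. (3)
        x = x.reshape(B, T, -1, 2*h, 2*w).permute(0, 1, 3, 4, 2)
        return x.reshape(B, T*2*h*2*w, -1)                          # upsampled token sequence
```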
**Sal-Transformer Block** When the feature maps obtained by the semantic-guided or up embedding block are input into the sal-transformer block, spatio-temporal feature modeling starts. Inspired by incorporating multi-scale feature maps to improve model performance Li et al. (2019); Jain et al. (2021), sal-transformer block is proposed which not only models the spatio-temporal domain of features via sal-attention mechanism, but also integrates the multi-scale attention information of the previous stage.
Notice that calculating global sal-attention from temporal high-resolution features has a prohibitively large memory footprint. To alleviate this problem, in sal-attention (see shown in Figure 2 (e)), a reduction operation is performed on the size of query \(Q\), key \(K\), and value \(V\) matrices for attention computation, like prior works Fan et al. (2021); Wang et al. (2021). In detail, it utilizes the \(Conv3dLN\) operation consisting of a 3D convolution and a layer normalization for embedding extraction, and obtains \(Q\), \(K\) and \(V\) in varying dimensions by controlling the convolution's kernel and stride sizes. This strategy for dimensionality reduction considerably enhances the memory and computational efficiency associated with global sal-attention computation, which makes it feasible to be used by multiple consecutive sal-transformer blocks.
For an input feature \(f_{X_{i}}\in\mathbb{R}^{Th_{i}w_{i}\times C_{i}}\), the sal-attention first transforms the 2D feature sequence into the spatio-temporal domain, and then applies the operations \(Conv3dLN_{Q}\), \(Conv3dLN_{K}\) and \(Conv3dLN_{V}\) to \(f_{X_{i}}\). Then, the linear projections \(W_{Q},W_{K},W_{V}\in\mathbb{R}^{C_{i}\times C_{i}}\) are applied to obtain the \(Q\), \(K\) and \(V\) embeddings, respectively. The attention score matrix \(A_{X_{i}}\) of the \(X_{i}\) feature map can be calculated as:
\[Q_{X_{i}}=W_{Q}(Conv3dLN_{Q}(f_{X_{i}})),Q_{X_{i}}\in\mathbb{R}^{Th_{i}w_{i}\times C_{i}} \tag{4}\] \[K_{X_{i}}=W_{K}(Conv3dLN_{K}(f_{X_{i}})),K_{X_{i}}\in\mathbb{R}^{T\hat{h}\hat{w}\times C_{i}}\] \[V_{X_{i}}=W_{V}(Conv3dLN_{V}(f_{X_{i}})),V_{X_{i}}\in\mathbb{R}^{T\hat{h}\hat{w}\times C_{i}}\] \[A_{X_{i}}=\frac{Q_{X_{i}}(K_{X_{i}})^{T}}{\sqrt{C_{i}}},\quad A_{X_{i}}\in\mathbb{R}^{Th_{i}w_{i}\times T\hat{h}\hat{w}}\]
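As a reading aid, the following PyTorch sketch shows one way Eq. (4) could be realized: queries keep the full resolution while keys and values are pooled; the stride used to shrink the key/value resolution is an illustrative assumption (the paper controls it through the kernel and stride of \(Conv3dLN\)).

```python
import torch
import torch.nn as nn

class Conv3dLN(nn.Module):
    """3D convolution followed by LayerNorm over channels; the stride controls
    how much the token resolution is reduced (illustrative choice)."""
    def __init__(self, channels, stride):
        super().__init__()
        self.conv = nn.Conv3d(channels, channels, kernel_size=3, stride=stride, padding=1)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                       # x: (B, C, T, h, w)
        x = self.conv(x)
        x = x.flatten(2).transpose(1, 2)        # (B, T'*h'*w', C) token sequence
        return self.norm(x)

class SalAttention(nn.Module):
    """Sketch of Eq. (4): full-resolution queries, reduced keys/values."""
    def __init__(self, channels, kv_stride=(1, 2, 2)):
        super().__init__()
        self.q_embed = Conv3dLN(channels, stride=1)
        self.k_embed = Conv3dLN(channels, stride=kv_stride)
        self.v_embed = Conv3dLN(channels, stride=kv_stride)
        self.W_Q = nn.Linear(channels, channels)
        self.W_K = nn.Linear(channels, channels)
        self.W_V = nn.Linear(channels, channels)
        self.scale = channels ** -0.5

    def forward(self, x):                       # x: (B, C, T, h, w)
        Q = self.W_Q(self.q_embed(x))           # (B, T*h*w, C)
        K = self.W_K(self.k_embed(x))           # (B, T*h_hat*w_hat, C)
        V = self.W_V(self.v_embed(x))
        A = Q @ K.transpose(1, 2) * self.scale  # pre-softmax scores of Eq. (4)
        return A, V                             # softmax is deferred until Eq. (5)
```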
To further utilize the cross-scale insights from different stages within the saliency-aware transformer, a saliency transfer mechanism is introduced to enhance the attention score\(A_{X_{i}}\) before the softmax operation by integrating attention scores from distinct transformer stages. The attention score \(A_{X_{i+1}}\) from the feature map \(X_{i+1}\) is utilized to bolster the attention score \(A_{X_{i}}\) of the feature map \(X_{i}\). It's noteworthy that the second dimension of attention scores across different stages maintains consistent dimensions because of the size of the convolutional kernel designed. Specifically, a reshape operation is performed to align the shape of \(A_{X_{i+1}}\) with \(\mathbb{R}^{T\times h_{i+1}\times w_{i+1}\times T\hat{h}\hat{w}}\), and followed by a \(2\times\) bilinear interpolation in the first two spatial dimensions, then finally flatten it to get the attention matrix \(M_{X_{i+1}}\in\mathbb{R}^{Th_{i}w_{i}\times T\hat{h}\hat{w}}\) with same dimension as \(A_{X_{i}}\). The obtained \(M_{X_{i+1}}\) is fused with \(A_{X_{i}}\) via addition to obtain the fused attention score with a convolution operation. Then, we perform a softmax operation on the score and a dot-product operation with \(V\), the final updated feature maps \(f_{X_{i}}\) becomes,
\[A_{X_{i}}=Conv(A_{X_{i}}+M_{X_{i+1}}) \tag{5}\] \[f_{X_{i}}=Softmax(A_{X_{i}})V_{i}+f_{X_{i}}\]
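The saliency transfer of Eq. (5) amounts to resizing the pre-softmax score map of stage \(i+1\) onto the grid of stage \(i\) and fusing it by addition; a rough PyTorch sketch is given below, where the single-channel convolution acting on the fused score map is an illustrative choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyTransfer(nn.Module):
    """Sketch of Eq. (5): scores of stage i+1 are resized to the grid of
    stage i and added before the softmax."""
    def __init__(self):
        super().__init__()
        self.fuse = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, A_i, A_next, T, h_i, w_i, h_next, w_next):
        # A_i:    (B, T*h_i*w_i, L)       scores of stage i   (L = T*h_hat*w_hat)
        # A_next: (B, T*h_next*w_next, L) scores of stage i+1 (h_i = 2*h_next)
        B, _, L = A_next.shape
        M = A_next.reshape(B, T, h_next, w_next, L).permute(0, 1, 4, 2, 3)
        M = M.reshape(B*T, L, h_next, w_next)
        M = F.interpolate(M, size=(h_i, w_i), mode='bilinear', align_corners=False)
        M = M.reshape(B, T, L, h_i, w_i).permute(0, 1, 3, 4, 2).reshape(B, T*h_i*w_i, L)
        A = self.fuse((A_i + M).unsqueeze(1)).squeeze(1)   # Conv(A_i + M_{i+1})
        return A   # softmax(A) @ V plus the residual then completes Eq. (5)
```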
Since each sal-transformer stage produces feature maps with varying scales, an efficient strategy is also proposed to aggregate the multi-scale features. For each level of features, we employ a 3D convolution and upsample operation. The channel dimension of feature maps is initially adjusted by 3D convolution, and then their spatial dimensions are upsampled to align with the dimensions of the \(f_{X_{1}}\) feature map. To obtain the aggregated feature \(f_{X}\in\mathbb{R}^{T\times h_{1}\times w_{1}\times C_{1}}\), the features at each scale are concatenated along channel dimension and the number of channels is reduced by 3D convolution.
### Task-Specific Decoder
**Video Saliency Prediction** For the feature \(f_{X}\), a combination of a Conv3d-BN-ReLU block and a Conv3d-Sigmoid block is employed. The former compresses the temporal dimension of the feature to 1, and the latter further reduces the channel dimension of the feature to 1. An upsampling operation is then performed to output the prediction result \(P_{VSP}\), aligning it with the spatial dimension of the ground-truth.
**Video Salient Object Detection** As the VSOD task requires outputs for all \(T\) frames, the temporal dimension compression of the feature \(f_{X}\) is not necessary. We directly apply a Conv-Sigmoid block to reduce the number of channels of \(f_{X}\). Like the VSP decoder, the predictions \(P_{VSOD}\) are upsampled to align with the spatial dimensions of the ground-truth.
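A possible PyTorch realization of the two heads is sketched below; the temporal-collapse kernel and the output resolution (taken from the \(224\times 384\) input size mentioned in the implementation details) are illustrative assumptions, not the authors' exact layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VSPDecoder(nn.Module):
    """Sketch of the VSP head: collapse time to 1, reduce channels to 1, upsample."""
    def __init__(self, channels, T, out_size=(224, 384)):
        super().__init__()
        self.temporal = nn.Sequential(                       # compresses T -> 1
            nn.Conv3d(channels, channels, kernel_size=(T, 1, 1)),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True))
        self.head = nn.Sequential(nn.Conv3d(channels, 1, kernel_size=1), nn.Sigmoid())
        self.out_size = out_size

    def forward(self, f):                                    # f: (B, C1, T, h1, w1)
        x = self.head(self.temporal(f)).squeeze(2)           # (B, 1, h1, w1)
        return F.interpolate(x, size=self.out_size, mode='bilinear', align_corners=False)

class VSODDecoder(nn.Module):
    """Sketch of the VSOD head: keep all T frames, reduce channels to 1, upsample."""
    def __init__(self, channels, out_size=(224, 384)):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())
        self.out_size = out_size

    def forward(self, f):                                    # f: (B, C1, T, h1, w1)
        B, C, T, H, W = f.shape
        x = self.head(f.permute(0, 2, 1, 3, 4).reshape(B*T, C, H, W))
        x = F.interpolate(x, size=self.out_size, mode='bilinear', align_corners=False)
        return x.reshape(B, T, 1, *self.out_size)
```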
## Experiments
### Datasets
For convincible validation, three popular video datasets, DHF1KWang et al. (2018), Hollywood-2Marszalek et al. (2009) and UCF-SportsRodriguez et al. (2008), are used in our experiment. DHF1K contains 600 training videos, 100 validation videos and 300 testing videos with a frame rate of 30 fps. The UniST model can only be evaluated on the validation set of DHF1K due to unavailable annotations of the test set. Hollywood2 contains 1707 videos extracted from 69 movies with 12 categorized action classes, 823 videos are used for training and 884 for testing. UCF-Sports contains 150 videos (103 for training, 47 for testing) collected from broadcast TV channels, which cover 9 sports, such as diving, weightlifting, and horse riding.
For VSOD, the proposed model is evaluated on four public benchmark datasets including DAVIS\({}_{16}\)Perazzi et al. (2016), FBMSOchs et al. (2013), ViSalWang et al. (2015) and SegTrackV2Li et al. (2013). DAVIS\({}_{16}\) is a frequently used dataset, which contains 50 videos with a total of 3455 high-quality pixel-wise annotation frames. FBMS is a test dataset containing 59 videos with 720 sparsely annotated frames. ViSal is a dataset only used for test containing 19 videos with 193 pixel-wise annotation frames. SegTrackV2 is also a test dataset with 14 videos and 1,065 annotated frames.
### Implementation Details
To facilitate implementation, the pre-trained MViT-small modelFan et al. (2021) on ImageNetDeng et al. (2009) is employed. The input images of the network are all resized to \(224\times 384\) for training and testing. We follow the experimental settings of prior works Jain et al. (2021); Liu et al. (2022). For the VSP task, we first pre-train the UniST model on the DHF1K dataset and then fine-tune it on the Hollywood2 and UCF-Sports datasets. And as for the VSOD task, we choose to pre-train the entire model on the image DUTS dataset Wang et al. (2017), and then fine-tune it on the DAVIS\({}_{16}\) dataset.
All training processes use Adam as the optimizer with a learning rate of \(1e-4\). The computation platform is configured with two NVIDIA GeForce RTX 4090 GPUs in a distributed fashion, using PyTorch. More implementation details are in the appendix.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{**DHF1K**} & \multicolumn{2}{c}{**DAVIS\({}_{16}\)**} \\ \cline{2-5} & \(CC\uparrow\) & \(SIM\uparrow\) & \(MAE\downarrow\) & \(S_{m}\uparrow\) \\ \hline \hline UniST baseline & 0.521 & 0.410 & 0.025 & 0.880 \\ \hline UniST w/SAT & 0.532 & 0.417 & 0.022 & 0.891 \\ UniST w/SAT+SGB & 0.536 & 0.419 & 0.019 & 0.898 \\ UniST w/SAT+SGB+ST & **0.541** & **0.423** & **0.018** & **0.904** \\ \hline Performance \(\Delta\) & +0.020 & +0.013 & -0.007 & +0.024 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablation Studies. The proposed UniST and its components yield consistent improvement on different datasets and achieve clear overall improvement on both tasks. \(\downarrow\) means lower better and \(\uparrow\) means higher better.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Encoder} & \multicolumn{3}{c}{**DHF1K**} & \multicolumn{3}{c}{**DAVIS\({}_{16}\)**} \\ \cline{2-7} & \(AUC\)-\(J\uparrow\) & \(CC\uparrow\) & \(SIM\uparrow\) & \(MAE\downarrow\) & \(S_{m}\uparrow\) & \(F_{B}\uparrow\) \\ \hline \hline Image MViT & 0.920 & 0.541 & 0.423 & **0.018** & **0.904** & **0.873** \\ Video MiViT & **0.921** & **0.547** & **0.428** & 0.069 & 0.729 & 0.590 \\ \hline Image Swin & 0.909 & 0.483 & 0.371 & 0.025 & 0.885 & 0.854 \\ Video Swin & 0.911 & 0.510 & 0.384 & 0.077 & 0.708 & 0.546 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison of using different types and families of transformer encoder structures in UniST on DHF1K and DAVIS\({}_{16}\) datasets.
Figure 3: Performance analysis of Sal-Transformer Stages on DHF1K and DAVIS\({}_{16}\) datasets.
### Evaluation Metrics
For video saliency detection, we use AUC-Judd \(AUC\)-\(J\), Similarity Metric \(SIM\), Linear Correlation Coefficient \(CC\), and Normalized Scanpath Saliency \(NSS\), following existing workHu et al. (2023). For video salient object detection, we adopt three evaluation metrics for comparison, including the mean absolute error \(MAE\), F-measure \(maxF\)Achanta et al. (2009), and S-measure \(S_{m}\)Fan et al. (2017).
### Ablation Studies
To conduct an in-depth analysis of the proposed UniST framework, a range of model baselines and variants are defined as outlined in Table 1. (i) "UniST baseline" denotes a strong saliency model of the proposed UniST framework. It uses MViT-small encoder and task-specific decoders for VSP and VSOD tasks. It also combines multi-scale features from the encoder to help boost performance. (ii) "UniST w/SAT" indicates adding the pure saliency-aware transformer upon "UniST baseline". The pure saliency-aware transformer refers to the replacement of the semantic-guided block with a standard 3D convolution, while omitting the saliency transfer operation. (iii) "UniST w/SAT+SGB" indicates adding the proposed semantic-guided block upon "UniST w/SAT". Similarly, "UniST w/SAT+SGB+ST" denotes the full model after integrating the saliency transfer mechanism.
To analyze the effectiveness of each part in this work, we investigate the performance of the UniST baseline and its model variants on both DHF1K and DAVIS\({}_{16}\) datasets, as shown in Table 1. It can be observed that the SAT, SGB and ST modules all achieve clear improvement. Specifically, as the core module of the UniST framework, SAT significantly improves the VSP task by 0.011(\(CC\)) on DHF1K, and the VSOD task by 0.011(\(S_{m}\)) on DAVIS\({}_{16}\). Finally, the full UniST model achieves remarkable performance gain compared to the UniST baseline.
#### Effect of the Number of Sal-Transformer Stages
There are four stages in the proposed saliency-aware transformer as a default configuration. Figure 3 illustrates the impact of varying the number of sal-transformer stages on task performance across the DHF1K and DAVIS\({}_{16}\) datasets. Notably, the most optimal performance is attained when the number of sal-transformer stages is set to **4**. These results indicate that UniST needs to gradually fuse the feature of each scale and increase the feature resolution.
#### Image Encoder vs. Video Encoder
A further performance comparison is conducted using image and video encoders from two transformer families. As shown in Table 2, there is minimal variation in the model's performance when employing either the image or video encoder on the DHF1K dataset. Conversely, on the DAVIS\({}_{16}\) dataset, the model employing the image encoder surpasses its video encoder-based counterpart by a significant margin. Due to the limited size of the DAVIS\({}_{16}\) dataset, it is difficult to converge the model to an optimal state by training on this dataset directly. Therefore, the image MViT model with the best performance on both datasets is chosen as the default encoder in our work.
#### Further Analysis of Saliency Transfer
Figure 4 shows the visualized results of the attention scores in various sal-transformer stages with/without the saliency transfer mechanism. Compared to configurations without saliency transfer, the gradual integration of the attention score significantly helps the model learn a more discriminative feature, thus resulting in better visualization results.
### Comparison with State-of-the-arts
#### Video Saliency Prediction
The proposed UniST is compared with recent state-of-the-art works on three video saliency datasets as shown in Table 3. Experimental results in the table highlight the superiority of the proposed unified scheme, as it outperforms the other comparable works on almost all datasets and metrics. Notably, UniST significantly surpasses the previous top-performing methods, such as TMFI-Net Zhou et al. (2023) and VSFT Ma et al. (2022). Compared to TMFI-Net, UniST improves the average \(CC\) and \(SIM\) performance on the DHF1K and Hollywood2 datasets by 4.2% and 3.1%, respectively, and becomes the new state-of-the-art on both benchmarks. Moreover, on the UCF-Sports dataset, our method demonstrates competitive performance comparable to TMFI-Net. Figure 5(a) shows that the prediction results of the proposed work are more likely as the ground-truth in comparison to other works. These significant improvements indicate that our saliency-aware transformer is well suited for the spatio-temporal modeling required in video saliency prediction.
method is closer to the ground-truth. More visualization results can be found in the supplementary.
## Conclusion
In this paper, we propose a novel unified saliency transformer framework, UniST, to unify the modeling paradigms for video saliency prediction and video saliency object detection tasks. By capturing the spatial features of video frame sequences through a visual feature encoder, the subsequent saliency-aware transformer not only helps the model to capture the spatio-temporal information in the sequence of image features but also progressively increases the scale of the features for saliency prediction. Finally, a task-specific decoder is devised to leverage the transformer's output features, and facilitate the output predictions for each individual task. Extensive experiments demonstrated the effectiveness of the proposed method and also showed its significantly better performance on seven popular benchmarks compared to the previous state-of-the-art methods.
**Limitations** Although our model has good performance in saliency prediction, the improvements on detection are not as significant. We conjecture that this is due to the lack of temporal information in the image dataset used for VSOD pre-training, making it difficult for the saliency-aware transformer to provide spatio-temporal modeling. We believe that UniST can be further improved by adding more video datasets.
**Acknowledgments** This work is supported by the National Natural Science Foundation of China (No.61971352,
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c|c c c} \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**DAVIS\({}_{16}\)**} & \multicolumn{3}{c|}{**FBMS**} & \multicolumn{3}{c|}{**ViSal**} & \multicolumn{3}{c}{**SegV2**} \\ \cline{2-13} & \(MAE\downarrow\) & \(S_{m}\uparrow\) & \(maxF\uparrow\) & \(MAE\downarrow\) & \(S_{m}\uparrow\) & \(maxF\uparrow\) & \(MAE\downarrow\) & \(S_{m}\uparrow\) & \(maxF\uparrow\) & \(MAE\downarrow\) & \(S_{m}\uparrow\) & \(maxF\uparrow\) \\ \hline SSAV\({}_{\text{CVPR}^{\text{\tiny{2019}}}}\) & 0.028 & 0.893 & 0.861 & 0.040 & 0.879 & 0.865 & 0.020 & 0.943 & 0.939 & 0.023 & 0.851 & 0.801 \\ CAS\({}_{\text{TNISH}^{\text{\tiny{2020}}}}\) & 0.032 & 0.873 & 0.860 & 0.056 & 0.865 & 0.863 & - & - & - & - & 0.029 & 0.820 & 0.847 \\ PCSA\({}_{\text{AAAT}^{\text{\tiny{2020}}}}\) & 0.022 & 0.902 & 0.880 & 0.040 & 0.868 & 0.837 & 0.017 & 0.946 & 0.940 & 0.025 & 0.865 & 0.810 \\ FSN\({}_{\text{ICCV}^{\text{\tiny{2021}}}}\) & 0.020 & **0.920** & **0.907** & 0.041 & 0.890 & 0.888 & - & - & - & 0.023 & 0.870 & 0.772 \\ ReuseVOS\({}_{\text{CVPR}^{\text{\tiny{2021}}}}\) & 0.019 & 0.883 & 0.865 & 0.027 & 0.888 & 0.884 & 0.020 & 0.928 & 0.933 & 0.025 & 0.844 & 0.832 \\ TransVOS\({}_{\text{PrePaint}^{\text{\tiny{2021}}}}\) & 0.018 & 0.885 & 0.869 & 0.038 & 0.867 & 0.886 & 0.021 & 0.917 & 0.928 & 0.024 & 0.816 & 0.800 \\ UFOOTNOTE:\({}_{\text{TMI}^{\text{\tiny{2023}}}}\) & 0.036 & 0.864 & 0.828 & 0.028 & 0.894 & **0.890** & 0.011 & **0.953** & 0.940 & 0.022 & 0.892 & **0.863** \\ \hline
**UniST(_Ours_)** & **0.018** & 0.904 & 0.873 & **0.027** & **0.902** & 0.884 & **0.011** & 0.952 & **0.952** & **0.017** & **0.897** & 0.854 \\ \hline \end{tabular}
\end{table}
Table 4: Comparisons of our method with the other state-of-the-arts on VSOD datasets. Our UniST outperforms the previous state-of-the-arts on most of the metrics on these four datasets.
Figure 5: Qualitative results of our method compared with other state-of-the-art methods on VSP and VSOD tasks. GT means the ground-truth.
\begin{table}
\begin{tabular}{c|c c c|c c c c|c c c c} \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**DHF1K**} & \multicolumn{3}{c|}{**Hollywood2**} & \multicolumn{3}{c}{**UCF-Sports**} \\ \cline{2-13} & \(CC\uparrow\) & \(NSS\uparrow\) & \(AUC\)-\(J\uparrow\) & \(SIM\uparrow\) & \(CC\uparrow\) & \(NSS\uparrow\) & \(AUC\)-\(J\uparrow\) & \(SIM\uparrow\) & \(CC\uparrow\) & \(NSS\uparrow\) & \(AUC\)-\(J\uparrow\) & \(SIM\uparrow\) \\ \hline TASED\({}_{\text{ICCV}^{\text{\tiny{2019}}}}\) & 0.440 & 2.541 & 0.898 & 0.351 & 0.646 & 3.302 & 0.918 & 0.507 & 0.582 & 2.920 & 0.899 & 0.469 \\ UN
No.62271239), Ningbo Natural Science Foundation (No.2021J048, No.2021J049), Jiangxi Double Thousand Plan (No.JXSQ2023201022), Fundamental Research Funds for the Central Universities (No.D5000220190), Innovative Research Foundation of Ship General Performance (No.25522108).
|
2309.11942 | On the Probability of Immunity | This work is devoted to the study of the probability of immunity, i.e. the
effect occurs whether exposed or not. We derive necessary and sufficient
conditions for non-immunity and $\epsilon$-bounded immunity, i.e. the
probability of immunity is zero and $\epsilon$-bounded, respectively. The
former allows us to estimate the probability of benefit (i.e., the effect
occurs if and only if exposed) from a randomized controlled trial, and the
latter allows us to produce bounds of the probability of benefit that are
tighter than the existing ones. We also introduce the concept of indirect
immunity (i.e., through a mediator) and repeat our previous analysis for it.
Finally, we propose a method for sensitivity analysis of the probability of
immunity under unmeasured confounding. | Jose M. Peña | 2023-09-21T09:57:03Z | http://arxiv.org/abs/2309.11942v2 | # On the probability of immunity
###### Abstract.
This work is devoted to the study of the probability of immunity, i.e. the effect occurs whether exposed or not. We derive necessary and sufficient conditions for non-immunity and \(\epsilon\)-bounded immunity, i.e. the probability of immunity is zero and \(\epsilon\)-bounded, respectively. The former allows us to estimate the probability of benefit (i.e., the effect occurs if and only if exposed) from a randomized controlled trial, and the latter allows us to produce bounds of the probability of benefit that are tighter than the existing ones. We also introduce the concept of indirect immunity (i.e., through a mediator) and repeat our previous analysis for it. Finally, we propose a method for sensitivity analysis of the probability of immunity under unmeasured confounding.
## 1. Introduction
Let \(X\) and \(Y\) denote an exposure and its outcome, respectively. Let \(X\) and \(Y\) be binary taking values in \(\{x,x^{\prime}\}\) and \(\{y,y^{\prime}\}\). Let \(Y_{x}\) and \(Y_{x^{\prime}}\) denote the counterfactual outcome when the exposure is set to level \(X=x\) and \(X=x^{\prime}\). Let \(y_{x}\), \(y^{\prime}_{x}\), \(y_{x^{\prime}}\) and \(y^{\prime}_{x^{\prime}}\) denote the events \(Y_{x}=y\), \(Y_{x}=y^{\prime}\), \(Y_{x^{\prime}}=y\) and \(Y_{x^{\prime}}=y^{\prime}\). For instance, let \(X\) represent whether a patient gets treated or not for a deadly disease, and \(Y\) represent whether she survives it or not. Individual patients can be classified into immune (they survive whether they are treated or not, i.e. \(y_{x}\wedge y_{x^{\prime}}\)), doomed (they die whether they are treated or not, i.e. \(y^{\prime}_{x}\wedge y^{\prime}_{x^{\prime}}\)), benefited (they survive if and only if treated, i.e. \(y_{x}\wedge y^{\prime}_{x^{\prime}}\)), and harmed (they die if and only if treated, i.e. \(y^{\prime}_{x}\wedge y_{x^{\prime}}\)).
In general, the average treatment effect (ATE) estimated from a randomized controlled trial (RCT) does not inform about the probability of benefit (or of any of the other response types, i.e. harm, immunity, and doom). However, it may do so under certain conditions. For instance,
\[ATE=p(y_{x})-p(y_{x^{\prime}}) =p(y_{x},y_{x^{\prime}})+p(y_{x},y^{\prime}_{x^{\prime}})-[p(y_{ x},y_{x^{\prime}})+p(y^{\prime}_{x},y_{x^{\prime}})]\] \[=p(\text{benefit})-p(\text{harm}) \tag{1}\]
and thus \(p(\text{benefit})=ATE\) if \(p(\text{harm})=0\) (a.k.a. monotonicity). Necessary and sufficient conditions are derived by Mueller and Pearl [1] to determine from observational and experimental data if monotonicity
holds. In this work, we derive similar conditions for non-immunity, i.e. \(p(\text{immunity})=p(y_{x},y_{x^{\prime}})=0\). These are interesting because, under non-monotonicity, they make an RCT informative about the probabilities of benefit and harm. To see this, consider
\[ATE=p(y_{x})-p(y_{x^{\prime}})\]
where the terms on the right-hand side of the equation are estimated from an RCT. Moreover,
\[p(y_{x}) =p(y_{x},y_{x^{\prime}})+p(y_{x},y_{x^{\prime}}^{\prime})=p(\text {immunity})+p(\text{benefit}) \tag{2}\] \[p(y_{x^{\prime}}) =p(y_{x},y_{x^{\prime}})+p(y_{x}^{\prime},y_{x^{\prime}})=p( \text{immunity})+p(\text{harm}) \tag{3}\]
and thus \(p(\text{benefit})=p(y_{x})\) and \(p(\text{harm})=p(y_{x^{\prime}})\) if \(p(\text{immunity})=0\).
In some cases, non-immunity is assured. For instance, when evaluating the effect of advertising on the purchase of a new product. The control group not being exposed to the ad has no way of purchasing the product, i.e. \(p(y_{x^{\prime}})=0\) and thus \(p(y_{x},y_{x^{\prime}})=0\). In other cases, non-immunity cannot be assured. For instance, when evaluating the effect of a drug. An individual may carry a gene variant that makes her recover from the disease regardless of whether she takes the drug or not, i.e. \(p(y_{x},y_{x^{\prime}})\geq 0\). However, it may still be bounded as \(p(y_{x},y_{x^{\prime}})\leq\epsilon\) from expert knowledge. We show that our necessary and sufficient conditions for non-immunity can trivially be adapted to \(\epsilon\)-bounded immunity. Moreover, we show that the knowledge of \(\epsilon\)-bounded immunity may tighten the bounds of the probabilities of benefit and harm by Tian and Pearl [2]. We also introduce the concepts of indirect benefit and harm (i.e., through a mediator) and repeat our previous analysis for them. Finally, we propose a method for sensitivity analysis of immunity under unmeasured confounding. We illustrate our results with concrete examples.
## 2. Conditions for Non-Immunity
Consider the bounds of \(p(\text{benefit})\) derived by Tian and Pearl [2]:
\[\max\left\{\begin{array}{c}0,\\ p(y_{x})-p(y_{x^{\prime}}),\\ p(y)-p(y_{x^{\prime}}),\\ p(y_{x})-p(y)\end{array}\right\}\leq p(\text{benefit})\leq\min\left\{ \begin{array}{c}p(y_{x}),\\ p(y_{x^{\prime}}^{\prime}),\\ p(x,y)+p(x^{\prime},y^{\prime}),\\ p(y_{x})-p(y_{x^{\prime}})+\\ p(x,y^{\prime})+p(x^{\prime},y)\end{array}\right\}. \tag{4}\]
Then, combining Equations 2 or 3 with 4 gives
\[\max\left\{\begin{array}{c}0,\\ p(y_{x})-p(y^{\prime}_{x^{\prime}}),\\ p(y_{x})-p(x,y)-\\ p(x^{\prime},y^{\prime}),\\ p(y_{x^{\prime}})-p(x,y^{\prime})-\\ p(x^{\prime},y)\end{array}\right\}\leq p(\mbox{immunity})\leq\min\left\{ \begin{array}{c}p(y_{x}),\\ p(y_{x^{\prime}}),\\ p(y_{x})-p(y)+\\ p(y_{x^{\prime}}),\\ p(y)\end{array}\right\}. \tag{5}\]
A sufficient condition for \(p(\mbox{immunity})=0\) to hold is that some argument to the \(\min\) function in Equation 5 is equal to 0, that is
\[p(y_{x})=0\mbox{ or }p(y_{x^{\prime}})=0\mbox{ or }p(y_{x})+p(y_{x^{\prime}})=p(y)\mbox{ or }p(y)=0. \tag{6}\]
Likewise, a necessary condition for \(p(\mbox{immunity})=0\) to hold is that all the arguments to the \(\max\) function are non-positive, that is
\[p(y_{x})+p(y_{x^{\prime}})\leq 1\mbox{ and }\] \[p(y_{x})\leq p(x,y)+p(x^{\prime},y^{\prime})\mbox{ and }\] \[p(y_{x^{\prime}})\leq p(x,y^{\prime})+p(x^{\prime},y). \tag{7}\]
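To make these expressions concrete, the sketch below evaluates the bounds in Equation 5 and the conditions in Equations 6 and 7 directly from the experimental quantities \(p(y_{x})\), \(p(y_{x^{\prime}})\) and the observational joint of \(X\) and \(Y\). It is only an illustration in Python (function and argument names are ours; the paper's accompanying code is in R), and the optional `eps` argument anticipates the \(\epsilon\)-bounded relaxation introduced in the next subsection.

```python
def immunity_bounds(p_yx, p_yxp, p_xy, p_xyp, p_xpy, p_xpyp):
    """Bounds on p(immunity) from Equation 5.

    p_yx, p_yxp        : experimental p(y_x), p(y_{x'})
    p_xy ... p_xpyp    : observational joint p(x,y), p(x,y'), p(x',y), p(x',y')
    """
    p_y = p_xy + p_xpy
    lower = max(0.0,
                p_yx + p_yxp - 1.0,          # p(y_x) - p(y'_{x'})
                p_yx - p_xy - p_xpyp,
                p_yxp - p_xyp - p_xpy)
    upper = min(p_yx,
                p_yxp,
                p_yx - p_y + p_yxp,
                p_y)
    return lower, upper


def non_immunity_sufficient(p_yx, p_yxp, p_y, eps=0.0):
    """Sufficient condition for p(immunity) <= eps (Equation 6 when eps = 0)."""
    return (p_yx <= eps or p_yxp <= eps
            or p_yx + p_yxp <= p_y + eps or p_y <= eps)


def non_immunity_necessary(p_yx, p_yxp, p_xy, p_xyp, p_xpy, p_xpyp, eps=0.0):
    """Necessary condition for p(immunity) <= eps (Equation 7 when eps = 0)."""
    return (p_yx + p_yxp <= 1.0 + eps
            and p_yx <= p_xy + p_xpyp + eps
            and p_yxp <= p_xyp + p_xpy + eps)
```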
### 2.1. Conditions for \(\epsilon\)-Bounded Immunity
The conditions in the previous section can be relaxed to allow a certain degree of immunity (e.g., based on expert knowledge), making them more applicable in practice. Specifically, a sufficient condition for \(p(\mbox{immunity})\leq\epsilon\) to hold is
\[p(y_{x})\leq\epsilon\mbox{ or }p(y_{x^{\prime}})\leq\epsilon\mbox{ or }p(y_{x})+p(y_{x^{\prime}})\leq p(y)+\epsilon\mbox{ or }p(y)\leq\epsilon.\]
Likewise, a necessary condition for \(p(\mbox{immunity})\leq\epsilon\) to hold is
\[p(y_{x})+p(y_{x^{\prime}})\leq 1+\epsilon\mbox{ and }\] \[p(y_{x})\leq p(x,y)+p(x^{\prime},y^{\prime})+\epsilon\mbox{ and }\] \[p(y_{x^{\prime}})\leq p(x,y^{\prime})+p(x^{\prime},y)+\epsilon. \tag{8}\]
### 2.2. \(\epsilon\)-Bounds on Benefit and Harm
Assuming \(\epsilon\)-bounded immunity (e.g., based on expert knowledge) can help narrow the bounds on \(p(\mbox{benefit})\) and \(p(\mbox{harm})\). Specifically, if \(p(\mbox{immunity})\leq\epsilon\) then Equation 2 gives
\[p(y_{x})-\epsilon\leq p(\mbox{benefit})\leq p(y_{x}).\]
Incorporating this into Equation 4 gives
\[\max\left\{\begin{array}{c}0,\\ p(y_{x})-p(y_{x^{\prime}}),\\ p(y)-p(y_{x^{\prime}}),\\ p(y_{x})-p(y),\\ p(y_{x})-\epsilon\end{array}\right\}\leq p(\mbox{benefit})\leq\min\left\{ \begin{array}{c}p(y_{x}),\\ p(y^{\prime}_{x^{\prime}}),\\ p(x,y)+p(x^{\prime},y^{\prime}),\\ p(y_{x})-p(y_{x^{\prime}})+\\ p(x,y^{\prime})+p(x^{\prime},y)\end{array}\right\} \tag{9}\]
which can potentially return a tighter lower bound than Equation 4, i.e. if \(\epsilon<\min(p(y_{x^{\prime}}),p(y))\). Although the value of \(\epsilon\) is typically determined from expert knowledge and not from data, the experimental
and observational data available do restrict the values that are valid, as indicated by Equation 8. In short, \(\epsilon\) can take any value as long as the lower bound is not greater than the upper bound in Equation 9. Moreover, \(p(\text{harm})\) can likewise be bounded by simply swapping \(x\) and \(x^{\prime}\) in Equation 9.
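The benefit bounds can be evaluated in the same way; the companion sketch below (again ours, with illustrative names) implements Equation 4 and, when an \(\epsilon\) bound on immunity is supplied, adds the extra lower bound of Equation 9. Bounds on \(p(\text{harm})\) follow by swapping the roles of \(x\) and \(x^{\prime}\) in the inputs.

```python
def benefit_bounds(p_yx, p_yxp, p_xy, p_xyp, p_xpy, p_xpyp, eps=None):
    """Bounds on p(benefit): Equation 4, tightened by Equation 9 if eps is given."""
    p_y = p_xy + p_xpy
    lowers = [0.0,
              p_yx - p_yxp,
              p_y - p_yxp,
              p_yx - p_y]
    if eps is not None:
        lowers.append(p_yx - eps)            # extra lower bound from Equation 9
    upper = min(p_yx,
                1.0 - p_yxp,                 # p(y'_{x'})
                p_xy + p_xpyp,
                p_yx - p_yxp + p_xyp + p_xpy)
    return max(lowers), upper
```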
### 2.3. Examples
This section illustrates the results above with two concrete examples.1
Footnote 1: R code for the examples can be found at [https://tinyurl.com/2s3bxmyu](https://tinyurl.com/2s3bxmyu).
#### 2.3.1. Example 1
A pharmaceutical company wants to market their drug to cure a disease by claiming that no one is immune. The RCT they conducted for the drug approval yielded the following:
\[p(y_{x}) =0.76\] \[p(y_{x^{\prime}}) =0.31\]
which correspond to the following unknown data generation model:
\[p(u)=0.3\qquad p(x|u)=0.2\qquad p(y|x,u)=0.9\qquad p(y|x,u^{\prime})=0.7\]
\[p(x|u^{\prime})=0.9\qquad p(y|x^{\prime},u)=0.8\qquad p(y|x^{\prime},u^{\prime})=0.1.\]
Therefore, the necessary condition for non-immunity in Equation 7 does not hold, and thus the company is not entitled to make the claim they intended to make. The company changes strategy and now wishes to market their drug as having a minimum of 50 % efficacy, i.e. benefit. To do so, they first conduct an observational study that yields the following:
\[p(x,y)=0.5\qquad p(x,y^{\prime})=0.2\qquad p(x^{\prime},y)=0.2\qquad p(x^{\prime},y^{\prime})=0.1.\]
Then, they apply Equation 4 to the RCT and observational results to conclude that \(0.45\leq p(\text{benefit})\leq 0.61\). Again, the company cannot proceed with their marketing strategy. A few months later, a research publication reports that no more than 25 % of the population is immune. The company realizes that this value is compatible with their RCT and observational results, by checking the necessary condition for \(\epsilon\)-bounded immunity in Equation 8. More importantly, the company realizes that Equation 9 with \(\epsilon=0.25\) allows them to conclude that \(0.51\leq p(\text{benefit})\leq 0.61\), and thus they can resume their marketing strategy.
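As a rough numerical check of this example (our computation, not the authors'), the exact observational joint can be regenerated from the stated data generation model (the quantities quoted above are rounded to two decimals) and passed to the helper functions sketched earlier:

```python
def observables_from_model(p_u, p_x_u, p_x_up, p_y_xu, p_y_xup, p_y_xpu, p_y_xpup):
    """p(y_x), p(y_{x'}) and the joint p(X, Y) implied by a binary-confounder model
    (helper written for these examples, not part of the paper)."""
    p_yx = p_u * p_y_xu + (1 - p_u) * p_y_xup
    p_yxp = p_u * p_y_xpu + (1 - p_u) * p_y_xpup
    p_xy = p_u * p_x_u * p_y_xu + (1 - p_u) * p_x_up * p_y_xup
    p_xyp = p_u * p_x_u * (1 - p_y_xu) + (1 - p_u) * p_x_up * (1 - p_y_xup)
    p_xpy = p_u * (1 - p_x_u) * p_y_xpu + (1 - p_u) * (1 - p_x_up) * p_y_xpup
    p_xpyp = p_u * (1 - p_x_u) * (1 - p_y_xpu) + (1 - p_u) * (1 - p_x_up) * (1 - p_y_xpup)
    return p_yx, p_yxp, (p_xy, p_xyp, p_xpy, p_xpyp)

# Example 1 model: p(u), p(x|u), p(x|u'), p(y|x,u), p(y|x,u'), p(y|x',u), p(y|x',u')
p_yx, p_yxp, joint = observables_from_model(0.3, 0.2, 0.9, 0.9, 0.7, 0.8, 0.1)
print(non_immunity_necessary(p_yx, p_yxp, *joint))    # False: the non-immunity claim is not allowed
print(benefit_bounds(p_yx, p_yxp, *joint))            # ~(0.45, 0.61)
print(benefit_bounds(p_yx, p_yxp, *joint, eps=0.25))  # ~(0.51, 0.61)
```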
#### 2.3.2. Example 2
The previous example has shown that expert knowledge on immunity may complement experimental and observational data. While data alone rarely provide precise information on immunity (or on any other response type, for that matter), there are cases where data alone provide enough actionable information. The following example illustrates this.
A pharmaceutical company is concerned by the poor sales of a drug to cure a disease. The RCT conducted for the drug approval and a subsequent observational study yielded the following:
\[p(y_{x}) =0.48\] \[p(y_{x^{\prime}}) =0.36\]
and
\[p(x,y)=0.08\qquad p(x,y^{\prime})=0.2\qquad p(x^{\prime},y)=0.25\qquad p(x^{\prime},y^{\prime})=0.47.\]
which correspond to the following unknown data generation model:
\[p(u)=0.4\qquad p(x|u)=0.1\qquad p(y|x,u)=0.9\qquad p(y|x,u^{\prime})=0.2\]
\[p(x|u^{\prime})=0.4\qquad p(y|x^{\prime},u)=0.3\qquad p(y|x^{\prime},u^{\prime})=0.4.\]
The fact that 36 % of the untreated recover from the disease makes the company suspect that the low sales are due to a large part of the population being immune. Equation 5 allows us to conclude that \(0\leq p(\text{immunity})\leq 0.34\), which suggests that the explanation offered by the company is rather unlikely. A more plausible explanation for the low sales may be that the efficacy or benefit of the drug is not very high, as \(0.14\leq p(\text{benefit})\leq 0.48\) by Equation 4.
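The same helpers reproduce the intervals quoted in this second example (again our computation from the stated model):

```python
# Example 2 model, in the same argument order as above
p_yx, p_yxp, joint = observables_from_model(0.4, 0.1, 0.4, 0.9, 0.2, 0.3, 0.4)
print(immunity_bounds(p_yx, p_yxp, *joint))  # ~(0.0, 0.34): widespread immunity is unlikely
print(benefit_bounds(p_yx, p_yxp, *joint))   # ~(0.14, 0.48): the drug's benefit is modest at best
```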
## 3. Indirect Benefit and Harm
In the previous sections, the causal graph of the domain under study was unknown. In this section, we assume that the graph is available (e.g., from expert knowledge) and discuss two advantages that follow from it. Specifically, suppose that the domain under study corresponds to the following causal graph:

*(Causal graph: \(X\to Z\to Y\) and \(X\to Y\), with no unmeasured confounders.)*
and thus \(p(y_{x})=p(y|x)\) and \(p(y_{x^{\prime}})=p(y|x^{\prime})\). Then, \(p(y_{x})\) and \(p(y_{x^{\prime}})\) can be estimated from observational data and thus, unlike in the previous sections, no RCT is required. A further advantage is that we can now compute the probabilities of benefit and harm mediated by \(Z\). We elaborate on this below.
The effect of \(X\) on \(Y\) mediated by \(Z\) (a.k.a. indirect effect) corresponds to the effect due to the indirect path \(X\to Z\to Y\), i.e. after deactivating the direct path \(X\to Y\). Different ways of deactivating the direct path have resulted in different indirect effect measures in the literature. Pearl [3] proposes deactivating the direct path by setting \(X\) to non-exposure and comparing the expected outcome when \(Z\) takes the value it would under exposure and non-exposure:
\[NIE=E[Y_{x^{\prime},Z_{x}}]-E[Y_{x^{\prime}}]\]
which is known as the average natural (or pure) indirect effect. Geneletti [4] also proposes deactivating the direct path by setting \(X\) to non-exposure but instead, she proposes comparing the expected outcome when \(Z\) is drawn from the distributions \(\mathcal{Z}_{x}\) and \(\mathcal{Z}_{x^{\prime}}\) of \(Z_{x}\) and \(Z_{x^{\prime}}\):
\[IIE=E[Y_{x^{\prime},\mathcal{Z}_{x}}]-E[Y_{x^{\prime},\mathcal{Z}_{x^{\prime} }}]\]
which is known as the interventional indirect effect. Although \(NIE\) and \(IIE\) do not coincide in general, they coincide for the causal graph above [5]. Finally, Fulcher et al. [6] proposes deactivating the direct path by setting \(X\) to its natural (observed) value and comparing the expected outcome when \(Z\) takes its natural value and the value it would under no exposure:
\[PIIE=E[Y_{X,Z_{X}}]-E[Y_{X,Z_{x^{\prime}}}]\]
which is also known as the population intervention indirect effect. This measure is suitable when the exposure is harmful (e.g., smoking), and thus one may be more interested in elucidating the effect (e.g., disease prevalence) of eliminating the exposure rather than in contrasting the effects of exposure and non-exposure.
We propose an alternative way of deactivating the direct path \(X\to Y\) and measuring the indirect effect of \(X\) on \(Y\) through \(Z\). Specifically, we assume that the direct path \(X\to Y\) is actually mediated by an unmeasured random variable \(U\) that is left unmodelled. This arguably holds in most domains. The identity of \(U\) is irrelevant. Let \(G\) denote the causal graph below, i.e. the original causal graph refined with the addition of \(U\).

*(Graph \(G\): \(X\to Z\to Y\) and \(X\to U\to Y\), with \(U\) unmeasured.)*
Now, deactivating the direct path \(X\to Y\) in the original causal graph can be achieved by adjusting for \(U\) in \(G\), i.e. \(\sum_{u}E[Y|x,u]p(u)\). Unfortunately, \(U\) is unmeasured. Instead, we propose the following way of deactivating \(X\to Y\). Let \(H\) denote the causal graph below, i.e. the result of reversing the edge \(X\to U\) in \(G\).

*(Graph \(H\): \(U\to X\), \(X\to Z\to Y\) and \(U\to Y\), with \(U\) unmeasured.)*
The average total effect of \(X\) on \(Y\) in \(H\) can be computed by the front-door criterion [7]:
\[TE =E[Y_{x}]-E[Y_{x^{\prime}}]\] \[=\sum_{z}p(z|x)\sum_{\dot{x}}E[Y|\dot{x},z]p(\dot{x})-\sum_{z}p(z| x^{\prime})\sum_{\dot{x}}E[Y|\dot{x},z]p(\dot{x}).\]
Note that \(G\) and \(H\) are distribution equivalent, i.e. every probability distribution that is representable by \(G\) is representable by \(H\) and vice versa [7]. Then, evaluating the second line of the equation above in \(G\) or \(H\) gives the same result. If we evaluate it in \(H\), then it corresponds to the part of association between \(X\) and \(Y\) that is attributable to the path \(X\to Z\to Y\). If we evaluate it in \(G\), then it corresponds to the part of \(TE\) in \(G\) that is attributable to the path \(X\to Z\to Y\), because \(TE\) in \(G\) equals the association between \(X\) and \(Y\), since \(G\) has only directed paths from \(X\) to \(Y\). Therefore, the second line in the equation above corresponds to the part of \(TE\) in the original causal graph that is attributable to the path \(X\to Z\to Y\), thereby deactivating the direct path \(X\to Y\). We propose to use the second line in the equation above as a measure of the indirect effect of \(X\) on \(Y\) in the original causal graph.
The reasoning above can be extended to the probabilities of benefit and harm, and thereby measure the benefit and harm mediated by \(Z\). As mentioned above, the causal graphs \(G\) and \(H\) represent different data generation mechanisms but the same probability distribution over \(X\), \(Y\) and \(Z\). Therefore, the mechanisms agree on observational probabilities but may disagree on counterfactual probabilities. We use \(p()\) to denote observational probabilities obtained from either mechanism, and \(q()\) to denote counterfactual probabilities obtained from the mechanism corresponding to \(H\). The probabilities of benefit and harm of \(X\) on \(Y\) mediated by \(Z\) in \(G\) and thus in the original causal graph (henceforth indirect benefit and harm, or \(IB\) and \(IH\)) can be computed by applying Equation 2 to \(H\). That is,
\[IB=q(\text{benefit})=q(y_{x})=\sum_{z}p(z|x)\sum_{\dot{x}}p(y|\dot{x},z)p( \dot{x})\]
where the second equality holds if \(q(\text{immunity})=0\), and the third is due to the front-door criterion on \(H\). Likewise for \(IH\) simply replacing \(x\) by \(x^{\prime}\). Applying Equation 5 to \(H\) yields necessary and sufficient
conditions for \(q(\text{immunity})=0\). That is,
\[\sum_{z}p(z|x)\sum_{\dot{x}}p(y|\dot{x},z)p(\dot{x})=0\text{ or}\] \[\sum_{z}p(z|x^{\prime})\sum_{\dot{x}}p(y|\dot{x},z)p(\dot{x})=0 \text{ or}\] \[\sum_{z}[p(z|x)+p(z|x^{\prime})]\sum_{\dot{x}}p(y|\dot{x},z)p(\dot {x})=p(y)\text{ or}\] \[p(y)=0 \tag{10}\]
is a sufficient condition, whereas
\[\sum_{z}[p(z|x)+p(z|x^{\prime})]\sum_{\dot{x}}p(y|\dot{x},z)p(\dot {x})\leq 1\text{ and}\] \[\sum_{z}p(z|x)\sum_{\dot{x}}p(y|\dot{x},z)p(\dot{x})\leq p(x,y)+p (x^{\prime},y^{\prime})\text{ and}\] \[\sum_{z}p(z|x^{\prime})\sum_{\dot{x}}p(y|\dot{x},z)p(\dot{x})\leq p (x,y^{\prime})+p(x^{\prime},y) \tag{11}\]
is a necessary condition. Necessary and sufficient conditions for \(\epsilon\)-bounded immunity on \(H\) (i.e., \(q(\text{immunity})\leq\epsilon\)) can be obtained much like in Section 2.1. That is, it suffices to add \(\epsilon\) to the right-hand sides of the conditions above and replace \(=\) with \(\leq\). Finally, we can adapt accordingly the equations in Section 2.2 to obtain \(\epsilon\)-bounds on \(IB\) and \(IH\). Note that the analysis of indirect benefit and harm presented here does not require an RCT, i.e. all the expressions involved can be estimated from just observational data.
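Under the mediation graph assumed in this section, these quantities are easy to compute from observational data alone; the sketch below (function names are ours) evaluates the front-door expressions for \(q(y_{x})\) and \(q(y_{x^{\prime}})\) and the necessary condition in Equation 11.

```python
def frontdoor_qy(p_z_x, p_z_xp, p_y_xz, p_y_xzp, p_y_xpz, p_y_xpzp, p_x, do_x=True):
    """q(y_x) (do_x=True) or q(y_{x'}) (do_x=False):
    sum_z p(z|.) sum_{x.} p(y|x., z) p(x.)."""
    p_z = p_z_x if do_x else p_z_xp
    inner_z = p_y_xz * p_x + p_y_xpz * (1 - p_x)      # inner sum for Z = z
    inner_zp = p_y_xzp * p_x + p_y_xpzp * (1 - p_x)   # inner sum for Z = z'
    return p_z * inner_z + (1 - p_z) * inner_zp


def indirect_non_immunity_necessary(p_z_x, p_z_xp, p_y_xz, p_y_xzp, p_y_xpz, p_y_xpzp, p_x):
    """Necessary condition for q(immunity) = 0 (Equation 11)."""
    q_yx = frontdoor_qy(p_z_x, p_z_xp, p_y_xz, p_y_xzp, p_y_xpz, p_y_xpzp, p_x, do_x=True)
    q_yxp = frontdoor_qy(p_z_x, p_z_xp, p_y_xz, p_y_xzp, p_y_xpz, p_y_xpzp, p_x, do_x=False)
    p_y_x = p_z_x * p_y_xz + (1 - p_z_x) * p_y_xzp        # observational p(y|x)
    p_y_xp = p_z_xp * p_y_xpz + (1 - p_z_xp) * p_y_xpzp   # observational p(y|x')
    p_xy, p_xyp = p_x * p_y_x, p_x * (1 - p_y_x)
    p_xpy, p_xpyp = (1 - p_x) * p_y_xp, (1 - p_x) * (1 - p_y_xp)
    return (q_yx + q_yxp <= 1.0
            and q_yx <= p_xy + p_xpyp
            and q_yxp <= p_xyp + p_xpy)
```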
### 3.1. Example
This section illustrates the results above with a concrete example borrowed from Pearl [8]. It concerns the following causal graph:

*(Causal graph: \(X\to Z\to Y\) and \(X\to Y\).)*
where \(X\) represents a drug treatment, \(Z\) the presence of a certain enzyme in a patient's blood, and \(Y\) recovery. Moreover, we have that
\[p(z|x)=0.75\qquad p(y|x,z)=0.8\qquad p(y|x,z^{\prime})=0.4\]
\[p(z|x^{\prime})=0.4\qquad p(y|x^{\prime},z)=0.3\qquad p(y|x^{\prime},z^{\prime})=0.2.\]
Since \(p(x)\) is not given in the original example, we take \(p(x)=0.6\).
Pearl imagines a scenario where the pharmaceutical company plans to develop a cheaper drug that is equal to the existing one except for the lack of direct effect on recovery, i.e. it just stimulates enzyme production as much as the existing drug. Therefore, the probability of benefit of the planned drug is the probability of benefit of the existing drug that is mediated by the enzyme. The company wants to market
their drugs by claiming that no one is immune. The sufficient conditions for non-immunity in Equations 6 and 10 do not hold for the drugs. However, while the existing drug satisfies the necessary condition for non-immunity in Equation 7, the planned drug does not satisfy the corresponding condition in Equation 11. Therefore, the company should either abandon their marketing strategy or abandon the plan to develop the new drug and instead focus on trying to confirm non-immunity for the existing drug.
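A quick check of this example with the sketches above (our numbers, using the parameters stated in the text and \(p(x)=0.6\)) confirms the contrast between the two drugs:

```python
p_z_x, p_z_xp = 0.75, 0.40
p_y_xz, p_y_xzp, p_y_xpz, p_y_xpzp = 0.8, 0.4, 0.3, 0.2
p_x = 0.6

p_y_x = p_z_x * p_y_xz + (1 - p_z_x) * p_y_xzp        # p(y_x)  = p(y|x)  = 0.70
p_y_xp = p_z_xp * p_y_xpz + (1 - p_z_xp) * p_y_xpzp   # p(y_x') = p(y|x') = 0.24
joint = (p_x * p_y_x, p_x * (1 - p_y_x),
         (1 - p_x) * p_y_xp, (1 - p_x) * (1 - p_y_xp))

print(non_immunity_necessary(p_y_x, p_y_xp, *joint))  # True  (existing drug, Equation 7)
print(indirect_non_immunity_necessary(p_z_x, p_z_xp, p_y_xz, p_y_xzp,
                                      p_y_xpz, p_y_xpzp, p_x))  # False (planned drug, Equation 11)
```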
## 4. Sensitivity Analysis of Immunity
In this section, like in the previous section, we assume that the causal graph of the domain under study is available, e.g. from expert knowledge. We also assume that we only have access to observational data, i.e. no RCT is available. Specifically, consider the following causal graph:

*(Causal graph: \(X\to Z\to Y\), with an unmeasured confounder \(U\) affecting both \(X\) and \(Y\).)*
which includes potential unmeasured exposure-outcome confounding. Since \(p(y_{x})=\sum_{z}p(z|x)\sum_{\dot{x}}E[Y|\dot{x},z]p(\dot{x})\) by the front-door criterion, we can proceed as in the previous section to derive necessary and sufficient conditions for non-immunity. Suppose now that \(Z\) is unmeasured or that the effect of \(X\) on \(Y\) is direct rather than mediated by \(Z\). Then, \(p(y_{x})\) is unidentifiable from observational data [7], and thus we cannot proceed as in the previous section. We therefore take an alternative approach to inform the analyst about the probability of immunity and thereby help her in decision making. In particular, we propose a sensitivity analysis method to bound the probability of immunity as a function of the observed data distribution and some intuitive sensitivity parameters. Our method is a straightforward adaptation of the method by Peña [9], originally developed to bound the probabilities of benefit and harm.
Let \(U\) denote the unmeasured exposure-outcome confounders. For simplicity, we assume that all these confounders are categorical, but our results also hold for ordinal and continuous confounders.2 For simplicity, we treat \(U\) as a categorical random variable whose levels are the Cartesian product of the levels of the elements in the original \(U\).
Footnote 2: If \(U\) is continuous then sums/maxima/minima over \(u\) should be replaced by integrals/suprema/infima.
Note that
\[p(y_{x})=p(y_{x}|x)p(x)+p(y_{x}|x^{\prime})p(x^{\prime})=p(y|x)p(x)+p(y_{x}|x^{ \prime})p(x^{\prime})\]
where the second equality follows from counterfactual consistency, i.e. \(X=x\Rightarrow Y_{x}=Y\). Moreover,
\[p(y_{x}|x^{\prime})=\sum_{u}p(y_{x}|x^{\prime},u)p(u|x^{\prime})=\sum_{u}p(y|x,u)p(u|x^{\prime})\leq\max_{u}p(y|x,u)\]
where the second equality follows from \(Y_{x}\!\perp\!X|U\) for all \(x\), and counterfactual consistency. Likewise,
\[p(y_{x}|x^{\prime})\geq\min_{u}p(y|x,u).\]
Now, let us define
\[M_{x}=\max_{u}p(y|x,u)\]
and
\[m_{x}=\min_{u}p(y|x,u)\]
and likewise \(M_{x^{\prime}}\) and \(m_{x^{\prime}}\). Then,
\[p(x,y)+p(x^{\prime})m_{x}\leq p(y_{x})\leq p(x,y)+p(x^{\prime})M_{x}\]
and likewise
\[p(x^{\prime},y)+p(x)m_{x^{\prime}}\leq p(y_{x^{\prime}})\leq p(x^{\prime},y)+ p(x)M_{x^{\prime}}.\]
These equations together with Equation 5 give
\[\max\left\{\begin{array}{c}0,\\ p(x^{\prime})m_{x}+p(x)m_{x^{\prime}}-p(y^{\prime}),\\ p(x^{\prime})m_{x}-p(x^{\prime},y^{\prime}),\\ p(x)m_{x^{\prime}}-p(x,y^{\prime})\end{array}\right\}\leq p(\mbox{immunity}) \leq\min\left\{\begin{array}{c}p(x,y)+p(x^{\prime})M_{x},\\ p(x^{\prime},y)+p(x)M_{x^{\prime}},\\ p(x^{\prime})M_{x}+p(x)M_{x^{\prime}},\\ p(y)\end{array}\right\} \tag{12}\]
where \(m_{x}\), \(M_{x}\), \(m_{x^{\prime}}\) and \(M_{x^{\prime}}\) are sensitivity parameters. The possible regions for \(m_{x}\) and \(M_{x}\) are
\[0\leq m_{x}\leq p(y|x)\leq M_{x}\leq 1 \tag{13}\]
and likewise for \(m_{x^{\prime}}\) and \(M_{x^{\prime}}\).
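Equation 12 is straightforward to evaluate for given sensitivity parameters; a minimal sketch (ours) takes the observational joint and the four parameters as inputs.

```python
def immunity_bounds_sensitivity(p_xy, p_xyp, p_xpy, p_xpyp, m_x, M_x, m_xp, M_xp):
    """Bounds on p(immunity) under unmeasured confounding (Equation 12)."""
    p_x, p_xp = p_xy + p_xyp, p_xpy + p_xpyp
    p_y, p_yp = p_xy + p_xpy, p_xyp + p_xpyp
    lower = max(0.0,
                p_xp * m_x + p_x * m_xp - p_yp,
                p_xp * m_x - p_xpyp,
                p_x * m_xp - p_xyp)
    upper = min(p_xy + p_xp * M_x,
                p_xpy + p_x * M_xp,
                p_xp * M_x + p_x * M_xp,
                p_y)
    return lower, upper
```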
Our lower bound in Equation 12 is informative if and only if3
Footnote 3: Note that the second row in the maximum equals the third plus the fourth rows.
\[0<p(x^{\prime})m_{x}-p(x^{\prime},y^{\prime})\]
or
\[0<p(x)m_{x^{\prime}}-p(x,y^{\prime}).\]
Then, the informative regions for \(m_{x}\) and \(m_{x^{\prime}}\) are
\[p(y^{\prime}|x^{\prime})<m_{x}\leq p(y|x)\]
and
\[p(y^{\prime}|x)\leq m_{x^{\prime}}<p(y|x^{\prime}).\]
On the other hand, our upper bound in Equation 12 is informative4 if and only if5
Footnote 4: Note that we already know that \(p(\mbox{immunity})\leq p(y)\) by Equation 5.
Footnote 5: Note that the third row in the minimum equals the first plus the second minus the fourth rows.
\[p(x,y)+p(x^{\prime})M_{x}<p(y)\]
or
\[p(x^{\prime},y)+p(x)M_{x^{\prime}}<p(y)\]
which occurs if and only if \(p(y|x)<p(y|x^{\prime})\) or \(p(y|x^{\prime})<p(y|x)\).6 Therefore, our upper bound is always informative, and thus the informative regions for \(M_{x}\) and \(M_{x^{\prime}}\) coincide with their possible regions.
Footnote 6: To see it, rewrite \(p(y)=p(x,y)+p(x^{\prime},y)\) and recall Equation 13.
### 4.1. Example
We illustrate our method for sensitivity analysis of \(p(\text{immunity})\) with the following fictitious epidemiological example. Consider a population consisting of a majority and a minority group. Let the binary random variable \(U\) represent the group an individual belongs to. Let \(X\) represent whether the individual gets treated or not for a certain disease. Let \(Y\) represent whether the individual survives the disease. Assume that the scientific community agrees that \(U\) is a confounder for \(X\) and \(Y\). Assume also that it is illegal to store the values of \(U\), to avoid discrimination complaints. In other words, the identity of the confounder is known but its values are not. More specifically, consider the following unknown data generation model:
\[p(u)=0.2\qquad p(x|u)=0.4\qquad p(y|x,u)=0.9\qquad p(y|x,u^{\prime})=0.8\]
\[p(x|u^{\prime})=0.2\qquad p(y|x^{\prime},u)=0.2\qquad p(y|x^{\prime},u^{\prime})=0.7.\]
Since this model does not specify the functional forms of the causal mechanisms, we cannot compute the true \(p(\text{immunity})\)[7]. However, we can bound it by Equation 5 and the fact that \(p(y_{x})=\sum_{u}p(y|x,u)p(u)\)[7], which yields \(p(\text{immunity})\in[0.42,0.6]\). Note that these bounds cannot be computed in practice because \(U\) is unmeasured.
Figure 1 (top) shows the lower bound of \(p(\text{immunity})\) in Equation 12 as a function of the sensitivity parameters \(m_{x}\) and \(m_{x^{\prime}}\). The axes span the possible regions of the parameters. The dashed lines indicate the informative regions of the parameters. Specifically, the bottom left quadrant corresponds to the non-informative region, i.e. the lower bound is zero. In the data generation model considered, \(m_{x}=0.8\) and \(m_{x^{\prime}}=0.2\). These values are unknown to the epidemiologist, because \(U\) is unobserved. However, the figure reveals that the epidemiologist only needs to have some rough idea of these values to confidently conclude that \(p(\text{immunity})\) is lower bounded by \(0.2\). Figure 1 (bottom) shows our upper bound of \(p(\text{immunity})\) in Equation 12 as a function of the sensitivity parameters \(M_{x}\) and \(M_{x^{\prime}}\). Likewise, having some rough idea of the unknown values \(M_{x}=0.9\) and \(M_{x^{\prime}}=0.7\) enables the epidemiologist to confidently conclude that the \(p(\text{immunity})\) is upper bounded by \(0.65\). Applying Equation 5 with just observational data produces looser bounds, namely \(0\) and \(0.67\). Recall that \(p(\text{immunity})\in[0.42,0.6]\) in truth.
Figure 1. Lower and upper bounds of \(p\)(immunity) in the example in Section 4.1 as functions of the sensitivity parameters \(m_{x}\), \(m_{x^{\prime}}\), \(M_{x}\) and \(M_{x^{\prime}}\).
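For this example, plugging the (in practice unknown) true sensitivity parameters and the observational joint implied by the stated model into the sketch of Equation 12 gives bounds consistent with the rough, figure-based reading above (our computation):

```python
# Observational joint implied by the stated model, via the earlier helper
_, _, joint = observables_from_model(0.2, 0.4, 0.2, 0.9, 0.8, 0.2, 0.7)
print(immunity_bounds_sensitivity(*joint, m_x=0.8, M_x=0.9, m_xp=0.2, M_xp=0.7))
# ~(0.33, 0.64): a lower bound above 0.2 and an upper bound below 0.65
```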
## 5. Discussion
The analysis in this work can be repeated for \(p\)(doom) instead of \(p\)(immunity) by simply swapping \(y\) and \(y^{\prime}\), and \(p\)(benefit) and \(p\)(harm). Additionally, the analysis of indirect benefit can be repeated for \(p\)(harm) instead of \(p\)(immunity) due to Equation 1, and thereby extend the analysis by Mueller and Pearl [1].
|
2309.10484 | Understanding and addressing the resistance towards autonomous vehicles
(AVs) | Autonomous vehicles (AVs) are expected to bring major benefits to transport
and society. To exploit this potential, their acceptance by society is a
necessary condition. However, AV acceptance is currently at stake: AVs face
resistance by bystanders and local communities. Resistance can prevent the
implementation and use of AVs, threatening road safety and efficiency. The
present study performed a qualitative and quantitative text analysis of
comments submitted by locals in San Francisco (SF) to the California Public
Utilities Commission (CPUC) on the fared deployment of AVs. The results of the
analysis are synthesized, and a conceptual framework explaining and predicting
resistance is proposed. The framework posits that the occurrence of resistance
is a direct result of the perception of threats, which is determined by
individual and system characteristics, direct and indirect consequences of
system use, reactions of others, and external events. AVs as threat to safety
was associated with their unpredictable, and illegal driving behavior, as well
as producing conflict situations. The lack of explicit communication between
AVs and other road users due to the absence of a human driver behind the
steering wheel negatively contributed to perceived safety and trust, especially
for vulnerable populations in crossing situations. Respondents reported a
negative impact on road capacity, congestion, and traffic flow, with AVs
blocking other road users, such as emergency vehicles. Inaccessible vehicle
design contributed to the exclusion of vulnerable groups with disabilities. The
scientific dialogue on acceptance of AVs needs to shift towards resistance as
the 'other' essential element of acceptance to ensure that we live up to our
promise of transitioning towards more sustainable mobility that is inclusive,
equitable, fair, just, affordable, and available to all. | Sina Nordhoff | 2023-09-19T09:54:17Z | http://arxiv.org/abs/2309.10484v1 | # Understanding and addressing the resistance towards autonomous vehicles (AVs)
###### Abstract
Autonomous vehicles (AVs) are expected to bring major benefits to transport and society. To exploit this potential, their acceptance by society is a necessary condition. However, AV acceptance is currently at stake. AVs face resistance by bystanders and local communities. This is especially problematic because resistance can prevent the implementation and use of AVs, threatening road safety and efficiency. Resistance towards AVs has been largely overlooked by research and practitioners. The present study performed a qualitative and quantitative text analysis of comments submitted by locals in San Francisco (SF) to the California Public Utilities Commission (CPUC) on the fared deployment of AVs in SF. The results of the analysis are synthesized, and a conceptual framework explaining and predicting resistance is proposed. The framework posits that the occurrence of resistance is a direct result of the perception of threats, which is determined by individual and system characteristics, direct and indirect consequences of system use, reactions of others, and external events. Perceived threats pertained to safety, traffic, travel choices, energy consumption and pollution, social equity, economy, and society. AVs as threat to safety was associated with their unpredictable, and illegal driving behavior, as well as producing conflict situations. The lack of explicit communication between AVs and other road users due to the absence of a human driver behind the steering wheel negatively contributed to perceived safety and trust, especially for vulnerable populations in crossing situations. Respondents reported a negative impact on road capacity, congestion, and traffic flow, with AVs blocking other road users, such as emergency vehicles. AVs were conceived as a threat to the transition towards more sustainable mobility with inclusive, multimodal public transit options. Inaccessible vehicle design contributed to the exclusion of vulnerable groups with disabilities. The scientific dialogue on acceptance of AVs needs to shift towards resistance as the 'other' essential element of acceptance to ensure that we live up to our promise of transitioning towards more sustainable mobility that is inclusive, equitable, fair, just, affordable, and available to all.
Autonomous vehicles; acceptance; resistance; local communities; perception of threats; vulnerable populations
## 1 Introduction
The California Public Utilities Commission (CPUC) approved the fared deployment of autonomous vehicles (AVs) in San Francisco (SF) (CPUC, 2023). As the operation of AVs is limited to specific domains, with these vehicles currently not being able to drive everywhere in all conditions, the present paper defines these vehicles as SAE Level 4 or High Driving Automation (SAE International, 2021). These AVs currently face resistance from locals outside the vehicles, who stop them by placing objects on them (e.g., cones) or stepping in front of them to test their capabilities and interfere with their operation (Thubron, 2023).
The automated vehicle acceptance literature is skewed towards acceptance, applying technology acceptance models to identify the factors predicting acceptance by drivers (Louw et al., 2021), passengers (Pascale et al., 2021), and other road users (Schrauth, Funk, Maier, & Kraetsch, 2021). Most studies used specific early-adopter populations consisting mostly of males, younger- to middle-aged, and tech-savvy individuals. This implies that important user groups who may benefit the most from AVs, such as the vulnerable populations with special needs, have been excluded from the debate to a large extent until now. The technology acceptance models that were developed for the assessment of automated vehicle acceptance, such as the multi-level model on automated vehicle acceptance (MAVA) (Nordhoff, Kyriakidis, Van Arem, & Happee, 2019), consider resistance only as a side phenomenon without explaining its underlying processes.
The resistance towards AVs is still little understood. In comparison to conventional cars, AVs are mobile, situationally aware, being able to adapt and communicate with their environment (Winfield, 2012), and have higher sensing capabilities than conventional vehicles, creating privacy and security (e.g., hacking) issues (Bloom, Tan, Ramjohn, & Bauer, 2017; Chen, Khalid Khan, Shiwakoti, Stasinopoulos, & Aghabayk, 2023). Current work also revealed concerns related to safety, trust (Chen et al., 2023), affordability, unemployment and financial insecurity (Agrawal et al., 2023). Scholars investigating the acceptance of renewable wind energy have proposed that community or local acceptance is a key determinant for societal acceptability. Resistance is low if the conditions for distributional justice (sharing of costs and benefits), procedural justice (equal opportunities of all relevant stakeholders for participation in the decision-making process), and trust in the information and intentions of the investors and actors outside the community are met (Wustenhagen, Wolsink, & Burer, 2007).
Resistance should not be seen as dysfunctional behavior representing an obstacle or barrier to overcome, or that needs to be investigated to improve the uptake and use of AVs (Milakis & Muller, 2021; Van
Wynsberghe & Guimaraes Pereira, 2022). Instead, it should be considered a functional orientation that can occur as the result of legitimate concerns associated with a change (Marakas & Hornik, 1996). It should be approached with curiosity rather than stigmatization, and assigned the same weight in the debate about AVs as acceptance. Neither phenomenon is more important than the other; we need to investigate both ends of the spectrum to the same extent to ensure that the development, design, and deployment of AVs reflect the diverse needs, views, and concerns of all socially relevant individuals and groups in and around AVs. The design of AVs should be a process open to producing different outcomes that represent the results of negotiations among socially relevant groups within different sociocultural and political environments until the design no longer creates problems for any group (Klein & Kleinman, 2002; Milakis & Muller, 2021).
Resistance is defined as a psychological reaction or behavior, representing different types of usage behaviors. It can involve disuse (lack of or no use), low level of use, harmful use (Martinko, Zmud, & Henry, 1996), or misuse (Marakas & Hornik, 1996). It can be passive (e.g., excuses, delay tactics), active (e.g., voicing opposite points of view, asking others to intervene, forming coalitions), or aggressive (e.g., strikes, boycotts, sabotage) (Lapointe & Rivard, 2005). Resistance is also inextricably linked to an object or content that is being resisted (e.g., introduction of new technology) (Jermier, Knights, & Nord, 1994; Lapointe & Rivard, 2005). In line with Marakas and Hornik (1996), this paper posits that acceptance and resistance should be placed on one continuum, with acceptance as measure for use at one end of the continuum, and resistance as measure for disuse at the other end of the continuum (Figure 1).
The perception of threats, i.e., expected consequences, is a necessary condition for the occurrence of resistance. In studies examining the resistance towards the implementation of information technology, the perception of threats, a change in the power dynamics between groups with unequal gains, inequity issues, stress and fear, efficacy or outcome expectations represented fertile conditions for the occurrence of resistance. Finally, resistance is linked with initial conditions or subjectivities addressing the differences between individuals or groups of individuals. The interaction of these initial conditions and the object determines the perception of threats, which in turn determine the resistance. Resistance elicits some
Figure 1: Resistance-acceptance continuum, based on Lapointe and Rivard (2005)
"triggers", including the actual consequences of system or technology use, events, and reactions of others (e.g., system's advocates, other actors). Individual resistance can change into group resistance (i.e., aggregate of individual resistance behaviors) when individuals' shared perceptions, affect, and responses (i.e., group norms) are activated (Lapointe & Rivard, 2005).
### 1.1 The present study
Resistance can have severe negative consequences, preventing the implementation or use of the system, or making it more difficult for system designers to achieve their objectives (Markus, 1983). An enhanced understanding of the resistance towards AVs as a psychological phenomenon is expected to help exploit the benefits of this technology (Shariff, Bonnefon, & Rahwan, 2021). The main objective of this study is to examine the resistance towards AVs, identifying the factors underlying resistance. Comments submitted by the locals of SF to the CPUC on the fared operation of AVs were analyzed using qualitative and quantitative text analysis techniques. Finally, this paper offers a conceptual framework, which synthesizes the results of the data analysis.
## 2 Methodology
### 2.1 Respondents
The present study analysed public comments submitted to the CPUC on the fared deployment of AVs in SF between February 06, 2020, and August 13, 2023. Alongside their comments, respondents provided their name and location of residence.
### 2.2 Data analysis
The data was analyzed in four steps.
First, a simple frequency analysis of the most common terms was conducted. In line with Zhou, Kan, Huang, and Silbernagel (2023), the text was preprocessed and cleaned: part-of-speech (POS) tags, such as noun, verb, and adjective, were assigned to each token (word). Duplicates and stopwords were removed, and words were transformed to lower case and reduced to their root form (lemmatization). Other noise, such as special characters, digits, hashtags, and hyperlinks, was also removed. The sentences were tokenized, i.e. each sentence was split into smaller units (tokens) so that it could be more easily processed by the algorithm.
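The paper reports that the analysis was conducted in Python but does not name its libraries; a minimal preprocessing sketch along these lines, using NLTK as one possible toolkit (our choice, with our function names), could look as follows.

```python
import re
from nltk import pos_tag, sent_tokenize, word_tokenize  # needs 'punkt' and tagger models
from nltk.corpus import stopwords, wordnet               # needs 'stopwords' and 'wordnet'
from nltk.stem import WordNetLemmatizer

STOP = set(stopwords.words("english"))
LEMMATIZER = WordNetLemmatizer()

def _wordnet_pos(tag):
    """Map a Penn Treebank tag to the WordNet POS used by the lemmatizer."""
    return {"J": wordnet.ADJ, "V": wordnet.VERB, "R": wordnet.ADV}.get(tag[0], wordnet.NOUN)

def preprocess(comment):
    """Return one list of cleaned, lemmatized tokens per sentence of a comment."""
    comment = re.sub(r"https?://\S+|#\w+|\d+", " ", comment)  # drop hyperlinks, hashtags, digits
    cleaned = []
    for sentence in sent_tokenize(comment.lower()):
        tokens = [t for t in word_tokenize(sentence) if t.isalpha() and t not in STOP]
        cleaned.append([LEMMATIZER.lemmatize(tok, _wordnet_pos(tag))  # POS-aware lemmatization
                        for tok, tag in pos_tag(tokens)])
    return cleaned
```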
Second, the main categories and sub-categories were developed using principles of inductive category development from Mayring (2000). The labeling and description of the sub-categories were adjusted by comparing the sub-categories with the literature, i.e., studies examining the sub-categories (Kusano et al., 2023; Lehtonen et al., 2022).
Third, a guided Latent Dirichlet Allocation (LDA) model was run to investigate the occurrence of these themes identified in the second step. To identify the occurrences of the sub-themes in the data, seed terms were identified, which were developed inductively from the dataset. It was assumed that each sentence can represent a sub-theme. A sub-theme was assigned a frequency of 1 if at least two seed terms representing a sub-theme were mentioned once in a sentence. The total number of mentions of a sub-theme equals the total number of occurrences of at least two seed terms per sub-theme across all sentences of the data. The analysis was conducted in Python.
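The seed-term rule can be expressed in a few lines; the sketch below (our code, with seed lists that are only illustrative fragments of Table 1) counts a sentence towards a sub-theme when at least two of its seed terms occur in that sentence.

```python
from collections import Counter

SEED_TERMS = {
    "unpredictability": ["unsafe", "unpredictable", "unexpected", "erratic", "crash"],
    "illegal driving": ["reckless speeding", "running red light", "illegal", "unsafe"],
}

def count_subthemes(sentences):
    """Number of sentences in which each sub-theme occurs (>= 2 seed terms per sentence)."""
    counts = Counter()
    for sentence in sentences:
        text = sentence.lower()
        for theme, seeds in SEED_TERMS.items():
            if sum(seed in text for seed in seeds) >= 2:
                counts[theme] += 1       # at most one count per sentence per sub-theme
    return counts
```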
Fourth, illustrative comments were selected to portray the meaning of each theme. Multiple mentions of a sub-theme by a respondent were not removed but combined with other mentions of the same sub-theme by the respondent. Consequently, some comments represent collections of sentences mentioned by the same respondents. Each comment was assigned a code, denoted as c, which will be cited as reference for each respondent.
## 3 Results
This section provides the results of the analysis.
### 3.1 Respondents
In total, 325 comments (2283 sentences) were analysed. Most of the respondents seemed to interact with AVs as external road users, while only a small number of respondents reported having actual experience with these vehicles as passengers. Respondents also included vulnerable road users with special needs and specific groups (e.g., taxi drivers).
Figure 1 provides an overview of the frequency analysis of the 35 most common terms that were used by respondents.
The qualitative data analysis resulted in the identification of main themes representing the perceived threats associated with the operation of AVs. Table 1 provides an overview of the themes that were identified.
| Main theme | Sub-theme | Meaning | Keywords | _n_ |
| --- | --- | --- | --- | --- |
| Safety | Unpredictability | Unpredictability of vehicle behavior | 'unsafe', 'hack', 'unpredictability', 'unpredictable', 'unexpected', 'erratic', 'unhuman', 'not safe', 'cause accident', 'crash' | 26 |
| Safety | Illegal driving | AVs engaging in illegal driving behavior, violating a traffic rule, e.g., running a red light or stop sign, or parking in a bus lane | 'reckless speeding', 'running red light', 'run red light', 'illegal', 'not safe', 'cause accident', 'crash', 'unsafe' | |

Table 1: Overview of data analysis results; main themes and sub-themes (_n_, number of sentences mentioning at least two key words per sentence)
Figure 1: Frequency analysis of the 35 most frequently used terms
### 3.2 Safety
#### 3.2.1 Unpredictability
This sub-theme addressed the unpredictability of the vehicle, which was associated with its erratic, unhuman, and unexpected behavior, contributing to low perceived safety and trust.
_"Strongly opposed. AVs are a significant threat to my safety. Their odd behavior (sudden, persistent stops) makes other vehicles move more erratically."_ (c94134)
_"I personally have experienced their unexpected, reasonless stops in the middle of the roadway and intersections. I have observed them start and stop repeatedly after stopping far behind a limit line, and not proceeding."_ (c94118)
_"I've seen self-driving cars make unexpected turns, ride too close to bikes, and speed recklessly. I will never feel safe next to a machine-operated machine."_ (c94102)
_"These AVs have displayed erratic maneuvers across San Francisco that are not in line with how a human driver would typically behave. This unpredictability poses a significant safety hazard to pedestrians, cyclists, and human drivers who share the roads with these autonomous vehicles. As the technology is still relatively new, human drivers are finding it challenging to
anticipate how these robot cars will respond, given their unique driving patterns."_ (c94121)
_"I am so tired of these autonomous vehicles driving around our neighborhood all day and all night. I have witnessed them make so many bad decisions that I simply cannot trust them."_ (c94117)
#### 3.2.2 Illegal driving
Respondents also reported illegal driving behavior of AVs that violates traffic rules, such as running a red light or stop sign, or parking in the bus lane.
_"I see driverless cars and have found them to often be erratic, unpredictable, and at times in violation of traffic laws."_ (c94118)
_"When the light turned green, it proceeded to go around the firetruck to the right and PARK IN THE BUS STOP! Why? It had a perfectly clear road ahead of it. It is illegal for cars to park in the bus stop! If a robocar cannot assess a street situation like this, it should not be on the road at all! Stop the robocars now!"_ (c94109)
_"There have been hundreds of documented examples of AVs behaving badly. Cruise even posted a photo of one of their cars running a red light on their 2022 impact report!!"_ (c94114)
_"I encountered a Cruise car that ran a stop sign."_ (c94115)
_"They can't even follow the rules of the road. I constantly see them signal one direction then proceed in another, or outright fail to yield to pedestrians in crosswalks."_ (c94117)
#### 3.2.3 Conflict situations
AVs were considered a threat to public safety, being perceived as risky, unsafe, or dangerous, producing conflict situations by causing collisions, evasive maneuvers by other road users, or an unsafe proximity with other road users.
"Autonomous cars are disrespectful of the elderly, children, parents with children, and everyone else crossing the street. They come to a complete stop at controlled intersections, but proceed through the intersection even when someone is already crossing the street."_ (c94118)
_"I have narrowly avoided having the car in which I was riding avoid being struck when the AV turned into the wrong lane."_ (c94118)
_"AVs harass pedestrians in marked crosswalks by creeping up on us. I have seen AVs in my neighborhood making right turns without slowing down. This is very dangerous, especially in neighborhoods with lots of kids like mine."_ (c94134)
_"Cruise vehicles can't figure out crosswalks and have nearly hit my family."_ (c94103)
_"I have heard these robocars are unsafe. I have heard they have killed some small dogs in SF due to not perceiving them. What will happen if a toddler, around the same size as a dog, jumps into the road? Human drivers can respond, but robocars look like they can't. We have heard about a lot of accidents they have caused."_ (c94109)
_"I live in the Mission District and have almost been hit twice now by Cruise driverless vehicles while using the crosswalk by my house."_ (c94110)
_"Strongly opposed. I've had to jump out of the way of these cars twice."_ (c94114)
_"I was taking a walk at night with my partner and an AV did not slow down when we were crossing in front of it. We moved quickly out of the way to avoid getting hit. This threat to safety is unacceptable."_ (c94118)
_"While out walking today, two cars failed to give me the right of way when I was in a crosswalk or about to enter one. The Waymo began to make a right turn onto the street I was crossing before I had stepped onto the far curb and was still in the crosswalk. The Cruise vehicle left its stop sign on the far side of the intersection as I was stepping off the curb into the crosswalk. At least with a human driver there is the possibility of establishing eye contact."_ (c94118)
#### 3.2.4 ... or AVs as blessing for safety?
This sub-theme addressed the perceived positive safety impacts of AVs, especially for vulnerable populations (e.g., pedestrians, cyclists), given the better driving performance and advanced sensing capabilities of AVs in comparison to human drivers, with AVs obeying the traffic rules and not being prone to human performance decrements and pitfalls while driving (e.g., distracted driving). Respondents mentioned the need to apply to AVs the same or similar safety standards as are currently applied to human-controlled vehicles.
_"As a frequent pedestrian I've noticed AVs obey traffic rules and respect pedestrians much better than human drivers. They often speed, drive on sidewalks, drive into oncoming traffic, ignore pedestrians in crosswalks, run red lights, etc because of a lack of patience. AVs have unlimited patience."_ (c94131)
_"I have taken over a dozen autonomous rides and found them to be safe and effective. They obey the rules of the road and allow people like myself to safely cross the road without worrying about distracted and impatient drivers."_ (c94117)
_"As an SF resident that loves biking, I feel much safer around driverless vehicles given their inherently increased visibility and awareness of their surroundings. The data seem to strongly support this; the autonomous vehicles have significantly less incidents per mile driven."_ (c94117)
_"I have been hit multiple times by vehicles on the streets and I feel much safer cycling around these driverless vehicles than around humans. They treat every
stop sign like a real stop sign (no rolling stops), never speed, and provide wide berth when passing me whether it is on my bike or when I'm just walking normally."_ (c94107)
_"As a pedestrian & driver myself, I feel SIGNIFICANTLY safer around these vehicles than the people driving in SF. I watched two human drivers blow through stop signs and nearly run over crossing children just this past weekend."_ (c94114)
_"As an avid cyclist, who's twice been hit by cars on SF streets, I very much SUPPORT driverless cars on our streets. I've biked by many, and (unlike many of the drivers in this city) they are always paying attention. I look forward to safer streets with more bikes and less parked cars, and fully believe that AVs can help us get there."_ (c94158)
_"I am concerned with the public discourse. Many of the negative comments I've seen exaggerate (or lie about) the problems with these services, cite unproven anecdotes and/or fail to compare their safety record with human driven automobiles. It makes no sense to hold AVs to much higher standards than we currently hold human drivers. The data clearly shows that AVs are much safer than human-driven cars, regardless of any random person's opinion on the matter."_ (c94103)
_"I was skeptical about these Cylon death machines until I rode in one. At that point I was an instant fan. Much safer than Uber drivers. Slower. Obeys the signs. Respects cyclists. Isn't pushy or aggressive. They're like big orange cows. Since my first trip, I've taken about fifty trips."_ (c94117)
#### 3.2.5 Explicit communication
This sub-theme addressed the lack of explicit communication (e.g., eye contact, hand gestures, verbal communication) between AVs and other road users due to the absence of a human driver behind the steering wheel. This lack of explicit communication contributed to a decrease in perceived safety, and lack of perceived sense of control, especially at intersections or in crossing situations, and lack of trust in the capability of the AV to successfully detect vulnerable road users.
"Human drivers are dangerous but at least there's someone present I can communicate with, yell at to stop. A driverless car will just hit me and keep going and there's nothing anyone can do."_ (c94103)
"I am a blind individual who works in SF. I am concerned that a poorly controlled autonomous vehicle may collide with me and my guide dog while I cross a street." (c94577)
"As a cyclist, I have stopped at intersections and waited to make eye contact with drivers who have also stopped, to acknowledge who goes next. This is impossible with a driverless car, as THERE ARE NO EYES." (c94117)
"I see driverless cars and have found them without any means to give direct feedback. Human drivers are often this, but I can yell at a human driver to get their attention if needed." (c94118)
"You can't communicate with an AV car. It's like watching a newborn drive but at least a newborn can look at you. No thank you!" (c94110)
"As a cyclist, I am strongly opposed to autonomous vehicles full stop. Bikes rely on human signals to safely move through car-centered streets - gestures, making eye contact, body language." (c94102)
"I am a senior bike rider and I rely on making eye contact with drivers for safety, which can't be done with these cars. It is frightening to come to an intersection with one of these and not know if the "algorhythm" has been tweaked or how it will respond." (c94121)
"Even crossing the street in front of an autonomous vehicle is a challenging proposition. There is no one to make eye contact with to give the pedestrian assurance that they are seen and it is safe to cross the street." (c94110)
"As a pedestrian, intersections are terribly dangerous. The only way I know if I'm relatively safe to cross is if I can make eye contact with the driver at the intersection to know that they acknowledge my right-of-way to cross. Driverless vehicles CAN'T make eye contact and therefore make it impossible for me to cross safely. How will I ever know if it is safe to cross? What are the programming contingencies for these deadly machines? What are the actual statistics for how they behave in what circumstances?" (c94110)
"I was so made, and there was no one to make eye contact with, which is one of the creepiest things." (c94115)
"You can't make eye contact with a robot car to make sure they see you before you cross the street in a crosswalk at a stop sign or stoplight. I don't want to play chicken with a robot car not knowing whether they'll see me in the street." (c94110)
### 3.3 Traffic implications
#### 3.3.1 Road capacity, congestion, traffic flow
A negative impact on road capacity, congestion, and traffic flow was mentioned due to the expected larger number of vehicles on the road behaving in an unexpected and erratic way, e.g., stopping unexpectedly, impairing merging or splitting situations, or blocking other road users, such as emergency vehicles.
"I am strongly against "self-driving" cars. I've witnessed these cars stop in the middle lane of Hyde Street on a green light as ambulances are trying to pass." (c94109)
"It is DEEPLY troubling and concerning that driverless cars have been involved in these and numerous other incidents, including getting in the path of first responders at numerous scenes of crisis, including a mass shooting." (c94110)
"The self-driving vehicles are worsening congestion. I saw one hit a car with an elderly, minimal English-speaking driver who had no recourse to follow up, and then it blocked a bus stop." (c94115)
"I've lost counts of the number of times I've seen them block intersections, bike lanes, buses, street cars... you name it!"_ (c94114)
_"I witnessed a robocar pull up behind a firetruck. The robocar stopped in the middle of the intersection, blocking northbound traffic. All the cars were honking, not realizing it was a robocar. Many of us gathered to watch. I clocked it - it took 10 minutes before the robocar backed up out of the intersection."_ (c94109)
_"Oppose; Cruise and Waymo AVs are causing havoc in our streets. Multiple times a day they stop randomly and block traffic. Often they interfere with both transit and emergency services."_ (c94117)
_"These vehicles roam around in a pack and make it hard to get away from or merge into. It was fun to see them once in a while at first, but I am now strongly against the expansion of the program due to the sheer annoyance and frustration that they are jamming up our streets."_ (c94118)
_"I am vehemently opposed to any expansion of AVs on San Francisco streets. More than once, I have been on a bus carrying 20\(+\) people to their destinations that could not move forward because an AV carrying no one was immobile in front of it for no apparent reason."_ (c94134)
_"They have greatly increased congestion already. When they encounter an un-programmed situation, they simply stop, blocking everyone behind them."_ (c94122)
_"I witnessed one of these driverless vehicles. It blocked the 44 Muni bus which was unable to go around the car. The Muni driver opened the doors to let off passengers, including elderly and disabled folks, because the bus clearly was not going anywhere anytime soon. After waiting 20 minutes with no resolution in sight, we left with the stopped vehicle still in the middle of the road and the 44 stuck behind."_ (c94118)
_"I have personally witnessed these vehicles disrupt the flow of traffic for no apparent reason on more occasions than I can recall. Most commonly, I have seen Cruise vehicles stopped in the middle of the street or in front of green lights, despite there being no other vehicles, objects, or pedestrians in their path."_ (c94121)
### 3.4 Travel choices implications
#### 3.4.1 Public transport, walking, cycling use
The negative impact of the operation of AVs on the use of public transport, and sustainable travel modes was mentioned due to traveler's increased use of AVs, and an expected redirection of investments from public transit to AVs. Respondents expected a negative impact on the affordability of public transport, threatening the transition towards more sustainable mobility with inclusive, multimodal public transportation options.
_"I want to see less money invested in self driving cars and more towards public transit, making transit accessible affordable and even free. We have an active bus metro and subway service in the city that needs much more support and attention to make accessible and safer for residents to use."_ (c94112)
_"I strongly urge you to consider very seriously why we are allowing these AVs to take over our streets instead of investing in TRULY sustainable modes of transportation, like bicycling, walking, and accessible, affordable public transit."_ (c94109)
_"The money being spent on these vehicles must be redirected toward public transit and active transit infrastructure that San Francisco badly needs. Buses are packed, bike lanes are a mess, yet we are here talking about driverless drains on our resources."_ (c94112)
_"I am opposed to this. This forces us to build cities car sized instead of people sized, making it inherently unwalkable."_ (c94122)
_"They prevent development of SF into a city with multi-modal transportations favoring all people."_ (c94115)
_"We don't want AV cars on our streets. We want less cars, more sane bike lanes, and more public transportation."_ (c94110)
_"We have concerns about the affordability and that there have been no requirements set that AV fared passenger service would be affordable. People in our community are often the most low-income and rely on public transit. We have concerns that AVs will erode the public transportation system and we will see bus and subway service reduced in areas that AVs are servicing."_ (c94122)
_"San Francisco needs to focus on increasing pedestrian spaces, and on improving public transit. Keep self-driving cars OUT of San Francisco. Car centric eco structure kills. We need more road closures (like JFK), more bike/skate paths, and better busses."_ (cD09C9)
### 3.5 Energy consumption and air pollution
#### 3.5.1 Fuel, energy, emissions, pollution
This sub-theme covered the negative impact of AVs on energy consumption and air pollution, with respondents reporting incidents of AVs running unoccupied, contributing to an increase in single vehicle miles travelled (VMT), noise, emissions, pollution, and car dependency, impeding the realization of Vision Zero and the transition towards more sustainable mobility.
_"Most self-driving cars I see are empty. This is just releasing more emissions into our already damaged environment."_ (c94109)
_"The last thing we need is more cars of any kind. We are in a climate emergency. We should be reducing vehicle miles traveled (even of electric vehicles)."_ (c94131)
_"They are incredibly wasteful. Climate change is wreaking havoc. Invest in real infrastructure that we already know works."_ (c94103)
"No more driverless cars, please. Our world, our city, needs to move away from cars in order to have a livable planet for future generations. I fail to see how expanding the use of driverless cars driving around San Francisco completely empty is a step toward a sustainable future."_ (c94112)
"AVs do not actually solve any problem related to the climate crisis - they just exacerbate the problems by adding more VMT to San Francisco, at a time when the climate is warming seemingly exponentially. AVs are not the answer." (c94114)
"I have seen these cars sitting idly blocking traffic, creating more pollution for everyone. Most of the time I see these cars COMPETELY EMPTY. Simply taking up space in a congested city, no proving service to anyone at all. I love city life for the people, not empty trash boxes taking up space." (c94110)
"Finally, we are in a climate crisis (remember wild fires?). We should been encouraging the increased use of public transportation, biking, and walking instead of flooding the streets with more cars." (c94110)
### Social equity
#### 3.6.1 Vehicle design
This sub-theme addressed the lack of vehicle accessibility, contributing to the social exclusion of vulnerable populations with disabilities. Accessibility pertained to ordering the vehicle via smartphone app, entering the vehicle, getting buckled on, exiting the vehicle, and assisting passengers with carrying items to the apartment.
"I've heard that autonomous vehicles pick up and drop off passengers in the middle of the street, not at the curb. Making blind passengers go out into traffic to find an autonomous vehicle is extremely dangerous. I need assurances that the AV companies have made their smartphone apps fully accessible for blind passengers and that a reliable method for blind passengers to locate vehicles has been proven to be safe and effective." (c94577)
"They are not even accessible. They never pull up fully to the curb nor do they have any drivers to help folks get in the car."_ (c94117)
"As a senior, I could not use one because there would no one to assist me into and out of the car and ensure that I was safely buckled in." (c94118)
"ZERO ACCESS TO PERONS WITH DISABILITIES. They completely violate federal law, specifically the Americans with Disabilities Act. They have been designed from the ground up to exclude people with disabilities." (c94114)
#### 3.6.2 Public engagement
Insufficient local engagement in the decision of trialing and operating AVs on public roads was reported, manifesting a perceived sense of injustice and social inequality between locals of SF and representatives of the tech companies, and unequal distribution of benefits and risks.
"These vehicles are being foisted on the people of SF without their consent for the purpose of facilitating a small number of companies in their quest to establish fleets of vehicles under their complete control to profit from. I hope the CPUC will enact more local participation and democratic engagement." (c94131)
"When giant corporations get to use San Francisco as a laboratory (without our consent) we effectively become their lab rats. Further expanding the scope of these programs will NOT help the city." (c94122)
"These companies are using us all as guinea pigs to test their highly experimental software. It's deeply unacceptable that these cars were ever allowed onto our streets without our permission." (c94110)
"We did not consent to this. We are fed up with being experimented on by corporations." (c94117)
"They are yet another technological pipe dream that will make few companies rich while using real life San Franciscans as guinea pigs."_ (c94115)
"I do not appreciate having my life and limb at risk as an unwilling participant in their beta testing on our public streets." (c94117)
"AV companies are putting San Franciscans at risk for private profit. It's time to prioritize public safety (and public transit!)." (c94134)
"Should they ever get it to a safe and useable state (something I am deeply skeptical is even possible), it will be thanks to the thousands of San Franciscans who were literally put into harm's way to do so. Yet those San Franciscans will receive zero compensation - those profits will stay with the car companies." (c94114)
"Stop these vehicles, it's a gimmick, a dizzyland ride to make a few rich, does it benefit society, no!" (c94110)
### Governance
#### 3.7.1 Liability
This sub-theme addressed the lack of liability of the AV companies in terms of holding their vehicles accountable in case of traffic violations or accidents caused by the AVs.
"Who will bear the responsibility, when a driverless vehicle breaks the law or causes injury or death to someone in our city?" (c94102)
"A driverless car will just hit me and keep going and there's nothing anyone can do, including holding the company accountable." (c94103)
"If there is an incident on the road, who do you exchange phone numbers with? Where is the accountability?" (c94118)
"The companies are not acting in good faith because you are not holding them to account. They cannot be cited." (c94118)
"Robotaxis are effectively above the law. Their fleets cannot be cited for traffic violations."_ (c94110)
"My first gripe with Cruise / Waymo specifically is that they have no accountability to any city or county of San Francisco authority." (c94107)
"There is no punishment for an AV killing someone. An innocent civilian death is just accepted with no real consequences. At least a drunk driver will be charged with a crime. When someone has an accident, there are consequences. DL suspension, fines, jail, just dying in the crash. None of those exist for an AV."_ (c95073)
"In car-on-person accidents, there is more at stake than just financial liability; there is also the possibility of criminal charges. When driverless cars commit a crime, who is charged with the crime?" (c94112)
#### 3.7.2 Transparency
Insufficient transparency was mentioned with regards to AV companies sharing performance-oriented AV data, with respondents calling for an independent assessment of the impacts of AVs on transport and society.
"I demand that AV companies share unredacted incident data with the public and city agencies. The lack of transparency is alarming. By withholding incident data, AV companies are evading responsibility, forcing the public and city agencies to rely on social media posts to understand the extent of the problems they cause. We need a robust and independent reporting system to address these concerns effectively." (c94110)
"As they refuse to share incident data, the public, as well as city agencies, must rely on social media posts to determine the extent of the problems they cause. A robust and independent reporting system must be put in place." (c94110)
"If these cars are to operate on our streets, they should have to share all safety data with the city, like SFMTA does for our bus fleet."_ (c94110)
"While Cruise may claim to have a clean safety record in terms of fatalities, their reporting provides zero data on the frequency of glitches/adverse events that disrupt the flow of. Until this data is available and addressed, these companies should not be permitted to operate vehicles without a backup driver." (c94121)
"I live in the Richmond district and recently there has been a huge uptick in AV with zero insight into the legal and safety ramifications of having these cars on the street." (c94118)
### Economy
#### 3.8.1 Unemployment
This sub-theme addressed the expected unemployment among professional drivers due to their replacement by AVs, and associated fear due to insufficient governmental support assisting drivers in this transition.
"Cruise and Waymo are a disaster for workers and safety. Please stop them. This is a labor and safety issue that we all need to be concerned about as AI makes more and more of us dispensable." (c94102)
"Moving humans from behind the wheel of a car to even more invisible positions in call centers & support cars makes them even easier to exploit and quashes unionization efforts. A just transition away from cars and car dominance means taking care of workers who are most affected by it, not just relegating them to more exploitable roles." (c94122)
"Not only that, but they are taking away jobs! This is a major concern. Our government has no plans to help all the people who are about to be displaced by technology. These are a horrible idea and should be eliminated." (c94110)
"This expansion that will remove more jobs is yet another nail in the coffin of this once great city. Whose job will be replaced next? Quoting from a line in an old union song, "Which side are you on? Which side are you on? People or Machines? Human or Artificial Intelligence? Working People or Corporations? Human Autonomy or The Surveillance State?" (c94110) "The crucial service of taxi drivers has been severely cut (with people losing their livelihoods) because of Uber and Lyft, and now, Waymo and Cruise is heading down that same destructive path." (c94110) "These driverless vehicles threaten the livelihoods of taxi drivers, many of whom have not recovered from the pandemic and the explosion of unregulated Uber and Lyft vehicles in SF." (c94102) "How about replacing musicians, sportsmen, classroom teachers, restaurant waiters, court judges and all other human roles with robots? Do you want to support this vision?" (c94597)
### Society
#### 3.9.1 Data privacy
Data privacy concerns were mentioned, with the AV sensors (i.e., laser, radar, camera, lidar) constantly capturing data from road users around AVs, including video and audio data, endangering the civil rights of, e.g., people seeking abortions.
"These tech companies have purchased the ability to collect information uninhibited and surveil the people of SF. Automation can be a useful tool to help the working class when regulated appropriately, but the AVs using San Francisco's streets for beta testing are an overreach by wealthy technocrats that see our privacy & data as an unexploited resource for their taking. Knock it off." (c94131)
"_The dystopian potential these machines present is frightening. Gone are the days when you could walk through San Francisco without being constantly scanned and analyzed by autonomous camera turrets."_ (c94122)
_"It makes me very uneasy to have dozens of driverless vehicles sent out by tech companies equipped with cameras recording and using images for who knows what. There is no way to escape them if one does not wish to consent to their image being recorded and exploited."_ (c94112)
_"These vehicles are constantly capturing video and audio data of their surroundings, which puts us under constant surveillance. Cruise and Waymo have thus installed a surveillance state. Get these surveillance machines off our streets."_ (c94117)
_"These vehicles are equipped with cameras that are used by Cruise and Waymo for both their own internal purposes and the data are shared with law enforcement. We do not need armies of privately owned security cameras patrolling the streets of San Francisco."_ (c94103)
_"Surveillance: This unprecedented invasion of the public's privacy will likely have far-reaching effects on the rights of the general public. The Sacramento police department has forwarded surveillance data to states which could prosecute those seeking an abortion. A city-wide, moving network observing and analyzing everything that happens outdoors is something out of a dystopian movie, not a democratic society."_ (c94110)
#### 3.9.2 Threat to humanity
Respondents expected a negative impact of AVs on humanity, with AVs, as an embodiment of artificial intelligence (AI), representing a threat to humanity in general.
_"At a time when many prominent scientists are raising a red flag regarding the dangers of AI to humanity, why would you promote these Autonomous vehicles? These vehicles are anti-human and should be banned. Do you want
to live in Robot City? For the sake of humanity please do not approve the expansion of Waymo and Cruise operations. Thank you."_ (c94110)
_"As robot taxis they cannot help travelers lift luggage, shoppers lift groceries, disabled folks lift walkers or wheel-chairs into vehicles."_ (c94122)
_"As a nurse, I'm painfully aware of the need of people with disabilities to have assistance to climb steps, to have help with packages, and to have help when someone is ill and needs medical care."_ (c94110)
_"I do not see how these autonomous vehicles, without drivers, are an improvement for the elderly and disabled over regular cabs -- operated by friendly, helpful cab drivers who can help people with doors, luggage, and even carrying groceries upstairs and into homes."_ (c94110)
_"We humans need to preserve and foster social interaction, for our balance and sanity. The use of robot taxis is yet another step towards a lifestyle that goes against natural human instincts and, as such, is an attack on the human spirit. Our leaders must defend the human spirit. The world needs more humanism, not less."_ (c94597)
### Synthesis: Conceptual framework
The results of the analysis are synthesized in a conceptual framework (Figure 2). The framework posits that the occurrence of resistance is a direct result of the perception of threats, which is a function of individual and system characteristics, direct and indirect consequences of system use, reactions of others, and external events. Individual characteristics include, but are not limited to, the individual's age, disability, vulnerability, and knowledge. System characteristics include, but are not limited to, lack of explicit communication, vehicle unpredictability, and vehicle design. Direct consequences of use include, but are not limited to, the experience as a passenger and as a road user encountering AVs in conflict situations. Reactions of others include, but are not limited to, word-of-mouth, public engagement by decision-makers and other stakeholders, and governance and regulation. External events include, but are not limited to, public engagement by decision-makers and other stakeholders, and governance and regulation.
## 4 Discussion
This study reports the results of the analysis of public comments submitted to the CPUC on the fared deployment of AVs in SF. The data analysis resulted in the extraction of main themes representing the perception of threats associated with the operation of AVs. A conceptual framework synthesizing the results of the analysis is proposed, which explains and predicts the occurrence of resistance towards AVs. The framework proposes that the occurrence of resistance is the direct result of the perception of threats, which are determined by individual and system characteristics, direct and indirect consequences of system use, reactions of others, and external events. The literature on end-user acceptance of AVs has emphasized the relevance of utility-based domain-specific and emotional symbolic-affective factors, and a limited role for moral-normative factors (such as perceived risks) (Nordhoff et al., 2019). In contrast, our study has shown that bystander or local community acceptance may be influenced to a large extent by the perceived risks or threats associated with the operation of AVs. Our analysis has also revealed that direct experience with AVs as passengers might alleviate the perceived threats, which is in line with research studies documenting a positive effect of experience or familiarity on attitudes and acceptance (Pennetsa, Adanu, Wood, Wang, & Jones, 2019; Xing, Zhou, Han, Zhang, & Lu, 2022).
The indirect consequences of system use capture the longer-term impacts of AVs on safety, traffic, travel choices, energy consumption and pollution, social equity, the economy and society, with respondents reporting a perceived negative effect of AVs on these aspects. Studies have shown that vehicle automation is expected to have a positive impact on safety, travel time, highway and intersection capacity, fuel efficiency, and emissions, and a negative effect on vehicle miles travelled, while the effects on energy consumption, social equity, the economy, and society are largely unknown (Milakis, van Arem, & van Wee, 2015). Particularly, the fear of unemployment is a common and legitimate concern raised by the public, which should be addressed by implementing processes, procedures, cultures, and values ensuring ethical behavior in supporting workers in the transition (Winfield & Jirotka, 2018).

Figure 2: Conceptual model explaining and predicting resistance
Most of the studies assessing the safety impacts of AVs were conducted in simulated rather than real traffic environments using vehicle data. Therefore, the safety implications of AVs remain unclear (Tafidis, Farah, Brijs, & Pirdavani, 2022). Data from naturalistic driving studies with and without a trained operator behind the steering wheel of an AV has shown that contact events between AVs and other road users did not result in severe or life-threatening injuries, with the AVs being more capable of avoiding collisions compared to human drivers, and that collisions resulted from the interactions with human drivers (Schwall, Daniel, Victor, Favaro, & Hohnhold, 2020). Our study has shown that AVs were perceived as a threat to public safety given their unpredictable, erratic, and unexpected behavior, violating traffic rules, and causing conflict situations with other road users. Conflict situations included AVs causing accidents, eliciting evasive maneuvers by road users, or creating an unsafe proximity with road users (Kusano et al., 2023). The discussion of what constitutes acceptable safety as embodied by questions, such as 'How safe is safe enough?', 'Safe enough for what?', or 'Safe enough for whom?' is ongoing (Cohen et al., 2020; Liu, Yang, & Xu, 2019; Shariff et al., 2021; Stilgoe, 2021), and will perhaps not be closed anytime soon. The expected positive safety benefits are more likely to be achieved with an increase in the level of automation, cooperation, and penetration rate (Milakis et al., 2015). It has been shown that the public may have unrealistic expectations about the safety of automated vehicles, expecting higher levels of safety from an automated than from a human-driven vehicle (Shariff et al., 2021).
This research has also reported a perceived negative impact of the operation of AVs on road capacity, congestion, and traffic flow, with AVs blocking other traffic, such as emergency vehicles. It has been shown that 82% of first responders did not receive AV-related safety training, and 41% of respondents had little knowledge about AVs, and 44% did not trust AVs (Liu et al., 2023). Particularly the interaction between AVs and emergency vehicles may represent a highly emotional topic for the public, which can have serious life-and-death implications for drivers of the emergency vehicles, other traffic, or people losing a person due to AVs impairing the efficient operation of first responders. Ensuring the safe and efficient interaction between emergency vehicles and AVs could thus be a main contributing factor to promote public acceptance or reduce resistance. External communication devices supporting explicit communication between AVs and external road users were not designed from the perspective of first responders. Similar to treating cyclists as distinct user group with unique communication needs (Berge, Hagenzieker, Farah, & de Winter, 2022), it can be argued that first responders should be treated as specific road user group requiring tailored AV
communication. We recommend future research to examine to what extent external Human Machine Interfaces (eHMIs) as explicit communication can support first responders in their interaction with AVs, and determine the specific design characteristics positively impacting safety, efficiency, and acceptance.
This research has also shown that data privacy concerns and its wider societal implications explained resistance, with AVs constantly capturing audio and video data of road users. The data privacy concerns are legitimate, encompassing the collection of demographic information (e.g., driver's license, real-time location, travel behavior), and non-verbal communication (e.g., body movements) (Khan, Shiwakoti, Stasinopoulos, & Warren, 2023). It was revealed that data privacy concerns were a delimiting factor for the acceptance and use of AVs (Zmud, Sener, & Wagner, 2016). Bloom et al. (2017) revealed that respondents' discomfort was highest for the most privacy invasive scenarios involving AVs (vehicle tracking), and lowest for the least privacy invasive scenarios (image capture). To alleviate concerns, locals could be educated about the potential benefits of the large-scale data collection and analysis (e.g., finding of Silver Alert citizens) (Bloom et al., 2017), and be given the possibility to 'opt out' of the analysis of their data. Respondents also mentioned the lack of democratic participation and engagement in providing consent for trialing AVs on public roads, manifesting a perceived sense of injustice and social inequality between citizens and representatives of the tech companies with an unequal distribution of benefits and risks. Future research should examine to what extent engagement and participation (e.g., war gaming methodology, citizen juries) can alleviate concerns of the locals, and promote understanding and knowledge through negotiation and compromise (Birhane et al., 2022; Fraade-Blanar & Weast, 2023).
Insufficient liability and transparency of the AV companies, in terms of holding their vehicles to account and publicly sharing data, were mentioned as other factors underlying resistance. Clarifying the responsibilities and roles of stakeholders involved in the deployment of AVs, especially in the case of accidents, having legislation in place, and promoting transparency about the data collection by AVs and incidents involving AVs could mitigate the occurrence of resistance. It remains to be seen to what extent the resistance towards AVs changes when AVs cater for real-world needs and desires, including locals in the development, design, and deployment of these vehicles. For example, the current lack of accessibility to these AVs by vulnerable populations could be directly addressed by the AV companies.
### Limitations and implications for future research
First, the data represents the subjective perceptions of respondents. Future research should perform longer-term ethnographic studies with bystanders and local communities in areas with AVs to learn about their lived experiences and interactions, and differences in resistance across situations, and between individuals
and groups, making observations, and collecting self-reported data from interviews, focus groups, and surveys, and behavioral and physiological data collected from sensors deployed on respondents.
Second, the comments that were subjected to the present analysis were publicly available, meaning that later comments may have been influenced by previous comments. Future research is needed to assess the extent to which the themes identified in this study can be generalized to other 'bystander groups' and local communities.
Third, the calculation of the occurrence of the sub-theme was based on the co-occurrence of at least two seed terms per sentence, which explains the low number of sub-theme occurrences in some instances. Future research should exploit this technique, including the wider semantic context in neighboring sentences.
Fourth, socio-demographic information of the respondents posting the comments were missing. More research is needed to understand the socio-demographic and -economic profile of people resisting AVs.
|
2302.14590 | Status of the lifetimes of heavy hadrons within the HQE | We present the theoretical status of the lifetimes of weakly decaying heavy
hadrons containing a bottom or a charm quark, and discuss the current
predictions, based on the framework of the Heavy Quark Expansion (HQE), for
both mesons and baryons. Potential improvements to reduce the theoretical
uncertainties are also highlighted. | Maria Laura Piscopo | 2023-02-28T14:21:07Z | http://arxiv.org/abs/2302.14590v1 | # Status of the lifetimes of heavy hadrons within the HQE
###### Abstract:
We present the theoretical status of the lifetimes of weakly decaying heavy hadrons containing a bottom or a charm quark, and discuss the current predictions, based on the framework of the Heavy Quark Expansion (HQE), for both mesons and baryons. Potential improvements to reduce the theoretical uncertainties are also highlighted.
## 1 Introduction
The total lifetime \(\tau\), or the total decay width \(\Gamma=\tau^{-1}\), is one of the fundamental properties of particles. In the study of lifetimes, a special role is occupied by heavy hadrons: QCD bound states containing a heavy quark \(Q\) with \(m_{Q}\gg\Lambda_{\rm QCD}\), where \(\Lambda_{\rm QCD}\) denotes a typical hadronic non-perturbative scale of the order of few hundreds MeV. As their weak decays involve the interplay between the weak and the strong interactions over the wide range of scales \(m_{W}\gg m_{Q}\gg\Lambda_{\rm QCD}\), heavy hadrons are interesting systems to test the Standard Model (SM) and also, given enough precision in both theory and experiments, perform indirect searches of new physics (NP).
Lifetime measurements are by now very precise [1, 2]. On the theoretical side, the Heavy Quark Expansion (HQE) [3]1 provides a well-established framework to compute inclusive decay widths of heavy hadrons in terms of a systematic expansion in inverse powers of \(m_{Q}\). As the reliability of the HQE strongly depends on the assumption that \(Q\) is heavy, bottom hadrons are clearly the most suited systems to be described within this framework. The charm quark mass, on the other hand, lies at the boundary between the heavy and the light quark regime, and if charmed systems pose notoriously more challenges to precise theoretical studies, they also give an opportunity to test even further the applicability of the HQE.
Footnote 1: See e.g. the review [4] for a more comprehensive list of references.
## 2 The HQE in the bottom sector
The total decay width of a bottom hadron \(H_{b}\) is given by
\[\Gamma(H_{b})=\frac{1}{2m_{H_{b}}}\sum_{n}\int_{\rm PS}(2\pi)^{4}\delta^{(4)} \big{(}p_{n}-p_{H_{b}}\big{)}|\langle n|{\cal H}_{\rm eff}|H_{b}\rangle|^{2}\,, \tag{1}\]
where the sum runs over all possible final states into which \(H_{b}\) can decay, PS is the corresponding phase space, and \({\cal H}_{\rm eff}\) the effective Hamiltonian [5]. Using the optical theorem, eq. (1) becomes
\[\Gamma(H_{b})=\frac{1}{2m_{H_{b}}}{\rm Im}\langle H_{b}|{\cal T}|H_{b}\rangle \,,\quad{\rm with}\quad{\cal T}=i\!\int\!d^{4}x\,\,{\rm T}\{{\cal H}_{\rm eff} (x),{\cal H}_{\rm eff}(0)\}\,. \tag{2}\]
The transition operator \({\cal T}\) contains the time-ordered product of \({\cal H}_{\rm eff}\) and is thus a non-local operator; its expression however, simplifies in the limit of a heavy decaying quark. In fact, by exploiting the hierarchy \(m_{b}\gg\Lambda_{\rm QCD}\), the \(b\)-quark momentum and field can be respectively parametrised as
\[p_{b}^{\mu}=m_{b}v^{\mu}+k^{\mu}\,,\qquad b(x)=e^{-im_{b}v\cdot x}b_{v}(x)\,, \tag{3}\]
where \(v=p_{H_{b}}/m_{H_{b}}\) is the hadron velocity, \(k\ll m_{b}\) a residual momentum of the order of \(\Lambda_{\rm QCD}\), and we have introduced the rescaled field \(b_{v}(x)\) containing only low oscillation frequencies of the order of \(k\). Taking into account eq. (3), the time ordered product in eq. (2) can be systematically expanded into a series of local operators \({\cal O}_{d}\) of increasing dimension \(d\) and suppressed by \(d-3\) powers of the heavy quark mass. In this way, the total decay width takes the following form
\[\Gamma\big{(}H_{b}\big{)}=\Gamma_{3}+\Gamma_{5}\frac{\langle{\cal O}_{5}\rangle }{m_{b}^{2}}+\Gamma_{6}\frac{\langle{\cal O}_{6}\rangle}{m_{b}^{3}}+\ldots+16 \pi^{2}\Bigg{[}\bar{\Gamma}_{6}\frac{\langle\tilde{\cal O}_{6}\rangle}{m_{b} ^{3}}+\bar{\Gamma}_{7}\frac{\langle\tilde{\cal O}_{7}\rangle}{m_{b}^{4}}+ \ldots\Bigg{]}\,. \tag{4}\]
The short-distance functions \(\Gamma_{d}\) can be computed in terms of a perturbative expansion in \(\alpha_{s}\big{(}m_{b}\big{)}\)
\[\Gamma_{d}=\Gamma_{d}^{(0)}+\left(\frac{\alpha_{s}}{4\pi}\right)\Gamma_{d}^{(1) }+\left(\frac{\alpha_{s}}{4\pi}\right)^{2}\Gamma_{d}^{(2)}+\ldots\,, \tag{5}\]
whereas \(\langle\mathcal{O}_{d}\rangle\equiv\langle H_{b}|\mathcal{O}_{d}|H_{b}\rangle /(2m_{H_{b}})\) are matrix elements parametrising the non-perturbative effects. Note that in eq. (4) both two- and four-quark operators contribute. While at LO-QCD the former are obtained from the discontinuity of two-loop diagrams, the latter already arise at one-loop at the same order in \(\alpha_{s}\), hence the explicit phase space factor \(16\pi^{2}\) in front of the four-quark operators contributions, here labeled by a tilde. Lifetime ratios of bottom hadrons can then be computed as
\[\frac{\tau\big{(}H_{b}\big{)}}{\tau\big{(}H_{b}^{\prime}\big{)}}=1+\left[\Gamma \big{(}H_{b}^{\prime}\big{)}^{\rm HQE}-\Gamma\big{(}H_{b}\big{)}^{\rm HQE} \right]\tau\big{(}H_{b}\big{)}^{\rm exp.}\,, \tag{6}\]
where the experimental value of \(\tau(H_{b})\) is used to cancel out the dependence on \(\Gamma_{3}\), albeit this is not necessary and eq. (6) can be also computed entirely within the HQE with only slightly larger uncertainties, see e.g. the discussion in [27]. It is worth emphasising that while two-quark operators present short-distance coefficients which are universal for a given heavy quark, so that any differences in lifetimes of bottom hadrons would only be induced by the value of their matrix elements, the Wilson coefficients of the four-quark operators also depend on the spectator quark in the specific hadron considered and this can in general lead to larger effects in lifetime ratios.
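To make the practical use of eq. (6) concrete, the following Python sketch evaluates \(\tau(B^{+})/\tau(B_{d})\) from a difference of HQE widths and the measured \(B^{+}\) lifetime. The width values below are placeholders chosen only to illustrate the mechanics of the formula; they are not the inputs or results of [27].

```python
# Illustrative use of eq. (6):
#   tau(H_b)/tau(H_b') = 1 + [Gamma(H_b')_HQE - Gamma(H_b)_HQE] * tau(H_b)_exp
# with H_b = B^+ and H_b' = B_d. The HQE widths below are placeholders.
hbar_GeV_s = 6.582e-25                 # hbar in GeV*s, converts widths in GeV into 1/s

Gamma_Bd_HQE = 4.2e-13 / hbar_GeV_s    # hypothetical HQE width of B_d (1/s)
Gamma_Bp_HQE = 3.9e-13 / hbar_GeV_s    # hypothetical HQE width of B^+ (1/s)
tau_Bp_exp = 1.638e-12                 # approximate measured B^+ lifetime (s)

ratio = 1.0 + (Gamma_Bd_HQE - Gamma_Bp_HQE) * tau_Bp_exp
print(f"tau(B+)/tau(Bd) ~ {ratio:.3f}")
```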
The HQE is by now a very advanced framework. The theoretical status for both the perturbative and non-perturbative side is summarised respectively in tables 1 and 2.
### Lifetimes of bottom mesons
The current HQE predictions for the total widths of the bottom mesons \(B_{d},B^{+},B_{s}\), and their lifetime ratios, are shown in fig. 1 and table 3[27]. The two scenarios correspond to different choices of inputs, including the value of the \(b\)-quark mass and of the non-perturbative parameters associated to the matrix elements of two-quark operators, see for details [27]. While the total decay widths and the ratio \(\tau\big{(}B^{+}\big{)}/\tau\big{(}B_{d}\big{)}\) are only mildly sensitive to this choice, the results for \(\tau\big{(}B_{s}\big{)}/\tau\big{(}B_{d}\big{)}\) are strongly affected, and in one case, a small tension with the experimental value emerges. In
\begin{table}
\begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{Semileptonic modes (SL)} \\ \hline \(\Gamma_{3}^{(3)}\) & _Fael, Schönwald, Steinhauser [6]_ \\ _Czakon, Czarnecki, Dowling [7]_ \\ \hline \(\Gamma_{3}^{(1)}\) & _Alberti, Gambino, Nandi [8]_ \\ _Mannel, Pivovarov, Rosenthal [9]_ \\ \hline \(\Gamma_{6}^{(1)}\) & _Mannel, Moreno, Pirovarov [10]_ \\ \hline \(\Gamma_{7}^{(0)}\) & _Dassinger, Mannel, Turczyk [11]_ \\ \hline \(\Gamma_{8}^{(0)}\) & _Mannel, Turczyk, Uraltsev [12]_ \\ \hline \(\bar{\Gamma}_{6}^{(1)}\) & _Lenz, Rauth [13]_ \\ \hline \end{tabular} \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{Non-leptonic modes (NL)} \\ \hline \(\Gamma_{3}^{(2)}\) & _Czarnecki, Slusarczyk, Tachow [14]_\({}^{*}\) \\ \hline \(\Gamma_{3}^{(1)}\) & _Ho-Kim, Pham [15]; Altarelli, Petrarca [16]; Bagan et al. [17]_ \\ _Lenz, Nierste, Osternaire [18]; Krimer, Lenz, Rauth [19]_ \\ \hline \(\Gamma_{5}^{(0)}\) & _Big, Uraltsev, Vainshtein [20]; Blok, Shifman [21]_ \\ \hline \(\Gamma_{6}^{(0)}\) & _Lenz, Piscopo, Rusov [22]; Mannel, Moreno, Pirovarov [23]_ \\ \hline \(\bar{\Gamma}_{6}^{(1)}\) & _Beneke, Buchalla, Greub, Lenz, Nierste [24]_ \\ _Funco, Lubicz, Mescia, Tarantino [25]_ \\ \hline \end{tabular}
\begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{_Beneke, Buchalla, Greub, Lenz, Nierste [24]_} \\ _Funco, Lubicz, Mescia, Tarantino [25]_ \\ \hline \(\bar{\Gamma}_{7}^{(0)}\) & _Gabbiani, Onishchenko, Petrov [26]_ \\ \hline \end{tabular}
\end{table}
Table 1: Status of the perturbative corrections in the HQE. The work with \({}^{*}\) contains only partial results.
this respect, we point out that the two fits to experimental data on inclusive semileptonic \(B\)-meson decays [30, 31]2 find different values of the Darwin parameter \(\rho_{D}^{3}\), which plays a crucial role in the prediction of \(\tau\big{(}B_{s}\big{)}/\tau\big{(}B_{d}\big{)}\). Further insights on the origin of this discrepancy are clearly of utmost importance given the potential that this observable could have as an indirect probe of NP. Moreover, an investigation by lattice QCD of the small size of the bag-parameters of the octet-operators and of the so-called 'eye-contractions' as found in [36, 38], would also be very desirable.
Footnote 2: We stress that corresponding measurements for inclusive semileptonic \(B_{s}\)-decays are still missing.
Scale variation in the leading term \(\Gamma_{3}\), and the values of the non-perturbative parameters, including SU(3)\({}_{F}\) breaking effects, are the main sources of uncertainties for the total widths and the lifetime ratios, respectively. Overall, there is very good agreement between HQE and data, and the clean observable \(\tau\big{(}B^{+}\big{)}/\tau\big{(}B_{d}\big{)}\) could already be used to constrain certain NP operators [44].
### Lifetimes of bottom baryons
The most recent theoretical predictions for the total widths of the \(b\)-baryons \(\Lambda_{b}^{0},\Xi_{b}^{0},\Xi_{b}^{-},\Omega_{b}^{-}\), as well as their lifetime ratios, are displayed in fig. 2 and in table 4, see [45]. Along all the observables considered there is very good agreement between theory and experiments, while the dominant sources of uncertainties are again scale variation and the values of the non-perturbative parameters. In fact, despite the efforts in e.g. [39, 40], the four-quark dimension-six matrix-elements have not
\begin{table}
\begin{tabular}{|c||c|c||c|c|} \hline & \(B_{d},B^{+}\) & \(B_{s}\) & \(\Lambda_{b}^{0}\) & \(\Xi_{b}^{-},\Xi_{b}^{0},\Omega_{b}^{-}\) \\ \hline \(\langle\mathcal{O}_{5}\rangle\) & _Fits to SL data_[28, 29, 30, 31] & & & \\ _HQET sum rules_[32, 33] & _Spectroscopy_[37] & _Spectroscopy_[20] & _Spectroscopy_[43] \\ & _Lattice QCD_[34, 35] & & & \\ \hline \(\langle\mathcal{O}_{6}\rangle\) & _Fits to SL data_[28, 29, 30, 31] & _Sum rules estimates_[37] & _EOM relation to_\(\langle\tilde{\mathcal{O}}_{6}\rangle\) & _EOM relation to_\(\langle\tilde{\mathcal{O}}_{6}\rangle\) \\ & _EOM relation to_\(\langle\tilde{\mathcal{O}}_{6}\rangle\) & _EOM relation to_\(\langle\tilde{\mathcal{O}}_{6}\rangle\) & & \\ \hline \(\langle\tilde{\mathcal{O}}_{6}\rangle\) & _HQET sum rules_[36] & _HQET sum rules_[38] & _HQET SR_[39]; NRCQM + spectroscopy_[42, 41] & _NRCQM + spectroscopy_[42, 41] \\ \hline \(\langle\tilde{\mathcal{O}}_{7}\rangle\) & _Vacuum insertion approximation_ & ————— & \\ \hline \end{tabular}
\end{table}
Table 2: Status of the available determinations of the non-perturbative matrix elements in the HQE for the bottom sector. The following abbreviations are used: heavy quark effective theory (HQET), sum rules (SR), equation of motion (EOM), non-relativistic constituent quark model (NRCQM).
\begin{table}
\begin{tabular}{|c||c|c||c|} \hline & HQE Scenario A & HQE Scenario B & Exp. value \\ \hline \hline \(\tau\big{(}B^{+}\big{)}/\tau\big{(}B_{d}\big{)}\) & \(1.0855^{+0.0232}_{-0.0219}\) & \(1.0851^{+0.0230}_{-0.0217}\) & \(1.076\pm 0.004\) \\ \hline \(\tau\big{(}B_{s}\big{)}/\tau\big{(}B_{d}\big{)}\) & \(1.0279^{+0.0113}_{-0.0113}\) & \(1.0032^{+0.0063}_{-0.0063}\) & \(0.998\pm 0.005\) \\ \hline \end{tabular}
\end{table}
Table 3: HQE predictions of the lifetime ratios for two different choices of theory inputs, see [27].
yet been computed for all baryons either within lattice QCD or QCD sum rules, and currently only spectroscopy relations based on simplified models of QCD, like the NRCQM, can be consistently used. First principle calculations would be of great importance in order to reduce the overall uncertainties and obtain an independent determination of these non-perturbative parameters.
## 3 The HQE at its limit: lifetimes of charmed hadrons
While the applicability of the HQE appears to be well established in the bottom sector, it is questionable whether it also extends to the charm system, given that \(m_{c}\sim 1\) GeV. In this case the two expansion parameters \(\alpha_{s}(m_{c})\) and \(\Lambda_{\rm QCD}/m_{c}\) become larger, making both the perturbative and the power-correction series a priori less reliable. Furthermore, many of the non-perturbative parameters of the HQE for charmed hadrons are still poorly known and in most of the cases only rough estimates based on symmetry arguments with respect to the bottom sector are available.
It is then interesting to confront the HQE framework with the experimental data for the charm system. The current theoretical predictions for the total widths of the charmed mesons \(D^{0},D^{+},D_{s}^{+}\), their lifetime ratios, as well as the semileptonic branching fractions, are shown in fig. 3 and
\begin{table}
\begin{tabular}{|c||c|c|c|} \hline & \(\tau(\Lambda_{b}^{0})/\tau(B_{d})\) & \(\tau(\Xi_{b}^{0})/\tau(\Xi_{b}^{-})\) & \(\tau(\Omega_{b}^{-})/\tau(B_{d})\) \\ \hline \hline HQE & \(0.955\pm 0.014\) & \(0.929\pm 0.028\) & \(1.081\pm 0.042\) \\ \hline \hline Exp. & \(0.969\pm 0.006\) & \(0.929\pm 0.028\) & \(1.080^{+0.118}_{-0.112}\) \\ \hline \end{tabular}
\end{table}
Table 4: Comparison of data and HQE predictions for selected lifetime ratios, based on [45].
Figure 1: Comparison of data and HQE predictions for two different choices of theory inputs, based on [27].
Figure 2: Comparison of HQE predictions and experimental data for \(b\)-baryons [45].
table 5[46]. Although with very large uncertainties, the HQE succeeds in correctly describing the observed patterns, and no indication of a possible breakdown of the framework seems to emerge. These results have been confirmed in the recent study [43], which included as well predictions for the lifetimes of singly charmed baryons. Also in this case, it is found that the HQE can consistently accommodate the experimental data, albeit again within large uncertainties.
## 4 Conclusions
We have presented the status of the lifetimes of heavy hadrons containing a bottom or charm quark within the HQE 3. Along all the observables considered the agreement with the experimental data is good, confirming the HQE as a powerful framework for inclusive heavy hadron decays, contrary to some conclusions in [49]. Moreover, the precision already achieved for several clean quantities in the bottom system is high and comparable with the experimental one. Further improvements in the computation of higher order corrections, namely \(\Gamma_{3,\text{NL}}^{(2)}\), \(\Gamma_{7}^{(0)}\), \(\bar{\Gamma}_{6}^{(2)}\), as well as in the determination of the non-perturbative parameters, will be crucial in order to reduce the theoretical uncertainties, so that in future these observables might also serve as interesting probes of NP.
Footnote 3: We have not discussed hadrons containing two heavy quarks. For studies of the \(B_{c}\) lifetime see [47, 48].
## 5 Acknowledgments
I would like to thank the organisers of DISCRETE 2022 for the invitation, and Alexander Lenz and Aleksey Rusov for helpful comments on the manuscript. This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project number 500314741.
Figure 3: Comparison of HQE predictions and experimental data for charmed mesons, based on [46].
\begin{table}
\begin{tabular}{|c||c|c|} \hline & \(\tau(D^{+})/\tau(D^{0})\) & \(\bar{\tau}(D_{s}^{+})/\tau(D^{0})\) \\ \hline \hline HQE & \(2.80^{+0.86}_{-0.90}\) & \(1.00\pm 0.16\) \\ \hline \hline Exp. & \(2.54\pm 0.02\) & \(1.30\pm 0.01\) \\ \hline \end{tabular}
\end{table}
Table 5: Comparison of data and HQE predictions of charmed mesons lifetime ratios, based on [46]. Note that \(\bar{\tau}(D_{s}^{+})\) does not include the semileptonic mode \(D_{s}^{+}\to\tau^{+}\nu_{\tau}\), see for details [46]. |
2303.18083 | Analysis and Comparison of Two-Level KFAC Methods for Training Deep
Neural Networks | As a second-order method, the Natural Gradient Descent (NGD) has the ability
to accelerate training of neural networks. However, due to the prohibitive
computational and memory costs of computing and inverting the Fisher
Information Matrix (FIM), efficient approximations are necessary to make NGD
scalable to Deep Neural Networks (DNNs). Many such approximations have been
attempted. The most sophisticated of these is KFAC, which approximates the FIM
as a block-diagonal matrix, where each block corresponds to a layer of the
neural network. By doing so, KFAC ignores the interactions between different
layers. In this work, we investigate the interest of restoring some
low-frequency interactions between the layers by means of two-level methods.
Inspired from domain decomposition, several two-level corrections to KFAC using
different coarse spaces are proposed and assessed. The obtained results show
that incorporating the layer interactions in this fashion does not really
improve the performance of KFAC. This suggests that it is safe to discard the
off-diagonal blocks of the FIM, since the block-diagonal approach is
sufficiently robust, accurate and economical in computation time. | Abdoulaye Koroko, Ani Anciaux-Sedrakian, Ibtihel Ben Gharbia, Valérie Garès, Mounir Haddou, Quang Huy Tran | 2023-03-31T14:21:53Z | http://arxiv.org/abs/2303.18083v2 | # Analysis and Comparison of Two-Level KFAC Methods for Training Deep Neural Networks
###### Abstract
As a second-order method, the Natural Gradient Descent (NGD) has the ability to accelerate training of neural networks. However, due to the prohibitive computational and memory costs of computing and inverting the Fisher Information Matrix (FIM), efficient approximations are necessary to make NGD scalable to Deep Neural Networks (DNNs). Many such approximations have been attempted. The most sophisticated of these is KFAC, which approximates the FIM as a block-diagonal matrix, where each block corresponds to a layer of the neural network. By doing so, KFAC ignores the interactions between different layers. In this work, we investigate the interest of restoring some low-frequency interactions between the layers by means of two-level methods. Inspired from domain decomposition, several two-level corrections to KFAC using different coarse spaces are proposed and assessed. The obtained results show that incorporating the layer interactions in this fashion does not really improve the performance of KFAC. This suggests that it is safe to discard the off-diagonal blocks of the FIM, since the block-diagonal approach is sufficiently robust, accurate and economical in computation time.
Keywords: Deep Neural Networks; Natural Gradient Descent; Kronecker Factorization; Two-Level Preconditioning
## 1 Introduction
Deep learning has achieved tremendous success in many fields such as computer vision [19, 24], speech recognition [44, 46], and natural language processing [5, 14], where its models have produced results comparable to human performance. This was made possible thanks not only to parallel computing resources but also to adequate optimization algorithms, the development of which remains a major research area. Currently, the Stochastic Gradient Descent (SGD) method [43] and its variants [33, 40] are the workhorse methods for training DNNs. Their wide adoption by the machine learning community is justified by their simplicity and their relatively good behavior on many standard optimization problems. Nevertheless, almost all optimization problems arising in deep learning are non-linear and highly non-convex. In addition, the landscape of the objective function may contain huge variations in curvature along
different directions [29]. This leads to many challenges in DNNs training, which limit the effectiveness of first-order methods like SGD.
### Approximations of the FIM in NGD methods
By taking advantage of curvature information, second-order methods can overcome the above-mentioned difficulties and speed up the training of DNNs. In such methods, the gradient is rescaled at each iteration with the inverse of a curvature matrix \(C\), whose role is to capture information on the local landscape of the objective function. Several choices of \(C\) are available: the well-known Hessian matrix, the Generalized Gauss-Newton matrix (GGN) [45], the FIM [1] or any positive semi-definite approximation of these matrices. The advantage of the GGN and FIM over the Hessian is that they are always positive semi-definite, which is not always guaranteed for the Hessian. Despite their theoretical superiority, second-order methods are unfortunately not practical for training DNNs. This is due to the huge computational and memory requirements for assembling and inverting the curvature matrix \(C\). Several paradigms have therefore been devised to approximate the curvature matrix of DNNs. For example, the Hessian-free approach (HF) [27] eliminates the need to store \(C\) by using a Krylov subspace-based Conjugate Gradient (CG) method to solve the linear system involving \(C\). While this approach is memory effective, it remains time-consuming, since one must run at each iteration several steps of CG to converge. Another existing approach is the family of quasi-Newton methods [8, 12, 16, 26, 47] that rely only on gradient information to build a low-rank approximation to the Hessian. Other popular approximations to the curvature matrix are Adagrad [11], RMSprop [49], and Adam [21] which develop diagonal approximations to the empirical FIM. Despite their ease of implementation and scalability to DNNs, both low-rank and diagonal approximations throw away a lot of information and, therefore, are in general less effective than a well-tuned SGD with momentum.
More advanced and sophisticated approximations that have sparked great enthusiasm are the family of Kronecker-factored curvature (KFAC) methods. Evolving from earlier works [20, 25, 36, 38, 41], KFAC methods [18, 30, 31] exploit the network structure to obtain a block-diagonal approximation to the FIM. Each block corresponds to a layer and is further approximated by the Kronecker product of two smaller matrices, cheap to store and easy to invert via the formula \((A\otimes B)^{-1}=A^{-1}\otimes B^{-1}\). Owing to this attractive feature, KFAC has received a lot of attention and many endeavors have been devoted to improving it. In [3, 37], distributed versions of KFAC were demonstrated to perform well in large-scale settings. The EKFAC method [15] refines KFAC by rescaling the Kronecker factors with a diagonal variance computed in a Kronecker-factored eigenbasis. The TKFAC method [13] preserves a trace-invariance relationship between the approximate and the exact FIM. By removing KFAC's assumption on the independence between activations and pre-activation derivatives, more rigorous Kronecker factorizations can be worked out [22] based on minimization of various errors in the Frobenius norm. Beyond the FIM, the idea of Kronecker factorization can also be extended to the Hessian matrix of DNNs as in KBFGS [17], where the computational complexity is alleviated by approximating the inverse of the Kronecker factors with low-rank updates, as well as the GGN matrix of Multi-Layer Perceptrons (MLP), as shown in [17]. KFAC has also been deployed successfully in the context of Bayesian deep learning [53], deep reinforcement learning [51] and Laplace approximation [42].
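The computational appeal of the Kronecker factorization can be made explicit with a short NumPy sketch: thanks to \((A\otimes B)^{-1}=A^{-1}\otimes B^{-1}\) (equivalently, \((A\otimes B)^{-1}\mathrm{vec}(V)=\mathrm{vec}(B^{-1}VA^{-T})\)), only the two small factors are ever factorized. The sketch below is a generic illustration of this identity with random symmetric positive-definite factors, not an implementation of any particular KFAC variant.

```python
import numpy as np

rng = np.random.default_rng(0)

def spd(n):
    """Random symmetric positive-definite matrix, standing in for a Kronecker factor."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A, B = spd(6), spd(4)              # small factors of one layer's curvature block
v = rng.standard_normal(6 * 4)     # flattened gradient of that layer

# Naive route: form the 24 x 24 block and solve with it.
x_naive = np.linalg.solve(np.kron(A, B), v)

# Kronecker route: (A kron B)^{-1} vec(V) = vec(B^{-1} V A^{-T}), with column-major vec.
V = v.reshape(4, 6, order="F")
X = np.linalg.solve(B, V) @ np.linalg.inv(A).T
x_fast = X.reshape(-1, order="F")

print(np.allclose(x_naive, x_fast))   # True
```

Only the small factors (here 6 x 6 and 4 x 4) are ever inverted, which is what makes this kind of approximation affordable for layers with very many parameters.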
### Enhancement of KFAC by rough layer interaction
For computation and memory purposes, KFAC as well as all related variants use only a block-diagonal approximation of the curvature matrix, where each block corresponds to a layer. This results in a loss of information about the correlations between different layers. The question then naturally arises as to whether it is worth trying to recover some of the lost information in hope of making the approximate FIM closer to the true one, thus improving the convergence speed of the optimizer without paying an excessive price.
To this question, Tselepidis et al. [50] provided an element of answer by considering a "coarse" correction to the inverse of the approximate FIM. This additional term is meant to represent the interaction between layers at a "macroscopic" scale, in contrast with the "microscopic" scale of the interaction between neurons inside each layer. Their approach proceeds by formal analogy with the two-level preconditioning technique in domain decomposition [10], substituting the notion of layer for that of subdomain. The difference with domain decomposition, however, lies in the fact that the matrix at hand does not stem from the discretization of any PDE system, and this prevents the construction of coarse spaces from being correctly guided by any physical sense. Notwithstanding this concern, some ready-made recipes can be blindly borrowed from two-level domain decomposition. In this way, Tselepidis et al. [50] reached a positive conclusion regarding the advisability of enriching the approximate FIM with some reduced information about interactions between layers. Nevertheless, their coarse correction is objectionable in some respects, most notably because of inconsistency in the formula for the new matrix (see §3 for a full discussion), while for the single favorable case on which is based their conclusion, the network architecture selected is a little too simplistic (see §5 for details). Therefore, their claim should not be taken at face value.
Although he did not initially intend to look at the question as formulated above, Benzing [6] recently brought another element of answer that runs counter to the former. By carefully comparing KFAC and the exact natural gradient (as well as FOOF, a method of his own), he came to the astonishingly counterintuitive conclusion that KFAC outperforms the exact natural gradient in terms of optimization performance. In other words, there is no benefit whatsoever in trying to embed any kind of information about the interaction between layers into the curvature matrix, since even the full FIM seems to worsen the situation. While one may not be convinced by his heuristical explanation (whereby KFAC is argued to be a first-order method), his numerical results eloquently speak for themselves. Because Benzing explored a wide variety of networks, it is more difficult to mitigate his findings.
In light of these two contradictory sets of results, we undertook this work in an effort to clarify the matter. To this end, our objective is first to design a family of coarse corrections to KFAC that do not suffer from the mathematical flaws of Tselepidis et al.'s one. This gives rise to a theoretically sound family of approximate FIMs that will next be compared to the original KFAC. This leads to the following outline for the paper. In §2, we introduce notations and recall essential prerequisites on the network model, the natural gradient descent, and the KFAC approximation. In §3, after pointing out the shortcomings of Tselepidis et al.'s corrector, we put forward a series of two-level KFAC methods, the novelty of which is their consistency and their choices of the coarse space. In §4, we present and comment on several experimental results, which include many more test cases and analysis in order to assess the new correctors as fairly as possible. Finally, in §5, we summarize and discuss the results
before sketching out some prospects.
## 2 Backgrounds on the second-order optimization framework
### Predictive model and its derivatives
We consider a feedforward neural network \(f_{\theta}\), containing \(\ell\) layers and parametrized by
\[\theta=[\operatorname{vec}(W_{1})^{T},\operatorname{vec}(W_{2})^{T},\ldots, \operatorname{vec}(W_{\ell})^{T}]^{T}\in\mathbb{R}^{p}, \tag{2.1}\]
where \(W_{i}\) is the weight matrix associated with layer \(i\) and "vec" is the operator that vectorizes a matrix by stacking its columns together. We also consider a training data set
\[\mathcal{U}=\left\{(x^{(1)},y^{(1)}),\,(x^{(2)},y^{(2)}),\,\ldots,\,(x^{(n)},y^{(n)})\,|\,(x^{(b)},y^{(b)})\in\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}},\;1\leq b\leq n\right\}\]
and a loss function \(L(y,z)\) which measures the discrepancy between the actual target \(y\) and the network's prediction \(z=f_{\theta}(x)\) for a given input-target pair \((x,y)\in\mathcal{U}\). The goal of the training problem
\[\operatorname*{argmin}_{\theta\in\mathbb{R}^{p}}h(\theta):=\frac{1}{n}\sum_{ b=1}^{n}L(y^{(b)},f_{\theta}(x^{(b)})) \tag{2.2}\]
is to find the optimal value of \(\theta\) that minimizes the empirical risk \(h(\theta)\). In the following, we will designate by \(\mathcal{D}v=\nabla_{v}L\) the gradient of the loss function with respect to any variable \(v\). Depending on the type of network, its output and the gradient of the loss are computed in different ways. Let us describe the calculations for two types of networks.
#### 2.1.1 MLP (Multi-Layer Perceptron).
Given an input \(x\in\mathbb{R}^{d_{x}}\), the network computes its output \(z=f_{\theta}(x)\in\mathbb{R}^{d_{y}}\) through the following sequence, known as forward-propagation: starting from \(a_{0}:=x\), we carry out the iterations
\[s_{i}=W_{i}\bar{a}_{i-1},\qquad a_{i}=\sigma_{i}(s_{i}),\qquad\text{for $i$ from $1$ to $\ell$}, \tag{2.3}\]
where \(\bar{a}_{i-1}\in\mathbb{R}^{d_{i-1}+1}\) is \(a_{i-1}\) concatenated with \(1\) in order to capture the bias and \(\sigma_{i}\) is the activation function at layer \(i\). Here, \(W_{i}\in\mathbb{R}^{d_{i}\times(d_{i-1}+1)}\), with \(d_{i}\) the number of neurons in layer \(i\). The sequence is terminated by \(z:=a_{\ell}\). Note that the total number of parameters is necessarily \(p=\sum_{i=1}^{\ell}d_{i}(d_{i-1}+1)\).
The gradient of the loss with respect to the parameters is computed via the back-propagation algorithm: starting from \(\mathcal{D}a_{\ell}=\partial_{z}L(y,z=a_{\ell})\), we perform
\[g_{i}=\mathcal{D}a_{i}\odot\sigma_{i}^{\prime}(s_{i}),\quad\mathcal{D}W_{i}=g _{i}\bar{a}_{i-1}^{T},\quad\mathcal{D}a_{i-1}=W_{i}^{T}g_{i},\quad\text{for $i$ from $\ell$ to $1$}, \tag{2.4}\]
where the special symbol \(g_{i}:=\mathcal{D}s_{i}\) stands for the preactivation derivative. Note that, in the formula for \(\mathcal{D}a_{i-1}\), the last row of \(W_{i}^{T}\) should be removed so that the product in the right-hand side belongs to \(\mathbb{R}^{d_{i-1}}\).
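For concreteness, here is a minimal NumPy transcription of the forward pass (2.3) and the backward recursion (2.4) for a toy MLP. The layer widths, the \(\tanh\) activation and the squared loss are illustrative choices; the quantities `a_bar` (activations with the appended 1) and `g` (preactivation derivatives) computed below are precisely the ingredients from which Kronecker-factored curvature approximations are later assembled.

```python
import numpy as np

rng = np.random.default_rng(1)
dims = [5, 8, 3]                                   # illustrative layer widths d_0, d_1, d_2
W = [rng.standard_normal((dims[i], dims[i-1] + 1)) * 0.1 for i in range(1, len(dims))]

sigma = np.tanh
dsigma = lambda s: 1.0 - np.tanh(s) ** 2

def forward_backward(x, y):
    # Forward pass, eq. (2.3): s_i = W_i a_bar_{i-1}, a_i = sigma_i(s_i).
    a, s, a_bar = [x], [], []
    for Wi in W:
        a_bar.append(np.append(a[-1], 1.0))        # concatenate 1 to capture the bias
        s.append(Wi @ a_bar[-1])
        a.append(sigma(s[-1]))
    # Backward pass, eq. (2.4), here for the squared loss L = 0.5 * ||y - a_l||^2.
    Da = a[-1] - y
    DW = [None] * len(W)
    for i in reversed(range(len(W))):
        g = Da * dsigma(s[i])                      # preactivation derivative g_i
        DW[i] = np.outer(g, a_bar[i])              # DW_i = g_i a_bar_{i-1}^T
        Da = (W[i].T @ g)[:-1]                     # drop the bias row of W_i^T
    return a[-1], DW

z, DW = forward_backward(rng.standard_normal(5), rng.standard_normal(3))
print([dw.shape for dw in DW])                     # [(8, 6), (3, 9)]
```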
#### 2.1.2 CNN (Convolutional Neural Network).
The calculation is governed by the same principle as for the MLP, but its practical organization differs slightly. In a convolution layer, the input, which is an image with multiple channels, is convolved with a set of filters to produce an output image containing multiple channels. In order to speed up computations, traditional convolution operations are reshaped into matrix-matrix or matrix-vector multiplications using the unrolling approach [9], whereby the input/output data are copied and rearranged into new matrices (see Figure 1).
Assume that layer \(i\) receives input \(\mathcal{A}_{i-1}\in\mathbb{R}^{c_{i-1}\times T_{i-1}}\), where \(T_{i-1}\) denotes the number of spatial locations and \(c_{i-1}\) the number of channels. Considering \(c_{i}\) filters, each of which involves \(\Delta_{i}\) coefficients, we form a weight matrix \(W_{i}\) of shape \(c_{i}\times(c_{i-1}\Delta_{i}+1)\), where each row corresponds to a single filter flattened into a vector. Note that the additional 1 in the column dimension of \(W_{i}\) is required for the bias parameter. Around each position \(t\in\{1,\dots,T_{i-1}\}\), we define the local column vector \(a_{i-1,t}\in\mathbb{R}^{c_{i-1}\Delta_{i}}\) by extracting the patch data from \(\mathcal{A}_{i-1}\) (cf. [18] for explicit formulas). The output \(\mathcal{A}_{i}\in\mathbb{R}^{c_{i}\times T_{i}}\) is computed by the forward-propagation: for \(t\in\{1,\dots,T_{i}\}\), the \(t\)-th column \(\widetilde{a}_{i,t}\) of \(\mathcal{A}_{i}\) is given by
\[s_{i,t}=W_{i}\bar{a}_{i-1,t},\qquad\widetilde{a}_{i,t}=\sigma_{i}(s_{i,t}), \tag{2.5}\]
where \(\bar{a}_{i-1,t}\in\mathbb{R}^{c_{i-1}\Delta_{i}+1}\) is \(a_{i-1,t}\) concatenated with 1 in order to capture the bias. In matrix form, let \([\![\mathcal{A}_{i-1}]\!]\in\mathbb{R}^{(c_{i-1}\Delta_{i}+1)\times T_{i}}\) be the matrix whose \(t\)-th column is \(\bar{a}_{i-1,t}\). Then,
\[\mathcal{S}_{i}=W_{i}[\![\mathcal{A}_{i-1}]\!],\qquad\mathcal{A}_{i}=\sigma_{i }(\mathcal{S}_{i}). \tag{2.6}\]
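To make the unrolling concrete, here is a toy NumPy sketch for a 1-D convolution with stride 1 and no padding; the helper names (`im2col_1d`, `conv_forward`) and this simplified setting are our own assumptions, chosen only to illustrate (2.5)-(2.6).

```python
import numpy as np

def im2col_1d(A_prev, delta):
    """Build [[A_{i-1}]]: column t is the patch around position t, with a trailing 1 for the bias."""
    c, T_prev = A_prev.shape
    T = T_prev - delta + 1                      # number of valid output positions
    cols = np.empty((c * delta + 1, T))
    for t in range(T):
        cols[:-1, t] = A_prev[:, t:t + delta].reshape(-1)
        cols[-1, t] = 1.0
    return cols

def conv_forward(A_prev, W, sigma, delta):
    """Forward pass (2.6): S_i = W_i [[A_{i-1}]], A_i = sigma_i(S_i); W has shape (c_i, c_{i-1}*delta + 1)."""
    cols = im2col_1d(A_prev, delta)
    S = W @ cols
    return sigma(S), cols                        # cols is reused by back-propagation and by KFAC
```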
Figure 1: Traditional convolution turned into matrix-matrix multiplication with the _unrolling_ approach.

The gradient of the loss with respect to the parameters of layer \(i\) is computed via the back-propagation formulas
\[g_{i,t}=\mathcal{D}\widetilde{a}_{i,t}\odot\sigma^{\prime}_{i}(s_{i,t}),\quad \mathcal{D}W_{i}=\sum_{t=1}^{T_{i}}g_{i,t}\bar{a}_{i-1,t}^{T},\quad\mathcal{D}a _{i-1,t}=W_{i}^{T}g_{i,t}, \tag{2.7}\]
for \(t\in\{1,\dots,T_{i}\}\), where the special symbol \(g_{i,t}:=\mathcal{D}s_{i,t}\) stands for the preactivation derivative.
In all cases (MLP or CNN), the gradient \(\nabla_{\theta}L=\mathcal{D}\theta\) of the loss with respect to the whole parameter vector \(\theta\) is retrieved as
\[\mathcal{D}\theta=[\mathrm{vec}(\mathcal{D}W_{1})^{T},\mathrm{ vec}(\mathcal{D}W_{2})^{T},\ldots,\mathrm{vec}(\mathcal{D}W_{\ell})^{T}]^{T}. \tag{2.8}\]
A general descent method to solve the training problem (2.2) is based on the iterates
\[\theta_{k+1}=\theta_{k}-\alpha_{k}[C(\theta_{k})]^{-1}\nabla_{ \theta}h(\mathcal{S}_{k},\theta_{k}), \tag{2.9}\]
where \(\alpha_{k}>0\) is the learning rate,
\[\nabla_{\theta}h(\mathcal{S}_{k},\theta_{k})=\frac{1}{|\mathcal{ S}_{k}|}\sum_{(x^{(b)},y^{(b)})\in\mathcal{S}_{k}}\nabla_{\theta}L(y^{(b)},f_{ \theta_{k}}(x^{(b)})) \tag{2.10}\]
is a batch approximation of the full gradient \(\nabla_{\theta}h(\theta_{k})=\frac{1}{n}\sum_{b=1}^{n}\nabla_{\theta}L(y^{(b)},f_{\theta_{k}}(x^{(b)}))\) on a random subset \(\mathcal{S}_{k}\subset\mathcal{U}\), and \(C(\theta_{k})\) is an invertible matrix which depends on the method being implemented.
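In code, the generic scheme (2.9)-(2.10) reduces to a few lines; `apply_C_inv` stands for whichever inverse curvature is chosen (identity for SGD, damped FIM for NGD, the KFAC approximation below, ...). The function names are illustrative, not from any library.

```python
def minibatch_gradient(theta, batch, grad_fn):
    """Batch approximation (2.10): average of per-example gradients over S_k."""
    g = sum(grad_fn(theta, x, y) for (x, y) in batch)
    return g / len(batch)

def descent_step(theta, batch, grad_fn, apply_C_inv, lr):
    """One iteration of (2.9): theta_{k+1} = theta_k - alpha_k * C(theta_k)^{-1} grad."""
    return theta - lr * apply_C_inv(minibatch_gradient(theta, batch, grad_fn))
```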
### Natural Gradient Descent
The _Natural Gradient Descent_ (NGD) is associated with a particular choice for matrix \(C(\theta_{k})\), which is well-defined under a mild assumption.
#### Hypothesis on the loss function.
From now on, we assume that there exists a probability density \(\wp(y|z)\) on \(y\in\mathbb{R}^{d_{y}}\) such that, up to an additive constant \(\nu\), the loss function \(L(y,z)\) takes the form
\[L(y,z)=-\log\wp(y|z)+\nu. \tag{2.11}\]
For instance, if the elementary loss corresponds to the least-squares function
\[L(y,z)=\frac{1}{2}\|y-z\|_{2}^{2}, \tag{2.12a}\]
then we can take the normal density
\[\wp(y|z)=(2\pi)^{-d_{y}/2}\exp(-\tfrac{1}{2}\|y-z\|_{2}^{2}), \tag{2.12b}\]
so that
\[L(y,z)=-\log\wp(y|z)-\frac{d_{y}}{2}\log(2\pi). \tag{2.12c}\]
Introduce the notation \(p(y|x,\theta)=\wp(y|f_{\theta}(x))\). Then, the composite loss function
\[L(y,f_{\theta}(x))=-\log p(y|x,\theta) \tag{2.13}\]
derives from the density function \(p(y|x,\theta)\) of the model's conditional predictive distribution \(P_{y|x}(\theta)\). As shown above, \(P_{y|x}(\theta)\) is multivariate normal for the standard square loss function. It can also be proved that \(P_{y|x}(\theta)\) is multinomial for the cross-entropy one. The learned distribution is therefore \(\mathsf{P}_{x,y}(\theta)\) with density
\[\mathsf{p}(x,y|\theta)=q(x)p(y|x,\theta), \tag{2.14}\]
where \(q(x)\) is the density of data distribution \(Q_{x}\) over inputs \(x\in\mathbb{R}^{d_{x}}\).
#### Fisher Information Matrix.
The NGD method [1] is defined as the generic algorithm (2.9) in which \(C(\theta)\) is set to the _Fisher Information Matrix_ (FIM)
\[F(\theta) =\mathbb{E}_{(x,y)\sim\mathsf{P}_{x,y}(\theta)}\{\nabla_{\theta} \log\mathsf{p}(x,y|\theta)[\nabla_{\theta}\log\mathsf{p}(x,y|\theta)]^{T}\} \tag{2.15a}\] \[=\mathbb{E}_{(x,y)\sim\mathsf{P}_{x,y}(\theta)}\{\mathcal{D} \theta(\mathcal{D}\theta)^{T}\}\] (2.15b) \[=\mathrm{cov}(\mathcal{D}\theta,\mathcal{D}\theta), \tag{2.15c}\]
where \(\mathbb{E}_{(x,y)\sim\mathsf{P}_{x,y}(\theta)}\) denotes the expectation taken over the prescribed distribution \(\mathsf{P}_{x,y}(\theta)\) at a fixed \(\theta\). To alleviate notations, we shall write \(\mathbb{E}\) instead of \(\mathbb{E}_{(x,y)\sim\mathsf{P}_{x,y}(\theta)}\) from now on. Likewise, we shall write \(F\) instead of \(F(\theta)\) or \(F(\theta_{k})\).
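In practice, the expectation in (2.15) is estimated by Monte Carlo: inputs are taken from the data, while the targets are drawn from the model's own predictive distribution. A hedged sketch, where `sample_y` and `grad_fn` are user-supplied placeholders:

```python
import numpy as np

def monte_carlo_fisher(theta, inputs, sample_y, grad_fn):
    """Estimate F(theta) = E[Dtheta Dtheta^T] as in (2.15b).

    sample_y(theta, x) draws y ~ P_{y|x}(theta) (not the training label);
    grad_fn(theta, x, y) returns Dtheta, the gradient of -log p(y|x,theta)."""
    p = theta.size
    F = np.zeros((p, p))
    for x in inputs:
        d = grad_fn(theta, x, sample_y(theta, x))
        F += np.outer(d, d)
    return F / len(inputs)
```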
By definition (2.15), the FIM is a covariance matrix and is therefore always positive semi-definite. However, for the iteration
\[\theta_{k+1}=\theta_{k}-\alpha_{k}F^{-1}\nabla_{\theta}h(\mathcal{S}_{k}, \theta_{k}) \tag{2.16}\]
to be well-defined, \(F\) has to be invertible. This is why, in practice, \(C(\theta_{k})\) will be taken to be a regularized version of \(F\) under the form
\[F_{\bullet}=F+\lambda I_{p},\qquad\lambda>0. \tag{2.17}\]
The actual NGD iteration is therefore
\[\theta_{k+1}=\theta_{k}-\alpha_{k}F_{\bullet}^{-1}\nabla_{\theta}h(\mathcal{ S}_{k},\theta_{k}). \tag{2.18}\]
In the space of probability distributions \(\mathsf{P}_{x,y}(\theta)\) equipped with the _Kullback-Leibler_ (KL) divergence, the FIM represents the local quadratic approximation of this induced metric, in the sense that for a small vector \(\delta\in\mathbb{R}^{p}\), we have
\[\mathrm{KL}[\mathsf{P}_{x,y}(\theta)\,\|\,\mathsf{P}_{x,y}(\theta+\delta)]= \frac{1}{2}\delta^{T}F\delta+O(\|\delta\|^{3}), \tag{2.19}\]
where \(\mathrm{KL}[\mathsf{P}\,\|\,\mathsf{Q}]\) is the KL divergence between the distributions \(\mathsf{P}\) and \(\mathsf{Q}\). As a consequence, the unregularized NGD can be thought of as the steepest descent in this space of probability distributions [2].
#### Advantages and drawbacks.
By virtue of this geometric interpretation, the NGD has the crucial advantage of being intrinsic, that is, invariant with respect to invertible reparameterizations. Put another way, the algorithm will produce matching iterates regardless of how the unknowns are transformed. This is useful in high-dimensional cases where the choice of parameters is more or less arbitrary.
Strictly speaking, invariance with respect to parameters only occurs at the continuous level, i.e., in the limit \(\alpha_{k}\to 0\). This minor drawback does not undermine the theoretical soundness of the method. In fact, the real drawback of the NGD (2.15)-(2.16) lies in the cost of computing and inverting the Fisher matrix. This is why it is crucial to consider suitable approximations of the FIM such as KFAC (cf. §2.3).
Finally, and anecdotally, the NGD method can also be viewed as an approximate Newton method since the FIM and the GGN matrix are equivalent when the model predictive distribution \(P_{y|x}(\theta)\) belongs to the exponential family of distributions [28].
### KFAC approximation of the FIM
From equation (2.15), the FIM can be written as a block matrix
\[F=\mathbb{E}[\mathcal{D}\theta(\mathcal{D}\theta)^{T}]=\begin{bmatrix}F_{1,1}& \ldots&F_{1,\ell}\\ \vdots&&\vdots\\ F_{\ell,1}&\ldots&F_{\ell,\ell}\end{bmatrix}\in\mathbb{R}^{p\times p},\] (2.20a) in which each block \[F_{i,j}\] is given by \[F_{i,j}=\mathbb{E}[\mathrm{vec}(\mathcal{D}W_{i})\mathrm{vec}(\mathcal{D}W_{j })^{T}]. \tag{2.20b}\]
One can interpret \(F_{i,i}\) as being second-order statistics of weight derivatives of layer \(i\), and \(F_{i,j}\), \(i\neq j\) as representing the interactions between layer \(i\) and \(j\).
The KFAC method [31] relies on the following two assumptions to provide an efficient approximation of the FIM that is convenient for training DNNs.
1. The first one is that there are no interactions between two different layers, i.e., \(F_{i,j}=0\) for \(i\neq j\). This results in the block-diagonal approximation \[F\approx\widetilde{F}=\mathrm{diag}(F_{1,1},F_{2,2},\ldots,F_{\ell,\ell})\] (2.21) for the FIM. At this step, computing the inverse of \(\widetilde{F}\) is equivalent to computing the inverses of diagonal blocks \(F_{i,i}\). Nevertheless, because the diagonal blocks \(F_{i,i}\) can be very large (especially for DNNs with large layers), this first approximation remains insufficient.
2. The second one comes in support of the first one and consists in factorizing each diagonal block \(F_{i,i}\) as a Kronecker product of two smaller matrices, namely, \[F_{i,i}\approx[F_{\mathrm{KFAC}}]_{i,i}=A_{i}\otimes G_{i}.\] (2.22) where the Kronecker product between \(A\in\mathbb{R}^{m_{A}\times n_{A}}\) and \(B\in\mathbb{R}^{m_{B}\times n_{B}}\) is the
matrix of size \(m_{A}m_{B}\times n_{A}n_{B}\) given by
\[A\otimes B=\left[\begin{array}{ccc}A_{1,1}B&\ldots&A_{1,n_{A}}B\\ \vdots&&\vdots\\ A_{m_{A},1}B&\ldots&A_{m_{A},n_{A}}B\end{array}\right]. \tag{2.23}\]
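The two Kronecker-product identities used throughout KFAC, \((A\otimes B)^{-1}=A^{-1}\otimes B^{-1}\) and \((A\otimes B)\text{vec}(X)=\text{vec}(BXA^{T})\), are easy to check numerically (with the column-major convention for \(\text{vec}\)); a small self-contained NumPy check:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); A = A @ A.T + 3 * np.eye(3)   # symmetric positive definite
G = rng.standard_normal((2, 2)); G = G @ G.T + 2 * np.eye(2)
X = rng.standard_normal((2, 3))                                 # plays the role of a weight gradient

vecX = X.reshape(-1, order="F")                                 # column-major vec
lhs = np.linalg.solve(np.kron(A, G), vecX)                      # (A kron G)^{-1} vec(X)
rhs = (np.linalg.inv(G) @ X @ np.linalg.inv(A)).reshape(-1, order="F")  # vec(G^{-1} X A^{-1}), A symmetric
assert np.allclose(lhs, rhs)
```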
Now, depending on the type of the layer, the computation of the Kronecker factors \(A_{i}\) and \(G_{i}\) may require different other assumptions.
#### 2.2.2 MLP layer.
When layer \(i\) is an MLP, the block \(F_{i,i}\) is given by
\[F_{i,i} =\mathbb{E}[\text{vec}(\mathcal{D}W_{i})\text{vec}(\mathcal{D}W_{i})^{T}]\] \[=\mathbb{E}[\text{vec}(g_{i}\bar{a}_{i-1}^{T})\text{vec}(g_{i} \bar{a}_{i-1}^{T})^{T}]\] \[=\mathbb{E}[(\bar{a}_{i-1}\otimes g_{i})(\bar{a}_{i-1}\otimes g_{ i})^{T}]\] \[=\mathbb{E}[\bar{a}_{i-1}\bar{a}_{i-1}^{T}\otimes g_{i}g_{i}^{T}]. \tag{2.24}\]
From the last equality, if one assumes that activations and pre-activation derivatives are independent, that is, \(a_{i-1}\perp\!\!\!\perp g_{i}\), then \(F_{i,i}\) can be factorized as
\[F_{i,i}\approx[F_{\text{KFAC}}]_{i,i}=\mathbb{E}[\bar{a}_{i-1}\bar{a}_{i-1}^{T }]\otimes\mathbb{E}[g_{i}g_{i}^{T}]\,=:A_{i}\otimes G_{i}, \tag{2.25}\]
with \(A_{i}=\mathbb{E}[\bar{a}_{i-1}\bar{a}_{i-1}^{T}]\) and \(G_{i}=\mathbb{E}[g_{i}g_{i}^{T}]\).
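A possible mini-batch estimator of the two factors in (2.25), with expectations replaced by batch means (names and shapes are our own conventions):

```python
import numpy as np

def kfac_factors_mlp(a_prev_batch, g_batch):
    """A_i = E[a_bar a_bar^T], G_i = E[g g^T] estimated on a batch, cf. (2.25).

    a_prev_batch: (B, d_{i-1}) activations entering layer i;
    g_batch     : (B, d_i)     preactivation derivatives of layer i."""
    B = a_prev_batch.shape[0]
    a_bar = np.hstack([a_prev_batch, np.ones((B, 1))])   # append 1 for the bias
    A = a_bar.T @ a_bar / B
    G = g_batch.T @ g_batch / B
    return A, G
```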
#### 2.2.3 Convolution layer.
With such a layer, \(F_{i,i}\) is written as
\[F_{i,i} =\mathbb{E}\big{[}\text{vec}(\mathcal{D}W_{i})\text{vec}( \mathcal{D}W_{i})^{T}\big{]}\] \[=\mathbb{E}\bigg{[}\text{vec}\bigg{(}\sum_{t=1}^{T_{i}}g_{i,t} \bar{a}_{i-1,t}^{T}\bigg{)}\text{vec}\bigg{(}\sum_{t=1}^{T_{i}}g_{i,t}\bar{a}_ {i-1,t}^{T}\bigg{)}^{T}\bigg{]}\] \[=\mathbb{E}\bigg{[}\sum_{t=1}^{T_{i}}\sum_{t^{\prime}=1}^{T_{i}} (\bar{a}_{i-1,t}\otimes g_{i,t})(\bar{a}_{i-1,t^{\prime}}\otimes g_{i,t^{ \prime}})^{T}\bigg{]}\] \[=\mathbb{E}\bigg{[}\sum_{t=1}^{T_{i}}\sum_{t^{\prime}=1}^{T_{i}} \bar{a}_{i-1,t}\bar{a}_{i-1,t^{\prime}}^{T}\otimes g_{i,t}g_{i,t^{\prime}}^{ T}\bigg{]}\] \[=\mathbb{E}\bigg{[}\sum_{t=1}^{T_{i}}\sum_{t^{\prime}=1}^{T_{i}} \Omega_{i}(t,t^{\prime})\otimes\Gamma_{i}(t,t^{\prime})\bigg{]}, \tag{2.26}\]
with \(\Omega_{i}(t,t^{\prime})=\bar{a}_{i-1,t}\bar{a}_{i-1,t^{\prime}}^{T}\) and \(\Gamma_{i}(t,t^{\prime})=g_{i,t}g_{i,t^{\prime}}^{T}\). In order to factorize \(F_{i,i}\) into Kronecker product of two matrices, Grosse and Martens [18] resort to three hypotheses. First, similarly to MLP layers, activations and pre-activation derivatives are assumed to be independent. Secondly, postulating spatial homogeneity, the second-order statistics of the activations and pre-activation derivatives at any two spatial locations \(t\) and \(t^{\prime}\) depend only on the difference \(t-t^{\prime}\). Finally, the pre-activation derivatives at any two distinct spatial locations are declared to be uncorrelated, i.e., \(\Gamma_{i}(t,t^{\prime})=0\) for \(t\neq t^{\prime}\).
Combining these three assumptions yields the approximation
\[F_{i,i}\approx[F_{\text{KFAC}}]_{i,i}=\mathbb{E}\bigg{[}\sum_{t=1}^{T_{i}}\Omega_{ i}(t,t)\bigg{]}\otimes\frac{1}{T_{i}}\mathbb{E}\bigg{[}\sum_{t=1}^{T_{i}}\Gamma_{i}(t,t )\bigg{]}\,=:A_{i}\otimes G_{i}, \tag{2.27}\]
with \(A_{i}=\mathbb{E}\big{[}\sum_{t=1}^{T_{i}}\Omega_{i}(t,t)\big{]}\) and \(G_{i}=\frac{1}{T_{i}}\mathbb{E}\big{[}\sum_{t=1}^{T_{i}}\Gamma_{i}(t,t)\big{]}\).
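Under the same conventions, a sketch of a batch estimator for (2.27), where the unrolled patches are the matrix \([\![\mathcal{A}_{i-1}]\!]\) produced by the im2col step; the function name is illustrative.

```python
import numpy as np

def kfac_factors_conv(cols_batch, g_batch):
    """Kronecker factors (2.27) for a convolution layer.

    cols_batch: (B, c_{i-1}*Delta_i + 1, T_i) unrolled patches per example;
    g_batch   : (B, c_i, T_i)                 preactivation derivatives per example.
    A_i sums Omega_i(t,t) over locations, G_i averages Gamma_i(t,t) over them."""
    B, _, T = g_batch.shape
    A = np.einsum('bkt,blt->kl', cols_batch, cols_batch) / B        # E[sum_t a_bar_t a_bar_t^T]
    G = np.einsum('bct,bdt->cd', g_batch, g_batch) / (B * T)        # (1/T_i) E[sum_t g_t g_t^T]
    return A, G
```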
**Remark 2.1**.: It should be mentioned that, in the same spirit, a KFAC-type approximation has been developed for RNNs (Recurrent Neural Networks), but with many more assumptions. In this work, we do not consider recurrent layers. Readers interested in KFAC for RNNs are referred to [30].
Going back to MLP and CNN layers, the matrices \(A_{i}\) and \(G_{i}\) are estimated using a Monte Carlo method, with a mini-batch \(\mathcal{B}=\{(x_{1},y_{1}),\ldots,(x_{B},y_{B})\}\), where the targets \(y_{i}\) are sampled from the model predictive distribution \(P_{y|x}(\theta)\). Combining the block-diagonal approximation and the Kronecker factorization of each block, the approximate FIM becomes
\[F\approx F_{\text{KFAC}}=\text{diag}(A_{1}\otimes G_{1},\,A_{2}\otimes G_{2 },\,\ldots,\,A_{\ell}\otimes G_{\ell}). \tag{2.28}\]
The descent iteration (2.9) with \(C(\theta_{k})=F_{\text{KFAC}}(\theta_{k})\) is now well suited to training DNNs. Indeed, thanks to the Kronecker product properties \((A\otimes B)^{-1}=A^{-1}\otimes B^{-1}\) and \((A\otimes B)\text{vec}(X)=\text{vec}(BXA^{T})\), it is plain that the product
\[F_{\text{KFAC}}^{-1}\nabla_{\theta}h=\begin{bmatrix}\text{vec}(G_{1}^{-1}( \nabla_{W_{1}}h)A_{1}^{-1})\\ \vdots\\ \text{vec}(G_{\ell}^{-1}(\nabla_{W_{\ell}}h)A_{\ell}^{-1})\end{bmatrix} \tag{2.29}\]
only requires to store and to invert matrices of moderately small sizes.
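Concretely, each block of (2.29) can be applied without ever forming \(A_{i}\otimes G_{i}\); a minimal sketch (the factors are assumed symmetric, so \(A_{i}^{-1}=A_{i}^{-T}\)):

```python
import numpy as np

def kfac_precondition(grad_W, A, G):
    """Return the matrix whose vec is [A kron G]^{-1} vec(grad_W), i.e. G^{-1} grad_W A^{-1} as in (2.29).

    grad_W: (d_i, d_{i-1}+1); G: (d_i, d_i); A: (d_{i-1}+1, d_{i-1}+1)."""
    return np.linalg.solve(G, grad_W) @ np.linalg.inv(A)
```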
In practice, invertibility of \(F_{\text{KFAC}}\) must be enforced by a regularization procedure. The usual Tikhonov one, by which we consider \(F_{\text{KFAC}}+\lambda I\), \(\lambda>0\), instead of \(F_{\text{KFAC}}\), is equivalent to adding a multiple of the identity matrix of appropriate size to each diagonal block, i.e., \(A_{i}\otimes G_{i}+\lambda I_{i}\). Unfortunately, this breaks the Kronecker-factored structure of the blocks. To preserve the factorized structure, the authors of KFAC [31] advocate a heuristic damping technique in which each Kronecker factor is regularized as
\[[F_{\bullet\,\text{KFAC}}]_{i,i}=(A_{i}+\pi_{i}\lambda^{1/2}I_{A_{i}})\otimes (G_{i}+\pi_{i}^{-1}\lambda^{1/2}I_{G_{i}}),\] (2.30a) where \[I_{A_{i}}\] and \[I_{G_{i}}\] denote identity matrices of same size as \[A_{i}\] and \[G_{i}\] respectively, and \[\pi_{i}=\sqrt{\frac{\text{tr}(A_{i})/(d_{i-1}+1)}{\text{tr}(G_{i})/d_{i}}}. \tag{2.30b}\]
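A short sketch of this factored damping (2.30); the function name is ours:

```python
import numpy as np

def damp_factors(A, G, lam):
    """Regularize each Kronecker factor separately so the block stays in Kronecker form, cf. (2.30)."""
    d_in, d_out = A.shape[0], G.shape[0]                            # d_in = d_{i-1}+1, d_out = d_i
    pi = np.sqrt((np.trace(A) / d_in) / (np.trace(G) / d_out))      # eq. (2.30b)
    return A + pi * np.sqrt(lam) * np.eye(d_in), G + np.sqrt(lam) / pi * np.eye(d_out)
```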
The actual KFAC iteration is therefore
\[\theta_{k+1}=\theta_{k}-\alpha_{k}F_{\bullet\,\text{KFAC}}^{-1}\nabla_{ \theta}h(\mathcal{S}_{k},\theta_{k}), \tag{2.31a}\]
with
\[F_{\bullet\,\text{KFAC}}=\text{diag}([F_{\bullet\,\text{KFAC}}]_{1,1},\,[F_{ \bullet\,\text{KFAC}}]_{2,2},\,\ldots,\,[F_{\bullet\,\text{KFAC}}]_{\ell,\ell}). \tag{2.31b}\]
## 3 Two-level KFAC methods
Henceforth, the learning rate is assumed to be \(\alpha_{k}=1\). Let
\[\zeta_{k}=\theta_{k}-\theta_{k+1}=[C(\theta_{k})]^{-1}\nabla_{ \theta}h(\mathcal{S}_{k},\theta_{k}) \tag{3.1}\]
be the negative increment of \(\theta\) at iteration \(k\) of the generic descent algorithm (2.9). To further alleviate notations, we shall drop the subscript \(k\) and omit the dependence on \(\theta_{k}\). For the regularized NGD (2.18), we have
\[\zeta=F_{\bullet}^{-1}\,\nabla_{\theta}h, \tag{3.2}\]
while for the regularized KFAC method, we have
\[\zeta_{\text{KFAC}}=F_{\bullet\,\text{KFAC}}^{-1}\nabla_{\theta}h, \tag{3.3}\]
being understood that the matrices are regularized whenever necessary.
We want to build a new matrix \(F_{\bullet\,\text{KFAC-2L}}^{-1}\), an augmented version of \(F_{\bullet\,\text{KFAC}}^{-1}\), such that the solution
\[\zeta_{\text{KFAC-2L}}=F_{\bullet\,\text{KFAC-2L}}^{-1}\nabla_{ \theta}h, \tag{3.4}\]
is a better approximation to \(\zeta\) than \(\zeta_{\text{KFAC}}\), namely,
\[\|\zeta_{\text{KFAC-2L}}-\zeta\|_{F}\ll\|\zeta_{\text{KFAC}}- \zeta\|_{F}. \tag{3.5}\]
By "augmented" we mean that, at least partially and at some rough scale, \(F_{\text{KFAC-2L}}^{-1}\) takes into account the information about layer interactions that was discarded by the block-diagonal approximation KFAC. The basic tenet underlying this initiative is the belief that a more accurate approximation to the NGD solution \(\zeta\) at each descent iteration will help the global optimization process to converge faster.
### Analogy and dissimilarity with domain decomposition
The construction philosophy of \(F_{\bullet\,\text{KFAC-2L}}^{-1}\) proceeds by analogy with insights from domain decomposition. To properly explain the analogy, we first need to cast the matrix \(F_{\bullet\,\text{KFAC}}^{-1}\) under a slightly different form.
For each \(i\in\{1,\ldots,\ell\}\), let \(R_{i}\in\mathbb{R}^{p_{i}\times p}\) be the matrix of the restriction operator from \(\mathbb{R}^{p}\), the total space of all parameters, to the subspace of parameters pertaining to layer \(i\), whose dimension is \(p_{i}\). In other words, for \((\xi,\eta)\in\{1,\ldots,p_{i}\}\times\{1,\ldots,p\}\),
\[(R_{i})_{\xi\eta}=\left\{\begin{aligned} & 1&\text{ if }\;\eta=p_{1}+\ldots+p_{i-1}+\xi,\\ & 0&\text{ otherwise.}\end{aligned}\right. \tag{3.6}\]
The transpose \(R_{i}^{T}\in\mathbb{R}^{p\times p_{i}}\) then represents the prolongation operator from the subspace of parameters in layer \(i\) to the total space of all parameters. Obviously, the \(i\)-th diagonal block of the regularized FIM can be expressed as
\[[F_{\bullet}]_{i,i}=R_{i}F_{\bullet}R_{i}^{T}.\]
If there were no approximation of each diagonal block by a Kronecker product, then the block-diagonal approximation of \(F\) would give rise to the inverse matrix
\[F_{\bullet\,\text{block-diag}}^{-1}=\sum_{i=1}^{\ell}R_{i}^{T}[F_{\bullet}]_{i,i}^{-1}R_{i}=\sum_{i=1}^{\ell}R_{i}^{T}(R_{i}F_{\bullet}R_{i}^{T})^{-1}R_{i}. \tag{3.7}\]
In the case of KFAC, it follows from (2.28)-(2.29) that
\[F_{\bullet\,\text{KFAC}}^{-1} =\sum_{i=1}^{\ell}R_{i}^{T}[F_{\bullet\,\text{KFAC}}]_{i,i}^{-1}R _{i} \tag{3.8}\] \[=\sum_{i=1}^{\ell}R_{i}^{T}(A_{i}+\pi_{i}\lambda^{1/2}I_{A_{i}}) ^{-1}\otimes(G_{i}+\pi_{i}^{-1}\lambda^{1/2}I_{G_{i}})^{-1}R_{i}.\]
In the context of the domain decomposition methods to solve linear systems arising from the discretization of PDEs, the spatial domain of the initial problem is divided into several subdomains. The system is then projected onto the subdomains and the local subproblems are solved independently of each other as smaller systems. In this stage, parallelism can be fully taken advantage of by assigning a processor to each subdomain. This produces a local solution on each subdomain. These local solutions are next combined to create an approximate global solution on the overall domain. Algebraically, the whole process is tantamount to using an inverse matrix of a form similar to (3.7)-(3.8) either within a Schwarz-like iterative procedure or as a preconditioner [10]. The counterparts of \([F_{\bullet}]_{i,i}^{-1}\) or \([F_{\bullet\,\text{KFAC}}]_{i,i}^{-1}\) are referred to as _local solvers_.
**Remark 3.1**.: The above analogy is not a perfect one. In domain decomposition, the subdomains are allowed (and even recommended!) to overlap each other, so that an unknown can belong to two or more subdomains. In this case, the restriction operators \(R_{i}\) can be much more intricate than the one trivially defined in (3.6).
A well-known issue with domain decomposition methods of the form (3.7)-(3.8) is the disappointingly slow rate of convergence, which results in a lack of _scalability_[10]: the speed-up factor does not grow proportionally with the number of subdomains (and therefore of processors). The reason is that, as the number of subdomains increases, it takes more iterations for an information local to one subdomain to be propagated and taken into account by the others. The common remedy to this problem is to append a "coarse" correction that enables subdomains to communicate with each other in a faster way. The information exchanged in this way is certainly not complete, but only concerns the low frequencies.
**Remark 3.2**.: In domain decomposition, there is a physical problem (represented by the PDE at the continuous level) that serves as a support for the mathematical and numerical reasoning. This is not the case here, where we have to think in a purely algebraic way.
### Multiplicative vs. additive coarse correction
We are going to present the idea of two-level KFAC in a very elementary fashion. Let \(m\geq\ell\) be an integer and \(R_{0}\in\mathbb{R}^{m\times p}\) be a given matrix. The subspace of \(\mathbb{R}^{p}\) spanned by the columns of \(R_{0}^{T}\in\mathbb{R}^{p\times m}\) is called the _coarse space_. The choice of the coarse space will be discussed later on. For the moment, we can assume that it is known.
The idea is to add to \(\zeta_{\text{KFAC}}\) a correction term that lives in the coarse space, in such a way that the new vector minimizes the error in the \(F_{\bullet}\)-norm with respect to the FIM solution \(\zeta=F_{\bullet}^{-1}\nabla_{\theta}h\). More concretely, this means that for the negative increment, we consider
\[\zeta_{\text{KFAC-2L}}=\zeta_{\text{KFAC}}+R_{0}^{T}\beta^{*}, \tag{3.9}\]
where
\[\beta^{*} =\operatorname*{argmin}_{\beta\in\mathbb{R}^{m}}\|(\zeta_{ \text{KFAC}}+R_{0}^{T}\beta)-\zeta\|_{F_{\bullet}}^{2} \tag{3.10a}\] \[=\operatorname*{argmin}_{\beta\in\mathbb{R}^{m}}\|(\zeta_{ \text{KFAC}}+R_{0}^{T}\beta)-F_{\bullet}^{-1}\nabla_{\theta}h\|_{F_{\bullet}} ^{2}. \tag{3.10b}\]
The solution of the quadratic minimization problem (3.10) is given by
\[\beta^{*}=(R_{0}F_{\bullet}R_{0}^{T})^{-1}R_{0}(\nabla_{\theta}h-F_{\bullet} \zeta_{\text{KFAC}}), \tag{3.11}\]
provided that the matrix
\[F_{\text{coarse}}:=R_{0}F_{\bullet}R_{0}^{T}\in\mathbb{R}^{m\times m}, \tag{3.12}\]
representing the _coarse operator_, be invertible. This is a small size matrix, insofar as \(m\) will be in practice taken to be equal to \(\ell\) or \(2\ell\), and will in any case remain much smaller than \(p\). This is in agreement with domain decomposition where the size of the coarse system is usually equal to the number of subdomains.
As for the vector
\[r_{\text{KFAC}}:=\nabla_{\theta}h-F_{\bullet}\zeta_{\text{KFAC}}, \tag{3.13}\]
it is referred to as the _residual_ associated to the approximate solution \(\zeta_{\text{KFAC}}\). Plugging (3.11) into (3.9) and recalling that \(\zeta_{\text{KFAC}}=F_{\bullet\,\text{KFAC}}^{-1}\nabla_{\theta}h\), we end up with
\[\zeta_{\text{KFAC-2L}}=F_{\bullet\,\text{KFAC-2L}}^{-1}\nabla_{\theta}h, \tag{3.14}\]
with
\[F_{\bullet\,\text{KFAC-2L}}^{-1}=F_{\bullet\,\text{KFAC}}^{-1}+R_{0}^{T}F_{ \text{coarse}}^{-1}R_{0}(I-F_{\bullet}F_{\bullet\,\text{KFAC}}^{-1}). \tag{3.15}\]
The matrix (3.15) that we propose can be checked to be consistent: if \(F_{\bullet\,\text{KFAC}}^{-1}\) and \(R_{0}^{T}F_{\text{coarse}}^{-1}R_{0}\) are both homogeneous to \(F_{\bullet}^{-1}\), then \(F_{\bullet\,\text{KFAC-2L}}^{-1}\) is homogeneous to
\[F_{\bullet}^{-1}+F_{\bullet}^{-1}-F_{\bullet}^{-1}F_{\bullet}\,F_{\bullet}^{-1 }=F_{\bullet}^{-1}\]
too. In the language of domain decomposition, the coarse corrector of (3.15) is said to act _multiplicatively_, to the extent that
\[I-F_{\bullet\,\text{KFAC-2L}}^{-1}F_{\bullet}=[I-(R_{0}^{T}F_{\text{coarse}}^{-1 }R_{0})F_{\bullet}][I-F_{\bullet\,\text{KFAC}}^{-1}F_{\bullet}]. \tag{3.16}\]
as can be straightforwardly verified. If \(G\) is an approximation of \(F_{\bullet}^{-1}\), the matrix \(I-GF_{\bullet}\) measures the quality of this approximation. Equality (3.16) shows that the approximation quality of \(F_{\bullet\,\text{KFAC-2L}}^{-1}\) is the product of those of \(R_{0}^{T}F_{\text{coarse}}^{-1}R_{0}\) and \(F_{\bullet\,\text{KFAC}}^{-1}\).
A common practice in domain decomposition is to drop the factor \(I-F_{\bullet}F_{\bullet\,\text{KFAC}}^{-1}\) (which is equivalent to replacing the residual \(r_{\text{KFAC}}=\nabla_{\theta}h-F_{\bullet}\zeta_{\text{KFAC}}\) by \(\nabla_{\theta}h\)). This amounts to approximating \(F_{\bullet\,\text{KFAC-2L}}^{-1}\) as
\[F_{\bullet\,\text{KFAC-2L}}^{-1}\approx F_{\bullet\,\text{KFAC}}^{-1}+R_{0} ^{T}F_{\text{coarse}}^{-1}R_{0}. \tag{3.17}\]
The coarse corrector of (3.17) is said to act _additively_ in domain decomposition. Clearly, the resulting matrix is inconsistent with \(F_{\bullet}^{-1}\): in fact, it is consistent with \(2F_{\bullet}^{-1}\)! No matter how crude it is, this coarse corrector is actually valid as long as \(F_{\bullet\,\text{KFAC-2L}}^{-1}\) is used only as a preconditioner in the resolution of the system \(F_{\bullet}\boldsymbol{\zeta}=\nabla_{\theta}h\), which means that we solve instead \(F_{\bullet\,\text{KFAC-2L}}^{-1}F_{\bullet}\boldsymbol{\zeta}=F_{\bullet\, \text{KFAC-2L}}^{-1}\nabla_{\theta}h\) to benefit from a more favorable conditioning but the solution we seek remains the same.
Here, in our problem, \(F_{\bullet}^{-1}\) is directly approximated by \(F_{\bullet\,\text{KFAC-2L}}^{-1}\) and therefore the inconsistent additive coarse corrector (3.17) is not acceptable. Note that Tselepidis et al. [50] adopted this additive coarse correction, in which \(F_{\text{coarse}}\) is approximated as
\[F_{\text{coarse}}\approx R_{0}\bar{F}_{\bullet}R_{0}^{T},\] (3.18a) where \[\bar{F}_{\bullet}\] is the block matrix whose blocks \[[\bar{F}_{\bullet}]_{i,j}\] are given by \[[\bar{F}_{\bullet}]_{i,j}=\left\{\begin{array}{ll}\mathbb{E}[\bar{a}_{i-1} \bar{a}_{j-1}^{T}]\otimes\mathbb{E}[g_{i}g_{j}^{T}]&\text{if}\;\;i\neq j,\\ (A_{i}+\pi_{i}\lambda^{1/2}I_{A_{i}})\otimes(G_{i}+\pi_{ i}^{-1}\lambda^{1/2}I_{G_{i}})&\text{if}\;\;i=j.\end{array}\right. \tag{3.18b}\]
In this work, we focus on the consistent multiplicative coarse corrector (3.15) and also consider the exact value (3.12) for \(F_{\text{coarse}}\).
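To fix ideas, here is a matrix-free sketch of how the multiplicative correction (3.9)-(3.15) could be assembled at one iteration; `apply_F` and `apply_kfac_inv` are placeholder routines for products with \(F_{\bullet}\) and \(F_{\bullet\,\text{KFAC}}^{-1}\), and the coarse matrix \(R_{0}\) is stored densely since \(m\) is small.

```python
import numpy as np

def kfac_two_level_step(grad, apply_F, apply_kfac_inv, R0):
    """Return zeta_KFAC-2L = zeta_KFAC + R_0^T beta*, with beta* given by (3.11)."""
    zeta_kfac = apply_kfac_inv(grad)                               # eq. (3.3)
    residual = grad - apply_F(zeta_kfac)                           # r_KFAC, eq. (3.13)
    F_coarse = R0 @ np.column_stack([apply_F(row) for row in R0])  # R_0 F R_0^T, eq. (3.12)
    beta = np.linalg.solve(F_coarse, R0 @ residual)                # eq. (3.11)
    return zeta_kfac + R0.T @ beta                                 # eq. (3.9)
```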
### Choice of the coarse space \(R_{0}^{T}\)
By the construction (3.9)-(3.10), we are guaranteed that
\[\left\|\zeta_{\text{KFAC-2L}}-\zeta\right\|_{F}^{2}\leq\left\|\zeta_{\text{KFAC }}-\zeta\right\|_{F}^{2} \tag{3.19}\]
for any coarse space \(R_{0}^{T}\), since the right-hand side corresponds to \(\beta=0\). The choice of \(R_{0}^{T}\) is a compromise between having a small dimension \(m\ll p\) and lowering the new error
\[\left\|\zeta_{\text{KFAC-2L}}-\zeta\right\|_{F}^{2}=\left\|-[I-R_{0}^{T}(R_{0} F_{\bullet}R_{0}^{T})^{-1}R_{0}F_{\bullet}][I-F_{\bullet\,\text{KFAC}}^{-1}F_{ \bullet}]\zeta\right\|_{F}^{2} \tag{3.20}\]
as much as possible. But it seems out of reach to carry out the minimization of the latter with respect to the entries of \(R_{0}^{T}\).
In the context of preconditioning, the idea behind a two-level method is to first remove the influence of the very large eigenvalues, which correspond to high-frequency modes, and then to remove the smallest eigenvalues, which greatly affect the convergence, thanks to the second level. To do so, we need a suitable coarse space to efficiently deal with this second level [32]. Ideally, we would like to choose the deflation subspace consisting of the eigenvectors associated with the small eigenvalues of the preconditioned operator. However, this computation is more costly than solving the linear system itself.
This leads us to choose the coarse space in an a priori way. We consider the a priori form
\[R_{0}^{T}=\begin{bmatrix}V_{1}&0&\ldots&\ldots&0\\ 0&V_{2}&\ldots&\ldots&0\\ \vdots&\vdots&\ddots&&\vdots\\ 0&0&\ldots&\ldots&V_{\ell}\end{bmatrix}\in\mathbb{R}^{p\times m}, \tag{3.21}\]
where each block \(V_{i}\in\mathbb{R}^{p_{i}\times m_{i}}\) has \(m_{i}\) columns with \(m_{i}\ll p_{i}\), and
\[m_{1}+m_{2}+\ldots+m_{\ell}=m. \tag{3.22}\]
To provide a comparative study, we propose to evaluate several coarse space choices of the form (3.21) that are discussed below.
#### Nicolaides coarse space.
Historically, this is the first [35] coarse space ever proposed in domain decomposition. Transposed to our case, it corresponds to
\[m_{1}=\ldots=m_{\ell}=1,\qquad m=\ell, \tag{3.23}\]
and for all \(i\in\{1,\ldots,\ell\}\),
\[V_{i}=\begin{bmatrix}1,\ldots,1\end{bmatrix}^{T}\,\in\,\mathbb{R}^{p_{i}}. \tag{3.24}\]
Originally, the motivation for selecting the vector all of whose components are equal to \(1\) is that it is the discrete version of a continuous constant field, which is the eigenvector associated with the eigenvalue \(0\) of the operator \(-\nabla\cdot(\kappa\nabla)\) (boundary conditions being set aside). Inserting it into the coarse space helps the solver take care of the lowest frequency mode. In our problem, however, there is no reason for \(0\) to be an eigenvalue of \(F\), nor for \(1\) to be an eigenvector if this is the case. Hence, there is no justification for the Nicolaides coarse space. Still, this choice remains convenient and practical. This is probably the reason why Tselepidis et al. [50] have opted for it.
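Building the corresponding coarse matrix is straightforward; a sketch with illustrative names, where `layer_sizes` lists the \(p_{i}\):

```python
import numpy as np

def nicolaides_coarse_space(layer_sizes):
    """Block-diagonal R_0^T of (3.21) with the choice (3.23)-(3.24): one all-ones column per layer."""
    p, ell = sum(layer_sizes), len(layer_sizes)
    R0T = np.zeros((p, ell))
    offset = 0
    for i, p_i in enumerate(layer_sizes):
        R0T[offset:offset + p_i, i] = 1.0
        offset += p_i
    return R0T
```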
#### Spectral coarse space.
This is a slightly refined version of the Nicolaides coarse space. The idea is always to capture the lowest mode [32], but since the lowest eigenvalue and eigenvector are not known in advance, we have to compute them. More specifically, we keep the values (3.23) for the column sizes within \(R_{0}^{T}\), while prescribing
\[V_{i}=\text{eigenvector associated to the smallest eigenvalue of }[F_{\bullet\,\text{KFAC}}]_{i,i} \tag{3.25}\]
for all \(i\in\{1,\ldots,\ell\}\). In our case, an advantageous feature of this definition is that the cost of computing the eigenvectors is "amortized" by that of the inverses of \([F_{\text{KFAC}}]_{i,i}\), in the sense that these two calculations can be carried out simultaneously. Indeed, let
\[A_{i}+\pi_{i}\lambda^{1/2}I_{A_{i}}=U_{A_{i}}\Sigma_{A_{i}}V_{A_{i}}^{T},\qquad G _{i}+\pi_{i}^{-1}\lambda^{1/2}I_{G_{i}}=U_{G_{i}}\Sigma_{G_{i}}V_{G_{i}}^{T} \tag{3.26}\]
be the singular value decompositions of \(A_{i}+\pi_{i}\lambda^{1/2}I_{A_{i}}\) and \(G_{i}+\pi_{i}^{-1}\lambda^{1/2}I_{G_{i}}\) respectively. Then,
\[[F_{\bullet\,\text{KFAC}}]_{i,i}^{-1} =(A_{i}+\pi_{i}\lambda^{1/2}I_{A_{i}})^{-1}\otimes(G_{i}+\pi_{i}^ {-1}\lambda^{1/2}I_{G_{i}})^{-1}\] \[=(U_{A_{i}}\Sigma_{A_{i}}V_{A_{i}}^{T})^{-1}\otimes(U_{G_{i}} \Sigma_{G_{i}}V_{G_{i}}^{T})^{-1}\] \[=(U_{A_{i}}\Sigma_{A_{i}}^{-1}V_{A_{i}}^{T})\otimes(U_{G_{i}} \Sigma_{G_{i}}^{-1}V_{G_{i}}^{T}). \tag{3.27}\]
Since \(\Sigma_{A_{i}}\) and \(\Sigma_{G_{i}}\) are diagonal matrices, their inverses are easy to compute. Now, if \(V_{A_{i}}\) and \(V_{G_{i}}\) are the eigenvectors associated to the smallest eigenvalues of \(A_{i}\) and \(G_{i}\) respectively, then the eigenvector associated to the smallest eigenvalue of \([F_{\bullet\,\text{KFAC}}]_{i,i}\) is given by
\[V_{i}=V_{A_{i}}\otimes V_{G_{i}}. \tag{3.28}\]
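A sketch of how one column (3.28) could be obtained from the damped factors, reusing the decompositions of (3.26) (illustrative code, factors assumed symmetric positive definite):

```python
import numpy as np

def spectral_coarse_column(A_damped, G_damped):
    """Eigenvector of A_i kron G_i attached to its smallest eigenvalue, i.e. V_{A_i} kron V_{G_i} in (3.28)."""
    Ua, Sa, _ = np.linalg.svd(A_damped)        # same decomposition reused to invert the factor
    Ug, Sg, _ = np.linalg.svd(G_damped)
    vA = Ua[:, np.argmin(Sa)]                  # singular vector of the smallest singular value
    vG = Ug[:, np.argmin(Sg)]
    return np.kron(vA, vG)
```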
#### Krylov coarse space.
If we do not wish to compute the eigenvector associated to the smallest eigenvalue of \([F_{\bullet\,\text{KFAC}}]_{i,i}\), then a variant of the spectral coarse space could be the following. We know that this eigenvector can be obtained by the inverse power method. The idea is then to perform a few iterations of this method, even barely one or two, and to include the iterates into the coarse subspace. If \(m_{i}-1\geq 1\) is the number of inverse power iterations performed for \([F_{\bullet\,\text{KFAC}}]_{i,i}\), then we take
\[V_{i}=[v_{i},\ \ [F_{\bullet\,\text{KFAC}}]_{i,i}^{-1}v_{i},\ \ldots,\ \ [F_{\bullet\,\text{KFAC}}]_{i,i}^{-(m_{i}-1)}v_{i}]\,\in\,\mathbb{R}^{p_{i} \times m_{i}} \tag{3.29}\]
where \(v_{i}\in\mathbb{R}^{p_{i}}\) is an arbitrary vector, assumed to not be an eigenvector of \([F_{\bullet\,\text{KFAC}}]_{i,i}\) to ensure that the columns of \(V_{i}\) are not collinear. By appropriately selecting \(v_{i}\), we are in a position to use this approach to enrich the Nicolaides coarse space and the residuals coarse space (cf. next construction).
The increase in the number of columns for \(V_{i}\) is not the price to be paid to avoid the eigenvector calculation: we could have put only the last iterate \([F_{\bullet\,\text{KFAC}}]_{i,i}^{-(m_{i}-1)}v_{i}\) into \(V_{i}\). But since we have computed the previous ones, it seems more cost-effective to use them all to enlarge the coarse space. The larger the latter is, the lower is the minimum value of the objective function. In this work, we consider the simplest case
\[m_{1}=\ldots=m_{\ell}=2,\qquad m=2\ell. \tag{3.30}\]
#### Residuals coarse space.
We now introduce a very different philosophy of coarse space, which to our knowledge has never been envisioned before. From the construction (3.9)-(3.10), it is obvious that if the error \(\zeta-\zeta_{\text{KFAC}}\) belongs to the coarse space \(R_{0}^{T}\), that is, if it can be written as a linear combination \(R_{0}^{T}\beta^{\sharp}\) of the coarse matrix columns, then the vector \(\zeta_{\text{KFAC}}+R_{0}^{T}\beta^{\sharp}\) coincides with the exact solution \(\zeta\) and the correction
would be ideally optimal. Although this error \(\zeta-\zeta_{\text{KFAC}}\) is unknown, it is connected to the residual (3.13) by
\[\zeta-\zeta_{\text{KFAC}}=F_{\bullet}^{-1}r_{\text{KFAC}}. \tag{3.31}\]
The residual \(r_{\text{KFAC}}\) is not too expensive to compute, as it only involves the matrix-vector product \(F_{\bullet}\zeta_{\text{KFAC}}\). Unfortunately, solving a linear system involving \(F_{\bullet}\) as required by (3.31) is what we want to avoid.
But we can just approximate this error by inverting with \(F_{\bullet\text{KFAC}}^{-1}\) instead of \(F_{\bullet}^{-1}\). Therefore, we propose to build a coarse space that contains \(F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}}\) instead of \(F_{\bullet}^{-1}r_{\text{KFAC}}\). To this end, we split \(F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}}\) into \(\ell\) segments, each corresponding to a layer. This amounts to choosing the values (3.23) for the column sizes and set the columns of \(R_{0}^{T}\) as
\[V_{i}=[F_{\bullet\text{KFAC}}]_{i,i}^{-1}r_{\text{KFAC}}[i]\in\mathbb{R}^{p_{ i}},\qquad r_{\text{KFAC}}[i]=\text{vec}(\mathcal{D}W_{i})-(F_{\bullet}\zeta_{ \text{KFAC}})[i] \tag{3.32}\]
for \(i\in\{1,\ldots,\ell\}\), where for a vector \(\xi\in\mathbb{R}^{p}\) the notation \(\xi[i]=\xi(p_{i-1}+1:p_{i})\) designates the portion related to layer \(i\). Formulas (3.32) ensure that \(F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}}\) belongs to the coarse space. Indeed, taking \(\beta=[1,\ldots,1]^{T}\in\mathbb{R}^{\ell}\), we find \(R_{0}^{T}\beta=F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}}\).
#### Taylor coarse space.
The previous coarse space is the zeroth-order representative of a family of more sophisticated constructions based on a formal Taylor expansion of \(F_{\bullet}^{-1}\), which we now present but which will not be implemented. Setting
\[E=I-F_{\bullet\text{KFAC}}^{-1}F_{\bullet} \tag{3.33}\]
and observing that \(F_{\bullet}=F_{\bullet\text{KFAC}}(I-E)\), we have
\[F_{\bullet}^{-1}=(I-E)^{-1}F_{\bullet\text{KFAC}}^{-1}=(I+E+\ldots+E^{q-1}+ \ldots)F_{\bullet\text{KFAC}}^{-1}. \tag{3.34}\]
The formal series expansion in the last equality rests upon the intuition that \(E\) measures the approximation quality of \(F_{\bullet}^{-1}\) by \(F_{\bullet\text{KFAC}}^{-1}\) and therefore can be assumed to be small. Multiplying both sides by the residual \(r_{\text{KFAC}}\) and stopping the expansion at order \(q-1\geq 0\), we obtain the approximation
\[(I+E+\ldots+E^{q-1})F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}} \tag{3.35}\]
for the error \(F_{\bullet}^{-1}r_{\text{KFAC}}=\zeta-\zeta_{\text{KFAC}}\), which is also the ideal correction term. As earlier, we impose that this approximate correction vector (3.35) must be contained in the coarse space \(R_{0}^{T}\). This suggests to extract the components in layer \(i\) of the vectors
\[\big{\{}F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}},\ EF_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}},\ \ldots,\ E^{q-1}F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}}\big{\}}\]
and assign them to the columns of \(V_{i}\). In view of (3.33), the space spanned by the above vectors is the same as the one spanned by
\[\big{\{}F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}},\ (F_{\bullet\text{KFAC}}^{-1}F_{ \bullet})F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}},\ \ldots,\ (F_{\bullet\text{KFAC}}^{-1}F_{\bullet})^{q-1}F_{ \bullet\text{KFAC}}^{-1}r_{\text{KFAC}}\big{\}}.\]
Consequently, we can take
\[m_{1}=\ldots=m_{\ell}=q,\qquad m=q\ell, \tag{3.36}\]
and
\[V_{i}=[w_{1}[i],\;w_{2}[i],\;\ldots,\;w_{q}[i]]\in\mathbb{R}^{p_{i} \times m_{i}} \tag{3.37}\]
where
\[w_{1}=F_{\bullet\,\mathrm{KFAC}}^{-1}r_{\mathrm{KFAC}}\in\mathbb{R}^{p}, \qquad w_{j+1}=F_{\bullet\,\mathrm{KFAC}}^{-1}F_{\bullet\,}w_{j}\in\mathbb{R}^ {p}, \tag{3.38}\]
for \(1\leq j\leq q-1\). The case \(q=1\) degenerates to the residuals coarse space. From (3.38), we see that upgrading to the next order is done by multiplying by \(F_{\bullet}\), an operation that mixes the layers.
For the practical implementation of these coarse spaces, we need efficient computational methods for two essential building blocks, namely, the matrix-vector product \(F_{\bullet}u\) and the coarse operator \(F_{\text{coarse}}\). These will be described in Appendix A.
### Pseudo-code for two-level KFAC methods
Algorithm 1 summarizes the steps for setting up a two-level KFAC method.
```
Input: \(\theta_{0}\) (initial point), \(k_{\mathrm{max}}\) (maximum number of iterations), \(\alpha\) (learning rate)
Output: \(\theta_{k_{\mathrm{max}}}\)
for \(k=0,1,\ldots,k_{\mathrm{max}}-1\) do
    \(\bullet\) Compute an estimate \(\nabla_{\theta}h(\mathcal{S}_{k},\theta_{k})\) of the gradient on a mini-batch \(\mathcal{S}_{k}\) randomly sampled from the training data;
    \(\bullet\) Compute \(\zeta_{\mathrm{KFAC}}=F_{\bullet\,\mathrm{KFAC}}^{-1}\nabla_{\theta}h(\mathcal{S}_{k},\theta_{k})\);
    \(\bullet\) Choose a coarse space \(R_{0}^{T}\) and compute the associated coarse correction \(R_{0}^{T}\beta^{*}=R_{0}^{T}(F_{\mathrm{coarse}})^{-1}R_{0}r_{\mathrm{KFAC}}\);
    \(\bullet\) Compute \(\zeta_{\mathrm{KFAC-2L}}=\zeta_{\mathrm{KFAC}}+R_{0}^{T}\beta^{*}\);
    \(\bullet\) Update \(\theta_{k+1}=\theta_{k}-\alpha\,\zeta_{\mathrm{KFAC-2L}}\);
end for
```
**Algorithm 1** High-level pseudo-code for a two-level KFAC method
## 4 Numerical results
In this section, we compare the new two-level KFAC methods designed in §3 with the standard KFAC [18, 31] from the standpoint of convergence speed. For a thorough analysis, we also include the two-level KFAC version of Tselepidis et al. [50] and baseline optimizers (ADAM and SGD).
We run a series of experiments to investigate the optimization performance of deep auto-encoders, CNNs, and deep linear networks. Since our primary focus is on convergence speed rather than generalization, we shall only be concerned with the ability of optimizers to minimize the objective function. In particular, we report only training losses for each optimizer. To equally treat all methods, we adopt the following rules.
We perform a Grid Search and select hyper-parameters that give the best reduction to the training loss. Learning rates for all methods and damping parameters for KFAC and two-level KFAC methods are searched in the range
\[\{10^{-4},\,10^{-3},\,10^{-2},\,10^{-1},\,10^{0},\,10^{1},\,10^{2},\,10^{3},\,10^{ 4}\}.\]
For each optimizer, we apply the Early Stopping technique with a patience of 10 epochs (i.e., we stop training the network when there is no decrease in the training loss during 10 consecutive epochs). We also include weight decay with a coefficient of \(10^{-3}\) for all optimizers.
All experiments presented in this work are performed with PyTorch framework [39] on a supercomputer with Nvidia Ampere A100 GPU and AMD [email protected] CPU. For ease of reading, the following table explains all abbreviations of two-level KFAC methods that we will use in the figure legends.
### Deep auto-encoder problems
The first set of experimental tests performed is the optimization of three different deep auto-encoders, each trained with a different dataset (CURVES, MNIST, and FACES). Note that due to the difficulty of optimizing the underlying networks, these three auto-encoder problems are commonly used as benchmarks for evaluating new optimization methods in the deep learning community [7, 22, 27, 31, 48]. For each problem, we train the network with three different batch sizes.
Figure 2 shows the obtained results. The first observation is that, as expected, natural gradient-based methods (KFAC and two-level KFAC methods) outperform baseline optimizers (ADAM and SGD). The second and most important observation is that, for each of the three problems, regardless of the batch size, the training curve of KFAC and those of all two-level KFAC methods (the one of Tselepidis et al. [50] and those proposed in this work) are overlaid, which means that taking into account the extra-diagonal terms of the Fisher matrix through two-level decomposition methods does not improve the convergence speed of KFAC method. This second observation is quite puzzling, since theoretically two-level methods are supposed to offer a better approximation to the exact natural gradient than KFAC does and therefore should at least slightly outperform KFAC in terms of optimization performance. Note that we repeated these experiments on three different random seeds and obtained very similar results.
These surprising results are in line with the findings of Benzing [6], according to which KFAC outperforms the exact natural gradient in terms of optimization performance. This suggests that extra-diagonal blocks of the FIM do not contribute to improving the optimization performance, and sometimes even affect it negatively.
| Optimizer | Name abbreviation |
| --- | --- |
| Two-level KFAC with Nicolaides coarse space | NICO |
| Two-level KFAC with spectral coarse space | SPECTRAL |
| Two-level KFAC with residuals coarse space | RESIDU |
| Two-level KFAC with Krylov Nicolaides coarse space | KRY-NICO |
| Two-level KFAC with Krylov residuals coarse space | KRY-RESIDU |
| Two-level KFAC of Tselepidis et al. [50] | PREVIOUS |

Table 1: Name abbreviations of two-level KFAC optimizers.
### Convolution neural networks
The second set of experiments concerns the optimization of three different CNNs, namely Resnet 18 [19], Cuda-convnet and Resnet 34 [19]. We consider in particular Cuda-convnet, which is the architecture used to evaluate the original KFAC method in [18]. It must be mentioned that it contains 3 convolution layers and one MLP layer. We train Cuda-convnet on the CIFAR10 dataset [23] with a batch size equal to 256, and Resnet 18 on CIFAR100 [23] with a batch size equal to 128. Finally, we train Resnet 34 on the SVHN dataset [34] with a batch size equal to 512.
For these CNNs (see Figure 3), we arrive at quite similar observations and conclusions to those we mention for deep auto-encoder problems. In particular, like in [50], when considering CNNs, we do not observe any significant gain in the convergence speed of KFAC when we enrich it with cross-layer information through two-level decomposition methods. Once again, these results corroborate the claims of Benzing [6] and suggest that we do not need to take into account the extra diagonal blocks of the FIM.
Figure 2: Comparison of KFAC against two-level KFAC methods on the three deep auto-encoder problems (CURVES **top** row, MNIST **middle** row and FACES **bottom** row). Three different batch sizes are considered for each problem (each column corresponds to a different batch size).
### Deep linear networks
The last experiments concern relatively simple optimization problems: the optimization of linear networks. We consider two deep linear networks. These tests are motivated by the results obtained by Tselepidis et al. [50] for their two-level method. Indeed, for an extremely simple linear network with 64 layers (each layer contains 10 neurons and a batch normalization layer) trained with randomly generated input vectors of size ten, they outperform KFAC in terms of optimization performance. Here, we first consider the same architecture but train the network on the Fashion MNIST dataset [52] (since we could not use the same dataset). Then, we consider another linear network that contains 14 layers with batch normalization, this time with much larger layers. More precisely, we consider the following architecture: \(784-1000-900-800-700-600-500-400-300-200-100-50-20-10\). We train this second network on the MNIST dataset. Both networks are trained with a batch size of 512.
Figure 4 shows the training curves obtained in both cases. Here, as in [50], we observe an improvement in the optimization performance of two-level optimizers over KFAC. However, this gain remains small and only concerns simple linear networks that are not used in practical applications. We therefore do not encourage enriching KFAC with two-level methods, which require additional computational costs.
Figure 4: Optimization performance evaluation of KFAC and two-level KFAC optimizers on two different deep linear networks.
Figure 3: Optimization performance evaluation of KFAC and two-level KFAC methods on three different CNNs.
### Verification of error reduction for linear systems
In the above experiments, two-level methods do not seem to outperform KFAC in terms of optimization performance. We thus wish to check that, at each descent iteration, the negative increment \(\zeta_{\text{KFAC-2L}}\) obtained with the coarse correction is indeed closer to the regularized natural gradient increment \(\zeta\) than the negative increment \(\zeta_{\text{KFAC}}\) corresponding to the original KFAC is. In other words, we want to make sure that inequality (3.19) holds numerically.
For \(\beta\in\mathbb{R}^{m}\), let
\[\mathfrak{E}(\beta)=\|\zeta_{\text{KFAC}}+R_{0}^{T}\beta-\zeta\|_{F_{\bullet}}^ {2} \tag{4.1}\]
be the function to be minimized at fixed \(R_{0}^{T}\) in the construction (3.9)-(3.10), where it is recalled that
\[\zeta=F_{\bullet}^{-1}\nabla_{\theta}h,\qquad\zeta_{\text{KFAC}}=F_{\bullet\,\text{KFAC}}^{-1}\nabla_{\theta}h.\]
Note that
\[\mathfrak{E}(0)=\|\zeta_{\text{KFAC}}-\zeta\|_{F_{\bullet}}^{2} \tag{4.2}\]
is the squared \(F_{\bullet}\)-distance between the KFAC increment and that natural gradient one, regardless of \(R_{0}^{T}\). Meanwhile, if \(\beta^{*}\) is taken to be the optimal value (3.11), then
\[\mathfrak{E}(\beta^{*})=\|\zeta_{\text{KFAC-2L}}-\zeta\|_{F_{\bullet}}^{2}. \tag{4.3}\]
To see whether (3.19) is satisfied, the idea is to compute the difference \(\mathfrak{E}(\beta^{*})-\mathfrak{E}(0)\) and check that it is negative. The goal of the game, however, is to avoid using the unknown natural gradient solution \(\zeta\). Owing to the identity \(\|a\|^{2}-\|b\|^{2}=(a-b,a+b)\) for the \(F_{\bullet}\)-dot product, this difference can be transformed into
\[\mathfrak{E}(\beta^{*})-\mathfrak{E}(0) =\|\zeta_{\text{KFAC-2L}}-\zeta\|_{F_{\bullet}}^{2}-\|\zeta_{ \text{KFAC}}-\zeta\|_{F_{\bullet}}^{2}\] \[=(\zeta_{\text{KFAC-2L}}-\zeta_{\text{KFAC}},\,\zeta_{\text{KFAC-2L }}+\zeta_{\text{KFAC}}-2\zeta)_{F_{\bullet}}\] \[=\|\zeta_{\text{KFAC-2L}}-\zeta_{\text{KFAC}}\|_{F_{\bullet}}^{2} +2(\zeta_{\text{KFAC-2L}}-\zeta_{\text{KFAC}},\,\zeta_{\text{KFAC}}-\zeta)_{F _{\bullet}}\] \[=\|R_{0}^{T}\beta^{*}\|_{F_{\bullet}}^{2}+2(R_{0}^{T}\beta^{*},\, \zeta_{\text{KFAC}}-\zeta)_{F_{\bullet}}\] \[=\big{\langle}F_{\bullet}R_{0}^{T}\beta^{*},R_{0}^{T}\beta^{*} \big{\rangle}+2\big{\langle}R_{0}^{T}\beta^{*},\,F_{\bullet}(\zeta_{\text{KFAC }}-\zeta)\big{\rangle}, \tag{4.4}\]
where \(\langle\cdot,\cdot\rangle\) denotes the Euclidean dot product. But
\[F_{\bullet}(\zeta_{\text{KFAC}}-\zeta)=F_{\bullet}\zeta_{\text{KFAC}}-\nabla _{\theta}h=-r_{\text{KFAC}} \tag{4.5}\]
is the opposite of the residual (3.13), which can be computed without knowing \(\zeta\). Finally, the desired difference can also be computed as
\[\mathfrak{E}(\beta^{*})-\mathfrak{E}(0)=\big{\langle}R_{0}F_{\bullet}R_{0}^{ T}\beta^{*},\,\beta^{*}\big{\rangle}-2\big{\langle}R_{0}^{T}\beta^{*},\,r_{\text{KFAC}} \big{\rangle}. \tag{4.6}\]
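In code, this diagnostic requires only quantities already available during the two-level iteration; a small sketch under the same matrix-free conventions as before:

```python
import numpy as np

def error_gap(beta_star, R0, apply_F, residual):
    """Compute E(beta*) - E(0) via (4.6); a negative value means the coarse correction
    brings the increment closer to the natural gradient solution in the F-norm."""
    v = R0.T @ beta_star                        # the coarse correction R_0^T beta*
    return v @ apply_F(v) - 2.0 * (v @ residual)
```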
For the two-level method of Tselepidis-Kohler-Orvieto [50], the correction reads
\[\zeta_{\text{TKO}}=\zeta_{\text{KFAC}}+R_{0}^{T}\beta_{\text{TKO}}^{*}\] (4.7a) with \[\beta_{\text{TKO}}^{*}=(R_{0}F_{\bullet}R_{0}^{T})^{-1}R_{0}\nabla_{\theta}h \tag{4.7b}\]
instead of \(\beta^{*}\), the KFAC-2L value (3.11). The difference \(\mathfrak{E}(\beta_{TKO}^{*})-\mathfrak{E}(0)\) is then given by a formula similar to (4.6) in which \(\beta^{*}\) is simply replaced by \(\beta_{\text{TKO}}^{*}\).
We compute the gap \(\mathfrak{E}(\beta^{*})-\mathfrak{E}(0)\) associated with the various two-level methods in the experiments conducted above. More specifically, we do it for the three deep auto-encoder problems and also for a CNN (Cuda-convnet). The results obtained are shown in Figure 5. The observation is that all two-level methods proposed in this work, as well as the TKO two-level method [50], exhibit a negative gap \(\mathfrak{E}(\beta^{*})-\mathfrak{E}(0)\) throughout the optimization process. This implies that two-level methods solve the linear system (3.2) more accurately than KFAC does. It also means that the approximate natural gradients obtained with two-level methods are closer to the exact natural gradient than the one obtained with KFAC is.
Figure 5: Evolution of \(\mathfrak{E}(\beta^{*})-\mathfrak{E}(0)\) during training for each of the two-level methods considered. All methods proposed in this work as well as the TKO two-level method [50] have the gap \(\mathfrak{E}(\beta^{*})-\mathfrak{E}(0)\) negative throughout the training process.
## 5 Conclusion and discussion
In this study, we sought to improve KFAC by incorporating extra-diagonal blocks using two-level decomposition methods. To this end, we proposed several two-level KFAC methods, with a careful design of coarse corrections. Through several experiments, we came to the conclusion that two-level KFAC methods do not generally outperform the original KFAC method in terms of optimization performance of the objective function. This implies that taking into account the interactions between the layers is not useful for the optimization process.
We also numerically verified that, at the level of the linear system of each iteration, the increment provided by any two-level method is much closer to the exact natural gradient solution than that obtained with KFAC, in a norm naturally associated with the FIM. This reveals that closeness to the exact natural gradient does not necessarily result in a more efficient algorithm. This observation is consistent with Benzing's previous claim [6] that KFAC outperforms the exact natural gradient in terms of optimization performance.
The fact that incorporating extra-diagonal blocks does not improve or often even hurts the optimization performance of the initial diagonal approximation could be explained by a negative interaction between different layers of the neural network. This suggests ignoring extra-diagonal blocks of the FIM and keeping the block-diagonal approximation, and if one seeks to improve the block-diagonal approximation, one should focus on diagonal blocks as attempted in many recent works [7; 13; 15; 22].
It is worth pointing out that the conclusion of Tselepidis et al. [50] on the performance of their proposed two-level method seems a little hasty. Indeed, the authors only ran two different experiments: the optimization of a CNN and of a simple linear network. For the CNN, they did not observe any improvement. For the linear network, they obtained some improvement in the optimization performance. Their conclusion is therefore based on this single observation.
Finally, we recall that as is the case for almost every previous work related to natural gradient and KFAC methods [6; 7; 18; 31], the one undertaken in this paper is limited to the optimization performance of the objective function. It will thus be interesting to investigate the generalization capacity of these methods (including KFAC). Since the study of generalization requires a different experimental framework [6; 54; 55], we leave it as a prospect. Our findings and those of Benzing [6] imply that it can be interesting to explore the use of even simpler approximations of the FIM. More precisely, after approximating the FIM by a block diagonal matrix as in KFAC, one can further approximate each full diagonal block by an inner sub-blocks diagonal matrix (see for instance [4]). This approach will save computational time and probably maintain the same level of optimization performance.
|
2309.12293 | Taxonomy for Physics Beyond Quantum Mechanics | We propose terminology to classify interpretations of quantum mechanics and
models that modify or complete quantum mechanics. Our focus is on models which
have previously been referred to as superdeterministic (strong or weak),
retrocausal (with or without signalling, dynamical or non-dynamical),
future-input-dependent, atemporal and all-at-once, not always with the same
meaning or context. Sometimes these models are assumed to be deterministic,
sometimes not, the word deterministic has been given different meanings, and
different notions of causality have been used when classifying them. This has
created much confusion in the literature, and we hope that the terms proposed
here will help to clarify the nomenclature. The general model framework that we
will propose may also be useful to classify other interpretations and
modifications of quantum mechanics. This document grew out of the discussions
at the 2022 Bonn Workshop on Superdeterminism and Retrocausality. | Emily Adlam, Jonte R. Hance, Sabine Hossenfelder, Tim N. Palmer | 2023-09-21T17:54:31Z | http://arxiv.org/abs/2309.12293v2 | # Taxonomy for Physics Beyond Quantum Mechanics
###### Abstract
We propose terminology to classify interpretations of quantum mechanics and models that modify or complete quantum mechanics. Our focus is on models which have previously been referred to as superdeterministic (strong or weak), retrocausal (with or without signalling, dynamical or non-dynamical), future-input-dependent, atemporal and all-at-once, not always with the same meaning or context. Sometimes these models are assumed to be deterministic, sometimes not, the word deterministic has been given different meanings, and different notions of causality have been used when classifying them. This has created much confusion in the literature, and we hope that the terms proposed here will help to clarify the nomenclature. The general model framework that we will propose may also be useful to classify other interpretations and modifications of quantum mechanics.
This document grew out of the discussions at the 2022 Bonn Workshop on Superdeterminism and Retrocausality.
## 1 Introduction
Quantum mechanics, despite its experimental success, has remained unsatisfactory for a variety of reasons, notably its tension with locality. Different authors have formulated their unease with quantum mechanics in different ways. Analysing the origin of this unease is not the purpose of this present article. The purpose is instead to sort out the confusion in the terminology used to describe this unease.
We will below introduce a framework to distinguish between interpretations of quantum mechanics and modifications thereof. Our hope is that it may offer researchers a guide to classify their own approach. Our aim here is not to judge the promise or correctness of any of these approaches, but to make it easier to communicate among each other about what we are working on in the first place. An accompanying paper [1] will discuss some applications of this terminology to showcase its use.
This paper is organised as follows. In the next section we will outline the general model framework and most importantly introduce different types of models. We believe that this classification of what we even mean by a model in and by itself will serve to alleviate the confusion of nomenclature. We will then, in Section 3 and 4 introduce some properties that such models typically have. In Section 5, we will briefly explain how to use this classification scheme, and then we conclude in Section 6. A list of acronyms can be found in the appendix.
## 2 General Model Framework
We will start with explaining the general framework that we want to use in the following, so that we can distinguish between different types of models and their properties.
### Calculational Models
At its most rudimentary level, the task of physics is to provide a useful method for calculating predictions (or postdictions) for observables in certain scenarios. A scenario is most often an experiment, and this is the situation we are typically concerned with in quantum foundations. But in some areas of physics--such as cosmology and astrophysics--one does not actually conduct experiments, but observes instead a natural variety of instances that occurred in the past. For this reason, we will use the more neutral term "scenario." A scenario is not itself a mathematical structure; it is the real-world system that we want to describe using mathematical structures.
The basic logical flow of such a calculation is outlined in Fig. 1. We will call the centerpiece of this calculation a **calculational model**, or **c-model** for short. We do not call it a "computational model" because "computational" suggests a numerical simulation, which is an impression we want to avoid. A calculational model could be numerical, but it could also be purely analytic. For more on the relation between calculational models and computer models, see Section 2.6.
We will in the following use a notation in which curly brackets \(\{\cdot\}\) denote sets of mathematical assumptions. The set \(\overline{\{\cdot\}}\) is the set of all assumptions that can be derived from \(\{\cdot\}\) (including the elements of \(\{\cdot\}\) themselves).
The different components of the modelling framework have the following properties:
**Inputs (of a calculational model):** The inputs \(\mathcal{I}\) of a calculational model are an (at most countably infinite) set of mathematical assumptions--each of which is an input, \(I_{i}\)--that describes the scenario. To be part of the inputs, an assumption must differ between at least two scenarios. We will denote this set as \(\mathcal{I}:=\{I_{i}\}\).
Loosely speaking, you can think of the inputs as the mathematical representation of a scenario. A typical example might be the temperature in the laboratory, or the frequency of a laser. But the inputs of a model do not necessarily have to correspond to definite observable properties of the scenario. They could also be expressing ignorance about certain properties of the
scenario and thus be random variables with probability distributions, or they might indeed be entirely unobservable. We will come back to this point in Section 3.1.
The inputs of a c-model are often assignments of values to variables. For example, a c-model may work with an unspecified variable \(x\) that is an element of some space. The input may then assign the value \(x=3\) to this variable for a specific scenario. For such an input, we will refer to the value it assigns as the **input value**. We want to stress that the input value is not the same as the input. The input is "\(x=3\)", the input value is "\(3\)".
A typical example of a c-model that we are all familiar with is the harmonic oscillator. In this case, the model would be the differential equation \(m\ddot{x}=-kx\), and a complete set of inputs would be value assignments for \(k\) and \(m\), plus two initial values for, say, \(x(t=0)\) and \(\dot{x}(t=0)\).
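To make this division of labour concrete, here is a minimal sketch in Python (an illustration of ours, not part of any established formalism; the function name and the scenario values are invented):

```python
import math

def harmonic_oscillator(k, m, x0, v0):
    """Calculational model: the scenario-independent assumption m*x'' = -k*x,
    used here in its solved form."""
    omega = math.sqrt(k / m)
    # The returned trajectory x(t) is an output: it follows from the model
    # together with its inputs, but from neither in isolation.
    return lambda t: x0 * math.cos(omega * t) + (v0 / omega) * math.sin(omega * t)

# Inputs: value assignments that describe one particular scenario.
inputs = {"k": 4.0, "m": 1.0, "x0": 1.0, "v0": 0.0}

x = harmonic_oscillator(**inputs)
print(x(0.0), x(math.pi / 2))  # observables read off from the output
```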
But inputs of a calculational model are not necessarily value assignments. They might also be constraint equations, or boundary conditions, or something else entirely. For example, when studying stellar equilibrium, it is quite common to enter an equation of state as the input to the Tolman-Oppenheimer-Volkoff (TOV) equations. In this case, the TOV equations are the same for all stellar objects that one may want to consider; they are hence part of the model. The equation of state, on the other hand, changes from one type of star to another; it is hence part of the input.
While our main interest is in models that describe the real world, it is also possible to study a model's properties with scenarios that do not exist in reality. We will refer to those as **hypothetical scenarios**. They include, but are not limited to, counterfactual realities, as well as universes with different constants of nature.
**Calculational model:** A set \(\mathcal{C}\) of mathematical assumptions \(A_{x}\) that are independent of the scenario. We will denote this set as \(\mathcal{C}:=\{A_{x}\}\).
**Setup:** The union of a calculational model and its inputs. \(\mathcal{F}:=\mathcal{I}\;\cup\;\mathcal{C}\).
**Outputs (of a calculational model):** All mathematical statements that can be deduced from the setup, but from neither the model nor its inputs in isolation: \(\mathcal{O}=\overline{\mathcal{F}}\setminus(\overline{\mathcal{I}}\;\cup\;\overline{\mathcal{C}})\).
Figure 1: The logical flow of the model framework.
Predictions for observables are obtained from the outputs of the model. However, not all outputs of a model need to be observable. The prime example is quantum mechanics, where the outputs contain the time-evolution of the wavefunction, but the wavefunction itself is not observable. But there are many other examples, such as the production of gravitational waves by a black hole merger. Given suitable inputs, the model (General Relativity) will output a mathematical description for the creation and propagation of gravitational waves, but we only measure their arrival on Earth, and only measure that through the waves' influence on matter, which is a small part of all the model outputs. And like the inputs, the outputs of a model do not necessarily have to result in definite values for observables; they could instead merely give rise to probability distributions for values of observables.
For a calculational model to be useful, we further need a prescription to encode a scenario with specific inputs, and a way to identify the observables from the outputs. This identification, since it is not restricted to the mathematical world, is not one that we can strictly speaking define. Science, after all, is not just mathematics. What property this identification with the real world must have is an interesting question, but it is one of many that we will not address here, because it is not relevant for what follows.
When using hypothetical scenarios, these have to be distinguished from the setup as a matter of definition. If the hypothetical scenarios are not identified as such by definition, it becomes impossible to tell whether one changes the hypothetical scenario or the model.1
Footnote 1: An example for this is the case of the Standard Model with its fundamental constants not fixed but taken as variable inputs. This cannot be a correct model for scenarios in our universe because in our universe the constants are constant, hence they can’t be inputs. One can, however, consider hypothetical scenarios with different values of the constants, often interpreted as a type of multiverse. But of course one could alternatively interpret these hypothetical scenarios as different versions of the Standard Model that, alas, happen to not agree with our observations. The point is that whether the Standard Model with fundamental constants that don’t agree with our observations describes a hypothetical alternative universe, or is just a wrong model for our universe, is a matter of definition.
Another property, relevant for our purposes, that we want the setup of a calculational model to have is: That it is **irreducible**, in the sense that we cannot split the combined set \(\mathcal{F}:=\{I_{j},A_{x}\}\) into two sets \(\mathcal{F}_{1}:=\{I^{1}_{j^{1}},A^{1}_{x^{1}}\}\) and \(\mathcal{F}_{2}:=\{I^{2}_{j^{2}},A^{2}_{x^{2}}\}\), where \(\{I_{j}\}=\{I^{1}_{j^{1}}\}\;\cup\;\{I^{2}_{j^{2}}\}\) and \(\{A_{x}\}=\{A^{1}_{x^{1}}\}\;\cup\;\{A^{2}_{x^{2}}\}\), so that both \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) are each setups of calculational models and the combination of their outputs is the same as the outputs of \(\mathcal{F}\). A simpler way to put this is that, if we split the setup of an irreducible model into two, some of the output can no longer be calculated. This approach is reminiscent of identifying particles in quantum field theory from the irreducible representations of the Poincaré group.
We need this requirement because otherwise we could just join different setups to form a new one, which would make it impossible to classify any. Note that this does not mean the composition of two setups is not a setup. On the contrary, a composition of two setups _is_ a setup, it's just that the combined setup is no longer irreducible. The issue is, if we allowed reducible combinations of setups, then we could not meaningfully assign properties to them. It would be like asking which fruit is fruit salad. Once we have however succeeded in identifying properties of irreducible setups, we can join those with the same properties together, and meaningfully assign the same property to the reducible setup. Using the above fruit example, if we have
identified several fruits as apples, we can join them and be confident we have apple salad.
A particularly relevant case of a reducible setup is one in which some inputs or assumptions can be removed without changing anything about the outputs. This may be because the assumptions are not independent (in the sense that some can be derived from the others), or because an assumption is simply not used to calculate the outputs.
One might be tempted to add to the requirements of a model that its assumptions are consistent. However, as is well known, for recursively enumerable sets, Gödel's theorem [2] tells us that we cannot in general prove the consistency of the assumptions. We might then try to settle on the somewhat weaker requirement that at least the assumptions should not be known to be inconsistent. However, it sometimes happens in physics that a model works well in certain parameter ranges despite being inconsistent in general. An example may be the Standard Model without the Higgs field and its boson [3]. For this reason, we will here not add any requirement about consistency. One may justify this by taking a purely instrumental approach: we only care whether a set of assumptions is any good at describing observations.
The setup of a calculational model whose inputs are all value assignments that, alas, have not been assigned is what is usually called a model in the causal models literature [4, 5].
### Mathematical Models
We defined a calculational model and its inputs as sets of mathematical assumptions. Such sets can be expressed in many different ways that are mathematically equivalent. We will call an equivalence class of calculational models a **mathematical model, or m-model** for short. We will denote this equivalence class with \([\cdot]_{\mathrm{m}}\) and refer to it as an **m-class**; it is a set of sets. For example, if \(\mathcal{C}=\{A_{x}\}\) is a calculational model, then \(\mathcal{M}:=[\mathcal{C}]_{\mathrm{m}}\) is the m-model that encompasses all mathematically equivalent formulations
\[[\mathcal{C}]_{\mathrm{m}}:=\{\{B_{y}\}:\{B_{y}\}\Leftrightarrow\{A_{x}\}\}\;. \tag{1}\]
Calculational models in the same m-class will be called **m-equivalent**. By a mathematical equivalence of the sets ("\(\Leftrightarrow\)"), we here mean that either one can be derived from the other. We thereby adopt the notion of model equivalence advocated by Glymour [6, 7].2
Footnote 2: Glymour uses the term “theory equivalence”, not “model equivalence”. This is not a relevant distinction for the purposes of this present paper: please see section 2.5 for discussion.
It was argued by Weatherall [8, 9] that mathematical equivalence might over-distinguish models in certain circumstances, and that it might be better to use a weaker form of structural equivalence based on category theory. We do not use this proposal of categorial equivalence here, not because we object to it, but because it is neither widely accepted nor understood. Most importantly, it is at present not practical because few physicists would know how to apply it. Mathematical equivalence, in contrast, is widely understood and comparably straightforward (though not necessarily easy) to prove.
Among the models we will discuss here, mathematical models are closest to what physicists typically mean by a "model". They do not distinguish between the particular mathematical formulations that one might use to make a calculation. For example, we could express Maxwell's Equations using differential forms with \(\mathrm{d}\star,\mathrm{d}\wedge\) and \(\star\mathrm{d}\) operations, or we could do it with the more old-fashioned \(\vec{\nabla},\vec{\nabla}\cdot\) and \(\vec{\nabla}\times\) operators. These two definitions would be two different calculational models, but the same mathematical model. A similar equivalence covers switching from the Schrödinger picture to the Heisenberg picture in quantum mechanics. We believe that most physicists would not call these different models.
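To spell out the Maxwell example, schematically and in natural units (leaving sign and unit conventions aside), the two representations

\[\mathrm{d}F=0\,,\qquad\mathrm{d}\star F=\star J\,,\]

and

\[\vec{\nabla}\cdot\vec{E}=\rho\,,\quad\vec{\nabla}\times\vec{B}-\partial_{t}\vec{E}=\vec{J}\,,\quad\vec{\nabla}\cdot\vec{B}=0\,,\quad\vec{\nabla}\times\vec{E}+\partial_{t}\vec{B}=0\,,\]

are derivable from one another once the field strength \(F\) is identified with \((\vec{E},\vec{B})\); they are hence two calculational models within the same m-class.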
The inputs of a mathematical model are likewise an equivalence class: It is the class of all sets of assumptions which change with the scenario that, together with the mathematical model, produce equivalent outputs.
The inputs of a mathematical model are usually not just the set of assumptions that are mathematically equivalent to the inputs of one of the calculational models. This is because the assumptions of the model itself may render certain inputs equivalent that are not equivalent without the model. An example that will be relevant for what follows is that a model with a time-reversible evolution operator will produce identical outputs for input states at times \(t_{1}\) and \(t_{2}>t_{1}\), if the input state at \(t_{2}\) is the forward evolution of the state from \(t_{1}\). In this case the input is m-equivalent, though the inputs at different times are not equivalent to each other without the model.
We will refer to different calculational models as **representations** of their m-class.
**Representation (of an m-model):** A calculational model in the m-class of the c-model.
In [10], the same concept was referred to as a "reformulation".
### Physical Models
The previous definitions do the heaviest lifting for the model framework, and the remaining ones are now straightforward. It can happen that mathematical models are different, but they nevertheless give rise to the same observables for all scenarios. We will combine all such mathematical models into yet another, even bigger class and say they constitute the same **physical model, or p-model** for short. We will denote this equivalence class with \([\cdot]_{\mathrm{p}}\) and refer to it as the **p-class**. Note that we could take either the p-equivalence class of a calculational model or that of a mathematical model. Models in the same p-class will be called **p-equivalent**.
By calling these models physical, we do not mean to imply that we only consider observable properties as physically real. The reader should not take our nomenclature to imply any statement about realism or empiricism. We call them physical just because they describe what physicists in practice can distinguish with physical measurements.
This now gives us a way to define what we mean by an interpretation:
**Interpretation (of a p-model):** All mathematical models in the p-class.
Note that according to this nomenclature, each interpretation has its own representations. That is, a c-model is a representation of an m-class, but more specifically it is a representation of a particular interpretation.
For example, if we take the Copenhagen Interpretation (CI) in one of its common axiomatic formulations, that set of axioms will constitute a particular representation, hence, a c-model. The Copenhagen Interpretation per se is the class of all mathematically equivalent models. We can then further construct the physical equivalence class of the Copenhagen Interpretation, which we will in the following refer to as Standard Quantum Mechanics (SQM:=[CI]\({}_{\rm p}\)). Any other mathematical model that is physically but not mathematically equivalent to the Copenhagen Interpretation is also an interpretation of SQM.
For the purposes of this paper we will not need to specify exactly what we mean by Copenhagen Interpretation. However, we will take it to be only a first-quantised theory. That is, it is not a quantum field theory. If one were to specify further details, one would in particular have to decide whether one considers the relativistic version, or only the non-relativistic case, as some alternative interpretations work only non-relativistically [11].
That we have interpretations and not just representations is a possibility which arises in quantum mechanics because it has outputs that are unobservable in principle. A different calculational model which affects only unobservable outputs might not be mathematically equivalent and yet give rise to the same physics.
### Empirical Models
Finally, we define an **empirical model, or e-model**, \(\mathcal{E}\), as the class of all physical models that cannot be distinguished by observations so far. We will denote this class as \([\cdot]_{\rm e}\) and refer to it as an **empirical class or e-class**. Models in the same e-class are **empirically equivalent or e-equivalent**. We will call anything in the same empirical class as SQM, but not in the same physical class, a modification of quantum mechanics:
**Modification or Completion or Extension (of a p-model):** Another p-model that is not physically equivalent but so far empirically equivalent.
Fig. 2 summarises the relations between these various types of models.
We here use the terms modification, completion, and extension loosely, and will not distinguish them below because it is not necessary for what follows. In the literature these terms are used with a somewhat different meaning. A modification or extension of a model typically employs similar mathematical structures, whereas a completion introduces new mathematical structures. However, the distinction between the two is not always clear. Again, just how to distinguish them is an interesting question in its own right, but not one we will address here.
With a slight abuse of terminology that is however unlikely to cause misunderstandings, we can also use the above definition to refer to an m-model or a c-model as a modification. That is, an m-model (c-model) is a modification of another m-model (c-model) if the two are not physically equivalent, but so far empirically equivalent.
One could further refine the class of empirically equivalent models by specifying various sets of experiments. For example, we could ask what models can be distinguished by experiments in the near future, and what exactly we mean by near future. Or we could talk about
models that can be distinguished by experiment in principle, and then discuss whether, say, waiting one hundred billion years is possible in principle. Or whether it is possible in principle to make measurements inside a black hole, and so on. All of these are very interesting questions; however they will not concern us in the following, so we will not elaborate on them.
### Theories
We will in the following not distinguish between models and theories. Loosely speaking, a theory is a class of models that can be used for a large number of scenarios, congruent with the semantic approach of Suppes [12] and van Fraassen [13]. However, physicists don't tend to use the terms theory and model in a well-defined way.
For example, we use the terminology of Quantum Field Theory in general, and the Standard Model in particular. We refer to General Relativity as a theory, and to \(\Lambda\)CDM as a model. This agrees with what we said above. But we also refer to Fermi's theory of beta decay, and the BCS theory, though those would better be called models. To make matters worse, we also sometimes call supersymmetry a theory, when it's really a property of a class of m-models, and so on. For the purposes of this paper, we will not need to distinguish theories from models, so we will not bother with a precise definition of the term "theory."
### Computer Models and Simulations
Another type of model that is commonly used by scientists of all disciplines is the computation performed on a computer. There are two ways to think of these models.
One is that they are really simulations that represent one real-world system with another
Figure 2: A mathematical model \(\mathcal{M}_{ijk}\) is an equivalence class of calculational models \(\mathcal{C}_{ijkl}\). A physical model \(\mathcal{P}_{ij}\) is an equivalence class of mathematical models \(\mathcal{M}_{ijk}\). An empirical model \(\mathcal{E}_{i}\) is an equivalence class of physical models \(\mathcal{P}_{ij}\).
real-world system. That is to say, they are not models in the sense that we have considered here--the models we considered here are mathematical. A suitably programmed computer is instead a physical stand-in for the system one wants to make a prediction for. This is also how quantum simulations work [14, 15].
However, there is another way to look at computational models, which is to use their program as a definition for a calculational model. This then falls into the classification scheme we discussed above. But in this case, the computational model is typically not mathematically equivalent to the analytical, calculational model that one used for a scenario. This is because computers are in almost all cases digital, and only approximate the continuum. The exceptions are certain types of analog computers, which, however, are better understood as simulations again.
That is to say, when one wants to classify a computer model according to our scheme, one should take its algorithmic definition as a set of assumptions, and use that to define a calculational model and its inputs. This calculational model, defined by its executable algorithm, will be different from the calculational model that uses an analytic expression.
(One can then further ask to what accuracy the algorithm, when executed on a physical computer, will approximate the output of the analytical model. This is a relevant point which was previously brought up in [16]. While an analytically defined calculational model might be time-reversible, an algorithmic approximation of it run on a computer will in general no longer be time-reversible. This is because errors will build up differently depending on which direction of time one runs the algorithm in. This is particularly obvious for time-evolutions which are chaotic. In such cases, the forward-evolution and the backward-evolution of the algorithm as executed on the computer will actually be two different calculational models, and both are different from the analytical calculational model that they approximate.)
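A minimal sketch of this effect (a toy example of ours, using the Arnold cat map, which is chaotic and exactly invertible in exact arithmetic):

```python
def cat_map(p, inverse=False):
    """Arnold cat map on the unit torus. In exact arithmetic the map is exactly
    invertible; its floating-point implementation is not, because each step
    introduces small rounding errors."""
    x, y = p
    if not inverse:
        return ((2 * x + y) % 1.0, (x + y) % 1.0)
    return ((x - y) % 1.0, (2 * y - x) % 1.0)

p = start = (0.123456789, 0.987654321)
for _ in range(60):
    p = cat_map(p)                  # run the algorithm "forward"
for _ in range(60):
    p = cat_map(p, inverse=True)    # then run it "backward"
print(start)
print(p)  # rounding errors, amplified by the chaotic dynamics, spoil the return trip
```

The forward and backward passes, as executed in floating point, thus define two different calculational models, and neither coincides with the exact map they approximate.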
## 3 General Model Properties
In this section, we will discuss some properties that we can assign to models and their inputs. The point of this section is to specify which properties are m-class properties (do not change with mathematical redefinitions), and which are c-class properties (can change with mathematical redefinition).
We want to stress that we will _not_ prove that these assignments are correct. To do that, we would have to add many more assumptions (e.g., about what we mean by space, and time, and measurements, and so on). What we will do instead is specify what we believe is the way that an expression has been previously dominantly used in the literature, and this specification will then in turn imply properties for the concepts we did not specify.
To make this procedure clear, if we state for example that the property of "time-reversibility" does not change with a mathematical redefinition, then this implicitly means: If it did change, we would not refer to the property as time-reversibility. That is, there are certain properties of models which we want to not depend on just exactly how we write down the mathematics.
Quite possibly, some readers will disagree with some of our assessments. This is fine.
Our aim here is not to end the discussion, but to propose a language in which we can talk about these properties in the first place.
The term "model" without further specification (c/m/p/e) will refer to any type of model.
### Atemporal Properties
Following the terminology of Cavalcanti and Wiseman [17], we will call an m-model "deterministic" if its observables can be calculated with certainty from the outputs obtained from the setup:
**Deterministic:** An m-model is deterministic if the probabilities for predictions of observables are all either 0 or 1.
This property was termed "holistic determinism" in [18] and differs from a more common definition of determinism, that connects one moment in time with a later one. We will elaborate on this in the next subsection. For now, we further follow [17] and distinguish determinism from predictability:
**Predictable:** An m-model is predictable if it is deterministic, and the inputs are all derived from observable properties of the scenario.
Both determinism and predictability are m-model properties, because we expect a redefinition of the mathematics that one uses to process inputs to not change predictions for observable output. If it did, we would assume something is wrong with our idea of what is observable.
The distinction between deterministic and predictable is that a model may have inputs that are unknowable in principle. Typically these are value assignments for variables that are usually referred to as "hidden variables". A model with such hidden variables, according to the above terminology, may be deterministic and yet unpredictable.
**Hidden Variables:** Hidden variables, that we will denote \(\kappa\), are input values to a c-model that cannot be inferred from any observation on the scenario.
We want to stress that these hidden variables are in general not localised and sometimes not even localisable in any meaningful way. We will say more about localisable variables in the next subsection, but a simple example is that we could use Fourier-transforms of space-time variables, or just extensive quantities that are properties of volumes. Note that hidden variables are defined for c-models, not for m-models. This is because hidden variables can be redefined into (non-localised) random variables with an equivalent mathematical outcome. That is to say, mathematically it makes no difference whether a variable was unknowable or indeed random.
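As a toy illustration of this last point (entirely our own construction, not any established hidden-variable model): a c-model whose outcome is a deterministic function of a detector setting and a hidden variable \(\kappa\) yields, after marginalising over \(\kappa\), the same observable statistics as a c-model that simply declares the outcome random:

```python
from fractions import Fraction

def outcome(setting, kappa):
    """Deterministic but unpredictable: the outcome is fixed once kappa is
    fixed, but kappa cannot be inferred from any observation on the scenario."""
    return +1 if kappa < setting else -1

def observable_statistics(setting, kappa_values):
    """Marginalising over a uniform, unknowable kappa reproduces the statistics
    of a model with an intrinsically random outcome."""
    results = [outcome(setting, k) for k in kappa_values]
    p_plus = Fraction(results.count(+1), len(results))
    return {+1: p_plus, -1: 1 - p_plus}

kappa_values = [Fraction(i, 10) for i in range(10)]   # the hidden-variable "states"
print(observable_statistics(Fraction(3, 10), kappa_values))  # +1 with probability 3/10, -1 with 7/10
```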
While it may sound confusing at first to distinguish determinism from predictability, it will be useful in what follows. Indeed, the reader may have recognised that Bohmian Mechanics is deterministic yet not predictable. In Bohmian Mechanics, observables can be calculated with certainty if the inputs are specified, yet the inputs are also assumed to be partly unobservable in
principle, so predictions can still not be made with certainty. Since determinism is a property of an m-model that cannot be removed with a re-definition, it follows that Bohmian Mechanics is not a representation of SQM. We elaborate on this further in the companion paper [1].
### Local and Temporal Properties
We will now add some properties of models that we commonly use in physics, starting with those that refer to locations in space and time. Clearly we can only speak of locations in space and time if a model has some notion of space and time, and a distance measure on them, to begin with. In the simplest case, that would be the usual space-time manifold and its metric distance. But it could alternatively be some other structure that performs a similar function, such as a lattice, or a discrete network with a suitably defined metric.
To make sense of space and time we will in the following consider only a restricted m-class associated with a calculational model, that is one in which space and time are consistently defined throughout the entire class. To see why this is necessary, imagine if we were to redefine one direction of "space" as "time" and vice versa. This is mathematically possible, but it makes no physical sense. It would screw up any notion of locality and causality just by nomenclature. We will hence assume that space and time and a distance measure on them are consistently defined throughout the m-class.
However, sometimes a model just does not have a description of space-time or a notion of locality. This might sound odd, but is not all-that-uncommon in the foundations of quantum mechanics, where many examples deal with qubit states that do not propagate and are not assigned space-time locations [19]. We will refer to such models as alocal:
**Alocal:** An m-model that does not have a well-defined notion of space, space-time, locality, or propagation.
One cannot fix a lack of definition by switching to a different but mathematically equivalent definition, so if a c-model is alocal, then so will be any other c-model in its m-class.
For alocal models, we cannot make any statements about whether they are local or not. Similarly, a model may just not have a notion of time or a time-evolution. This again is not all that uncommon in the foundations of quantum mechanics. Many elaborations on correlations and classicality bounds, for example, do not specify a time-evolution for states; and the process matrix formalism is specifically designed to study possible quantum processes without a predefined time order [20, 21].
**Atemporal:** An m-model that does not have a well-defined notion of time.
For models which have a proper notion of space-time, and a distance measure on it, we then want to identify how local they are. For this, we first have to identify the input values that can be assigned to space-time regions, for which Bell coined the term "local beables" [22]. According to Bell, local beables "can be assigned to some bounded space-time region". We will take "assigned to" to
mean that the variable is the value of a function whose domain is space-time, or whatever the stand-in for space-time is in the model at hand.
For example, if space-time is parameterised by coordinates \((\vec{x},t)\), and we have a function \(f:(\vec{x},t)\to\mathbb{C}\), then \(f(\vec{x},t)\) is a local beable. The domain of the SQM wavefunction is generically configuration space and it is therefore not a local beable, though under certain circumstances local beables can be obtained from it (such as the single-particle wavefunction in position space). Local beables are not necessarily observable.
Bell's definition of a local beable makes the assignment to a space-time region optional ("can be assigned"). However, if the assignment was optional, then it could be omitted, and a model with an assumption that can be omitted is reducible. Since we only deal with irreducible models, we therefore already know that if a variable is assigned to a space-time region, then this assignment is actually necessary. For this reason we define
**Local Beable:** An input value of a c-model that is assigned to a compact region of space-time.
In the following, \(\mathcal{I}(A)\) will refer to the local beables in space-time region \(A\).
It becomes clear here why we required our models to be irreducible: so that it is actually necessary, and not just possible, to assign the local beables to space-time regions. It has for example been rediscovered a few times independently [23, 24] that any theory can be made local (in pretty much any reasonable definition of the term) by copying the local beables from any space-time location into any other space-time location and postulating them to "be there" without any other consequences.
Concretely, suppose we have localised variables \(f(x,t)\) over a space-time manifold \(M\). Then we can just define the state \(f(x,t)\otimes f(x^{\prime},t^{\prime})\in M\otimes M\) and call that a local beable at \((x,t)\); the information about any \((x^{\prime},t^{\prime})\) is now also available at the same point. Depending on perspective, one might consider such a model as either ultra-local or ultra-nonlocal. It is ultra-local in that we do not need to know the state at any other location than \((x,t)\) to calculate the time-evolution. It is ultra-nonlocal because any point in space-time contains a copy of the entire universe.
A similar problem would occur with parameters, such as \(\hbar\), that could be transformed into fields \(\hbar:(x,t)\to\delta(x-x^{\prime})\delta(t-t^{\prime})\) and then be used as inputs with random locations, leading to seemingly 'non-local' interactions in the Hamiltonian.
Such procedures to localise beables create new c-models that are in different m-classes, because the copying procedure is an additional assumption that could not have been derived from the original version of the c-model. Since such a localising assumption is unnecessary to calculate outputs, the setup of such a model is reducible and, as we have previously remarked, properties of reducible setups are ambiguous.
A word of caution is in order here. As mentioned above in the elaboration on alocality, many discussions of violations of Bell's inequalities do not explicitly state locality assumptions. These assumptions might hence appear unnecessary. This is indeed correct if one merely looks at the inequality. However, if one wants to draw conclusions about locality from a violation of
Bell's inequality, one needs to at least make a statement about which parts of the experiment are causally connected or space-like separated. That is, we suspect that many alocal models do implicitly require locality statements to arrive at the desired conclusions. One should not be deceived by only looking at the explicitly stated assumptions.
Having introduced local beables, we can now define an alternative:
**All-At-Once (AAO) input:** Input of a c-model that constrains or defines properties of local beables for at least two temporal regions, and that cannot be decomposed into inputs in separate temporal regions.
Typical examples of such inputs are event relations, consistency requirements for histories, evolution laws, temporal boundary conditions, or superselection rules. Something as mundane as an integral over time that depends on the scenario would also be an all-at-once input.
**All-At-Once (AAO) model:** A c-model that uses all-at-once input.
The use of all-at-once input is _a priori_ a property of c-models. That is to say, it might be possible to reformulate a model with AAO input into a mathematically equivalent model that does not have this property.
The principle of least action in classical mechanics, for example, uses All-At-Once input (the action), but given that the Lagrangian fulfills suitable conditions, the principle can equivalently be expressed by the Euler-Lagrange equations which do not require AAO input. Another example may be the cosmological model introduced in [25], which uses a constraint on the space-time average of the Lagrangian density. The authors refer to this as non-local, and in some sense it is, but it is more importantly also an all-at-once input.
Now that we have localised variables, we can define a notion of locality. We will first leave aside causality and introduce a notion of Continuity of Action, as done in [16]. The idea is that if one wants to calculate the outputs \(\mathcal{O}(A)\) for a region \(A\), then it is sufficient to have all the information on a closed hypersurface, \(S_{1}\), surrounding the region, and adding local beables from another region outside \(S_{1}\) provides no further information.
**Continuity of Action (CoA, locality):** An m-model has continuous action or fulfills Continuity of Action or is local if it fulfils the condition \(P(\mathcal{O}(A)|\mathcal{I}(S_{1}),\mathcal{I}(B))=P(\mathcal{O}(A)| \mathcal{I}(S_{1}))\) for any region \(S_{1}\) that encompasses \(A\) but not \(B\) (see Fig. 3.)
**Non-locality:** An m-model is non-local if it violates CoA.
Note that fulfilling CoA does not mean that the outputs in \(A\) are determined by the input values on \(S_{1}\) to begin with. It's just that adding information from \(B\) doesn't make a difference.
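A toy numerical check of this condition (our own construction; the distributions are invented for illustration) compares the prediction for an output at \(A\) conditioned on the surface \(S_{1}\) alone with the prediction conditioned on \(S_{1}\) and the distant region \(B\):

```python
from itertools import product
from collections import defaultdict

def joint(local):
    """Toy joint distribution over (output at A, beable on S1, beable at B),
    with the beables on S1 and B independent and uniform. In the 'local' version
    the output at A depends only on S1; in the other it also depends on B."""
    dist = defaultdict(float)
    for o_a, s1, b in product([0, 1], repeat=3):
        p_oa = 0.9 if o_a == s1 else 0.1
        if not local:
            p_oa = 0.9 if o_a == (s1 ^ b) else 0.1
        dist[(o_a, s1, b)] = 0.25 * p_oa
    return dist

def p_out_given(dist, s1, b=None):
    """P(output at A = 1 | beables), conditioning on S1 and optionally also on B."""
    keep = {k: v for k, v in dist.items() if k[1] == s1 and (b is None or k[2] == b)}
    total = sum(keep.values())
    return sum(v for k, v in keep.items() if k[0] == 1) / total

for local in (True, False):
    d = joint(local)
    print("local" if local else "non-local",
          p_out_given(d, s1=1), p_out_given(d, s1=1, b=0), p_out_given(d, s1=1, b=1))
# local:     conditioning on B changes nothing      (0.9, 0.9, 0.9) -> CoA holds
# non-local: conditioning on B changes the result   (0.5, 0.9, 0.1) -> CoA violated
```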
A natural question to ask here is just how small the regions should be for the model to have continuous action. This is a most excellent question but luckily one that we do not need to answer because it resides on the level of empirical adequacy. A model in which variables cannot be localised sufficiently much will simply not reproduce certain observations. (For example, a model with a spatial resolution so rough that it cannot distinguish two detectors from each other
can only give a total probability for both, not for each separately.) The typical size of the regions is what is often referred to as the coarse-graining scale of a model.
A subtlety of the definition for Continuity of Action was pointed out in [26], which is that some combinations of inputs may be mathematically possible, but are not in the physical state space of the model and hence do not correspond to any scenario which can occur in reality. This might happen for example because certain combinations of variables are just forbidden by a mathematical constraint (as happens with the Fermi exclusion principle). In this case, one might have a situation where e.g. \(P(\mathcal{I}(S_{1}),\mathcal{I}(B))=1\) and \(P(\mathcal{I}(S_{1}),\mathcal{I}(B)^{\prime})=0\) just because the latter combination is incompatible with the assumptions of the model [27]. It would then seem that \(P(\mathcal{O}(A)|\mathcal{I}(S_{1}),\mathcal{I}(B))\neq P(\mathcal{O}(A)|\mathcal{I}(S_{1}),\mathcal{I}(B)^{\prime})\) and CoA is violated. However, as argued in [26], it is rather meaningless to talk about violations of locality for configurations which do not physically exist. Hence, if a model has input constraints, one should only apply the locality requirement to inputs that lead to physically possible scenarios.
We define CoA as an m-model property because, if it could be removed with a mathematical redefinition, we believe most physicists would not accept the definition as meaningful.
The term "non-locality" has been used to refer to many other definitions in the foundations of physics in general, and quantum mechanics in particular. For example, in field theories, non-locality often refers to dynamical terms of higher order, a definition that is also used in General Relativity [28]. In quantum field theories, non-locality usually refers to the failure of operators to commute outside the light cone. Even in quantum foundations, non-locality may refer to different properties. For example, entanglement itself is sometimes considered non-local despite being locally created [29]. The latter in particular has created a lot of confusion, because while it has been experimentally shown that entanglement is a non-local correlation and does in fact exist, this does not imply that nature is non-local in the sense that the term has been used in Bell's theorem, which has of course not been shown [30].
Surveying all those different notions of non-locality is beyond the scope of this present work. However, we want to stress that as definitions, neither of these notions of non-locality are wrong. They are just that: definitions. We chose our definition to resemble closely the "spooky action at a distance" that Einstein worried about.
Figure 3: Continuity of Action.
According to our definition, a calculational model fulfills CoA even if it can't be directly evaluated whether it fulfils the requirement on the conditional probabilities, so long as a mathematically equivalent reformulation of the model fulfills it. The most relevant example for our purposes is that \([\mathrm{CI}]_{\mathrm{m}}\) does not fulfil CoA. This is because making a measurement in \(B\) provides information about the measurement outcome in \(A\) that was not available in \(S_{1}\). This is Einstein's "spooky action at a distance".
That is, with the terminology we have introduced so far, the Copenhagen Interpretation and all mathematically equivalent formulations are equally non-local. The question is then merely whether this is something that we should worry about. If one can understand the wavefunction as an epistemic state (a state of knowledge), then its non-local update is not _a priori_ worrisome.
Continuity of Action, loosely speaking, means that localisable variables can only influence their nearest neighbours. However, it does not require that this influence lies within the light cones. To arrive at a stronger condition, we will therefore now, as is customary, assign the light-cones and their insides to a space-time region, \(A\). We will denote with \(L(A)\) the union of all space-time points that are light-like or time-like related to any point in \(A\).
We can then cut out regions from the closed surface \(S_{1}\) using the light-cones, \(S_{2}:=S_{1}\cap L(A)\), and arrive at a stronger version of Continuity of Action:
**Strong Continuity of Action (locally subluminal):** An m-model has Strong Continuity of Action if local beables outside the forward and backward light-cones of a region play no role for calculating outputs in that region: \(P(\mathcal{O}(A)|\mathcal{I}(S_{2}),\mathcal{I}(B))=P(\mathcal{O}(A)|\mathcal{I}(S_{2}))\;\;\forall\;S_{2}\) that do not overlap with the light cones of \(B\), i.e. \(S_{2}\cap L(B)=\emptyset\) (see Fig. 4).
And correspondingly,
**Weak Continuity of Action (locally superluminal):** An m-model has Weak Continuity of Action if it fulfils Continuity of Action but not Strong Continuity of Action.
Weak Continuity of Action, roughly speaking, means that influences happen locally, but sometimes superluminally. What we call Strong Continuity of Action was called Einstein locality in
Figure 4: Strong Continuity of Action.
[31]. Local and non-local models can further be distinguished into those which are superluminal and subluminal. It is rather uncommon to consider subluminal non-locality, but it will be helpful in what follows to clearly distinguish non-locality from superluminality. We can define subluminal non-locality by requiring that the only local beables necessary to find out what happens at \(A\) are those within or on the light-cones of \(A\), and that adding the light-cones of \(B\) and their insides brings no further information.
**Non-locally subluminal:** An m-model is non-locally subluminal if it is non-local, but local beables outside the light-cones of a region play no role for calculating outputs in that region: \(P(\mathcal{O}(A)|\mathcal{I}(L(A)),\mathcal{I}(L(B)))=P(\mathcal{O}(A)|\mathcal{I}(L(A)))\).
And consequently
**Non-locally superluminal:** An m-model is non-locally superluminal if it is non-local, but not non-locally subluminal.
For a summary of those four terms, see Figure 5.
We should also mention here that strong CoA is a weaker criterion than confining CoA inside the light cones, because of the requirement that one only considers regions \(S_{2}\) which do
Figure 5: Difference between local, non-local, and superluminal.
not overlap with the light cones of region \(B\). The reason for this requirement--as noted by Bell already--is that otherwise the outputs in \(B\) might well provide additional information that correlates with the inputs from \(S_{2}\) (and hence the outputs in \(A\)) without influences ever leaving the light-cones (see Appendix for explanation). But of course, if one extends the light cones of regions \(A\) and \(B\) far enough, they will always overlap. This means that one can always try to explain violations of Strong Continuity of Action by locating an origin of correlations between \(A\) and \(B\) in the overlap of the light cones.3 We will come back to this later.
Footnote 3: This is often called a common “cause”. However, all that is required here is a correlation, not a causation.
### Properties of Temporal Order
As with the local and temporal properties, to talk about temporal order, a model needs to have been provided with additional information. We need not only a notion of time, but also an arrow of time that allows us to distinguish past and future. This arrow of time is often not explicitly defined but implicitly assumed. Typically it comes in because we assume that an experimenter can choose a setting in the future, but not one in the past. That is, the arrow of time sneaks in with what we consider to be a possible scenario.
Since an arrow of time is _a priori_ a matter of definition, we have to specify that this definition has to be consistent for all mathematically equivalent expressions of a setup.
There are many notions of arrows of time that have been discussed in the literature [32, 33] and our aim here is not to unravel this discussion. We will merely note that a model needs to have such a notion for the following properties to be well-defined.
Like before, it is possible to have a model that just does not have a temporal order, or that does not distinguish past and future. Indeed, this is the case for many of the simplest models that we deal with, such as an undamped harmonic oscillator, or the two-body problem in Newtonian gravity.
**Acausal:** An m-model is acausal if it does not have a well-defined arrow of time, and hence no notion of past or future.
An atemporal model is also acausal--one cannot have an arrow of time without having time to begin with. This feature is exhibited in the process matrix formalism, which can even be used to model processes for which it is impossible to specify a well-defined arrow of time [20, 21]. One might plausibly argue that a model which isn't time-reversible necessarily has an arrow of time, and hence can't be acausal, but then we didn't say who or what defines that arrow of time, so we cannot draw this conclusion.
As stressed earlier, some properties of models only make sense with an arrow of time that orders times. We now come to the first of those:
**Temporally Deterministic**: A c-model is temporally deterministic if it is both deterministic, and has an arrow of time, according to which calculating localisable output values at time \(t^{\prime}\) does not require inputs that are local beables at \(t>t^{\prime}\). An m-model is temporally deterministic if at least one of its c-models is.
This is a complicated way of saying that, in a temporally deterministic model, a future state can be calculated with certainty from a past state, but not necessarily the other way round. This notion of temporal determinism is what is often meant by the term 'deterministic'. Note that a temporally deterministic model might have other inputs besides value-assignments for local beables. In particular, a model with all-at-once inputs may still be temporally deterministic.
We have defined temporal determinism of an m-model from the requirement that at least one of its c-models has that property, because temporal determinism is easy to remove by redefining all variables so that they mix different times, or using (partially) time-like boundary conditions.
In the Copenhagen Interpretation (CI), so long as no measurement occurs, the state of the wavefunction at time \(t_{a}\) is temporally determined by the state of the wavefunction at time \(t_{b}\neq t_{a}\) and the Hamiltonian operator. The state of the wavefunction after measurement, on the other hand, is generically not determined by the state of the wavefunction before measurement. Hence, the CI (c-model) is not temporally deterministic.
It does not follow from this that Standard Quantum Mechanics (SQM), which we defined as \([\text{CI}]_{\text{p}}\), is also not temporally deterministic. However, this is nevertheless the case because--as we saw earlier--SQM is not deterministic to begin with, so it cannot be temporally deterministic either.
For temporal models we can further define:
**Time-reversible**: An m-model is time-reversible if it is both temporally deterministic, and also deterministic under the replacement \(t\to-t\), where \(t\) measures time.
As mentioned earlier, this definition implicitly makes a statement about what properties we expect of time, and hence cannot stand on its own. That is not its purpose. Its purpose is rather to capture what properties physical notions like time and time-reversibility should have.
Time-reversibility should not be confused with invariance under time-reversal, which is a stronger requirement, but one that we will not consider here. Just because a model is time-reversible, does not mean that its time-reversed version is the same as the original.
Next we recall a term previously introduced in [16]:
**Future Input Dependence (FID)**: A c-model has a FID if, to produce output for time \(t\), it uses local beables from at least one time \(t^{\prime}>t\) for at least one scenario.
FID is a property of the setup of the c-model, and may simply be a matter of choice. For example, in any time-reversible c-model, one can replace a future input with a past input and get the same outputs. We define it here because it was previously used in [16] and we want to make a connection to this definition below. However, we also want to define a somewhat-stronger property:
**Future Input Requirement (FIR)**: A c-model has a FIR if there is at least one scenario for which producing outputs for time \(t\) requires inputs from \(t^{\prime}>t\). An m-model has a FIR if all c-models in its equivalence class have a FIR.
Another way to phrase this is that a c-model with a FIR has at least one scenario for which the input cannot be entirely chosen in the past. A c-model with a FIR cannot be temporally
deterministic. However, a model that is not temporally deterministic does not necessarily have an FIR.
A well-known example for a Future Input Requirement is a space-time that is not globally hyperbolic, in which case the initial value problem is ill-posed. The time-evolution can then not be calculated unless further input is provided about what will happen.
Another example might be a model which evaluates possible policy paths to limit global temperature increase to certain goals within a certain period of time (say, 2, 3, or 4\({}^{\circ}\)C by 2050). Such a model requires specifying the desired boundary condition in the future. Of course in this case one is dealing with hypothetical scenarios, but the example illustrates that Future Input Requirements are used in practical applications.
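In the spirit of the last example, here is a minimal sketch (our own toy construction, with invented numbers) of a calculation whose early-time outputs cannot be obtained without an input that refers to the future:

```python
def linear_reduction_path(e0, budget, years):
    """Toy pathway model: find the constant annual reduction r such that
    emissions e(t) = e0 - r*t sum to the prescribed budget over the horizon.
    The budget is an input about the future that is needed to compute the
    outputs already at early times."""
    # sum_{t=0}^{years-1} (e0 - r*t) = years*e0 - r*years*(years-1)/2 = budget
    r = (years * e0 - budget) / (years * (years - 1) / 2)
    return [e0 - r * t for t in range(years)]

path = linear_reduction_path(e0=40.0, budget=600.0, years=30)
print(path[1], path[15], path[-1])  # early-time outputs already depend on the 30-year budget
```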
### Properties of Causal Order
For models with a temporal order, we can distinguish the past/backward and the future/forward light-cones of a region \(A\), that we will denote \(L_{\mathrm{p}}(A)\) and \(L_{\mathrm{f}}(A)\), respectively. It is \(L(A)=L_{\mathrm{p}}(A)\cup L_{\mathrm{f}}(A)\), and we define \(S_{3}:=S_{1}\cap L_{\mathrm{p}}(A)\), the intersection of the shielding region \(S_{1}\) with the past light-cone. With this, we can further refine Continuity of Action to a notion of local causality:
**Local Causality:** An m-model is locally causal if it fulfils Strong Continuity of Action with local beables localised in the past light cone: \(P(\mathcal{O}(A)|\mathcal{I}(S_{3}),\mathcal{I}(B))=P(\mathcal{O}(A)|\mathcal{I}(S_{3}))\;\;\forall\;S_{3}\) that do not overlap with the light cones of \(B\), i.e. \(S_{3}\cap L(B)=\emptyset\) (see Fig. 6).
We want to stress that this notion of local causality is based on the mathematical structure of space-time, and so mixing it with other notions of causality can result in confusion. In particular, space-time causality is not _a priori_ the same notion of causality that is used in large parts of the philosophical literature, which concerns itself with the question of cause and effect (one currently popular realisation of which is the one based on causal models, that we will refer to as interventionist causality [4, 5]).
Figure 6: Local causality.
According to space-time causality, of two causally related events, the one in the past is the cause by definition. One thus has to be careful when interpreting local causality using other notions of causality. This is especially important to keep in mind when we classify retrocausality.
If one interprets "causal" as referring to space-time causality, then the term "retrocausal" suggests influences that go inside the light cones, but backwards in time. The term retrocausal, however, does not necessarily imply locality. For example, the Transactional Interpretation [34, 35], often referred to as retrocausal, is also non-local.
We will hence proceed by endowing the classification of the four local properties, summarised in Figure 5, with an additional temporal distinction that is either forward in time or backward in time, according to the arrow of time that we have assumed exists. We would like to stress that since we have an arrow of time, we can meaningfully distinguish forward and backward in time also outside the light-cones. The reader may want to think of the arrow of time as a preferred slicing of space-time. This will give us a total of 8 distinctions. It is then straightforward to define:
**Local Retrocausality:** An m-model is locally retrocausal if it fulfils Strong Continuity of Action and has a future input requirement.
**Non-local causality:** An m-model is non-locally causal if it is non-local, subluminal and has no future input requirement.
**Non-local Retrocausality:** An m-model is non-locally retrocausal if it is non-local, subluminal and has a future input requirement.
As we noted already, the term "causality" might either refer to the light-cone structure of space-time or to interventionist causality. However, since the term "local causality" is extremely widely used and refers to space-time causality, we will use the term "retrotemporal" to refer to models with a future input requirement that do not respect the light-cone structure. This gives us the remaining four definitions:
**Locally Superluminal Retrotemporal:** An m-model is locally superluminal retrotemporal if it fulfils Weak Continuity of Action and has a future input requirement.
**Locally Superluminal Temporal:** An m-model is locally superluminal temporal if it fulfils Weak Continuity of Action but has no future input requirement.
**Non-Locally Superluminal Retrotemporal:** An m-model is non-locally superluminal retrotemporal if it is non-locally superluminal and has a future input requirement.
**Non-Locally Superluminal Temporal:** An m-model is non-locally superluminal temporal if it is non-local and superluminal, but has no future input requirement.
The above eight terms are summarized in Fig. 7.
Since one can combine forward and backward causes to a zigzag [36], a locally retrocausal model might appear to be superluminal. However, whether such combinations are possible depends on the model.
The above types of retrocausality and retrotemporality are properties of the mathematical structure: a future input requirement cannot be removed by a mathematical redefinition, and it is therefore a property of the m-model rather than of any one representation. An example of a locally retrocausal model might be General Relativity in space-times that have closed time-like curves.
However, there is a weaker notion of retrocausality that concerns the use of a c-model rather than the mathematical structure of the associated m-model. We can again distinguish four cases, which are the same as above, except that instead of a future input requirement, we merely have a future input dependence:
**Local Pseudo-Retrocausality:** A c-model is locally pseudo-retrocausal if it fulfils Strong CoA, and has a future input dependence.
One can similarly define the terms Non-Local Pseudo-Retrocausality, Locally Superluminal Pseudo-Retrotemporality, and Non-Locally Superluminal Pseudo-Retrotemporality, by taking the definition without the "Pseudo" and replacing the future input requirement with a future input dependence.
The reason we use the prefix "pseudo" is because according to our earlier definition a future input dependence is a matter of choice. It can be replaced with a past input, at least in principle.
Figure 7: Types of retrocausality.
This does not mean that a future input dependence is unimportant, however. This is because it could be that removing the future input greatly increases the complexity of using the model. That is, future input dependence is linked to the question of whether a model is fine-tuned, which will be further explored in the companion paper [1].
Causal and retrocausal non-locality can only be distinguished in the presence of an arrow of time, which for all practical purposes defines a space-time slicing. The same is the case for superluminal causal and superluminal retrocausal models--absent an arrow of time, they cannot be told apart, since Lorentz-transformations can convert one into the other.
We want to stress that pseudo-retrocausality is a property of a c-model, but not a property of an m-model. A retrocausal c-model cannot be temporally deterministic. A pseudo-retrocausal c-model, in contrast, can be temporally deterministic. It follows that temporal determinism is a simple way to tell pseudo-retrocausality from retrocausality. Note that pseudo-retrocausality was referred to just as retrocausality in [16]. The practical use of pseudo-retrocausality will be discussed in the accompanying paper [1].
The reader who at this point despairs over the many different types of retrocausality will maybe understand now why the literature on the topic is so confusing, and hopefully also why a common terminology is necessary.
Some properties about causal structure just come from the definition of the arrow of time. In particular, we have to distinguish oriented and non-oriented arrows of time. A non-oriented arrow of time is one that allows forming loops in time (Fig. 8):
**Dynamical Retrocausality:** An m-model is dynamically retrocausal if its retrocausality is due to a non-oriented arrow of time.
Dynamical retrocausality may or may not be due to a space-time structure with closed time-like curves. It is important to emphasise that dynamical retrocausality is a property of the model, and not a property of the way the model uses inputs. Depending on how the arrow of time is defined, it may not be particularly meaningful. What makes an arrow of time meaningful, however, is not a question we want to unravel here.
Just for completeness, we also define:
Figure 8: Orientable and non-orientable arrows of time.
**Counter-Causal:** An m-model is counter-causal if it is locally causal after replacing \(t\to-t\).
A model with such a property would make one strongly suspect that the arrow of time was just defined the wrong way round to begin with. However, possibly there were other reasons to define an arrow of time that way.
A general comment is in order here, which is that the term "retrocausal" is somewhat linguistically confusing. It does not so much refer to causes generally going backwards, but rather to them sometimes going against an arrow of time that was defined from something else. That is, it is really a mix of different directions of time that mark a retrocausal model, the already mentioned property that has previously been referred to as the possibility of zigzags in space-time [36]. Note that the zigzag property itself is defined against the presumed-to-exist GR arrow of time.
### Agent-based Properties
Bell [22] further introduces local beables that are **controllable**, and those which are **not controllable**, a distinction that we will also use below. This notion is somewhat vague, but for our purposes we do not need a precise definition. We will take a controllable local beable to be one whose value can in practice be set by the action of an experimentalist--typically this is a detector setting. For a more mathematically minded definition, see [37]. Note that for a local beable to be controllable, an experimenter need not have free will, whatever that might mean, and they also do not have to control the local beable themselves; it could be done by some kind of apparatus.
If controllable input is correlated with observable output, we will speak of signalling. Signalling is particularly interesting if it is outside the forward light cones.
**Superluminal Signalling:** A c-model allows superluminal signalling if it is superluminal and has controllable inputs which are local beables in a region \(A\) that are correlated with observable outputs that are local beables in a region outside \(L(A)\). An m-model allows superluminal signalling if at least one c-model in its class does.
SQM is non-local but does not allow superluminal signalling [38].
Since the time-order of space-like separated events can change under Lorentz-transformations, superluminal signalling is usually assumed to imply the possibility of signalling back in time. However, non-local signalling does not necessarily imply the possibility of signalling back in time when one has a time-order given by an arrow of time. In General Relativity, for example, this may be a time-like vector field. One can then constrain non-local signals to only be forward in time relative to the vector field. For this reason, the two phenomena--signalling non-locally and signalling back in time--are distinct.
Let us then move on to signals that actually travel back in time:
**Retrocausal Signalling:** A c-model allows retrocausal signalling if it is retrocausal and there is at least one scenario for which observables at time \(t\) are correlated with
required controllable inputs from \(t^{\prime}>t\). An m-model allows retrocausal signalling if at least one c-model in its equivalence class does.
This retrocausal signalling could be either local or non-local. As in the non-signalling case discussed in Section 3.4, causal and retrocausal non-local signalling can only be distinguished in the presence of an arrow of time, and the same is the case for temporal and retrotemporal superluminal signalling, since Lorentz-transformations can mix both cases.
The problem with retrocausal signalling is that the observables at time \(t\) are, well, observable. If they can be affected by input at a later time \(t^{\prime}>t\), then the result may disagree with what one had already observed. This is what opens the door to causal paradoxes.
We will not introduce a notion of pseudo-retrocausal signalling, as that would be a technically possible definition, but rather oxymoronic. If a future input was removable and therefore not necessary for an earlier observable, then no signal was sent (though in such a case an agent might still have an illusion of signalling).
## 4 Specific Model Properties
In this section we will now introduce properties that are specific to models typically used in the foundations of quantum mechanics.
Using the terminology of the previous sections, we can summarise the key conundrum of standard quantum mechanics (SQM) by saying that models mathematically equivalent to the Copenhagen Interpretation, \([\mathrm{CI}]_{\mathrm{m}}\), do not fulfil Strong Continuity of Action. However, SQM also has the odd property that observables generically do not have definite values before a measurement. This opens the possibility that one may still find a physical or empirical equivalent to the Copenhagen Interpretation that fulfils Strong Continuity of Action.
Of course, one may be interested in interpretations or modifications of quantum mechanics for other reasons. For example, one may want to find a realist interpretation, or to return to determinism. However, the models we are here mainly concerned with are those which reinstate Strong Continuity of Action. Such models usually introduce new input variables. Using the terminology of [39], we will call them additional variables:
**Additional Variables:** Input values for an interpretation or modification of \([\mathrm{CI}]_{\mathrm{m}}\) that have no equivalent expression in \([\mathrm{CI}]_{\mathrm{m}}\).
Additional variables are not necessarily localised, or even localisable, and they are not necessarily hidden, though all of that might be the case. E.g., in Bohmian Mechanics, particle positions are localisable, localised, and hidden. Bohmian Mechanics, however, as we noted previously, does not fulfil Continuity of Action.
One might worry here that since the variables are "additional", they are unnecessary to produce the same output as SQM, and therefore just make a model's setup reducible. However, this does not have to be the case, because one normally introduces the additional variables to remove another assumption from \([\mathrm{CI}]_{\mathrm{m}}\). This is typically the measurement update postulate, since it is the one that leads to a violation of Strong Continuity of Action [40].
The purpose of additional variables reveals itself when one interprets the violation of Strong Continuity of Action in the Copenhagen Interpretation as being due to the epistemic character of the wavefunction. One assumes that the real (ontic) state is not the wavefunction, but one that respects Continuity of Action, one just does not know which ontic state one is dealing with until a measurement was made. Quantum mechanics, in this picture, is just an incomplete description of nature.
We know from Bell's theorem that all such ensemble models with additional variables which determine the measurement outcome will violate an assumption to this theorem commonly known as statistical independence [41]:
**Statistical Independence:** An m-model fulfils statistical independence if \(P(\lambda|X,Y)=P(\lambda)\), where \(\lambda\) is (a set of) local beables on \(S_{3}\) and \(X\) and \(Y\) quantify the settings of two detectors in regions \(A\) and \(B\) (compare with Fig. 6).
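As a concrete illustration of this condition, the following minimal sketch (not part of the paper; the toy probability table and variable names are made up purely for illustration) checks whether a discrete joint distribution over a hidden variable \(\lambda\) and two binary detector settings \(X\), \(Y\) satisfies \(P(\lambda|X,Y)=P(\lambda)\):

```python
import numpy as np

# Toy joint distribution P(lambda, X, Y): axis 0 = lambda (3 values), axes 1, 2 = settings X, Y (2 values each).
P = np.array([[[0.10, 0.05], [0.05, 0.10]],
              [[0.05, 0.10], [0.10, 0.05]],
              [[0.10, 0.05], [0.05, 0.10]]])
P = P / P.sum()

P_lam = P.sum(axis=(1, 2))                 # marginal P(lambda)
P_XY = P.sum(axis=0)                       # marginal P(X, Y)
P_lam_given_XY = P / P_XY[None, :, :]      # conditional P(lambda | X, Y)

# Statistical independence requires P(lambda | X, Y) = P(lambda) for every setting pair.
print(np.allclose(P_lam_given_XY, P_lam[:, None, None]))   # False: this particular toy table violates SI
```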
From this we can tell that all local interpretations or modifications of quantum mechanics can be classified by the ways in which they violate statistical independence4. We will hence refer to them all as SI-violating models.
Footnote 4: It is sometimes argued that the Many Worlds Interpretation is a counterexample to this claim [42]. However, as laid out in the companion paper [1], the Many Worlds Interpretation is either not empirically equivalent to SQM, or violates Continuity of Action.
Statistical independence is also sometimes referred to as "measurement independence," or the "free choice assumption" or the "free will assumption" in Bell's theorem. In rare occasions we have seen it being referred to as "no conspiracy". In recent years, theories which violate statistical independence have also been dubbed "contextual" [43], though the class of contextual models is larger than just those which violate statistical independence (there is more "context" to an experiment than its measurement setting).
Not all models that are being used in quantum foundations reproduce all predictions of quantum mechanics. Many of them only produce output for certain experimental situations, typically Bell-type tests, interferometers, or Stern-Gerlach devices. We can then ask, in a hopefully obvious generalization of the above classification, whether these models are either representations or interpretations of \([\mathrm{CI}]_{\mathrm{m}}\) as it applies to the same experiments.
Traditionally, a distinction has been made between SI-violating models that are either retrocausal or superdeterministic. But this distinction has remained ambiguous for three reasons.
First because--as we have seen already--retrocausality itself has been used for a bewildering variety of cases. If one then defines superdeterminism as those SI-violating models which are not retrocausal, one gets an equally bewildering variety. Second, since Bell (who coined the term "superdeterminism") did not distinguish retrocausality from superdeterminism, one could reasonably argue that superdeterminism should be equated with SI-violation in general, and then consider retrocausality to be a variant of superdeterminism. Third, not everyone agrees that superdeterministic models have to be deterministic to begin with [44, 45].
There is no way to define the term so that it agrees with all the ways it has previously been used. We will therefore just propose a definition that we believe agrees with the way it has most
widely been used, based on the following reasoning:
1. We are not aware of any superdeterministic model which is not also deterministic. Leaving aside that it is terrible nomenclature to speak of "non-deterministic superdeterminism", there is not even an example for it. For this reason, we will assume that superdeterministic models are deterministic.
2. We want a model that fulfils Strong Continuity of Action, because this is the major reason why violations of statistical independence are interesting, and it is also the context in which Bell coined the expression.
3. Most of the literature seems to consider superdeterminism and retrocausality as two disjoint cases, so we will do the same, even though Bell seems not to have used this distinction when he coined the term. This point, together with the previous one, implies that the model has to be locally causal.
4. We want a distinction that refers to the m-model, not to its particular realization as a c-model to avoid new ambiguities.
5. The model should reproduce the predictions of quantum mechanics, at least to a reasonable extent. This assumption is relevant because without it Newtonian mechanics would also be superdeterministic which makes no sense.
6. It's a one-world model that violates statistical independence.
Some readers might argue that requirement 6 is strictly speaking unnecessary because it follows from Bell's theorem given the previous 5 requirements. However, since we did not explicitly list the assumptions to Bell's theorem, and it is somewhat controversial which of those have to be fulfilled in any case, we add it as an extra requirement.
Taking this together, we arrive at the following definition:
**Superdeterminism:** An m-model with additional variables which is deterministic, locally causal, violates statistical independence, and is empirically equivalent to standard quantum mechanics (is in [CI]\({}_{\rm e}\)).
With this definition, a superdeterministic model is distinct from a retrocausal model. However, a superdeterministic c-model can still be pseudo-retrocausal. This distinction has previously been made in [46]. The authors of that paper used the term "Soft Superdeterminism" for what in our nomenclature would be a pseudo-retrocausal superdeterministic c-model, and the term "Hard Determinism" for what in our nomenclature would be a superdeterministic c-model that is not pseudo-retrocausal. In both cases, though, the m-class to which the model belongs is superdeterministic, and not retrocausal.
The notion of common-cause superdeterminism refers even more specifically to a certain type of superdeterministic models that are not pseudo-retrocausal. In these models, one assumes that the additional variables which determine the measurement outcomes in a Bell-type experiment are correlated with the settings, because they had a common cause in the past.
We want to emphasise that the additional requirement of common-cause superdeterminism is that there was a _common_ cause, a more or less localised event that serves as what some would call an "explanation" for the correlation that gives rise to violations of statistical independence. This is opposed to the general superdeterministic models in which the correlation does not require explanation, but rather _is_ the explanation (for our observations). The correlation that gives rise to violations of statistical independence of course always has a space-time cause--in the most extreme case that would be the initial condition at the beginning of the universe. But in general the hidden variables and detector settings have no common cause in the interventionist sense, nor do they need to have one.
That said, this particular type of common-cause superdeterminism can be tested by using methods to choose the detector settings that are so far apart from each other that any common cause would have had to be in the very early universe. This is the idea behind the Cosmic Bell test [47, 48], sketched in Fig. 9.
However, while common-cause superdeterminism is a logical possibility, we are not aware of any superdeterministic model which is of this type, or of anyone who has advocated such a model.
Unfortunately, the results of Cosmic Bell tests are often overstated in the literature, quite frequently claiming to "close the superdeterminism loophole", rather than just testing a particular type of superdeterministic model, which no one works on to begin with.
The relation between pseudo-retrocausality and finetuning (a.k.a. "conspiracies") will be explored in a companion paper [1].
Figure 9: Sketch of Cosmic Bell test with source S and two detectors. The settings of the detectors D\({}_{1}\) and D\({}_{2}\) are chosen by using photons from distant astrophysical sources, usually quasars. Any past cause that could have given rise to the observed correlations must then have been very far back in time. Light cones indicated by gray shading.
## 5 Classification
We want our classification scheme to be practically useful, so we will here provide a short guide to how it works.
The question we want to address is this:
Suppose you have a c-model for quantum mechanics--that is the thing you are doing your calculations with--what should you call it? We will assume that your model is presently empirically equivalent to standard quantum mechanics. (If it isn't, you have bigger problems than finding a name for it.) You then proceed as follows (a schematic sketch of the procedure is given after the list):
1. To classify the model, you first have to make sure that its setup is irreducible. If it is reducible, the setup can't be classified, so please remove all assumptions that are not necessary to calculate outputs. Assumptions that state the "physical existence" (whatever that may be) of one thing or another are typically unnecessary for any calculation.
2. Figure out whether your model is physically equivalent to SQM. If it is not, it's a **modification**. If it is, it's an **interpretation**.
3. Figure out whether your model is mathematically equivalent to any already known interpretation. If it is, your model is a **representation** (of the interpretation it is mathematically equivalent to). If it's not, you have a representation of a new interpretation.
4. Go through the list of properties in Sections 3 and 4, and note down whether your model has them, starting with the m-model properties, then the c-model properties.
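The decision procedure of steps 1-3 can be rendered schematically as follows (a sketch added for illustration only; the boolean predicate names are ours, and step 4 simply amounts to recording the properties of Sections 3 and 4 once the model has been named):

```python
def classify(setup_is_irreducible: bool,
             physically_equivalent_to_sqm: bool,
             mathematically_equivalent_to_known_interpretation: bool) -> str:
    """Schematic rendering of steps 1-3 of the classification guide."""
    if not setup_is_irreducible:
        return "not classifiable: first remove assumptions not needed to calculate outputs"
    if not physically_equivalent_to_sqm:
        return "modification"
    if mathematically_equivalent_to_known_interpretation:
        return "representation of that known interpretation"
    return "representation of a new interpretation"

# Example: an irreducible model, physically equivalent to SQM, but not mathematically
# equivalent to any known interpretation.
print(classify(True, True, False))   # -> representation of a new interpretation
```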
## 6 Summary
We have proposed a classification scheme for models in the foundations of quantum mechanics. Its most central element is the distinction between different types of models: calculational, mathematical, physical, and empirical. After distinguishing these different classes of models, we have defined some of their properties that are discussed most commonly in the foundations of quantum mechanics, with special attention to those concerning locality and causality. We hope that the terminology proposed here can help clarify which problems of quantum mechanics can and cannot be solved by interpretation.
### Acknowledgements
We want to thank Louis Vervoort and Ken Wharton for valuable feedback and all participants of the 2022 Bonn workshop on Superdeterminism and Retrocausality for helpful discussion. JRH acknowledges support from Hiroshima University's Phoenix Postdoctoral Fellowship for Research, the University of York's EPSRC DTP grant EP/R513386/1, and the UK Quantum Communications Hub, funded by the EPSRC grants EP/M013472/1 and EP/T001011/1.
## Acronyms
CI: Copenhagen Interpretation
CoA: Continuity of Action
FID: Future Input Dependence
FIR: Future Input Requirement
SQM: Standard Quantum Mechanics
SI: Statistical Independence
## Appendix
If the region \(S_{3}\) was allowed to intersect with the inside of the past-lightcone of \(B\), then in a theory which is not temporally deterministic, correlations could be created later which were not contained on \(S_{3}\). In this case then, local beables at \(B\) could provide extra information for what happens at \(A\) though the information got there locally and inside the light-cones. For illustration, see Figure 10.
2309.05937 | The Gas Accretion Rate of Star-forming Galaxies over the last 4 Gyr | Apurba Bera, Nissim Kanekar, Jayaram N. Chengalur, Jasjeet S. Bagla | 2023-09-12T03:24:17Z | http://arxiv.org/abs/2309.05937v2

# The Gas Accretion Rate of Star-forming Galaxies over the last 4 Gyr
###### Abstract
Star-forming galaxies are believed to replenish their atomic gas reservoir, which is consumed in star-formation, through accretion of gas from their circumgalactic mediums (CGMs). However, there are few observational constraints today on the gas accretion rate in external galaxies. Here, we use our recent measurement of the scaling relation between the atomic hydrogen (Hi) mass \(\rm M_{HI}\) and the stellar mass \(\rm M_{*}\) in star-forming galaxies at \(z\approx 0.35\), with the relations between the star-formation rate (SFR) and \(\rm M_{*}\), and the molecular gas mass \(\rm M_{Mol}\) and \(\rm M_{*}\), and the assumption that star-forming galaxies evolve along the main sequence, to determine the evolution of the neutral gas reservoir and the average net gas accretion rate onto the disks of star-forming galaxies over the past 4 Gyr. For galaxies with \(\rm M_{*}\gtrsim 10^{9}\,M_{\odot}\) today, we find that both \(\rm M_{*}\) and \(\rm M_{HI}\) in the disk have increased, while \(\rm M_{Mol}\) has decreased, since \(z\approx 0.35\). The average gas accretion rate onto the disk over the past 4 Gyr is similar to the average SFR over this period, implying that main-sequence galaxies have maintained a stable Hi reservoir, despite the consumption of gas in star-formation. We obtain an average net gas accretion rate (over the past 4 Gyr) of \(\approx\) 6\(\,\rm M_{\odot}\) yr\({}^{-1}\) for galaxies with the stellar mass of the Milky Way. At low redshifts, \(z\lesssim 0.4\), the reason for the decline in the cosmic SFR density thus appears to be the inefficiency in the conversion of atomic gas to molecular gas, rather than insufficient gas accretion from the CGM.
Galaxy evolution -- Radio spectroscopy -- Neutral atomic hydrogen
## 1 Introduction
Neutral gas is the primary constituent of the interstellar medium (ISM) of star-forming galaxies. It provides the raw material for star-formation, and is consumed in the process. The gas reservoir is expected to be replenished through accretion of gas from the circumgalactic medium (CGM) onto the 'disks' of galaxies. The accretion may occur either through cooling of the hot virialized gas in the CGM (the "hot mode"; e.g. Rees & Ostriker, 1977; White & Rees, 1978) or through gas inflow along cold filaments (the "cold mode"; e.g. Binney, 1977; Birnboim & Dekel, 2003; Keres et al., 2005). However, observational evidence for gas accretion in external galaxies has been scarce, partly because the inflowing gas is diffuse and difficult to detect, and partly due to the lack of unambiguous signatures of accretion. Indirect evidence of gas accretion onto galaxy disks has been found in several recent studies (e.g. Cheung et al., 2016; Spring & Michalowski, 2017; Kleiner et al., 2017; Rahmani et al., 2018; Zahedy et al., 2019). Indeed, insufficient gas accretion to replenish the neutral gas reservoir of galaxies has been proposed to explain the observed decline in the cosmic star-formation rate (SFR) density at \(z\lesssim 1\)(e.g. Chowdhury et al., 2020, 2022a, 2020).
Scoville et al. (2017) introduced an interesting approach to determine the gas accretion rate as a function of redshift, using dust continuum measurements to infer the ISM masses of galaxies (via an assumed dust-to-gas ratio) and then fitting for the dependence of the ISM mass on the galaxy redshift, stellar mass, and offset from the star-forming main
sequence (see also Scoville et al., 2023). They combined the above scaling relation with the assumption of the continuity of main-sequence evolution to infer the gas accretion rate in main-sequence galaxies. We note that the continuity of the main sequence is a standard assumption in the literature (e.g. Renzini, 2009; Peng et al., 2010; Leitner, 2012a; Speagle et al., 2014; Ciesla et al., 2017), with support from both observational evidence (e.g. Rodighiero et al., 2011) and hydrodynamical simulations (e.g. Sparre et al., 2015; Tacchella et al., 2016). However, the dust-to-gas ratio is known to depend critically on galaxy metallicity; the calibration of the ISM mass is thus only applicable to massive galaxies, with near-solar metallicity (stellar mass, \(\rm M_{*}\gtrsim 2\times 10^{10}~{}M_{\odot}\) at high redshifts; Scoville et al., 2017). Further, even for galaxies with near-solar metallicity in the central regions, the dust-to-gas ratio in the outer disk (which contains a significant fraction of the atomic phase) would be lower than in the central regions (Draine et al., 2007). As noted by Scoville et al. (2017, 2023), the inferred ISM masses are applicable to the inner disks of galaxies, where the assumption of near-solar metallicity is reasonable. The total ISM mass is hence likely to be under-estimated by this approach.
The atomic phase (made up of mainly atomic hydrogen, Hi, and helium) is known to dominate the neutral gas reservoir in main-sequence star-forming galaxies at \(z\approx 0\), accounting for \(\gtrsim 85\%\) of the neutral gas mass (e.g. Saintonge et al., 2017; Catinella et al., 2018). Measurements of the dependence of the Hi mass of galaxies on the stellar mass and redshift would thus provide a more reliable way of determining the gas accretion rate, compared to the estimates of the total ISM mass. In the local Universe, the dependence of the Hi mass (\(\rm M_{\rm Hi}\)) on the stellar mass (\(\rm M_{*}\)) has been determined via studies of individual galaxies in the Hi 21 cm line (e.g. Catinella et al., 2018; Parkash et al., 2018). Unfortunately, the weakness of this line, the main probe of the Hi mass of galaxies, has meant that it is very difficult to determine such Hi scaling relations at cosmological distances via studies of individual galaxies.
We have recently applied the technique of Hi 21 cm stacking (Zwaan, 2000; Chengalur et al., 2001) to Hi 21 cm data from a deep Giant Metrewave Radio Telescope (GMRT) survey of the Extended Groth Strip (EGS; Bera et al., 2019; Bera et al., 2022) to determine the \(\rm M_{\rm Hi}-M_{*}\) scaling relation for star-forming galaxies at \(z\approx 0.35\)(Bera et al., 2023, see also Sinigaglia et al. (2022); Chowdhury et al. (2022c)). In this _Letter_, we use this \(\rm M_{\rm Hi}-M_{*}\) relation, with the main-sequence relation, the scaling relation between molecular gas mass and stellar mass, and the assumption of the continuity of main-sequence evolution, to study the evolution of the different baryonic components of star-forming galaxies over the past 4 Gyr, and to determine the average gas accretion rate over this period.1
Footnote 1: Throughout this work, we use a flat \(\Lambda\)-cold dark matter (\(\Lambda\)CDM) cosmology, with (\(\rm H_{0}\), \(\Omega_{m}\), \(\Omega_{\Lambda}\)) = (70 km s\({}^{-1}\) Mpc\({}^{-1}\), 0.3, 0.7). Further, all stellar mass and SFR estimates assume a Chabrier initial mass function (Chabrier, 2003).
## 2 The evolution of the baryonic content of galaxies
The baryonic mass of a galaxy is made up of stars, atomic gas, molecular gas, and ionized gas, with a small contribution from interstellar dust. While the stars and the neutral atomic and molecular gas are predominantly found in the disk of a galaxy, the ionized gas is found in both the disk and the CGM. The baryonic content of a galaxy increases due to accretion of gas from the CGM and the intergalactic medium, and can decrease due to gas outflows driven by supernovae or stellar winds. The net amount of gas accreted over a given time is the difference between the amount of gas accreted and the amount of gas lost in outflows; we will combine these effects to describe the evolution of the net accreted gas mass, \(\rm M_{acc}\). The change in the total baryonic content in the disk of a galaxy over a given time can then be written as
\[\rm M_{acc}=\Delta M_{*}/(1-f_{return})+\Delta M_{mol}+\Delta M_{atom}+\Delta M_{ion} \tag{1}\]
where \(\Delta M_{*}\), \(\Delta M_{\rm atom}\), \(\Delta M_{\rm mol}\), and \(\Delta M_{\rm ion}\) are the changes in, respectively, the stellar mass \(\rm M_{*}\), the atomic gas mass \(\rm M_{atom}\equiv 1.38\times M_{\rm Hi}\), the molecular gas \(\rm M_{mol}\), and the ionized gas mass \(\rm M_{ion}\), over this time. \(\rm M_{\rm Hi}\) is the Hi mass and the factor of 1.38 in \(\rm M_{atom}\) accounts for the contribution of helium. Finally, the factor \((1-f_{return})\) accounts for the fraction of stellar mass that is returned to the gas phase (see below; Leitner & Kravtsov, 2011; Scoville et al., 2017).
In the above equation, the stellar mass of a star-forming galaxy is expected to increase with time, at the expense of the neutral gas mass, along with some mass loss due to stellar winds and supernovae. The molecular gas mass increases at the expense of the atomic gas mass, and decreases due to star-formation. The neutral atomic gas mass decreases via conversion to molecular gas, but increases through accretion onto the disk. Finally, the ionized gas mass decreases due to conversion to the neutral atomic phase, but increases due to stellar and supernova-driven outflows. We will neglect the ionized gas mass in what follows, as its mass in the disk is expected to be much lower than the neutral gas mass (e.g. Draine, 2011).
Using the main-sequence relation and its redshift evolution, and the \(\rm M_{H{\textsc{i}}}-M_{*}\) and \(\rm M_{mol}-M_{*}\) scaling relations at any pair of redshifts, we can determine the changes in the stellar mass, the neutral atomic gas mass, and the neutral molecular gas mass of galaxies over the redshift range, as a function of their stellar mass. Finally, substituting for \(\rm\Delta M_{*}\), \(\rm\Delta M_{mol}\), and \(\rm\Delta M_{atom}\) in Equation 1 would yield the net gas mass accreted by galaxies between the two redshifts, and thus the average net gas accretion rate. We will apply this formalism to the redshift range \(z\approx 0-0.35\), to determine the net average gas accretion rate over the last 4 Gyr.
### Stellar mass build-up along the main sequence
Star-forming galaxies are known to show a tight correlation between the SFR and \(\rm M_{*}\), known as the main sequence (e.g. Madau & Dickinson, 2014). The main-sequence relation has been shown to exist out to \(z\approx 6\)(e.g. Popesso et al., 2023), and is known to evolve with redshift; the evolution has been described using various parametric forms (e.g. Whitaker et al., 2012, 2014; Lee et al., 2015; Leslie et al., 2020; Popesso et al., 2023). Star-forming field galaxies are thought to evolve along the main sequence, i.e. as their stellar mass increases, their SFR changes accordingly to keep them on the main sequence. The knowledge of the redshift evolution of the main-sequence relation can hence be used to trace the stellar-mass history of present-day main-sequence galaxies over their lifetimes (see, e.g., Renzini, 2009; Peng et al., 2010; Leitner, 2012; Speagle et al., 2014; Scoville et al., 2017, 2023).
Following Scoville et al. (2017, 2023), we will restrict ourselves to main-sequence galaxies and assume the principle of continuity of main-sequence evolution. We will ignore both major mergers2 (which can remove galaxies from the main sequence) and the quenching of star-formation activity. Assuming that today's main-sequence galaxies were also on the main sequence 4 Gyr ago, i.e. at \(z\approx 0.35\), we can use the redshift-dependent main-sequence relation to estimate the net change in their stellar masses from \(z\approx 0.35\) to \(z\approx 0\). The increase in the stellar mass of a main-sequence galaxy from \(z\approx 0.35\) to the present epoch is given by
Footnote 2: The rate of major mergers is not significant for star-forming galaxies in the stellar mass range considered in this work (e.g. Rodríguez-Gomez et al., 2015); the stellar mass growth is dominated by star-formation for these galaxies (Guo & White, 2008).
\[\rm\Delta M_{*}\equiv M_{*,0}-M_{*,0.35}=(1-f_{return})\int_{t(z=0.35)}^{t(z=0)}SFR(M_{*},z)\,dt \tag{2}\]
where \(\rm M_{*,0.35}\) and \(\rm M_{*,0}\) are the initial (at \(z\approx 0.35\)) and final (at \(z=0\)) stellar masses of the galaxy, respectively, \(\rm SFR(M_{*},z)\) is the redshift-dependent main-sequence relation, and \(f_{return}\) is the fraction of stellar mass that is returned to the gas phase via stellar winds or supernovae. The value of \(f_{return}\) depends on the initial mass function and typically lies in the range \(0.27-0.41\)(see, e.g., Madau & Dickinson, 2014). Here, we assume \(f_{return}=0.3\), applicable for a Chabrier initial mass function (Leitner & Kravtsov, 2011)3. We also assume that this processed gas is not available for further star-formation (e.g. Scoville et al., 2017). We use the redshift-dependent main-sequence relation of Whitaker et al. (2012)4,
Footnote 3: Our results do not change significantly if a different value of \(f_{return}\), within the range \(0.27-0.41\), is assumed.
Footnote 4: Errors associated with the main-sequence relation have been ignored in this work. Uncertainties in the scaling relations dominate the total errors in our final results.
\[\rm log[SFR]=\alpha(z)[\rm log(M_{*}/M_{\odot})-10.5]+\beta(z)\,, \tag{3}\]
where \(\alpha(z)=0.70-0.13z\) and \(\beta(z)=0.38+1.14z-0.19z^{2}\), to determine the stellar mass at \(z\approx 0.35\) of main-sequence galaxies with present-day stellar mass \(\rm M_{*,0}\). We restrict to galaxies with \(\rm M_{*,0}\geq 10^{9}~{}M_{\odot}\), for which the local \(\rm M_{H}-M_{*}\) scaling relation has been robustly measured today (e.g. Catinella et al., 2018; Parkash et al., 2018). Using Equation 2, this corresponds to galaxies with stellar masses \(\gtrsim 10^{8.5}~{}M_{\odot}\) at \(z\approx 0.35\).
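To illustrate how Equations 2 and 3 are combined in practice, the following sketch (added for illustration, not taken from the paper; it assumes the flat \(\Lambda\)CDM parameters of footnote 1 and \(f_{return}=0.3\)) integrates the main-sequence growth backwards in redshift, using \({\rm d}t=-{\rm d}z/[(1+z)H(z)]\), to estimate \(\rm M_{*,0.35}\) and \(\rm\Delta M_{*}\) for a given present-day stellar mass:

```python
import numpy as np

H0 = 70.0 * 3.156e16 / 3.0857e19        # 70 km/s/Mpc expressed in 1/Gyr (~0.0716)
OM, OL = 0.3, 0.7
F_RETURN = 0.3

def sfr_ms(log_mstar, z):
    """Whitaker et al. (2012) main sequence, Equation 3; returns SFR in Msun/yr."""
    alpha = 0.70 - 0.13 * z
    beta = 0.38 + 1.14 * z - 0.19 * z**2
    return 10.0 ** (alpha * (log_mstar - 10.5) + beta)

def hubble(z):
    """H(z) in 1/Gyr for flat LCDM."""
    return H0 * np.sqrt(OM * (1 + z)**3 + OL)

def mstar_at_z035(mstar_0, dz=1e-3):
    """Integrate dM*/dz = -(1 - f_return) SFR / [(1+z) H(z)] from z=0 to z=0.35 (Equation 2)."""
    m, z = mstar_0, 0.0
    while z < 0.35:
        dm_dz = -(1 - F_RETURN) * sfr_ms(np.log10(m), z) * 1e9 / ((1 + z) * hubble(z))
        m += dm_dz * dz          # simple Euler step, adequate for illustration
        z += dz
    return m

mstar_0 = 10**10.5                       # example present-day stellar mass in Msun
mstar_035 = mstar_at_z035(mstar_0)
print("log10 M*(z=0.35) ~", round(np.log10(mstar_035), 2))       # roughly 10.3-10.4
print("Delta M* ~", f"{mstar_0 - mstar_035:.2e}", "Msun")         # roughly 30% of M*,0
```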
### Molecular gas scaling relations
The evolution of the molecular gas content of main-sequence galaxies has been quantified through measurements of the redshift-dependent scaling relation between \(\rm M_{mol}\) and \(\rm M_{*}\)(e.g. Genzel et al., 2015; Tacconi et al., 2018). The molecular gas mass of galaxies is typically estimated from the CO rotational lines, the far-infrared dust continuum emission, or the \(\approx 1\) mm dust continuum (see Tacconi et al., 2020, for a review). We used the redshift-dependent scaling relation5 connecting the ratio of the molecular gas mass to the stellar mass, \(\rm\mu_{Mol}\equiv[M_{mol}/M_{*}]\) of a galaxy to its stellar mass (Tacconi et al., 2020),
Footnote 5: This relation includes the contribution of helium (Tacconi et al., 2020).
\[\rm log\left[\mu_{Mol}\right]=A+B\left[\rm log(1+z)-F\right]^{2}+D[\rm log(M_{* }/M_{\odot})-10.7] \tag{4}\]
where \(A=0.06\pm 0.20\), \(B=-3.3\pm 0.2\), \(D=-0.41\pm 0.03\), and \(F=0.65\pm 0.05\), to determine the molecular gas mass of a main-sequence galaxy from its stellar mass.6 For each galaxy with present-day stellar mass M\({}_{*,0}\), we can combine Equations 3 and 4 to determine its molecular gas mass at \(z=0\) and \(z=0.35\), and thus estimate the change in its molecular gas mass \(\Delta\)M\({}_{\rm mol}\) between \(z=0.35\) and \(z=0\) from the relation
Footnote 6: Note that we assume that the offset of each galaxy from the main sequence is zero.
\[\Delta{\rm M}_{\rm mol}={\rm M}_{{\rm Mol},0}-{\rm M}_{{\rm Mol},0.35}=\mu_{{\rm Mol},0}\;{\rm M}_{*,0}-\mu_{{\rm Mol},0.35}\;{\rm M}_{*,0.35} \tag{5}\]
where M\({}_{*,0.35}\) and M\({}_{*,0}\) are again the initial (at \(z=0.35\)) and final (at \(z=0\)) stellar masses, respectively, and \(\mu_{{\rm Mol},z}\) can be inferred from Equation 4.
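A corresponding sketch of Equations 4 and 5 (again added for illustration; the example value of \(\rm M_{*,0.35}\) stands in for the output of the integration sketched in the previous subsection):

```python
import numpy as np

A, B, D, F = 0.06, -3.3, -0.41, 0.65     # best-fit values of Tacconi et al. (2020), Equation 4

def mu_mol(log_mstar, z):
    """Molecular-to-stellar mass ratio M_mol/M* of a main-sequence galaxy (Equation 4)."""
    return 10.0 ** (A + B * (np.log10(1 + z) - F)**2 + D * (log_mstar - 10.7))

def delta_m_mol(mstar_0, mstar_035):
    """Change in molecular gas mass from z = 0.35 to z = 0 (Equation 5), in Msun."""
    return mu_mol(np.log10(mstar_0), 0.0) * mstar_0 - mu_mol(np.log10(mstar_035), 0.35) * mstar_035

# Illustrative masses: a 10^10.5 Msun galaxy today that had roughly 10^10.35 Msun at z = 0.35.
print(f"{delta_m_mol(10**10.5, 10**10.35):.2e}")   # negative: the molecular reservoir has shrunk
```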
Figure 1: Net changes in the average [A] stellar mass (M\({}_{*}\)), [B] molecular gas mass (M\({}_{\rm mol}\)), and [C] atomic gas mass (M\({}_{\rm atom}\)) in the disks of main-sequence galaxies from \(z\sim 0.35\) to \(z\sim 0\) are shown as functions of their present-day stellar mass (M\({}_{*,0}\), bottom axis) and their initial stellar mass (M\({}_{*,0.35}\), top axis). The shaded regions show the 68% confidence intervals.
Figure 2: Fractional changes, with respect to their respective present day values, in the average [A] stellar mass (\(\Delta\)M\({}_{*}\)/M\({}_{*,0}\)), [B] molecular gas mass (\(\Delta\)M\({}_{\rm mol}\)/M\({}_{\rm mol,0}\)), and [C] atomic gas mass (\(\Delta\)M\({}_{\rm atom}\)/M\({}_{\rm atom,0}\)) in the disks of main sequence galaxies from \(z\sim 0.35\) to \(z\sim 0\) are shown as functions of their present-day stellar mass (M\({}_{*,0}\), bottom axis) and their initial stellar mass (M\({}_{*,0.35}\), top axis). The shaded regions show the 68% confidence intervals.
We note that the galaxies used to measure the scaling relation parameters have stellar masses in the range M\({}_{*}=10^{9}-10^{12.2}\) M\({}_{\odot}\)(Tacconi et al., 2020). We assume that the same scaling relation is also applicable to lower-mass galaxies, with M\({}_{*}\approx 10^{8.5}\) M\({}_{\odot}\), which are part of the EGS sample at \(z\approx 0.35\)(Bera et al., 2023).
### Atomic gas scaling relations
For neutral atomic gas, the \(\rm M_{H{\textsc{i}}}-M_{*}\) scaling relation is known at \(z\approx 0\) from direct Hi 21 cm emission studies of individual galaxies (e.g. Catinella et al., 2018; Parkash et al., 2018). We will use the \(\rm M_{H{\textsc{i}}}-M_{*}\) relation obtained for blue, star-forming galaxies of the xGASS sample (Catinella et al., 2018; Bera et al., 2023)7
Footnote 7: Note that using the \(\rm M_{H{\textsc{i}}}-M_{*}\) scaling relation of Parkash et al. (2018) does not significantly change our results.
\[\rm log(M_{H{\textsc{i}}}/M_{\odot})=(8.934\pm 0.036)+(0.516\pm 0.030)\left[\rm log (M_{*}/M_{\odot})-9.0\right]\,. \tag{6}\]
At present, there are no estimates of the \(\rm M_{H{\textsc{i}}}-M_{*}\) relation at cosmological distances, \(z\gtrsim 0.1\), based on Hi 21 cm studies of individual galaxies. However, we have recently used the GMRT to carry out a deep Hi 21 cm emission survey of the EGS, which yielded an estimate of the "mean" \(\rm M_{H{\textsc{i}}}-M_{*}\) relation in blue, star-forming galaxies at \(z\approx 0.35\) with \(\rm M_{*}=10^{8.0}-10^{10.4}\) M\({}_{\odot}\), based on stacking the Hi 21 cm emission from galaxies in different stellar-mass bins (Bera et al., 2023). As noted by Bera et al. (2023), this mean relation may be combined with an assumed lognormal scatter in the \(\rm M_{H{\textsc{i}}}-M_{*}\) relation to infer the "median" \(\rm M_{H{\textsc{i}}}-M_{*}\) scaling relation, which can be directly compared to the scaling relation obtained from a fit to Hi 21 cm emission measurements in individual galaxies. Assuming a lognormal scatter in the \(\rm M_{H{\textsc{i}}}-M_{*}\) relation at \(z\approx 0.35\) that is equal to that at \(z\approx 0\) in the xGASS sample, Bera et al. (2023) give the following median \(\rm M_{H{\textsc{i}}}-M_{*}\) scaling relation for blue, star-forming galaxies at \(z\approx 0.35\) in the EGS,
\[\rm log(M_{H{\textsc{i}}}/M_{\odot})=(8.977\pm 0.069)+(0.183\pm 0.104)\left[\rm log (M_{*}/M_{\odot})-9.0\right]. \tag{7}\]
The net change in the atomic gas mass of a main-sequence galaxy from \(z=0.35\) to \(z=0\) can then be estimated using the relation
\[\rm\Delta M_{\rm atom}=1.38\left[\rm M_{H{\textsc{i}},0}(M_{*,0})-M_{H{\textsc{ i}},0.35}(M_{*,0.35})\right]\,, \tag{8}\]
where \(\rm M_{H{\textsc{i}},0.35}\) and \(\rm M_{H{\textsc{i}},0}\) are the initial (at \(z=0.35\)) and final (at \(z=0\)) Hi masses, respectively, \(\rm M_{*,0.35}\) and \(\rm M_{*,0}\) are the initial and final stellar masses respectively, and \(\rm M_{H{\textsc{i}},z}(M_{*,z})\) is the \(\rm M_{H{\textsc{i}}}-M_{*}\) scaling relation (i.e. Equation 7) at the redshift \(z\).
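The atomic-gas step, Equations 6-8, can be sketched in the same spirit (illustrative masses as above):

```python
import numpy as np

def log_mhi_z0(log_mstar):
    """xGASS-based M_HI-M* relation for blue star-forming galaxies at z ~ 0 (Equation 6)."""
    return 8.934 + 0.516 * (log_mstar - 9.0)

def log_mhi_z035(log_mstar):
    """Median M_HI-M* relation at z ~ 0.35 from Bera et al. (2023) (Equation 7)."""
    return 8.977 + 0.183 * (log_mstar - 9.0)

def delta_m_atom(mstar_0, mstar_035):
    """Change in atomic (HI plus He) gas mass from z = 0.35 to z = 0 (Equation 8), in Msun."""
    return 1.38 * (10**log_mhi_z0(np.log10(mstar_0)) - 10**log_mhi_z035(np.log10(mstar_035)))

print(f"{delta_m_atom(10**10.5, 10**10.35):.2e}")   # positive: the atomic reservoir has grown
```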
## 3 The evolution of the baryonic composition of galaxies from \(z\approx 0.35\)
We have used Equations 2-8 to determine the changes in the stellar mass, the molecular gas mass, and the atomic gas mass in the disks of present-day main-sequence galaxies between \(z\approx 0.35\) and \(z=0\). As noted earlier, we assume continuity of main-sequence evolution, i.e. that the galaxies evolve along the main sequence (Scoville et al., 2017, 2023).
Figures 1[A-C] show, respectively, the changes in the stellar mass \(\rm\Delta M_{\rm*}\), the molecular gas mass \(\rm\Delta M_{\rm mol}\), and the atomic gas mass \(\rm\Delta M_{\rm atom}\), over the redshift range \(z=0.35\) to \(z=0\), as a function of the stellar mass of galaxies today, \(\rm M_{*,0}\). Figures 2[A-C] show, respectively, the fractional changes in the above three quantities (relative to their present-day values) over the same period as a function of the stellar mass today. It is clear (see Figures 1[A] and 2[A]) that the stellar masses of today's main-sequence galaxies have increased significantly over the past 4 Gyr. Relatively low-mass galaxies, with \(\rm M_{*,0}\approx 10^{9}\) M\({}_{\odot}\), have acquired \(\approx 70\%\) of their current stellar mass since \(z\approx 0.35\), while high-mass galaxies, with \(\rm M_{*,0}\approx 10^{10.5}\) M\({}_{\odot}\), have acquired \(\approx 30\%\) of their present stellar mass during this period.
Conversely, Figure 1[B] shows that the molecular gas content of all galaxies has declined significantly over the last 4 Gyr. The fractional decline relative to the present-day molecular gas mass is seen (in Figure 2[B]) to be the highest for the highest-mass galaxies, with \(\rm\Delta M_{\rm mol}/M_{\rm mol,0}\lesssim-1\) for \(\rm M_{*,0}\gtrsim 10^{9.5}\) M\({}_{\odot}\). In other words, the molecular gas reservoir of present-day main-sequence galaxies has been steadily consumed by star-formation activity since \(z\approx 0.35\).
Finally, the solid blue curves (and blue shaded regions) in Figures 1[C] and 2[C] show the evolution of the atomic gas mass and the fractional atomic gas mass, relative to today's atomic gas mass, of galaxies over the redshift range \(z\approx 0.35-0\). We find that \(\rm\Delta M_{\rm atom}\) is always positive, implying a net increase in the atomic gas mass of galaxies over the last 4 Gyr for all stellar masses. The fractional change in the atomic gas mass (relative to the atomic gas mass today) is low (\(\approx 10\%\)) for low-stellar-mass galaxies (with \(\rm M_{*,0}\approx 10^{9}\) M\({}_{\odot}\)), but substantial (\(\approx 70\%\)) for the highest-stellar-mass galaxies today.
Figure 3[A] plots the change in the total baryonic mass \(\Delta\)M\({}_{\rm baryon}\) of main-sequence galaxies from \(z\approx 0.35\) to \(z=0\) against their stellar mass today, M\({}_{*,0}\). We note that \(\Delta\)M\({}_{\rm baryon}\equiv\) M\({}_{\rm acc}\), the net average gas mass accreted over the last \(\approx 4\) Gyr (i.e. the difference between the gas mass accreted and the gas mass lost due to winds or outflows). The figure shows that all galaxies with stellar mass in the range \(\approx 10^{9}-10^{10.6}\) M\({}_{\odot}\) today have increased their total baryonic mass over the last four Gyr, with the increase being larger for higher stellar masses. However, the fractional change in the total baryonic mass, relative to the baryonic mass today (i.e. M\({}_{\rm acc}\)/M\({}_{\rm baryon,0}\)), is seen in Fig. 3[B] to be approximately constant, \(\approx 40\%\), across the above stellar mass range.
For a galaxy of a given stellar mass today, the ratio of M\({}_{\rm acc}\) to the elapsed time \(\Delta t\) between any two redshifts gives the time-averaged net gas accretion rate \(\langle\dot{\rm M}\rangle_{\rm acc}\) between the two redshifts. Fig. 4 plots (dashed black curve) the above average net gas accretion rate from \(z\approx 0.35\) to \(z\approx 0\) (i.e. over the last \(\approx 4\) Gyr) as a function of the stellar mass of galaxies today, M\({}_{*,0}\); the grey shaded region shows the 68% confidence interval. The dashed red curve shows the average SFR (\(\equiv\Delta\)M\({}_{*}/\left[(1-{\rm f}_{\rm return})\Delta{\rm t}\right]\)) over the same period, again as a function of the stellar mass today, while the dashed blue curve and blue shaded region show the average net formation rate of molecular hydrogen (i.e. the difference between the formation rate and the destruction rate). The red and black curves are in good agreement, within the errors: this indicates that the average rate of net accretion of gas onto main-sequence galaxies over the last \(\approx 4\) Gyr is sufficient to balance the average SFR in these galaxies. Star-forming galaxies on the main sequence, with M\({}_{*}\approx 10^{9}-10^{10.6}\) M\({}_{\odot}\) today, have thus accreted substantial amounts of gas over the last \(\approx 4\) Gyr to replenish their neutral gas reservoir, maintaining a stable (indeed, slightly increasing) Hi reservoir, despite the continuous consumption of Hi in the star-formation process. We emphasize that this is very unlike the situation towards the end of the epoch of galaxy assembly, \(z\approx 1\), where Chowdhury et al. (2022b) find evidence that insufficient gas accretion is the cause of the decline in the SFR density at \(z<1\).
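Tying the steps together, the following self-contained sketch (added for illustration) evaluates the net accreted mass of Equation 1, with the ionized component neglected as in the text, and the corresponding time-averaged rates; the \(\Delta\)-values below are the illustrative numbers produced by the preceding sketches for a \(10^{10.5}\,\rm M_{\odot}\) galaxy:

```python
import numpy as np

H0 = 70.0 * 3.156e16 / 3.0857e19                  # 70 km/s/Mpc in 1/Gyr
F_RETURN = 0.3

def lookback_time_gyr(z, om=0.3, ol=0.7, n=2001):
    """Lookback time to redshift z in Gyr for flat LCDM, by a simple trapezoidal sum."""
    zz = np.linspace(0.0, z, n)
    f = 1.0 / ((1 + zz) * H0 * np.sqrt(om * (1 + zz)**3 + ol))
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zz)))

def average_rates(delta_mstar, delta_m_mol, delta_m_atom, z=0.35):
    """Time-averaged net gas accretion rate and SFR over the last ~4 Gyr, in Msun/yr."""
    dt_yr = lookback_time_gyr(z) * 1e9
    m_acc = delta_mstar / (1 - F_RETURN) + delta_m_mol + delta_m_atom   # Equation 1, ionized gas neglected
    return m_acc / dt_yr, delta_mstar / ((1 - F_RETURN) * dt_yr)

acc_rate, mean_sfr = average_rates(9.0e9, -2.8e9, 4.7e9)   # illustrative Delta-values in Msun
print(round(acc_rate, 1), round(mean_sfr, 1))              # both a few Msun/yr, comparable to each other
```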
Conversely, it is clear from Fig. 4 that the average net rate of H\({}_{2}\) formation over \(z\approx 0-0.35\) is significantly lower than both the average SFR and the average gas accretion rate. Indeed, the net rate of molecular gas formation is negative, indicating that the atomic-to-molecular gas conversion does not keep pace with the conversion of molecular
Figure 3: [A] Net change in the average baryonic mass (\(\Delta\)M\({}_{\rm baryon}\equiv\) M\({}_{\rm acc}\)) in the disks of main-sequence galaxies from \(z\sim 0.35\) to \(z\sim 0\) as a function of their present-day stellar mass (M\({}_{*,0}\), bottom axis) and their initial stellar mass (M\({}_{*,0.35}\), top axis). [B] The fractional change in the average baryonic mass, with respect to the present-day baryonic mass, as a function of the present-day stellar mass (bottom axis) and the initial stellar mass (top axis). The shaded regions in the figures show the 68% confidence intervals for the corresponding curves.
gas to stars. This implies that it is the inefficient conversion of atomic hydrogen to molecular hydrogen that is likely to be the main cause of the decline in the cosmic SFR density over the last 4 Gyr.
We note that there could be an environmental dependence to the various scaling relations (e.g. Cortese et al., 2011; Catinella et al., 2013). The main-sequence and \(\rm M_{mol}-M_{*}\) scaling relations are predominantly based on field galaxies (e.g. Whitaker et al., 2012; Tacconi et al., 2020). We have used the Sloan Digital Sky Survey-DR8 group catalog of Tempel et al. (2012) to find that roughly half of the blue xGASS galaxies used to determine the \(\rm M_{H{\textsc{i}}}-M_{*}\) relation at \(z\approx 0\) are field galaxies (see also Catinella et al., 2013). We note that this estimate of \(\approx 50\%\) of the xGASS galaxies being field objects is likely to be a lower limit as many of the group galaxies are in "groups" with only \(2-3\) members, and may thus well be field galaxies (Tempel et al., 2012). Similarly, Gerke et al. (2012) have used the Voronoi-Delaunay group finder to classify DEEP2 galaxies: of the 260 EGS galaxies used to determine the \(\rm M_{H{\textsc{i}}}-M_{*}\) relation at \(z\approx 0.35\) and that have been classified by Gerke et al. (2012), \(\approx 70\%\) are field galaxies (and a significant number of the remaining systems are in groups with \(2-3\) members). It thus appears unlikely that the environmental dependence of the scaling relations would significantly affect our results.
The stellar mass of the Milky Way today is \(\rm M_{*,0}=(6.08\pm 1.14)\times 10^{10}\,M_{\odot}\)(Licquia & Newman, 2015). This lies beyond the stellar mass range at \(z=0\) (\(\rm M_{*,0}\approx 10^{9}-10^{10.6}\,M_{\odot}\)) covered by our results. Assuming that we can extrapolate the Hi scaling relation at \(z\approx 0.35\) to a stellar mass of \(\rm M_{*}=10^{10.7}\,M_{\odot}\), we can estimate the average net gas accretion rate of Milky Way-like galaxies over the last \(\approx 4\) Gyr. The results are shown as the dotted curve in Fig. 4. We obtain an average net gas accretion rate of \(\approx 6\,M_{\odot}\) yr\({}^{-1}\) (indicated by the star in the figure), over the past 4 Gyr for main-sequence galaxies with the stellar mass of the Milky Way. This is broadly consistent with estimates of the total gas accretion rate onto the Milky Way (e.g. Fox et al., 2014; Richter et al., 2017).
In passing, as noted by Bera et al. (2023), we emphasize that the \(\rm M_{H{\textsc{i}}}-M_{*}\) relation used here is based on a relatively small number of galaxies, and a small cosmic volume, and could hence well be affected by cosmic variance. The possibility of cosmic variance in this relation would affect the present results as well. A wide-field determination of
Figure 4: The time-averaged SFR (dashed red curve), the average net molecular gas formation rate (dashed blue curve) and the average net gas accretion rate of the disks of present-day main sequence galaxies (dashed black curve) over the past 4 Gyr are shown as functions of their present-day stellar mass (\(\rm M_{*,0}\), bottom axis) and their initial stellar mass (\(\rm M_{*,0.35}\), top axis). The dotted curves show the results for the average rates on extrapolating the Hi scaling relation of Bera et al. (2023) to a stellar mass of \(\rm M_{*}=10^{10.7}\,M_{\odot}\) at \(z\approx 0.35\). The star indicates the average net gas accretion rate for galaxies with stellar masses equal to that of the Milky Way. The shaded regions show the 68% confidence intervals for the average net gas accretion rate and the average net molecular gas formation rate.
the \(\rm M_{H{\textsc{i}}}-M_{*}\) relation at intermediate redshifts would allow a better estimate of the average gas accretion rate onto the disks of galaxies, using the approach described here.
## 4 Summary
We present a formalism to determine the evolution of the baryonic composition of star-forming galaxies between any two redshifts, based on the main-sequence relation between SFR and stellar mass, the scaling relation between molecular gas mass and stellar mass, the scaling relation between atomic gas mass and stellar mass, and the assumption that star-forming galaxies continuously evolve along the main sequence. We apply this formalism to our recent estimate of the \(\rm M_{H{\textsc{i}}}-M_{*}\) relation at \(z\approx 0.35\), to determine the average changes in the stellar, molecular gas, and atomic gas contents of the disks of star-forming galaxies from \(z\approx 0.35\) to \(z=0\), as a function of the galaxy stellar mass today, for the stellar mass range \(\rm M_{*,0}=10^{9.0}-10^{10.6}\,M_{\odot}\). We find that the stellar and atomic gas masses of today's main-sequence galaxies have both increased since \(z\approx 0.35\), while the molecular gas masses of these galaxies have declined over the same period. The fractional increase in the stellar masses (relative to the present-day stellar mass) is \(\approx 30-70\%\), with larger fractional increases at lower stellar masses, while the fractional increase in the atomic gas masses (relative to the present-day atomic gas mass) is \(\approx 10-70\%\), with a larger fractional increase at high atomic gas masses.
We combine the changes in the stellar mass, the molecular gas mass, and the atomic gas mass to determine the net change in the baryonic mass of main-sequence galaxies over the last 4 Gyr. We find that the fractional net increase in the baryonic mass of these galaxies, relative to the present-day baryonic mass, is \(\approx 40\%\) for stellar masses today of \(\approx 10^{9}-10^{10.6}\,M_{\odot}\). Finally, we determine the average net gas accretion rate of star-forming galaxies over the last 4 Gyr, finding average net accretion rates of \(\approx 0.2-5\,M_{\odot}\) yr\({}^{-1}\), similar to the average SFR over this period. The average net gas accretion rate for Milky Way-like galaxies is \(\approx 6\,M_{\odot}\) yr\({}^{-1}\) since \(z\approx 0.35\). We thus find that main-sequence galaxies accrete sufficient amounts of gas over the last \(\approx 4\) Gyr to maintain a stable (and slightly increasing) Hi reservoir, with the gas accretion compensating for the gas consumption via star-formation. The observed decline in the cosmic SFR density over the last \(\approx 4\) Gyr thus appears to arise due to the inefficient conversion from Hi to H\({}_{2}\), which does not sufficiently replenish the amount of molecular gas consumed in the process of star-formation.
## Acknowledgments
We thank the staff of the GMRT who have made these observations possible. The GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. AB and NK thank Aditya Chowdhury for many discussions on Hi 21 cm stacking that have contributed to this paper. We also thank an anonymous referee whose detailed comments on an earlier version of the manuscript improved the paper. NK acknowledges support from the Department of Science and Technology via a Swarnajayanti Fellowship (DST/SJF/PSA-01/2012-13). AB, NK, & JNC also acknowledge the Department of Atomic Energy for funding support, under project 12-R&D-TFR-5.02-0700.
2309.05089 | Some gradient theories in linear visco-elastodynamics towards dispersion and attenuation of waves in relation to large-strain models | Various spatial-gradient extensions of standard viscoelastic rheologies of the Kelvin-Voigt, Maxwell's, and Jeffreys' types are analyzed in linear one-dimensional situations as far as the propagation of waves and their dispersion and attenuation. These gradient extensions are then presented in the large-strain nonlinear variants where they are sometimes used rather for purely analytical reasons either in the Lagrangian or the Eulerian formulations without realizing this wave-propagation context. The interconnection between these two modeling aspects is thus revealed in particular selected cases. | Tomáš Roubíček | 2023-09-10T17:32:28Z | http://arxiv.org/abs/2309.05089v2
# Some gradient theories in linear visco-elastodynamics towards dispersion and attenuation of waves in relation to large-strain models

Tomáš Roubíček
**Abstract**. Various spatial-gradient extensions of standard viscoelastic rheologies of the Kelvin-Voigt, Maxwell's, and Jeffreys' types are analyzed in linear one-dimensional situations as far as the propagation of waves and their dispersion and attenuation. These gradient extensions are then presented in the large-strain variants where they are sometimes used rather for purely analytical reasons either in the Lagrangian or the Eulerian formulations without realizing this wave-propagation context.
**AMS Subject Classification.** 35Q74, 74A30, 74B20, 74J05, 76N30.
**Keywords**. visco-elastic rheology, nonsimple-material models, spatial gradients, wave propagation, Kramers-Kronig relations, dispersion, attenuation/absorption, finite strains.
## 1 Introduction
Many visco-elastic rheological models of both the solid and the fluidic types are used in continuum-mechanical modelling. Besides the purely elastic Hooke-type rheology, the basic options are Kelvin-Voigt (solid) and Maxwell (fluidic). The simplest combination then leads to the standard solid (Zener or Poynting-Thomson) or Jeffreys' fluid (also called anti-Zener). Of course, further more intricate combinations are also popular but are beyond the scope of this paper.
These standard rheologies, involving the strains and the strain rates, are often enhanced by higher spatial gradients or by additional memory-like time effects. We will focus on the former option. Many higher-gradient visco-elastic models can be found in the literature. In particular, they can serve well for fitting the dispersion and attenuation (absorption) of elastic waves to particular experimental observations, or for facilitating rigorous proofs of well-posedness of large-strain variants of such models. These very different aspects have been scrutinized respectively in different communities of mechanical engineers and physicists (like seismologists or material scientists) or applied mathematicians (analysts or numericians). As a result, the relation between these two aspects is not addressed in the literature.
This paper aims to reveal some connections between these two aspects of selected gradient theories in basic visco-elastic models. The velocity dispersion and attenuation of waves are essentially impossible to analyze effectively in nonlinear situations and, naturally, we will analyze them in linear models in one-dimensional situations. After presenting this analysis for three standard
rheologies of simple linear materials in Section 2, we will analyze selected nonsimple linear materials in Section 3; the adjective "nonsimple" refers to various higher-order gradients involved in models of such materials. Some surprising effects are presented in the perspective of usage of these gradient theories in nonlinear large-strain variants, which is later briefly surveyed in Section 4.
The basic types of _dispersion_ are _normal_ (high-frequency waves propagate slower than low-frequency ones) versus _anomalous dispersion_ (high-frequency waves propagate faster than low-frequency ones); of course, in terms of the wavelength, the dependence is the opposite. Besides, some models are nondispersive or exhibit a general dispersion depending nonmonotonically on the frequency (or the wavelength).
The other important attribute of wave propagation is their possible _attenuation_. It is roughly quantified by a so-called _quality factor_, briefly _Q-factor_. Among various definitions (originally devised rather for oscillating electric circuits), a mechanically suitable definition is \(2\uppi\times\) the ratio between the kinetic or stored strain energy and the energy dissipated per cycle. Like dispersion, also the Q-factor may depend on the frequency (or wavelength) of waves. A very low Q-factor means that waves cannot propagate at all. Conversely, a Q-factor of \(+\infty\) means that there is no attenuation and the model is fully conservative as far as the energy of waves is concerned. These attributes can be combined quite arbitrarily by various rheological models.
The basic rheological viscoelastic models are often enhanced by various higher spatial gradients. These spatial-gradient enhancements are useful also for mathematical reasons especially in nonlinear situations arising at large strains, but they can also serve in modelling various internal length scales and various velocity dispersion and attenuation of elastic waves. There are many possibilities. Let us sort some options scrutinized in this paper qualitatively (together with some standard "simple" rheologies) in Table 1:
For completeness, let us mention that another way to build viscoelastic models is by using various integral operators either in space or in time, cf. [12, 13, 27, 28, 38, 57]. We will intentionally focus on rheologies governed by classical differential equations with higher gradients exclusively in space, with the aim to enlighten applications of gradient theories used in large-strain continuum-mechanical models.
## 2 Linear 1-D simple rheologies
We will consider and analyze 1-dimensional models, which also give information for isotropic multidimensional cases where a split into the volumetric and the deviatoric parts (relevant respectively for the longitudinal and shear waves) should be made, cf. e.g. [57]. The point of departure is the nondispersive, fully conservative elastodynamic model governed by the wave equation \(\varrho\frac{\partial^{2}}{\partial t^{2}}u-Cu_{xx}=0\) where \(u\) is the displacement, \(\varrho>0\) is the mass density and
\(C>0\) is the elastic modulus. Equivalently, in the rate formulation
\[\varrho\frac{\partial v}{\partial t}=Ce_{x}\quad\text{ and }\quad\frac{ \partial e}{\partial t}=v_{x}\,, \tag{1}\]
where \(e\) is the strain and \(v\) a velocity, meaning \(e=u_{x}\) and \(v=\frac{\partial}{\partial t}u\). Here the index \(x\) means (and will mean) the partial derivative \(\partial/\partial x\). In terms of the stress \(\sigma=Ce\), it can also be written as the system
\[\varrho\frac{\partial v}{\partial t}=\sigma_{x}\quad\text{ and }\quad\frac{ \partial\sigma}{\partial t}=Cv_{x}\,, \tag{2}\]
the linear relation \(\sigma=Ce\) being referred to as the _Hook law_. The energetics behind this system is based on the stored energy \(\varphi=\varphi(e)=\frac{1}{2}Ce^{2}\) and the kinetic energy \(\frac{1}{2}\varrho v^{2}\).
A conventional way to calculate dispersion and attenuation in linear media is to use the ansatz
\[u=\mathrm{e}^{\mathrm{i}(wt+x/\lambda)} \tag{3}\]
with the angular frequency \(w=\omega+\mathrm{i}\gamma\) considered complex with \(\omega,\gamma\in\mathbb{R}\) with physical dimension \(1/\mathrm{s}\) and the real-valued wavelength \(\lambda\) with physical dimension meters; here \(\mathrm{i}=\sqrt{-1}\) denotes the imaginary unit. This means \(u=\mathrm{e}^{-\gamma t}\mathrm{e}^{\mathrm{i}(\omega t+x/\lambda)}\), which reveals that \(\gamma\) determines the attenuation of the sinusoidal wave propagating with the speed \(\omega\lambda\). Using it for (1), i.e. using \(\varrho\frac{\partial^{2}}{\partial t^{2}}u=-\varrho w^{2}u\), and \(Cu_{xx}=-Cu/\lambda^{2}\), we can conclude that simply \(\gamma=0\) and \(\varrho\omega^{2}=C/\lambda^{2}\), from which we can see the wave velocity \(v=\omega\lambda=\sqrt{C/\varrho}\).
Sometimes, an alternative variant to (3) with a real-valued angular frequency \(w\) but complex-valued wave number \(k=1/\lambda\) can be used, cf. [14, Sect.2.3].
\begin{table}
\begin{tabular}{|c||c|c|} \hline dispersion\(\backslash\)Q-factor & \(+\infty\) (conservative) & \(<+\infty\) \\ \hline \hline normal & & Kelvin-Voigt (4) \\ & & with gradients (22) or (29) \\ & & with stress diffusion (61) \\ & & with kinematic gradient (69) \\ \hline anomalous & (29) with \(D=0\) & Maxwell (12) \\ & & with dissipative gradient (40) \\ \hline general & & Jeffreys (18) \\ & & Kelvin-Voigt \\ & & with mixed-gradient (35) \\ & & with stress gradient (53) \\ & & Maxwell with \\ & & conservative gradient (44) \\ \hline nondispersive & wave equation (1) & special dissipative gradient Kelvin-Voigt (29) \\ & & stress diffusion (64) \\ & & special kinematic gradient (68) \\ \hline \end{tabular}
\end{table}
Table 1: Basic classification of the dispersive models in this paper.
We further involve viscosity, using one (or possibly two) linear dashpot element(s) whose stress/strain response is time-dependent, governed by \(\sigma=D\frac{\partial}{\partial t}e\) with the viscosity \(D>0\), to be organized in parallel or in series, cf. Figure 1.
### Kelvin-Voigt visco-elastodynamic model
Let us start with the basic Kelvin-Voigt rheology. It involves the viscous dashpot parallel to the elastic element as depicted in Figure 1-left. It means an additive decomposition of the stress, which expands the elastic stress \(\sigma=Ce\) in (2) by a viscous part as \(\sigma=D\frac{\partial}{\partial t}e+Ce\). Thus the latter equation in (2) expands as \(\frac{\partial}{\partial t}\sigma=D\frac{\partial}{\partial t}v_{x}+Cv_{x}\). This leads to the dispersive wave equation, i.e. in the 1-dimensional variant
\[\varrho\frac{\partial^{2}u}{\partial t^{2}}-D\frac{\partial u_{ xx}}{\partial t}-Cu_{xx}=0\,. \tag{4}\]
Having in mind the ansatz (3), we have \(\varrho\frac{\partial^{2}}{\partial t^{2}}u=-\varrho w^{2}u\), \(D\frac{\partial}{\partial t}u_{xx}=-\mathrm{i}Dwu/\lambda^{2}\), and \(Cu_{xx}=-Cu/\lambda^{2}\), so that the one-dimensional dispersive wave equation (4) yields the algebraic condition
\[\varrho w^{2}-\mathrm{i}\,\frac{D}{\lambda^{2}}\,w-\frac{C}{ \lambda^{2}}=0\,. \tag{5}\]
When substituting \(w=\omega+\mathrm{i}\gamma\) so that \(w^{2}=\omega^{2}-\gamma^{2}+2\mathrm{i}\omega\gamma\), we obtain two algebraic equations for the real and the imaginary part each, called _Kramers-Kronig's relations_[29, 31]. More specifically, here
\[\varrho(\omega^{2}-\gamma^{2})=\frac{C}{\lambda^{2}}-\frac{D}{ \lambda^{2}}\gamma\quad\text{ and }\quad 2\varrho\gamma=\frac{D}{\lambda^{2}}\,. \tag{6}\]
From the latter equation we can read that \(\gamma=D/(2\varrho\lambda^{2})\) and then the former equation yields \((\lambda\omega)^{2}=C/\varrho+\gamma^{2}\lambda^{2}-D\gamma/\varrho=C/\varrho-\gamma^{2}\lambda^{2}=C/\varrho-D^{2}/(4\varrho^{2}\lambda^{2})\). Realizing that the speed of waves is \(v=\omega\lambda\), we obtain
\[v=v(\lambda)=\sqrt{\frac{C}{\varrho}-\frac{D^{2}}{4\varrho^{2} \lambda^{2}}}\,, \tag{7}\]
Figure 1: Schematic 1D-diagrams of the 3 basic rheologies considered in this paper.
which gives a _normal dispersion_ for sufficiently long waves, namely having a length \(\lambda>\lambda_{\mbox{\tiny{CRIT}}}\) with the critical wavelength \(\lambda_{\mbox{\tiny{CRIT}}}=D/(2\sqrt{\varrho C})>0\).
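For a quick orientation, with the illustrative values later used in Figure 2, i.e. \(C=1\), \(\varrho=1\), and \(D=3\), the formula (7) gives
\[\lambda_{\mbox{\tiny{CRIT}}}=\frac{D}{2\sqrt{\varrho C}}=\frac{3}{2}\qquad\text{and, e.g.,}\qquad v(3)=\sqrt{1-\frac{9}{4\cdot 9}}=\frac{\sqrt{3}}{2}\approx 0.87\,,\]
so that waves twice as long as \(\lambda_{\mbox{\tiny{CRIT}}}\) already propagate with about 87% of the nondispersive velocity \(\sqrt{C/\varrho}=1\).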
Referring to the ansatz (3), the amplitude decreases (per second) by a factor \(\mbox{e}^{-\gamma}\), which means by \(\mbox{e}^{-\gamma/\omega}\) per one cycle, realizing that one cycle lasts (up to the factor \(2\uppi\)) \(1/\omega\) seconds. The energy (depending quadratically on the amplitude) thus decreases by a factor \(\mbox{e}^{-2\gamma/\omega}\) per one cycle, i.e. the loss of energy is \(1-\mbox{e}^{-2\gamma/\omega}\). Thus, not entirely standardly, let us understand the Q-factor as
\[\mbox{Q-factor}\ \sim\frac{1}{1-\mbox{e}^{-2\gamma/\omega}}\,. \tag{8}\]
Counting \(\omega=v/\lambda\), it also means \(1/(1-\mbox{e}^{-2\lambda\gamma/v})\). Thus, taking \(\gamma=D/(2\varrho\lambda^{2})\) from (6), the quality factor can be understood as
\[\mbox{Q-factor}\ \sim\ \frac{1}{1-\mbox{e}^{-D/(\varrho\lambda v(\lambda))}} \quad\mbox{with}\quad v(\lambda)\ \mbox{ from (\ref{eq:1})}\,. \tag{9}\]
### Maxwellian visco-elastodynamic model
Let us proceed with the Maxwellian rheology, which yields dispersion of an opposite character, called _anomalous_. It uses the connection of a spring and a dashpot in series, as depicted in Figure 1-mid, i.e. it employs the Green-Naghdi [24] _additive decomposition_ of the total strain \(e=u_{x}\) into the elastic and the inelastic strains, denoted by \(e_{\rm el}\) and \(e_{\rm in}\), respectively. The inelastic strain is in the position of a so-called _internal variable_. The constitutive equations are
\[u_{x}=e_{\rm el}+e_{\rm in}\quad\mbox{and}\quad\sigma=Ce_{\rm el}=D\frac{\partial e_{\rm in}}{\partial t}\,. \tag{10}\]
From this, we can eliminate \(e_{\rm el}\) and \(e_{\rm in}\) by differentiating the Hook law as \(\frac{\partial}{\partial t}\sigma=C\frac{\partial}{\partial t}e_{\rm el}\) and by writing the additive decomposition in rates, using the velocity \(v=\frac{\partial}{\partial t}u\), as \(v_{x}=\frac{\partial}{\partial t}e_{\rm el}+\frac{\partial}{\partial t}e_{\rm in}\), which gives
\[\varrho\frac{\partial v}{\partial t}=\sigma_{x}\quad\mbox{ and }\quad\frac{1}{C}\frac{\partial\sigma}{\partial t}+\frac{\sigma}{D}=v_{x}\,. \tag{11}\]
This leads to the dispersive (so-called telegraph) wave equation
\[\frac{1}{C}\frac{\partial^{2}\sigma}{\partial t^{2}}+\frac{1}{D}\frac{ \partial\sigma}{\partial t}-\frac{1}{\varrho}\sigma_{xx}=0\,. \tag{12}\]
Alternatively, we can eliminate \(\sigma\) which leads to the same dispersive equation but in terms of \(u\) instead of \(\sigma\). Using (12) in terms of \(u\) with the ansatz (3), we obtain \(w^{2}/C-\mbox{i}w/D-1/(\varrho\lambda^{2})=0\). In terms of the real-valued coefficients \(\omega\) and \(\gamma\), it gives
\[\frac{\omega^{2}{-}\gamma^{2}}{C}+\mbox{i}\frac{2\omega\gamma}{C}-\mbox{i} \frac{\omega}{D}+\frac{\gamma}{D}-\frac{1}{\varrho\lambda^{2}}=0\,, \tag{13}\]
from which we obtain the _Kramers-Kronig relations_ here as
\[\frac{1}{C}(\omega^{2}-\gamma^{2})+\frac{1}{D}\gamma-\frac{1}{\varrho\lambda^{2} }=0\,\,\,\,\,\,\,\,\,\,\,\mbox{and}\,\,\,\,\,\,\,\,\,\,\frac{2}{C}\gamma=\frac{ 1}{D}, \tag{14}\]
Now the latter equation tells us that the attenuation coefficient \(\gamma=C/(2D)\) is independent of frequency, which is related to the low-attenuation (hyperbolic) character of Maxwell materials even under high-frequency waves, in contrast to the parabolic rheologies as the Kelvin-Voigt one. The former equation in (14) yields \(\omega^{2}=C/(\varrho\lambda^{2})+\gamma^{2}-C\gamma/D=C/(\varrho\lambda^{2})- \gamma^{2}=C/(\varrho\lambda^{2})-C^{2}/(4D^{2})\). Realizing that the speed of wave is \(v=\omega\lambda\), we obtain
\[v=v(\lambda)=\sqrt{\frac{C}{\varrho}-\frac{C^{2}}{4D^{2}}\lambda^{2}} \tag{15}\]
for \(\lambda\leq\lambda_{\rm crit}:=2D/\!\sqrt{\varrho C}\), which reveals an _anomalous dispersion_, i.e. the high-frequency waves propagate faster than low-frequency ones. It should be noted that waves with length longer than \(\lambda_{\rm crit}\) cannot propagate through such 1-dimensional Maxwellian media since the fluidic character of such media starts dominating for ultra-low-frequency waves. Conversely, ultra-short waves propagate with a velocity close to that of the nondispersive solid, \(\sqrt{C/\varrho}\). In fact, (12) is a so-called telegraph equation which is known to exhibit a _hyperbolic character_ with only weak attenuation. This in particular contrasts with e.g. Kelvin-Voigt materials where high-frequency vibrations or waves are highly attenuated. Like in (9), the Q-factor \(1/(1-{\rm e}^{-2\lambda\gamma/v})\) now takes \(\gamma=C/(2D)\), i.e.
\[\mbox{Q-factor}\;\sim\;\frac{1}{1-{\rm e}^{-C\lambda/(Dv(\lambda))}}\,\,\,\, \,\,\mbox{with}\,\,\,\,\,\,v(\lambda)\,\,\,\mbox{from (\ref{eq:15})}\,. \tag{16}\]
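Again just for orientation, with the values used for the Maxwell model in Figure 2, i.e. \(C=1\), \(\varrho=1\), and \(D=7\), one gets from (14) and (15)
\[\gamma=\frac{C}{2D}=\frac{1}{14}\,,\qquad\lambda_{\rm crit}=\frac{2D}{\sqrt{\varrho C}}=14\,,\qquad\text{and, e.g.,}\qquad v(7)=\sqrt{1-\frac{49}{196}}=\frac{\sqrt{3}}{2}\approx 0.87\,,\]
so that, in contrast to the Kelvin-Voigt example above, only sufficiently short waves (here shorter than 14 length units) can propagate at all.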
### Jeffreys visco-elastodynamic model
More general viscoelastic rheologies may yield a more general (nonmonotone) dispersion. Let us illustrate it on the Jeffreys rheology, as in Figure 1-right. Besides homogeneous media, such a model is used for porous media (Maxwellian polymers or rocks) filled with Newtonian fluids [55]. It combines the additive strain decomposition in (10) with the additive stress decomposition as in the Kelvin-Voigt model, resulting in the constitutive equations
\[\sigma=\sigma_{1}+\sigma_{2},\qquad u_{x}=e_{\rm el}+e_{\rm in},\qquad\sigma_{1}=D_{1}\frac{\partial u_{x}}{\partial t},\qquad\sigma_{2}=D_{2}\frac{\partial e_{\rm in}}{\partial t}=Ce_{\rm el}, \tag{17}\]
which is to be completed by the momentum equation \(\varrho\frac{\partial}{\partial t}v=\sigma_{x}\).
To reveal a dispersive wave equation, we eliminate \(u\) from \(\frac{\partial}{\partial t}\sigma/C+\sigma/D_{2}=D_{1}\frac{\partial^{2}}{\partial t^{2}}u_{x}/C+(1+D_{1}/D_{2})\frac{\partial}{\partial t}u_{x}\), by differentiating it in time and by substituting \(\frac{\partial^{3}}{\partial t^{3}}u_{x}=\frac{\partial}{\partial t}\sigma_{xx}/\varrho\) and \(\frac{\partial^{2}}{\partial t^{2}}u_{x}=\sigma_{xx}/\varrho\). This gives the dispersive equation
\[\frac{1}{C}\frac{\partial^{2}\sigma}{\partial t^{2}}+\frac{1}{D_{2}}\frac{ \partial\sigma}{\partial t}-\frac{D_{1}}{\varrho C}\frac{\partial\sigma_{xx }}{\partial t}-\left(1+\frac{D_{1}}{D_{2}}\right)\frac{1}{\varrho}\sigma_{xx }=0\,. \tag{18}\]
Exploiting the ansatz (3) for (18) written for \(u\) instead of \(\sigma\), we obtain \(w^{2}/C-{\rm i}w/D_{2}-{\rm i}D_{1}w/(\varrho C\lambda^{2})-(1{+}D_{1}/D_{2})/(\varrho\lambda^{2})=0\). In terms of the real-valued coefficients \(\omega\) and \(\gamma\), it gives \((\omega^{2}{-}\gamma^{2}+2{\rm i}\omega\gamma)/C+(\gamma-{\rm i}\omega)/D_{2}+(\gamma-{\rm i}\omega)D_{1}/(\varrho C\lambda^{2})-(1{+}D_{1}/D_{2})/(\varrho\lambda^{2})=0\). The Kramers-Kronig relations are here
\[\frac{\omega^{2}{-}\gamma^{2}}{C}+\frac{\gamma}{D_{2}}+\frac{\gamma D_{1}}{ \varrho C\lambda^{2}}-\frac{D_{1}{+}D_{2}}{\varrho D_{2}\lambda^{2}}=0\quad \mbox{ and }\quad\frac{2\gamma}{C}-\frac{1}{D_{2}}-\frac{D_{1}}{\varrho C \lambda^{2}}=0\,. \tag{19}\]
From this we obtain the expression for \(v=\omega\lambda\) as
\[v(\lambda)=\!\sqrt{\left(1{+}\frac{D_{1}}{D_{2}}\right)\!\frac{C}{\varrho}+ \gamma^{2}\lambda^{2}-\frac{\gamma D_{1}}{\varrho}-\frac{\gamma C}{D_{2}} \lambda^{2}}\quad\mbox{with }\ \gamma=\frac{C}{2D_{2}}+\frac{D_{1}}{2\varrho\lambda^{2}} \tag{20}\]
and
\[\mbox{Q-factor }\sim\ \frac{1}{1-{\rm e}^{-2\lambda\gamma/v(\lambda)}}\quad \mbox{with }v(\lambda)\mbox{ and }\gamma=\gamma(\lambda)\mbox{ from }(\ref{eq:1})\,. \tag{21}\]
Note that, for \(D_{1}=0\), the Jeffreys model turns into the Maxwell rheology, i.e. the dispersive wave equation (18) naturally turns into (12) with \(D=D_{2}\) and thus also (20) turns into (15). Conversely, for \(D_{2}=\infty\), the Jeffreys model turns into the Kelvin-Voigt rheology, i.e. the dispersive wave equation (18) naturally turns into (4) with \(D=D_{1}\) and thus also (20) turns into (7).
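As a quick consistency check of the former limit, putting \(D_{1}=0\) into (20) gives \(\gamma=C/(2D_{2})\) and
\[v^{2}=\frac{C}{\varrho}+\gamma^{2}\lambda^{2}-\frac{\gamma C}{D_{2}}\lambda^{2}=\frac{C}{\varrho}+\frac{C^{2}\lambda^{2}}{4D_{2}^{2}}-\frac{C^{2}\lambda^{2}}{2D_{2}^{2}}=\frac{C}{\varrho}-\frac{C^{2}\lambda^{2}}{4D_{2}^{2}}\,,\]
which is indeed (15) with \(D=D_{2}\).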
For illustration, a comparison of all these three rheological models is in Fig. 2. Let us note that the velocity here never exceeds the wave speed \(\sqrt{C/\varrho}\) of the fully inviscid nondispersive model (1), which was normalized to 1 in Fig. 2. An illustration of models which have the capacity to exceed this velocity in some frequency range is given in the following Section 3.
Figure 2: Dependence of the velocity (left) and the Q-factor (right) on the wavelength \(\lambda\) illustrating the normal, the anomalous, and the general dispersion for the Kelvin-Voigt (with \(D=3\)), Maxwell (with \(D=7\)), and Jeffreys (with \(D_{1}=3\) and \(D_{2}=7\)) rheologies, respectively, for \(C{=}1\) and \(\varrho=1\).
**Remark 1** (Standard solid: Poynting-Thomson-Zener rheology.).: There is only one 3-element rheology presented in Figure 1. The other possibility of a 3-element rheology arises from the Maxwell rheology arranged in parallel with a Hook elastic element or (alternatively and, at small strains, equivalently) as the Kelvin-Voigt model arranged in series with a Hook elastic element. This is called a Zener or a Poynting-Thomson standard solid, respectively. The explicit form of the dispersion and attenuation calculated via the ansatz (3) involves a 3rd-order algebraic equation and leads to a complicated formula; viz [33, Remark 6.5.6] or, in terms of the phase velocity, [14, Sect.2.4.3]. Moreover, the standard-solid (like the Maxwell) rheology has a hyperbolic character (manifested by easy propagation of waves with arbitrarily high frequencies) and its analytically rigorous large-strain variant is troublesome, as partly mentioned in Section 4 below.
## 3 Various gradient enhancements
There are many options for enhancing the above simple-material models by some spatial gradients. Such extensions are, at least in some cases, related to the concept of the so-called _non-simple_ or _multipolar materials_.
### Dissipative-gradient Kelvin-Voigt rheology
Let us start with a gradient enhancement of the _Kelvin-Voigt rheology_ by a 2nd-grade "hyper" viscosity with the coefficient \(\ell^{2}D>0\), i.e. extending (4) as
\[\varrho\frac{\partial^{2}u}{\partial t^{2}}-D\frac{\partial u_{xx}}{\partial t }+\ell^{2}D\frac{\partial u_{xxxx}}{\partial t}-Cu_{xx}=0\,; \tag{22}\]
Figure 3: Schematic 1D-diagrams of some gradient-enhanced rheologies from Figure 1; the enhanced elements are depicted with wider lines. Some other gradient enhancements from Section 3.7 do not admit such a schematic depiction.
the length-scale parameter \(\ell>0\) has the physical dimension meters.
Having in mind the ansatz (3), we have \(\varrho\frac{\partial^{2}}{\partial\ell^{2}}u=-\varrho w^{2}u\), \(D\frac{\partial}{\partial t}u_{xx}=-{\rm i}Dwu/\lambda^{2}\), \(\ell^{2}D\frac{\partial}{\partial t}u_{xxxx}={\rm i}\ell^{2}Dwu/\lambda^{4}\), and \(Cu_{xx}=-Cu/\lambda^{2}\), so that the one-dimensional dispersive wave equation (22) yields the algebraic condition
\[\varrho w^{2}-{\rm i}\left(\frac{D}{\lambda^{2}}{+}\frac{\ell^{2}D}{\lambda^{ 4}}\right)w-\frac{C}{\lambda^{2}}=0\,. \tag{23}\]
When substituting \(w=\omega+{\rm i}\gamma\) so that \(w^{2}=\omega^{2}-\gamma^{2}+2{\rm i}\omega\gamma\), we obtain two algebraic equations for the real and the imaginary part each, called _Kramers-Kronig's relations_. More specifically, here
\[\varrho(\omega^{2}-\gamma^{2})=\frac{C}{\lambda^{2}}-\left(\frac{D}{\lambda^{ 2}}{+}\frac{\ell^{2}D}{\lambda^{4}}\right)\!\gamma\quad\mbox{ and }\quad 2 \varrho\gamma=\frac{D}{\lambda^{2}}{+}\frac{\ell^{2}D}{\lambda^{4}}\,. \tag{24}\]
From the latter equation we can read that \(\gamma=D(\lambda^{2}+\ell^{2})/(2\varrho\lambda^{4})\) and then the former equation yields \((\lambda\omega)^{2}=C/\varrho+\gamma^{2}\lambda^{2}-D(1+\ell^{2}/\lambda^{2}) \gamma/\varrho=C/\varrho-\gamma^{2}\lambda^{2}=C/\varrho-D^{2}(\lambda^{2}+ \ell^{2})^{2}/(4\varrho^{2}\lambda^{6})\). Realizing that the speed of waves is \(v=\omega\lambda\), we obtain
\[v=v(\lambda)=\sqrt{\frac{C}{\varrho}-D^{2}\frac{(\lambda^{2}{+}\ell^{2})^{2}}{ 4\varrho^{2}\lambda^{6}}}\,, \tag{25}\]
which gives a _normal dispersion_ for sufficiently long waves, namely having a length \(\lambda>\lambda_{\mbox{\tiny{crit}}}\) with the critical wavelength \(\lambda_{\mbox{\tiny{crit}}}>0\) solving the equation
\[2\sqrt{\varrho C}\lambda_{\mbox{\tiny{crit}}}^{3}-D\lambda_{\mbox{\tiny{crit} }}^{2}-\ell^{2}D=0\,, \tag{26}\]
cf. Fig. 4 for illustration. Let us recall that the adjective "normal" for dispersion means that waves with longer lengths propagate faster than those with shorter lengths. The attenuation \(\gamma=\gamma(\lambda)\) is decreasing with the wavelength \(\lambda\), cf. the latter relation in (24). In particular, waves with the length \(\lambda_{\mbox{\tiny{crit}}}\) or shorter are so much attenuated that they cannot propagate at all. This also reveals the dispersion/attenuation for the simple Kelvin-Voigt model from Sect. 2.1 when putting \(\ell^{2}=0\) into (25) and (26); in particular \(\lambda_{\mbox{\tiny{crit}}}=D/\sqrt{4\varrho C}\). The other extreme case is for the inviscid situation \(D\to 0+\) where \(\lambda_{\mbox{\tiny{crit}}}\to 0+\) so that also waves with ultra-high frequencies can propagate.
As for the Q-factor \(1/(1{-}{\rm e}^{-2\lambda\gamma/v})\) in the "hyper" Kelvin-Voigt model (22), here \(\gamma=\gamma(\lambda)\) is determined from the latter equality in (24) and \(1/\omega=\lambda/v(\lambda)\), where the velocity \(v(\lambda)\) is now taken from (25), i.e.
\[\mbox{Q-factor }\sim\ \frac{1}{1-{\rm e}^{-D(\ell^{2}+\lambda^{2})/(\varrho \lambda^{3}v(\lambda))}}\quad\mbox{with}\quad v(\lambda)\ \mbox{ from (\ref{eq:1})}\,. \tag{27}\]
There is an interesting question about what is the difference between the normal dispersion caused by the standard simple viscosity and that caused by higher-grade multipolar viscosities. In Fig. 4, we can see a comparison of the normal dispersion due to the conventional viscosity \(D=10\) with \(\ell=0\) or due to the hyperviscosity \(D\sim 0\) and \(\ell^{2}D=30\) with the same critical wavelength \(\lambda_{\mbox{\tiny{crit}}}\).
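In the hyper-viscosity-dominated regime, i.e. for \(D\to 0+\) with \(\ell^{2}D\) kept fixed, the term \(D\lambda_{\mbox{\tiny{crit}}}^{2}\) in (26) becomes negligible and the critical wavelength can be read off explicitly as
\[\lambda_{\mbox{\tiny{crit}}}\approx\Big{(}\frac{\ell^{2}D}{2\sqrt{\varrho C}}\Big{)}^{\!1/3},\]
while in the opposite extreme \(\ell=0\) one recovers \(\lambda_{\mbox{\tiny{crit}}}=D/(2\sqrt{\varrho C})\) as noted above.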
**Remark 2** (Energetics for \(\ell\to 0\).).: Considering (22) on the one-dimensional domain \(\Omega=[0,1]\) with zero traction boundary conditions \(u_{x}=u_{xx}=0\) at \(x=0\) and \(x=1\), the energetics behind (22) can be seen by testing it by \(\frac{\partial}{\partial t}u\) and integrating over \(\Omega\). This gives
\[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\underbrace{\frac{\varrho}{2}\Big{(}\frac{\partial u}{\partial t}\Big{)}^{2}\!\!+\frac{C}{2}(u_{x})^{2}}_{\text{kinetic and stored energy}}\,\,\mathrm{d}x+\int_{\Omega}\underbrace{D\Big{(}\frac{\partial u_{x}}{\partial t}\Big{)}^{2}\!\!+\ell^{2}D\Big{(}\frac{\partial u_{xx}}{\partial t}\Big{)}^{2}}_{\text{dissipation rate}}\,\,\mathrm{d}x=0\,. \tag{28}\]
The concept of nonsimple materials and, in particular, the higher-gradient extension is often considered questionable and it is therefore interesting to ask how it influences the energetics for small \(\ell>0\). When testing (22) by \(\frac{\partial}{\partial t}u_{xx}\), we obtain
\[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\frac{\varrho}{2}\Big{(}\frac{\partial u_{x}}{\partial t}\Big{)}^{2}\!\!+\frac{C}{2}(u_{xx})^{2}\mathrm{d}x+\int_{\Omega}D\Big{(}\frac{\partial u_{xx}}{\partial t}\Big{)}^{2}\!\!+\ell^{2}D\Big{(}\frac{\partial u_{xxx}}{\partial t}\Big{)}^{2}\,\mathrm{d}x=0\,.\]
From this, considering enough regular (smooth) initial conditions, namely \(u_{xx}\left|{}_{t=0}\right.\) and \(\frac{\partial}{\partial t}u_{x}\left|{}_{t=0}\right.\) square-integrable on \(\Omega\), we can see that \(\int_{\Omega}D(\frac{\partial}{\partial t}u_{xx})^{2}\,\mathrm{d}x\) is integrable in time. As a result, the dissipation due to the hyper-viscosity in (28) vanishes for \(\ell\to 0\). In particular, the energetics of the model for small \(\ell>0\) is not much different from the energetics of the simple Kelvin-Voigt model (4). These energetic considerations hold also for multidimensional situations, cf. also [32, Prop. 2].
### Conservative-gradient Kelvin-Voigt rheology
Another variant of dispersion by incorporating higher-order spatial gradients into the Kelvin-Voigt rheology is in the conservative part of the system by augmenting the stored energy by the strain gradient as \(\varphi=\frac{1}{2}C(e^{2}+\ell^{2}e_{x}^{2})\) where \(\ell>0\) is a length-scale parameter (in meters).
Figure 4: A comparison of the (normal) dispersion and Q-factor due to the “hyper” viscosity (solid line) with the standard viscosity in the Kelvin-Voigt viscoelastic model as in Figure 2 (dashed line). For the same \(\lambda_{\text{\tiny{CRT}}}\) from (26), the dispersion due to hyper-viscosity can be less visible than the dispersion due to the usual simple viscosity and facilitates easier wave propagation due to a higher Q-factor.
Sometimes, this gradient term is interpreted as _capillarity_. In the Kelvin-Voigt model (4), this leads to the dispersive wave equation
\[\varrho\frac{\partial^{2}u}{\partial t^{2}}-D\frac{\partial u_{xx}}{\partial t}- C\big{(}u{-}\ell^{2}u_{xx}\big{)}_{xx}=0\,. \tag{29}\]
This corresponds to the expansion of the stored energy \(\frac{1}{2}Cu_{x}^{2}\) in (4) by the term \(\frac{1}{2}\ell^{2}Cu_{xx}^{2}\). Instead of (5), using also \(\ell^{2}Cu_{xxxx}=\ell^{2}Cu/\lambda^{4}\), we now have \(\varrho w^{2}-{\rm i}Dw/\lambda^{2}-C/\lambda^{2}-\ell^{2}C/\lambda^{4}=0\) so that the _Kramers-Kronig's relations_ (6) now expand as
\[\varrho(\omega^{2}-\gamma^{2})=\frac{C}{\lambda^{2}}-D\frac{\gamma}{\lambda^{2 }}+\ell^{2}\frac{C}{\lambda^{4}}\quad\mbox{ and }\quad 2\varrho\gamma=\frac{D}{\lambda^{2}}\,; \tag{30}\]
the length-scale parameter \(\ell>0\) has again the physical dimension meters. The speed of wave \(v=\omega\lambda\) is now
\[v=v(\lambda)=\sqrt{\frac{C}{\varrho}+\left(\ell^{2}C{-}\frac{D^{2}}{4\varrho} \right)\!\frac{1}{\varrho\lambda^{2}}}\,. \tag{31}\]
The corresponding Q-factor \(1/(1{-}{\rm e}^{-2\lambda\gamma/v})\) uses \(\gamma=D/(2\varrho\lambda^{2})\) from (30), i.e.
\[\mbox{Q-factor }\sim\ \frac{1}{1-{\rm e}^{-D/(\varrho\lambda v(\lambda))}} \quad\mbox{with}\quad v(\lambda)\ \mbox{ from }\eqref{eq:1}\,. \tag{32}\]
Interestingly, for \(\ell>D/(2\sqrt{\varrho C})\), the conservative gradient facilitates the propagation of high-frequency waves (which otherwise could not propagate in the standard Kelvin-Voigt model), yielding an _anomalous dispersion_, i.e. higher-frequency waves (i.e. with shorter wavelengths) propagate faster than waves with lower frequencies. Moreover, for the special ratio of \(D\) and \(\ell\), namely
\[\frac{D}{\ell}=2\sqrt{\varrho C}\,, \tag{33}\]
we obtain a non-dispersive model with the velocity \(v=\sqrt{C/\varrho}\) as in the basic elastodynamic model \(\varrho\frac{\partial^{2}}{\partial t^{2}}u=Cu_{xx}\) but now with a finite \(\lambda\)-dependent Q-factor. In the opposite case \(D/\ell>2\sqrt{\varrho C}\), we obtain the normal dispersion. Regardless of the type of dispersion, for \(D>0\) (very) small, the Q-factor is (very) large. In particular, we can thus devise a gradient solid/parabolic-type model (which might be advantageous for mathematical reasons, especially if some nonlinearities were involved) which is nondispersive and possibly has a very low attenuation (high Q-factor), viz Figure 5 (dashed line).
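Indeed, substituting (33) into (31), the gradient and the viscous contributions compensate each other:
\[\ell^{2}C-\frac{D^{2}}{4\varrho}=\ell^{2}C-\frac{4\varrho C\ell^{2}}{4\varrho}=0\qquad\Longrightarrow\qquad v(\lambda)\equiv\sqrt{\frac{C}{\varrho}}\,,\]
while \(\gamma=D/(2\varrho\lambda^{2})>0\) from (30) keeps the Q-factor (32) finite.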
A special situation occurs if \(D=0\), which makes the model (29) conservative. In particular, there is no attenuation so the Q-factor is \(+\infty\). Actually, it is the conservative enhancement of (1) leading to the anomalous dispersion illustrated in Figure 6, as considered in [11, Sect. 1.2.2], [28, Sec.2.2], or [36].
**Remark 3** (Energetics for \(\ell\to 0\).).: Considering (29) on the one-dimensional domain as in Remark 2, its energetics can again be seen by testing it by \(\frac{\partial}{\partial t}u\) and integrating over \(\Omega\)
which gives
\[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\underbrace{\frac{\varrho}{2}\Big{(}\frac{\partial u}{\partial t}\Big{)}^{2}\!\!+\frac{C}{2}(u_{x})^{2}+\frac{\ell^{2}C}{2}(u_{xx})^{2}}_{\text{kinetic and stored energy}}\,\mathrm{d}x+\int_{\Omega}\underbrace{D\Big{(}\frac{\partial u_{x}}{\partial t}\Big{)}^{2}}_{\text{dissipation rate}}\,\mathrm{d}x=0\,. \tag{34}\]
To show the asymptotics for \(\ell\to 0\) as in Remark 2, we test (29) by \(\frac{\partial}{\partial t}u_{xxxx}\). This yields
\[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\frac{\varrho}{2}\Big{(}\frac{ \partial u_{xx}}{\partial t}\Big{)}^{2}+\frac{C}{2}(u_{xxx})^{2}+\ell^{2}\frac {C}{2}(u_{xxxx})^{2}\mathrm{d}x+\int_{\Omega}D\Big{(}\frac{\partial u_{xxx}}{ \partial t}\Big{)}^{2}\,\mathrm{d}x=0\,.\]
From this, considering the initial conditions \(u_{xxx}|_{t=0}\) and \(\frac{\partial}{\partial t}u_{xx}|_{t=0}\) square-integrable on \(\Omega\), we can see that \(\int_{\Omega}C(\frac{\partial}{\partial t}u_{xx})^{2}\,\mathrm{d}x\) is uniformly bounded in time. As a result, the higher-gradient stored energy term in (34) integrated in time vanishes for \(\ell\to 0\). In particular, the energetics of the model for small \(\ell>0\) is not much different from the energetics of the simple Kelvin-Voigt model (4). These energetic considerations hold also for multidimensional situations.
Figure 5: Normal (solid line) and anomalous (dotted line) dispersion which both can be obtained by the model (29) as well as the nondispersive variant (dashed line); \(\ell=1\), \(C\)=1 and \(\varrho=1\).
Figure 6: Anomalous dispersion of a conservative model (29) with \(D=0\) for various length-scale parameters \(\ell\); \(C\)=1 and \(\varrho=1\). The Q-factor is identically \(+\infty\).
### Mixed-gradient Kelvin-Voigt rheology
Moreover, (22) and (29) can be combined together to obtain a dispersion depending nonmonotonically on the wavelength by combining the normal and the anomalous dispersion, modelled here by gradients in the nonconservative and the conservative parts, respectively. Considering now two length-scale parameters \(\ell_{1}>0\) and \(\ell_{2}>0\), we have in mind the dispersive wave equation
\[\varrho\frac{\partial^{2}u}{\partial t^{2}}-D\frac{\partial u_{xx}}{\partial t} +\ell_{1}^{2}D\frac{\partial u_{xxxx}}{\partial t}-C\big{(}u{-}\ell_{2}^{2}u_{ xx}\big{)}_{xx}=0\,. \tag{35}\]
The combination of (23) with the corresponding algebraic condition for (29) yields
\[\varrho w^{2}-{\rm i}\left(\frac{D}{\lambda^{2}}{+}\frac{\ell_{1}^{2}D}{ \lambda^{4}}\right)w-\bigg{(}1{+}\frac{\ell_{2}^{2}}{\lambda^{2}}\bigg{)}\frac {C}{\lambda^{2}}=0\,. \tag{36}\]
When substituting \(w=\omega+{\rm i}\gamma\) so that \(w^{2}=\omega^{2}-\gamma^{2}+2{\rm i}\omega\gamma\), we obtain the _Kramers-Kronig's relations_
\[\varrho(\omega^{2}-\gamma^{2})=\frac{C}{\lambda^{2}}-\bigg{(}\frac{D}{ \lambda^{2}}+\frac{\ell_{1}^{2}D}{\lambda^{4}}\bigg{)}\gamma+\ell_{2}^{2} \frac{C}{\lambda^{4}}\quad\mbox{ and }\quad 2\varrho\gamma=\frac{D}{ \lambda^{2}}{+}\frac{\ell_{1}^{2}D}{\lambda^{4}}\,. \tag{37}\]
The speed of wave \(v=\omega\lambda\) as a function of the wavelength is now
\[v=v(\lambda)=\sqrt{\frac{C}{\varrho}\bigg{(}1+\frac{\ell_{2}^{2}}{\lambda^{2 }}\bigg{)}-\frac{D^{2}}{4\varrho^{2}\lambda^{2}}\bigg{(}1+\frac{\ell_{1}^{2}} {\lambda^{2}}\bigg{)}^{2}}\,. \tag{38}\]
The Q-factor \(1/(1{-}{\rm e}^{-2\lambda\gamma/v})\) corresponding to this model uses \(\gamma=\gamma(\lambda)=(D/\lambda^{2}{+}\ell_{1}^{2}D/\lambda^{4})/(2\varrho)\) and \(v=v(\lambda)\) from (38), i.e.
\[\mbox{Q-factor }\sim\frac{1}{1-{\rm e}^{-D(\ell_{1}^{2}+\lambda^{2})/(\varrho \lambda^{3}v(\lambda))}}\quad\mbox{with}\quad v(\lambda)\mbox{ \ from (\ref{eq:11})}\,. \tag{39}\]
The combination of the normal and the anomalous dispersion in the formulas (25) and (31) allows for devising a quite general nonmonotonic dispersion, as indicated in Figure 7.
### Maxwell rheology with dissipative gradient
Besides the Kelvin-Voigt model, the gradient enhancement can be made also for other rheologies. For the _Maxwell rheology_, it is important to start with the formulation employing the internal variable (10). There are essentially two options: a dissipative-gradient extension of the flow rule \(D\frac{\partial}{\partial t}e_{\rm in}=\sigma\) as \(D\frac{\partial}{\partial t}e_{\rm in}=\sigma+{\rm div}(\ell^{2}D\frac{\partial}{\partial t}\nabla e_{\rm in})\) or a conservative-gradient extension of the Hook law \(Ce_{\rm el}=\sigma\) as \(Ce_{\rm el}=\sigma+{\rm div}(\ell^{2}C\nabla e_{\rm el})\) with some length-scale parameter \(\ell>0\), again of the physical dimension meters.
The former option of the dissipative gradient, written as \(D\pi=\sigma+{\rm div}(\ell^{2}D\nabla\pi)\) when abbreviating by \(\pi:=\frac{\partial}{\partial t}e_{\rm in}\) the _rate of the inelastic creep strain_ \(e_{\rm in}\), leads to
\[\varrho\frac{\partial^{2}u}{\partial t^{2}}=\sigma_{x},\qquad D\big{(}\pi{-} \ell^{2}\pi_{xx}\big{)}=\sigma,\quad\mbox{and}\quad\frac{\partial\sigma}{ \partial t}=C\Big{(}\frac{\partial u_{x}}{\partial t}-\pi\Big{)}\,. \tag{40}\]
Differentiating the last equation twice in space gives \(\frac{\partial}{\partial t}\sigma_{xx}=C(\frac{\partial}{\partial t}u_{xxx}-\pi_{xx})\). Summing them with the weights \(D\) and \(-\ell^{2}D\) allows for the elimination of the internal rate variable \(\pi\) when using the second equation in (40), namely \(D(\frac{\partial}{\partial t}\sigma-\ell^{2}\frac{\partial}{\partial t}\sigma_{xx})=DC(\frac{\partial}{\partial t}u_{x}-\ell^{2}\frac{\partial}{\partial t}u_{xxx})-C\sigma\). Differentiating in time and using the first equation in (40) allows for the elimination of \(u\), yielding the dispersive wave equation
\[\frac{1}{C}\frac{\partial^{2}}{\partial t^{2}}\big{(}\sigma{-}\ell^{2}\sigma_{ xx}\big{)}+\frac{1}{D}\frac{\partial\sigma}{\partial t}-\frac{1}{\varrho} \big{(}\sigma{-}\ell^{2}\sigma_{xx}\big{)}_{xx}=0\,. \tag{41}\]
Alternatively, we can eliminate \(\sigma\): using \(\varrho\frac{\partial^{2}}{\partial t^{2}}u=\sigma_{x}\) for \(D(\frac{\partial}{\partial t}\sigma_{x}-\ell^{2}\frac{\partial}{\partial t} \sigma_{xxx})=DC(\frac{\partial}{\partial t}u_{xx}-\ell^{2}\frac{\partial}{ \partial t}u_{xxxx})-C\sigma_{x}\) gives \(\varrho D(\frac{\partial^{3}}{\partial t^{3}}u-\ell^{2}\frac{\partial^{3}}{ \partial t^{3}}u_{xx})=DC(\frac{\partial}{\partial t}u_{xx}-\ell^{2}\frac{ \partial}{\partial t}u_{xxxx})-\varrho C\frac{\partial^{2}}{\partial t^{2}}u\). When integrated in time, it leads to the dispersive equation (41) written in terms of \(u\) instead of \(\sigma\). Interestingly, the gradient term \(\ell^{2}\pi_{xx}\) in (40) is manifested in the dispersive wave equation (41) written for \(u\) as an inertia-gradient term \(\varrho\ell^{2}\frac{\partial^{2}}{\partial t^{2}}u_{xx}\), cf. Remark 5 below.
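For clarity, the mentioned dispersive equation in terms of the displacement is just (41) with \(u\) in place of \(\sigma\), i.e.
\[\frac{1}{C}\frac{\partial^{2}}{\partial t^{2}}\big{(}u{-}\ell^{2}u_{xx}\big{)}+\frac{1}{D}\frac{\partial u}{\partial t}-\frac{1}{\varrho}\big{(}u{-}\ell^{2}u_{xx}\big{)}_{xx}=0\,,\]
in which the term \(-(\ell^{2}/C)\frac{\partial^{2}}{\partial t^{2}}u_{xx}\) is, up to the factor \(-1/(\varrho C)\), the mentioned inertia-gradient term \(\varrho\ell^{2}\frac{\partial^{2}}{\partial t^{2}}u_{xx}\).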
Exploiting the ansatz (3) now for (41), we obtain
\[\frac{w^{2}}{C}\bigg{(}1{+}\frac{\ell^{2}}{\lambda^{2}}\bigg{)}-{\rm i}\frac{ w}{D}-\frac{1}{\varrho\lambda^{2}}\bigg{(}1{+}\frac{\ell^{2}}{\lambda^{2}} \bigg{)}=0\]
so that, realizing \(w=\omega+{\rm i}\gamma\), we expand (13) as
\[\frac{\omega^{2}{-}\gamma^{2}}{C}\bigg{(}1{+}\frac{\ell^{2}}{ \lambda^{2}}\bigg{)}+{\rm i}\frac{2\omega\gamma}{C}\bigg{(}1{+}\frac{\ell^{2} }{\lambda^{2}}\bigg{)}-{\rm i}\frac{\omega}{D}+\frac{\gamma}{D}-\frac{1}{ \varrho\lambda^{2}}\bigg{(}1{+}\frac{\ell^{2}}{\lambda^{2}}\bigg{)}=0\,.\]
The Kramers-Kronig relations (14) then expand here as
\[\frac{1}{C}(\omega^{2}-\gamma^{2})+\frac{\gamma\lambda^{2}}{D(\ell^{2}{+}\lambda^{2})}-\frac{1}{\varrho\lambda^{2}}=0\quad\quad\text{and}\quad\quad\frac{2\gamma}{C}\bigg{(}1{+}\frac{\ell^{2}}{\lambda^{2}}\bigg{)}=\frac{1}{D}\,.\]
Thus (15) modifies as
\[v=v(\lambda)=\sqrt{\frac{C}{\varrho}-\frac{C^{2}\lambda^{6}}{4D^{2}(\lambda^{ 2}{+}\ell^{2})^{2}}} \tag{42}\]
Figure 7: A nonmonotone dispersion of the wave velocity (left) due to (38) and the Q-factor (right) due to (39) in dependence on the wavelength \(\lambda\); \(C=1\), \(\varrho=1\), \(D\sim 0\), and \(\ell_{2}=2\).
while the Q-factor is again from (16) but involving now (42) instead of (15). Cf. Figure 8, which illustrates that the gradient term in (40) facilitates especially the propagation of longer-length waves (as far as less dispersion and a higher Q-factor are concerned) in comparison with the simple Maxwell rheology. Noteworthy, it is an example of how a gradient extension of the dissipative part can lead to less dissipation.
When reducing \(D\pi-D\ell^{2}\pi_{xx}=\sigma\) in (40) to \(-H\pi_{xx}=\sigma\) with \(H=D\ell^{2}\) the hyperviscosity coefficient, (42) turns into
\[v=v(\lambda)=\sqrt{\frac{C}{\varrho}-\frac{C^{2}\lambda^{6}}{4H^{2}}}\,. \tag{43}\]
The Q-factor (8) is now \(1/(1-\mathrm{e}^{-C\lambda^{3}/(Hv(\lambda))})\) with \(v(\lambda)\) from (43). It is interesting to compare the standard viscosity in the Maxwell model with this "hyper Maxwell" model. When choosing \(H\) so that the critical wavelength in (43), \(\lambda_{{}_{\mathrm{CRT}}}=\sqrt[6]{4H^{2}/\varrho C}\), is the same as the critical wavelength \(\lambda_{{}_{\mathrm{CRT}}}=2D/\sqrt{\varrho C}\) in (15), we can relevantly compare both models, cf. Fig. 9. Similarly as in Figure 4, the hyper-viscosity facilitates the propagation of waves as far as less dispersion and a higher Q-factor are concerned.
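For instance, equating the two critical wavelengths gives
\[\sqrt[6]{\frac{4H^{2}}{\varrho C}}=\frac{2D}{\sqrt{\varrho C}}\qquad\Longleftrightarrow\qquad H=\frac{4D^{3}}{\varrho C}\,,\]
which, for the values suggested by Figure 9 (i.e. \(C=1\), \(\varrho=1\), and \(D=7\) as in Figure 2), amounts to \(H=1372\) and \(\lambda_{{}_{\mathrm{CRT}}}=14\).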
Noteworthy, for \(D\to\infty\), we obtain a fully conservative model with no dispersion, i.e. \(v=\sqrt{C/\varrho}\) is constant and the Q-factor is \(+\infty\). This model was devised in [39].
### Maxwell rheology with conservative gradient
The latter option, i.e. the conservative-gradient enhancement of (10), leads to the system
\[\varrho\frac{\partial^{2}u}{\partial t^{2}}=\sigma_{x},\ \ \ \ \ D\pi=\sigma,\ \ \ \text{and}\ \ \ \frac{\partial\sigma}{\partial t}=C\Big{(}\frac{\partial u_{x}}{\partial t} -\pi\Big{)}-\ell^{2}C\Big{(}\frac{\partial u_{x}}{\partial t}-\pi\Big{)}_{xx}. \tag{44}\]
Figure 8: Dependence of the velocity (left) and the Q-factor (right) on the wavelength \(\lambda\) of the Maxwell model for \(C=1\), \(\varrho=1\), and \(D=7\). It illustrates the (expected) influence of adding the hyper-viscosity to decrease the dispersion and to increase the Q-factor. For \(\ell=0\), cf. the Maxwell model from Figure 2.
By eliminating the creep rate \(\pi\) and the stress \(\sigma\), we obtain the dispersive equation
\[\frac{1}{C}\frac{\partial^{2}u}{\partial t^{2}}+\frac{1}{D}\frac{ \partial u}{\partial t}-\frac{1}{\varrho}u_{xx}=\ell^{2}\Big{(}\frac{1}{D} \frac{\partial u}{\partial t}-\frac{1}{\varrho}u_{xx}\Big{)}_{xx}\,. \tag{45}\]
Note that, for \(\ell=0\), (45) naturally turns into the telegraph equation (12) written for \(u\). For \(\ell>0\), (13) expands to
\[\frac{\omega^{2}{-}\gamma^{2}}{C}+{\rm i}\frac{2\omega\gamma}{C}-{\rm i}\frac {\omega}{D}+\frac{\gamma}{D}-\frac{1}{\varrho\lambda^{2}}=\ell^{2}\Big{(} \frac{1}{\varrho\lambda^{4}}-\frac{\gamma}{D\lambda^{2}}+{\rm i}\frac{\omega }{D\lambda^{2}}\Big{)}\,,\]
from which we obtain the Kramers-Kronig relations (14) expanded here as
\[\frac{\omega^{2}{-}\gamma^{2}}{C}+\frac{1}{D}\gamma-\frac{1}{ \varrho\lambda^{2}}=\ell^{2}\Big{(}\frac{1}{\varrho\lambda^{4}}-\frac{\gamma }{D\lambda^{2}}\Big{)}\quad\mbox{ and }\quad\frac{2}{C}\gamma=\frac{1}{D}+\frac{\ell^{2}}{D \lambda^{2}}\,.\]
This gives
\[v=\sqrt{\frac{C}{\varrho}+\gamma\Big{(}\gamma{-}\frac{C}{D}\Big{)}\lambda^{2}+\ell^{2}C\Big{(}\frac{1}{\varrho\lambda^{2}}{-}\frac{\gamma}{D}\Big{)}}\quad\mbox{ with }\ \gamma=\frac{C}{2D}+\frac{\ell^{2}C}{2D\lambda^{2}}\,. \tag{46}\]
Naturally, for \(\ell=0\), it reduces to the Maxwell dispersion (15). This is illustrated in Figure 10 together with the Q-factor \(1/(1{-}{\rm e}^{-2\lambda\gamma/v})\). As expected, adding the conservative gradient into the Maxwell model as in (44) facilitates the propagation of ultra-high-frequency waves, i.e. waves with ultra-short wavelengths. Notably, for \(D\to\infty\), this dispersive model becomes fully conservative, \(\gamma=0\), and
\[v=\sqrt{\frac{C}{\varrho}\Big{(}1+\frac{\ell^{2}}{\lambda^{2}}\Big{)}}\,. \tag{47}\]
The same asymptotics towards an anomalously dispersive model holds for the Kelvin-Voigt model from Section 3.2 with \(D\to 0\), cf. Figure 6.
Figure 9: Comparison of the conventional Maxwellian viscosity (dotted lines) with the Maxwellian hyper-viscosity (solid line) with the same \(\lambda_{\mbox{\tiny{CRIT}}}=14\); again \(C=1\) and \(\varrho=1\). The “hyper-Maxwell” rheology allows for the propagation of waves with lengths below \(\lambda_{\mbox{\tiny{CRIT}}}\) with less dispersion and attenuation (i.e. with a higher Q-factor).
### Dissipative-gradient Jeffreys rheology
A dissipative gradient enhancement of the Jeffreys rheology suggests three variants, copying either (22) or (40), or both. Again, we need a formulation with the internal variable as the creep strain rate \(\pi\). Thus, (17) extended by two gradient terms reads as
\[\varrho\frac{\partial^{2}u}{\partial t^{2}}=\Big{(}D_{1}\frac{ \partial u_{x}}{\partial t}+Ce+\ell_{1}^{2}D_{1}\frac{\partial u_{xxx}}{ \partial t}\Big{)}_{x}\,, \tag{48a}\] \[\frac{\partial e}{\partial t}=\frac{\partial u_{x}}{\partial t}- \pi\,,\quad\text{and}\quad D_{2}(\pi-\ell_{2}^{2}\pi_{xx})=Ce\,. \tag{48b}\]
To reveal the underlying dispersive wave equation, we make the elimination of \(\pi\) from the last and the penultimate equation in (48):
\[\frac{\partial}{\partial t}\big{(}e-\ell_{2}^{2}e_{xx}\big{)}+ \frac{C}{D_{2}}e=\frac{\partial}{\partial t}\big{(}u-\ell_{2}^{2}u_{xx}\big{)} _{x}\,. \tag{49}\]
Then, we differentiate the first equation in (48) in time, which yields
\[\varrho\frac{\partial^{3}u}{\partial t^{3}}=\Big{(}D_{1}\frac{ \partial^{2}u_{x}}{\partial t^{2}}+C\frac{\partial e}{\partial t}+\ell_{1}^{2 }D_{1}\frac{\partial^{2}u_{xxx}}{\partial t^{2}}\Big{)}_{x}\,. \tag{50}\]
Summing it with the first equation in (48) multiplied by \(C/D_{2}\) and subtracting the first equation in (48) multiplied by \(\ell_{2}^{2}\) and differentiated twice in space, we can use (49) to eliminate also \(e\) and obtain the dispersive wave equation in terms of the displacement \(u\), which, after integration in time, reads as
\[\varrho\frac{\partial^{2}}{\partial t^{2}}(u-\ell_{2}^{2}u_{xx}) -C\big{(}u-\ell_{2}^{2}u_{xx}\big{)}_{xx}=D_{1}\frac{\partial}{ \partial t}\big{(}u-\ell_{2}^{2}u_{xx}\big{)}_{xx}+C\frac{D_{1}}{D_{2}}u_{xx}\] \[-\varrho\frac{C}{D_{2}}\frac{\partial u}{\partial t}+\ell_{1}^{2 }D_{1}\frac{\partial}{\partial t}\big{(}u-\ell_{2}^{2}u_{xx}\big{)}_{xxxx}+C \frac{\ell_{1}^{2}D_{1}}{D_{2}}u_{xxxx}\,. \tag{51}\]
Figure 10: Anomalous dispersion of the velocity according to (46) for \(\ell=3,\,1,\,0\) (left) and the corresponding \(Q\)-factor (right). The case \(\ell=0\) is the Maxwell model from Figure 2; again \(C\)=1, \(\varrho=1\), and \(D=7\).
Using the ansatz (3) and abbreviating \(a(\lambda)=1+\ell_{2}^{2}/\lambda^{2}\), we obtain
\[\varrho w^{2}a(\lambda)-{\rm i}\varrho\frac{Cw}{D_{2}}-{\rm i}D_{1}w\frac{a( \lambda)}{\lambda^{2}}-\frac{CD_{1}}{D_{2}\lambda^{2}}-C\frac{a(\lambda)}{ \lambda^{2}}+{\rm i}w\ell_{1}^{2}D_{1}\frac{a(\lambda)}{\lambda^{4}}+\frac{C \ell_{1}^{2}D_{1}}{D_{2}\lambda^{4}}=0\,.\]
Reminding \(w=\omega+{\rm i}\gamma\) and \(w^{2}=\omega^{2}-\gamma^{2}+2{\rm i}\omega\gamma\), the resulting Kramers-Kronig relations are
\[\varrho(\omega^{2}-\gamma^{2})a(\lambda)+\varrho\frac{C\gamma}{D_{2}}+D_{1}\gamma\frac{a(\lambda)}{\lambda^{2}}-\frac{CD_{1}}{D_{2}\lambda^{2}}-C\frac{a(\lambda)}{\lambda^{2}}-\gamma\ell_{1}^{2}D_{1}\frac{a(\lambda)}{\lambda^{4}}+\frac{C\ell_{1}^{2}D_{1}}{D_{2}\lambda^{4}}=0\,,\]
\[2\varrho\gamma a(\lambda)-\varrho\frac{C}{D_{2}}-D_{1}\frac{a(\lambda)}{\lambda^{2}}+\ell_{1}^{2}D_{1}\frac{a(\lambda)}{\lambda^{4}}=0\,.\]
From this, we obtain the Kramers-Kronig relations
\[\varrho(\omega^{2}-\gamma^{2})\Big{(}1-\frac{\ell^{2}}{\lambda^{2}}\Big{)}=\frac{ C}{\lambda^{2}}\qquad\mbox{and}\qquad 2\varrho\gamma\Big{(}1-\frac{\ell^{2}}{ \lambda^{2}}\Big{)}=0\,. \tag{56}\]
The latter relation gives \(\gamma=0\) so that the former relation results in
\[v=\omega\lambda=\sqrt{\frac{C}{\varrho(1{-}\ell^{2}/\lambda^{2})}}\,. \tag{57}\]
The real-valued velocity is obtained for \(\lambda>\lambda_{\rm crit}:=\ell\). Note that \(v\) in (57) decays with increasing \(\lambda\), which means anomalous dispersion. For ultra-long waves, the velocity asymptotically approaches the velocity \(\sqrt{C/\varrho}\) of the nondispersive model while, for \(\lambda\to\lambda_{\rm crit}\), the velocity blows up to \(+\infty\).
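To quantify this, with the values used in Figure 11, i.e. \(C=1\), \(\varrho=1\), and \(\ell=1\), the formula (57) gives e.g.
\[v(2)=\sqrt{\frac{1}{1-1/4}}\approx 1.15\qquad\text{and}\qquad v(\sqrt{2})=\sqrt{\frac{1}{1-1/2}}=\sqrt{2}\approx 1.41\,,\]
i.e., unlike the simple rheologies from Section 2, this conservative model allows waves to propagate faster than \(\sqrt{C/\varrho}\).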
One can consider a combination of this fully conservative model with the previous viscous dissipative models. The _Kelvin-Voigt rheology with stress diffusion_ would expand (53) as
\[\varrho\frac{\partial v}{\partial t}=\sigma_{x}\qquad\mbox{and}\qquad\frac{ \partial}{\partial t}\big{(}\sigma-\ell^{2}\sigma_{xx}\big{)}=Cv_{x}+D\frac{ \partial v_{x}}{\partial t}\,. \tag{58}\]
This gives the additional term \(D\frac{\partial}{\partial t}v_{xx}\) on the right-hand side of (54) and \({\rm i}Dw/\lambda^{2}\) in (55). Therefore (56) expands as
\[\varrho(\omega^{2}-\gamma^{2})\Big{(}1-\frac{\ell^{2}}{\lambda^{2}}\Big{)}= \frac{C}{\lambda^{2}}-\frac{D\gamma}{\lambda^{2}}\qquad\mbox{and}\qquad 2 \varrho\gamma\Big{(}1-\frac{\ell^{2}}{\lambda^{2}}\Big{)}=\frac{D}{\lambda^{2} }\,. \tag{59}\]
The latter relation gives \(\gamma\), and then the former relation yields the velocity:
\[v=\omega\lambda=\sqrt{\frac{C-\gamma D+\lambda^{2}\gamma^{2}}{\varrho(1-\ell^ {2}/\lambda^{2})}}\quad\mbox{ with }\ \gamma=\frac{D}{2\varrho(\lambda^{2}{-}\ell^{2})}\,. \tag{60}\]
For \(D=0\) we obtain the model (53) with the critical wave-length \(\lambda_{\rm crit}=\ell\) while for \(\ell=0\) we obtain the Kelvin-Voigt model (4) with the critical wave-length \(\lambda_{\rm crit}=D/(2\sqrt{\varrho C})\). Depending on which model dominates, i.e. whether \(D/(2\sqrt{\varrho C})<\ell\) or not, the dispersion is monotone (anomalous) or non-monotone, cf. Figure 11-left, while the corresponding Q-factor \(1/(1{-}{\rm e}^{-2\lambda\gamma/v})\) is in Figure 11-right.
As already mentioned, for \(\ell\to 0\), we obtain in the limit the conventional Kelvin-Voigt rheology. The asymptotic behaviour and comparison with that Kelvin-Voigt model is illustrated in Figure 12.
Another possibility is to involve the dissipative stress gradient into the kinematic constraint \(\frac{\partial}{\partial t}e=v_{x}\) in (1), leading to the so-called _stress diffusion_. Such a dissipative modification of the purely conservative model (1) reads as
\[\varrho\frac{\partial v}{\partial t}=\sigma_{x}\quad\mbox{ with }\quad\sigma=Ce\qquad\mbox{and}\qquad\frac{\partial e}{\partial t}=v_{x}+ \frac{\sigma_{xx}}{D}\,. \tag{61}\]
The energetics behind (61) can be revealed (assuming homogeneous boundary conditions as in Remark 2) by testing \(\varrho\frac{\partial}{\partial t}v=Ce_{x}\) by \(v\). When integrating it over a spatial domain \(\Omega\), this gives
\[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\frac{\varrho}{2}v^{2}\mathrm{d}x+\int_{\Omega}Cev_{x}\,\mathrm{d}x =\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\frac{\varrho}{2}v^{2}\mathrm{d}x+\int_{\Omega}Ce\Big{(}\frac{\partial e}{\partial t}-\frac{\sigma_{xx}}{D}\Big{)}\,\mathrm{d}x\] \[=\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\Big{(}\frac{\varrho}{2}v^{2}+\frac{1}{2}Ce^{2}\Big{)}\,\mathrm{d}x+\int_{\Omega}\frac{\sigma_{x}^{2}}{D}\,\mathrm{d}x=0\,.\]
The last term reveals the dissipation rate \(\sigma_{x}^{2}/D\), which explains why the last term in (61) is referred to as the stress diffusion. Differentiating the first equation in (61) in space and the second one in time allows for the elimination of \(v\) to obtain the dispersive wave equation
Figure 11: An anomalous or nonmonotone dispersion of the wave velocity (left) due to (57) and (60) and the Q-factor (right) in dependence on the wavelength \(\lambda\); \(C=1\), \(\varrho=1\), and \(\ell=1\). For (57), which is (60) with \(D=0\), \(Q=+\infty\) is not depicted.
Figure 12: A normal or nonmonotone dispersion of the wave velocity (left) due to (57) and (60) and the Q-factor (right) in dependence on the wavelength \(\lambda\); \(C=1\), \(\varrho=1\), and \(D=3\). For \(\ell=0\), it coincides with the Kelvin-Voigt model from Figure 2.
\(\frac{\partial^{2}}{\partial t^{2}}\sigma/C-\sigma_{xx}/\varrho-\frac{\partial}{ \partial t}\sigma_{xx}/D=0\). In terms of \(e\), it can be written as
\[\varrho\frac{\partial^{2}e}{\partial t^{2}}-\frac{\varrho C}{D}\frac{\partial e _{xx}}{\partial t}-Ce_{xx}=0\,. \tag{62}\]
which has the same structure (and thus the same dispersion and Q-factor) as in the Kelvin-Voigt model (4) except that, instead of \(D\) in (4), the viscosity coefficient in (62) is now \(\varrho C/D\).
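In particular, the dispersion and the Q-factor of (61) follow from (7) and (9) simply by the substitution \(D\mapsto\varrho C/D\), i.e.
\[v(\lambda)=\sqrt{\frac{C}{\varrho}-\frac{C^{2}}{4D^{2}\lambda^{2}}}\qquad\text{with the critical wavelength}\qquad\lambda_{\mbox{\tiny{crit}}}=\frac{\sqrt{\varrho C}}{2D}\,,\]
so that the critical wavelength now decreases with increasing \(D\).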
Such stress diffusion can be combined with the previous viscous dissipative models. In the case of the Kelvin-Voigt model which expands the momentum equation by viscosity, it leads to
\[\varrho\frac{\partial v}{\partial t}=\sigma_{x}+D_{1}v_{xx}\quad\mbox{ with }\quad\sigma=Ce\quad\mbox{ and }\quad\frac{\partial e}{\partial t}=v_{x}+\frac{\sigma_{xx}}{D_{2}}\,. \tag{63}\]
Differentiating the first equation in (63) in space and the latter equation both once in time and separately also twice in space allows for the elimination of \(v\) to obtain the dispersive wave equation in terms of the stress \(\sigma\):
\[\varrho\frac{\partial^{2}\sigma}{\partial t^{2}}-C\sigma_{xx}-\Big{(}D_{1}+ \frac{\varrho C}{D_{2}}\Big{)}\frac{\partial\sigma_{xx}}{\partial t}+\frac{ CD_{1}\sigma_{xxxx}}{D_{2}}=0\,. \tag{64}\]
One can observe that, qualitatively, this model exhibits the same dispersion and Q-factor as the conservative-gradient enhancement (29). In particular, a nondispersive situation appears also here if \(2\varrho CD_{1}=D_{1}^{2}D_{2}+\varrho^{2}C^{2}/D_{2}\).
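This condition indeed follows by matching (64) with the structure of (29), i.e. with \(D\) and \(\ell^{2}C\) replaced by \(D_{1}+\varrho C/D_{2}\) and \(CD_{1}/D_{2}\), respectively, so that the nondispersive condition (33) becomes
\[\Big{(}D_{1}+\frac{\varrho C}{D_{2}}\Big{)}^{\!2}=4\varrho\,\frac{CD_{1}}{D_{2}}\qquad\Longleftrightarrow\qquad D_{1}^{2}D_{2}+\frac{\varrho^{2}C^{2}}{D_{2}}=2\varrho CD_{1}\,.\]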
An alternative to (63) arises by expanding the Hook law \(\sigma=Ce\) instead of the momentum equation. This yields
\[\varrho\frac{\partial v}{\partial t}=\sigma_{x}\quad\mbox{ with }\quad\sigma=Ce+D_{1}\frac{\partial e}{\partial t}\quad\mbox{ and }\quad\frac{\partial e}{\partial t}=v_{x}+\frac{\sigma_{xx}}{D_{2}}\,. \tag{65}\]
Differentiating the first equation in (65) in space and the latter equation in time allows for the elimination of \(v\) to obtain the dispersive wave equation in terms of the strain \(e\):
\[\varrho\frac{\partial^{2}e}{\partial t^{2}}-Ce_{xx}-\frac{\varrho D_{1}}{D_{ 2}}\frac{\partial^{2}e_{xx}}{\partial t^{2}}-\Big{(}D_{1}+\frac{\varrho C}{D _{2}}\Big{)}\frac{\partial e_{xx}}{\partial t}=0\,. \tag{66}\]
Again we can see a micro-inertia term \(\frac{\partial^{2}}{\partial t^{2}}e_{xx}\) and, in fact, the same structure (and thus the dispersion and Q-factor) as the Kelvin-Voigt model with stress diffusion (58). Naturally, for \(D_{2}\to\infty\), both models (63) and (65) become equivalent to (each other and to) the Kelvin-Voigt model (4).
**Remark 4** (An alternative diffusive regularization of \(\frac{\partial}{\partial t}e=v_{x}\).).: In literature, a regularization of the kinematic constraint \(\frac{\partial}{\partial t}e=v_{x}\) can sometimes be seen as
\[\frac{\partial e}{\partial t}=v_{x}+\varepsilon e_{xx} \tag{67}\]
with \(\varepsilon>0\) a diffusion parameter (in m\({}^{2}\)/s). This variant applied to the wave equation \(\varrho\frac{\partial}{\partial t}v=Ce_{x}\) in (61) leads to the Kelvin-Voigt model (4) but with the viscosity coefficient \(\varrho\varepsilon\) instead of \(D\). When applied to the Kelvin-Voigt model in the form \(\varrho\frac{\partial}{\partial t}v=Ce_{x}+Dv_{xx}\) in (63) with \(D=D_{1}\), the dispersive wave equation (66) takes the form
\[\varrho\frac{\partial^{2}e}{\partial t^{2}}-Ce_{xx}-\big{(}D{+}\varepsilon \varrho\big{)}\frac{\partial e_{xx}}{\partial t}+\varepsilon De_{xxxx}=0\,. \tag{68}\]
This has the form of the conservative-gradient extension of the Kelvin-Voigt model (29); in particular, this parabolic regularization facilitates the propagation of high-frequency waves (which otherwise could not propagate) if \(\varepsilon\geq D/\varepsilon\) and it may be non-dispersive under the condition \(D=\varepsilon\varrho\); cf. the condition (33) but now with \(D+\varepsilon\varrho\) and \(\varepsilon D\) instead of \(D\) and \(\ell^{2}C\), respectively, and realize that, after some algebra, it leads to \((D{-}\varepsilon\varrho)^{2}=0\). On the other hand, the regularization (67) for (65), i.e. for the Kelvin-Voigt model written in the form \(\varrho\frac{\partial}{\partial t}v=Ce_{x}+D\frac{\partial}{\partial t}e_{x}\), leads to a different dispersive equation than (68), specifically
\[\varrho\frac{\partial^{2}e}{\partial t^{2}}-Ce_{xx}-\big{(}D{+}\varepsilon \varrho\big{)}\frac{\partial e_{xx}}{\partial t}=0\,, \tag{69}\]
which is again the classical Kelvin-Voigt dispersive wave equation (4) only with an increased viscosity coefficient.
**Remark 5** (Micro-inertia.).: The inertia-gradient terms of the type \(\varrho\frac{\partial^{2}}{\partial t^{2}}u_{xx}\), as in (41) or (54) or (66) written in terms of \(u\), are sometimes called micro-inertia. See e.g. [5, 6, 28, 37, 39] for a discussion about micro-inertia as invented by R.D. Mindlin [41] and about other possible related gradient terms and dispersion analysis. Cf. also [18] or [11, Sect.6.3-4]. The inviscid variant of (66), i.e. the hyperbolic equation of the type \(\varrho\frac{\partial^{2}}{\partial t^{2}}(u{-}\ell^{2}u_{xx})-Cu_{xx}=0\) as (54), is sometimes called a Love-Rayleigh model, cf. [11, Sect. 1.2.4].
## 4 Notes about gradient theories at large strains
Models in large-strain continuum mechanics in three space dimensions are inevitably governed by very nonlinear partial differential equations. In dynamical situations, a rigorous analytical justification of such models, as far as the mere existence of (suitably defined notions of weak) solutions is concerned, is problematic, cf. [8, 9]. There is a certain agreement that involving some gradient theories is inevitable to facilitate a rigorous analysis and to ensure stability and convergence of various numerical approximation strategies. This section aims to survey various options from the perspective of the dispersive models in the previous Section 3.
In large-strain continuum mechanics, the basic geometrical concept is a _deformation_ \({\bf y}:\Omega\to\mathbb{R}^{3}\) as a mapping from a reference configuration \(\Omega\subset\mathbb{R}^{3}\) into the physical space \(\mathbb{R}^{3}\). We will denote the referential (resp. the actual) quantities by upright (resp. slanted) fonts. In particular, we denote by \({\bf x}\) and \({\mathbf{x}}\) the reference (Lagrangian) and the actual (Eulerian)
point coordinates, respectively. The further basic geometrical object is the (referential) _deformation gradient_\(\mathbf{F}(\mathbf{x})=\nabla_{\mathbf{X}}\mathbf{y}\).
If evolving in time, \(\mathbf{x}=\mathbf{y}(t,\mathbf{x})\) is sometimes called a "motion". The inverse motion \(\mathbf{\xi}=\mathbf{y}^{-1}:\mathbf{y}(\Omega)\rightarrow\Omega\), if it exists, is called a _return_ (or sometimes a _reference_) _mapping_. The important quantity is the (referential) velocity \(\mathbf{v}=\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{y}(t,\mathbf{x})\) with \(\mathrm{d}/\mathrm{d}t\) the derivative with respect to time of a time dependent function. When composed with the return mapping \(\mathbf{\xi}\), we obtain the Eulerian representations
\[\mathbf{F}(t,\mathbf{x})=\mathbf{F}(t,\mathbf{\xi}(\mathbf{x}))\quad\text{ and }\quad\mathbf{v}(t,\mathbf{x})= \mathbf{v}(t,\mathbf{\xi}(\mathbf{x}))\,. \tag{70}\]
The rheologies from Section 2, exploiting one elastic element, are now characterized by a non-quadratic stored energy \(\varphi=\varphi(F)\) with \(F\in\mathbb{R}^{3\times 3}\), assumed frame indifferent, i.e. a function of the right Cauchy-Green tensor \(C=F^{\top}F\), and hence surely nonconvex. Moreover, the blow-up \(\varphi(F)\rightarrow+\infty\) when \(\det F\to 0+\) is often imposed to grant a local non-interpenetration of the deformed medium. The existence of such a potential of conservative forces is the essence of the concept of _hyperelastic materials_. Among several options, we specifically consider \(\varphi\) in J/m\({}^{3}\)=Pa (not in kg/m\({}^{3}\)), meaning energy per referential (not actual) volume.
### Lagrangian formulation
The small-strain kinematic constraint \(\frac{\partial}{\partial t}e=v_{x}\) in (1) is now "translated" into the large-strain Lagrangian formulation as
\[\frac{\partial\mathbf{F}}{\partial t}=\nabla\mathbf{v}\,. \tag{71}\]
The elastic response now leads to the first _Piola-Kirchhoff stress_\(\mathbf{S}=\varphi^{\prime}(\mathbf{F})\), so that the large-strain analog of the fully conservative model (1) is now \(\rho\frac{\partial}{\partial t}\mathbf{v}-\mathrm{div}\mathbf{S}=\mathbf{0}\) with \(\mathbf{S}=\varphi^{\prime}(\mathbf{F})\) and with a referential mass density \(\rho=\rho(\mathbf{x})\) to be completed by (71).
The large-strain Kelvin-Voigt model (29) uses also a nonlinear dashpot governed by a dissipative potential \(\zeta=\zeta(\mathbf{F},\frac{\partial}{\partial t}\mathbf{F})\). The frame indifference of dissipative forces is quite delicate, as pointed out in [4], and dictates that \(\zeta\) should be a function of \(\mathbf{C}=\mathbf{F}^{\top}\!\mathbf{F}\) and its rate \(\frac{\partial}{\partial t}\mathbf{C}=2\mathrm{sym}(\mathbf{F}^{\top}\!\frac{\partial}{\partial t}\mathbf{F})\). The analog of the linear dashpot \(\sigma=D\frac{\partial}{\partial t}e\) is now \(4D\mathbf{F}\mathrm{sym}(\mathbf{F}^{\top}\!\frac{\partial}{\partial t}\mathbf{F})\). In simple materials, as pointed out also in [8, Sec. 3.2], this model does not seem mathematically tractable unless one restricts to a short-time analysis, uses a certain very weak (measure-valued) concept of solutions, or admits a non-invariant viscous stress; cf. [16, 17, 45, 47, 58].
The conservative-gradient capillarity-type enhancement of the Kelvin-Voigt model like (29) consists in involving the conservative hyperstress \(\ell^{2}C\nabla\mathbf{F}\), i.e. the conservative stress \(-\mathrm{div}(\ell^{2}C\nabla\mathbf{F})\). This can be obtained from the extended stored energy \(\varphi(\mathbf{F})+\frac{1}{2}\ell^{2}C|\nabla\mathbf{F}|^{2}\).
The large-strain analog of the model (29) now reads as
\[\rho\frac{\partial\mathbf{v}}{\partial t}-\mathrm{div}\big{(}\mathbf{D}+\mathbf{S}\big{)}=\mathbf{0}\qquad\text{with}\quad\mathbf{S}=\varphi^{\prime}(\mathbf{F})-\mathrm{div}(\ell^{2}C\nabla\mathbf{F}) \tag{72a}\]
\[\text{and}\quad\mathbf{D}=4D\mathbf{F}\,\mathrm{sym}\Big{(}\mathbf{F}^{\top}\frac{\partial\mathbf{F}}{\partial t}\Big{)} \tag{72b}\]
to be combined with the kinematic equation (71). The conservative variant, i.e. (72) with \(D=0\), gives the purely elastic model analogous to (29) with \(D=0\) as in Figure 6; it was analytically justified in [33, Sect. 9.2.1]. The Kelvin-Voigt model (29) with \(D>0\) was analyzed in [40] or [33, Sect. 9.3]. Conceptually, according to the analysis from Section 3.2, the model (72) has the potential to suppress the dispersion of elastic waves.
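As a quick consistency check of the dissipative stress (72b), one can verify numerically that \(\mathbf{D}{:}\frac{\partial}{\partial t}\mathbf{F}=4D\,|\mathrm{sym}(\mathbf{F}^{\top}\frac{\partial}{\partial t}\mathbf{F})|^{2}\geq 0\), so that the model indeed dissipates. A minimal numpy sketch with an arbitrarily chosen deformation gradient and rate (the numbers are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
D_mod = 0.1                                           # the scalar viscosity modulus D
F = np.eye(3) + 0.3 * rng.standard_normal((3, 3))     # some deformation gradient
Fdot = rng.standard_normal((3, 3))                    # some rate of F

sym = lambda A: 0.5 * (A + A.T)
D_stress = 4 * D_mod * F @ sym(F.T @ Fdot)            # dissipative stress from (72b)

power = np.tensordot(D_stress, Fdot)                  # D : dF/dt
print(power, 4 * D_mod * np.linalg.norm(sym(F.T @ Fdot), 'fro')**2)   # equal and nonnegative
```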
The analog of the small-strain Green-Naghdi additive decomposition (10) is now conventionally replaced by the Kröner-Lee-Liu [30, 34] _multiplicative decomposition_ \(\mathbf{F}=\mathbf{F}_{\rm el}\mathbf{F}_{\rm in}\) with \(\mathbf{F}_{\rm el}\) the elastic distortion and \(\mathbf{F}_{\rm in}\) the inelastic distortion. The stored energy \(\varphi\) now depends on \(\mathbf{F}_{\rm el}=\mathbf{F}\mathbf{F}_{\rm in}^{-1}\) instead of \(\mathbf{F}\). In the case of Maxwell's rheology with a linear creep, the other ingredient is the dissipation potential \(\zeta=\zeta(\mathbf{F}_{\rm in},\cdot):\mathbb{R}^{3\times 3}\rightarrow\mathbb{R}\) depending (in the case of linear creep quadratically) on the inelastic distortion rate \(\frac{\partial}{\partial t}\mathbf{F}_{\rm in}\).
The large-strain analog of the Jeffreys rheology from Section 2.3 now leads to the system of 1st-order differential equations for \((\mathbf{v},\mathbf{F},\mathbf{F}_{\rm in})\):
\[\rho\frac{\partial\mathbf{v}}{\partial t}-\mathrm{div}\big{(}\mathbf{D}+\mathbf{S}\mathbf{F}_{\rm in}^{-\top}\big{)}=\mathbf{0}\qquad\text{with}\quad\mathbf{S}=\varphi^{\prime}(\mathbf{F}_{\rm el})\quad\text{and}\quad\mathbf{D}=4D_{1}\mathbf{F}\,\mathrm{sym}\Big{(}\mathbf{F}^{\top}\frac{\partial\mathbf{F}}{\partial t}\Big{)}\,, \tag{73a}\]
\[\zeta^{\prime}_{\frac{\partial}{\partial t}\mathbf{F}_{\rm in}}\Big{(}\mathbf{F}_{\rm in},\frac{\partial\mathbf{F}_{\rm in}}{\partial t}\Big{)}+\mathbf{S}_{\rm in}=\mathbf{0}\qquad\text{with}\quad\mathbf{S}_{\rm in}=\mathbf{F}^{\top}\!\mathbf{S}\,(\mathbf{F}_{\rm in}^{-1})^{\prime}\quad\text{and}\quad\mathbf{F}_{\rm el}=\mathbf{F}\mathbf{F}_{\rm in}^{-1} \tag{73b}\]
to be combined with the kinematic equation (71). For \(D_{1}=0\), this degenerates to a nonlinear hyperbolic system (representing the analog of the Maxwell rheology from Section 2.2) and its solution is analytically problematic. Even the "parabolic" situation \(D_{1}>0\) does not seem much better at this point.
The conservative gradient enhancement like (44) would now be "translated" into large strains by expanding \(\mathbf{S}\) in (73a) as in (72a), i.e. here \(\mathbf{S}=\varphi^{\prime}(\mathbf{F}_{\rm el})-\mathrm{div}(\ell_{1}^{2}C\nabla\mathbf{F}_{\rm el})\). This corresponds to the enhancement of the stored energy \(\varphi(\mathbf{F}_{\rm el})\) by the term \(\frac{1}{2}\ell_{1}^{2}C|\nabla\mathbf{F}_{\rm el}|^{2}\). Yet, the rigorous analysis seems again open, cf. [33, Remark 9.4.6] or [51, Remark 2.3]. Here a combination with a dissipative gradient like in Section 3.4 but of a higher order can help: with \(\zeta\) in (73b) expanded by the 2nd-order gradient term \(\frac{1}{2}\int_{\Omega}|\ell^{2}\nabla^{2}\frac{\partial}{\partial t}\mathbf{F}_{\rm in}|^{2}\,\mathrm{d}\mathbf{x}\) which, when the derivative of \(\zeta\) in (73b) is understood in a functional sense, contributes to the left-hand side of (73b) by the 4th-order-gradient stress \(\mathrm{div}^{2}(\ell^{4}\nabla^{2}\frac{\partial}{\partial t}\mathbf{F}_{\rm in})\). For a rigorous analysis, we refer to [15].
One can also consider a conservative gradient as in the Poynting-Thomson standard solid in Remark 1, here enhancing \(\mathbf{S}_{\rm in}\) in (73b) by the term \(-\mathrm{div}(H\nabla\mathbf{F}_{\rm in})\). In [50, 51] and in [33, Sect. 9.4], a simplified enhancement by \(\frac{1}{2}\ell^{2}C|\nabla\mathbf{F}|^{2}\), leading to the higher-order gradient contribution to the Piola-Kirchhoff stress \(-\mathbf{F}\mathrm{div}(\ell^{2}C\nabla\mathbf{F})\), was rigorously analyzed, but mixing \(\mathbf{F}_{\rm el}\) and \(\mathbf{F}\) in the stored energy is rather a misconception for Maxwellian-type rheologies, corresponding rather to Zener's standard solid model.
**Remark 6** (Normal dispersion by multipolar viscosity.).: The multipolar viscosity \(\ell^{2}D\frac{\partial}{\partial t}u_{xxxx}\) in Sections 3.1 or 3.3 should now involve the gradient of the rate of \(\mathbf{F}\) in the dissipation potential. Yet, by frame indifference the dissipation potential should depend on the rate \(\mathbf{R}\) of the right Cauchy-Green tensor \(\mathbf{C}=\mathbf{F}^{\top}\mathbf{F}\), i.e. on \(\mathbf{R}=\frac{\partial}{\partial t}\mathbf{C}=2\mathrm{sym}(\mathbf{F}^{\top}\frac{\partial}{\partial t}\mathbf{F})\), rather than on \(\frac{\partial}{\partial t}\mathbf{F}\). For the quadratic isotropic dissipation potential \(\mathscr{D}(\mathbf{R})=\frac{1}{2}\ell^{2}D\int_{\Omega}|\nabla\mathbf{R}|^{2}\,\mathrm{d}\mathbf{x}\), the contributions to the dissipative Piola-Kirchhoff stress \(\mathbf{D}\) arise as the functional derivative of \(\frac{\partial}{\partial t}\mathbf{F}\mapsto\mathscr{D}(\mathbf{R})\). Realizing that
\[\nabla\mathbf{R}=\nabla\frac{\partial}{\partial t}(\mathbf{F}^{\top}\mathbf{F} )=\nabla\mathbf{F}^{\top}.\frac{\partial\mathbf{F}}{\partial t}+\mathbf{F}^{ \top}.\nabla\frac{\partial\mathbf{F}}{\partial t}+\nabla\frac{\partial \mathbf{F}^{\top}}{\partial t}.\mathbf{F}+\frac{\partial\mathbf{F}^{\top}}{ \partial t}.\nabla\mathbf{F}\]
these contributions consist both of a stress depending linearly on \(\frac{\partial}{\partial t}\mathbf{F}\) through coefficients of the type \(\nabla\mathbf{F}\otimes\nabla\mathbf{F}\) and of the divergence of a hyperstress depending linearly on \(\nabla\frac{\partial}{\partial t}\mathbf{F}\) through coefficients of the type \(\mathbf{F}\otimes\mathbf{F}\). The rigorous analysis (as well as convergent numerical strategies) of such a model seems very nontrivial, because it needs strong convergence (of an approximation) of \(\nabla\mathbf{F}\), and remains open.
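The product-rule expansion of \(\nabla\mathbf{R}\) displayed above can be cross-checked symbolically; a minimal sympy sketch, reduced to one spatial variable \(x\) (so that \(\nabla\) is just \(\partial/\partial x\)) and to a generic \(2\times 2\) field, reads:

```python
import sympy as sp

t, x = sp.symbols('t x')
F = sp.Matrix(2, 2, lambda i, j: sp.Function(f'F{i}{j}')(t, x))   # generic field F(t, x)
Ft, Fx, Ftx = F.diff(t), F.diff(x), F.diff(t, x)

R = (F.T * F).diff(t)            # rate of the right Cauchy-Green tensor C = F^T F
lhs = R.diff(x)                  # "grad R", reduced to d/dx in one spatial dimension
rhs = Fx.T * Ft + F.T * Ftx + Ftx.T * F + Ft.T * Fx
print(sp.simplify(lhs - rhs))    # zero matrix
```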
### Eulerian formulation
The Eulerian velocity \(\boldsymbol{v}\) from (70) is employed in the convective time derivative \(\dot{(\cdot)}=\frac{\partial}{\partial t}(\cdot)+(\boldsymbol{v}\!\cdot\!\nabla)(\cdot)\) with \(\nabla\) considered with respect to the actual coordinates, to be used for scalars and, component-wise, for vectors or tensors. The small-strain kinematic constraint \(\frac{\partial}{\partial t}e=v_{x}\) in (1) is now "translated" into the large-strain Eulerian formulation as
\[\dot{\boldsymbol{F}}=(\nabla\boldsymbol{v})\boldsymbol{F}\,. \tag{74}\]
In contrast to the referential mass density \(\rho=\rho(\mathbf{x})\), the actual mass density \(\varrho=\varrho(t,\boldsymbol{x})\) now depends also on time and its evolution is governed by the continuity equation
\[\dot{\varrho}=-(\mathrm{div}\,\boldsymbol{v})\varrho\,. \tag{75}\]
When the initial deformation is the identity, i.e. \(\mathbf{y}(0,\mathbf{x})=\mathbf{x}\), and (75) is completed with the initial condition \(\varrho(0,\boldsymbol{x})=\rho(\mathbf{x})\), it holds that \(\varrho=\rho/\mathrm{det}\,\boldsymbol{F}\). The (hyper)elastic response is again governed by the stored energy \(\varphi\) which is now a function of the Eulerian deformation gradient \(\boldsymbol{F}\). This leads to the symmetric _Cauchy stress_ \(\boldsymbol{T}=\varphi^{\prime}(\boldsymbol{F})\boldsymbol{F}^{\top}\!/\mathrm{det}\,\boldsymbol{F}\). The momentum equation is now \(\varrho\dot{\boldsymbol{v}}=\mathrm{div}\,\boldsymbol{T}\) or, equivalently, \(\frac{\partial}{\partial t}(\varrho\boldsymbol{v})=\mathrm{div}(\boldsymbol{T}-\varrho\boldsymbol{v}\otimes\boldsymbol{v})\). Analysis of this nonlinear hyperbolic system is naturally very difficult and only limited results are available in special cases, cf. e.g. [26, 53].
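The identity \(\varrho=\rho/\det\boldsymbol{F}\) is consistent with (74)-(75) thanks to Jacobi's formula \(\frac{\mathrm{d}}{\mathrm{d}t}\det\boldsymbol{F}=\det\boldsymbol{F}\,\mathrm{tr}(\dot{\boldsymbol{F}}\boldsymbol{F}^{-1})\); a minimal symbolic check of this formula (written with the adjugate to avoid a matrix inversion, the code being only illustrative):

```python
import sympy as sp

t = sp.symbols('t')
F = sp.Matrix(3, 3, lambda i, j: sp.Function(f'F{i}{j}')(t))   # generic F(t) along a trajectory

lhs = F.det().diff(t)
rhs = (F.adjugate() * F.diff(t)).trace()       # Jacobi: (det F)' = tr(adj(F) F')
print(sp.expand(lhs - rhs))                    # 0

# With F' = (grad v) F this gives (det F)' = det F * tr(grad v) = det F * div v,
# hence (rho/det F)' = -(rho/det F) * div v along trajectories, i.e. (75) holds.
```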
The analog of the 1D small-strain Kelvin-Voigt model (4) written as \(\varrho\frac{\partial}{\partial t}v-(Dv_{x}+Ce)_{x}=0\) now leads to the system for \((\varrho,\boldsymbol{F},\boldsymbol{v})\) composed from (74), (75), and the momentum equation
\[\varrho\dot{\mathbf{v}}-\mathrm{div}\big{(}\mathbf{T}{+}\mathbf{D}\big{)}=\mathbf{0} \quad\text{with}\ \ \mathbf{T}=\frac{\varphi^{\prime}(\mathbf{F})\mathbf{F}^{\top}}{\det\mathbf{F}} \tag{76a}\] \[\quad\text{and}\quad\mathbf{D}=D\mathbf{e}(\mathbf{v})\,. \tag{76b}\]
The mere existence of suitably defined solutions (not speaking about their numerical approximation) of this (very nonlinear) system is still a very delicate issue. With vanishing shear elastic response (i.e. for viscoelastic fluids rather than solids), we refer to [20, 21, 25].
The dissipative-gradient extension like (22) here uses (76a) with the dissipative stress \(\mathbf{D}=D\mathbf{e}(\mathbf{v})-\mathrm{div}(\ell^{2}D\nabla\mathbf{e}(\mathbf{v}))\) or with \(\mathbf{D}=D\mathbf{e}(\mathbf{v})-\mathrm{div}(\ell^{2}D\nabla^{2}\mathbf{v})\). This is the concept of so-called _multipolar materials_ by J. Nečas et al. [42, 43, 44], also for solids [46, 54], later also by E. Fried and M. Gurtin [22], inspired by R.A. Toupin [56] and R.D. Mindlin [41], allowing for rigorous analysis and also used in [52]. Conceptually, according to the analysis from Section 3.1, the hyper-viscosity in the model (76) has the potential to cause less dispersion and attenuation than the usual viscosity for wavelengths bigger than the critical one, cf. Figure 4. In a nonlinear variant of the multipolar viscosity, it may grant boundedness of the velocity gradient, which further opens the way to the analysis of the whole system (76), cf. [49]. It is well recognized that, without the mentioned velocity-gradient boundedness, singularities of the transported variables \(\varrho\) and \(\boldsymbol{F}\), whose occurrence in solids may be debatable, may develop, cf. [3].
The Maxwell or the Jeffreys rheologies again use the multiplicative decomposition, now for the Eulerian deformation gradient \(\boldsymbol{F}=\boldsymbol{F}_{\rm el}\boldsymbol{F}_{\rm in}\). Independently of the particular rheology, the analysis is sometimes facilitated by a diffusive regularization of the kinematic equation (74), i.e. (74) is replaced by
\[\dot{\boldsymbol{F}}=(\nabla\boldsymbol{v})\boldsymbol{F}+\mathrm{div}(\kappa\nabla\boldsymbol{F}) \tag{79}\]
with some (presumably small) diffusion coefficient \(\kappa>0\).
This diffusive regularization (79) used for the Kelvin-Voigt model (76) facilitates the rigorous analysis, as shown in the incompressible case in [1, 10, 23]. Although violation of the ultimate kinematic constraint (74) is surely not physically legitimate, the linear small-strain analysis based on the dispersive wave equation (68) yields a certain modelling justification to facilitate the propagation of high-frequency waves.
**Remark 7** (Anomalous dispersion by conservative gradients.).: As in Remark 6, the combination of the dissipative gradient with the gradient theory in the conservative part like (35) here leads to the stored energy \(\varphi=\varphi(\boldsymbol{F})\) enhanced by the term \(\frac{1}{2}\ell_{2}^{2}C|\nabla\boldsymbol{F}|^{2}\). Dictated by the correct energetics arising from the test of the momentum equation by \(\boldsymbol{v}\), this gradient term gives the conservative contributions \(\mathscr{S}\) and \(\mathscr{K}\) to the Cauchy stress as
\[\mathscr{S}(\boldsymbol{F},\nabla\boldsymbol{F})=\mathrm{div}\big{(}\ell_{2}^ {2}C\nabla\boldsymbol{F}\big{)}\boldsymbol{F}^{\top}\ \ \text{and}\ \ \mathscr{K}(\nabla\boldsymbol{F})=\ell_{2}^{2}C\Big{(}\nabla \boldsymbol{F}\otimes\nabla\boldsymbol{F}-\frac{1}{2}|\nabla\boldsymbol{F}|^{2 }\mathbb{I}\Big{)}\,, \tag{80}\]
respectively. Similarly as in Remark 6, there are however mathematical problems with this extension, particularly due to the quadratic nonlinearity of \(\mathscr{K}:\mathbb{R}^{3\times 3}\to\mathbb{R}^{3\times 3}_{\text{sym}}\). Let us only vaguely remark that its analysis would need strong convergence (of an approximation) of \(\nabla\boldsymbol{F}\), which seems difficult unless some 3rd-grade multipolar viscosity or some stress diffusion is additionally involved.
## 5 Conclusion
The connection between propagation of waves in linear viscoelastic media and nonlinear models at large strains used in the literature has been scrutinized, with a focus on higher-order spatial-gradient extensions of viscoelastic rheological models. It is shown that, sometimes, higher gradients can facilitate propagation of waves in some frequency range by lower dispersion and attenuation than conventional models, cf. Figures 4, 8, or 9. Some higher-gradient extensions may compensate dispersion and result in nondispersive models, cf. Figure 5, which applies not only to a stored-energy extension but also to the diffusive extension of the kinematic constraint leading to (68). It is shown (at least in the particular cases in Remarks 2 and 3, although a similar phenomenon applies essentially in other models) that, when regularity of solutions is ensured by the regularity of initial conditions, such higher gradients do not substantially influence the energetics of the model if the respective length-scale parameter \(\ell\) is small.
Simultaneously, such higher gradients can facilitate rigorous analysis at large strains either in the Lagrangian or in the Eulerian formulations. The dispersion/attenuation analysis for linear models in Section 3 can vaguely offer certain additional interesting attributes for such nonlinear large-strain models which are usually not taken into account in literature. In particular, the dispersion analysis behind (67), showing possible compensation of dispersion and easier propagation of high-frequency waves, somehow motivates the physically unjustified diffusive extension (79) used sometimes in literature.
On the other hand, some gradient extensions do not seem rigorously analyzed at large strains in literature, which gives challenges for possible future research.
_Acknowledgments._ Support from the CSF grant no. 23-06220S and the institutional support RVO: 61388998 (CR) is gratefully acknowledged.
|
2310.00092 | Voice2Action: Language Models as Agent for Efficient Real-Time
Interaction in Virtual Reality | Large Language Models (LLMs) are trained and aligned to follow natural
language instructions with only a handful of examples, and they are prompted as
task-driven autonomous agents to adapt to various sources of execution
environments. However, deploying agent LLMs in virtual reality (VR) has been
challenging due to the lack of efficiency in online interactions and the
complex manipulation categories in 3D environments. In this work, we propose
Voice2Action, a framework that hierarchically analyzes customized voice signals
and textual commands through action and entity extraction and divides the
execution tasks into canonical interaction subsets in real-time with error
prevention from environment feedback. Experiment results in an urban
engineering VR environment with synthetic instruction data show that
Voice2Action can perform more efficiently and accurately than approaches
without optimizations. | Yang Su | 2023-09-29T19:06:52Z | http://arxiv.org/abs/2310.00092v1 | # Voice2Action: Language Models as Agent for
###### Abstract
Large Language Models (LLMs) are trained and aligned to follow natural language instructions with only a handful of examples, and they are prompted as task-driven autonomous agents to adapt to various sources of execution environments. However, deploying agent LLMs in virtual reality (VR) has been challenging due to the lack of efficiency in online interactions and the complex manipulation categories in 3D environments. In this work, we propose Voice2Action, a framework that hierarchically analyzes customized voice signals and textual commands through action and entity extraction and divides the execution tasks into canonical interaction subsets in real time with error prevention from environment feedback. Experiment results in an urban engineering VR environment with synthetic instruction data show that Voice2Action can perform more efficiently and accurately than approaches without optimizations.
## 1 Introduction
Large Language Models (LLMs) have demonstrated impressive zero-shot and few-shot learning abilities in natural language understanding and generation Brown et al. (2020). With human alignments like reinforcement learning from human feedback (RLHF), these models become better at following human instructions Ouyang et al. (2022); with instruction prompting and providing external resources Nakano et al. (2021), they can be used as agents to autonomously choose tools Schick et al. (2023), communicate with other agents Shen et al. (2023), and show superior ability in decision-making and task execution.
However, the seamless integration of these models within VR has remained a challenging frontier, hindered by efficiency, accuracy, and the complexities associated with interactions and manipulations in 3D spaces. Firstly, as a simulated three-dimensional interaction environment that mimics the real world, the VR environment has enormous possibilities in the way that the user can interact with entities (objects in the virtual scene) and manipulate their properties; secondly, the game engines that execute the user instructions have a predefined set of atomic operations for entity attribute modifications, making it non-trivial to map or classify the user instruction to the most proper configuration in the engine; lastly, the accuracy of VR hardware (e.g., the voice recognition SDK Wit.ai) and the efficiency in 3D graphics rendering (e.g., the uv rendering pipeline) limit the number of operations we can perform while not exceeding the user's comfortable response time to receive the feedback of the executed tasks.
In this paper, we focus on two main challenges for deploying agent LLMs in VR: efficiency and accuracy. While improving and balancing these metrics, we plan to define how agent LLMs operate within the virtual environments, and then build an interactive tool to provide users with a more practical experience in developing their customized virtual scene. Hence, we propose the Voice2Action framework, created upon a rich taxonomy of text input commands, ranging from simple object selection and state manipulation to more complex operations involving animation, scripted sequences, and environment configuration modification. By hierarchical instruction prompting and entity extraction, Voice2Action can accurately interpret users' textual instructions by incorporating environmental feedback.
To provide empirical validation, we conduct experiments and ablation studies in an urban city planning virtual environment. We build a synthetic dataset generated by the _text-davinci-003_ model from OpenAI API with the self-instruct Wang et al. (2022) framework, where we use a pre-defined canonical instruction subset as the seed tasks, and manually filter out the unsatisfactory generated instruction-execution pair. The results indicate a
marked improvement in execution efficiency and accuracy, with a good balance between the two.
To summarize, our contributions to operating agent LLMs in VR are:
1. We define a hierarchical set of canonical instructions in VR for language models to perform decision-making actions.
2. We improve efficiency and accuracy by incorporating different-purposed LLMs to divide and conquer the execution tasks, and error prevention from environment feedback.
3. We build and open source the Voice2Action framework1 to stimulate further research for applying agent LLMs in customizable 3D interactive environments. Footnote 1: Code is available at [https://github.com/yang-su2000/VR-Multimodal-Interaction](https://github.com/yang-su2000/VR-Multimodal-Interaction)
## 2 Related Work
**Language models for decision-making.** Large language models' robust text understanding ability enables them to make accurate decisions and perform action execution. By using external resources such as browsing the web Nakano et al. (2021), using tools by deciding API calls Schick et al. (2023), or connecting different LLMs and summarizing their responses Shen et al. (2023), language models can be used as controllers for any task executions. For more complex tasks, LLMs are also prompted to plan, self-criticize by reasoning and acting Yao et al. (2022), and store history in memory with external databases, as shown by BabyAGI and LangChain. Many of these methods rely on extensive human feedback from policy learning or freely generate inefficient long reasoning chains to reach the final decision. Voice2Action uses a divide-and-conquer framework that hierarchically extracts and performs the action so that an unnecessary decision-making trajectory can be pruned before the generation chain is finished, which makes it both efficient and more reliant on environment feedback than on human feedback.
**Language models in an interactive environment.** Deploying LLMs in an interactive environment has been studied mainly in robotics planning Huang et al. (2022) and closed-loop systems Yao et al. (2022), where real-time environment changes are generally ignored. In the VR environment, efficiency and accuracy affected by real-time graphics rendering, state changes, and human intervention are essential components. To the best of our knowledge, Voice2Action is the first to tackle the problem of using LLM for decision-making in a real-time VR environment.
## 3 Background
Consider the general setup of a user interacting within the VR environment (Figure 1). Due to the uncomfortableness of keyboard typing (text commands) in virtual scenes, most users begin their commands through voice signals. From frame \(1\) to \(f\) (usually 60 frames per second), the user initiates a voice command with input voice sequence \(V=\{V_{1},...,V_{f}\}\), which goes through a black box voice recognition hardware SDK that sequentially produces a text sequence \(T=\{T_{1},...,T_{t}\}\) by the end of the last frame \(f\), and the environment changes along the frame \(E=\{E_{1},...,E_{f}\}\). On average, \(t\approx 0.04f\). We do not restrict the format of \(T\) generated from the user's commands. However, \(T\) is highly inaccurate, and it is necessary to perform additional pre-processing with **agent LLM for preprocessing** from frame \(f+1\) to \(f_{0}\). In particular,
\[T_{0}=LLM_{pre}(T,S_{action}) \tag{1}\]
where \(S_{action}\) is a dictionary of \((\mathit{action\ type},\mathit{action\ args})\) pairs.
From frame \(f_{0}+1\) to \(f_{1}\), an **agent LLM for classification** takes in \(T_{0}\) and identifies the action category and the textual commands associated with the action as a formalized template \(T_{1}\). In particular,
\[\begin{split}& T_{1}=LLM_{cls}(T_{0},S_{action})=\\ &\mathit{action\ type}:T_{\mathit{action\ type}},\\ &\mathit{action\ arg_{1}}:T_{\mathit{action\ arg_{1}}},...,\\ &\mathit{action\ arg_{n}}:T_{\mathit{action\ arg_{n}}}\end{split} \tag{2}\]
where each \(T_{\mathit{action\ arg_{i}}}\) are filtered natural language sequences from \(T_{0}\), and \(n\) is the number of argument for \(\mathit{action\ type}\).
From frame \(f_{1}+1\) to \(f_{2}\), an **agent LLM for extraction** recognizes \(T_{\mathit{action\ type}}\) and extracts the action and entity properties from each \(T_{\mathit{action\ arg_{i}}}\) as a formalized template \(T_{2}\). In particular,
\[T_{2} =concat(i=1\to n:T_{ext})\] \[T_{ext} =LLM_{ext}(T_{action\;arg_{i}},S_{entity},S_{atomic\;action})\] \[=entity:T_{entity\;type},\] \[\textit{atomic\;action\;type}:T_{atomic\;action\;type},\] \[\textit{atomic\;action\;arg_{1}}:T_{atomic\;action\;arg_{1}},...,\] \[\textit{atomic\;action\;arg_{m}}:T_{atomic\;action\;arg_{m}} \tag{3}\]
where \(S_{entity}\) is the collection of available entity (object) types in the virtual environment, and \(S_{atomic\;action}\) is the collection of available built-in atomic functions in the game engine. \(T_{atomic\;action\;arg_{j}}\) could deviate from natural languages (i.e., they can be non-text form), as long as they match the correct argument types defined in its atomic action type, and \(m\) is the number of arguments for \(\textit{atomic\;action\;type}\).
The environment \(E=\{E_{1},...E_{f_{2}}\}\) changes while the above operations are performed, which is treated as the environment feedback. From frame \(f_{2}+1\) to \(f_{3}\), an **agent LLM for execution** takes in \(T_{ext}\) with its specified arguments, decides the order and number of actions to perform based on the atomic action list and environment feedback, and generates a sequence of actions \(A=\{A_{1}...,A_{a}\}\) with length \(a=|A|\). In particular,
\[A =concat(i=1\to a:A_{exe})\] \[A_{exe} =LLM_{exe}(T_{ext},E)=respond(T_{ext}) \tag{4}\]
where \(respond\in\{do,skip\}\). Each \(A_{exe}\) happens exactly on some frame from \(f_{2}+1\) to \(f_{3}\). To reduce execution difficulty, we do not consider the correlation of consecutive voice commands, or the environment changes after frame \(f_{2}\), and only focus on the efficiency and accuracy of the agent LLMs generated output \(A\). We also fix all \(A_{exe}\) to be performed at frame \(f_{3}\), which makes the execution _time-invariant_, i.e., no actions would affect the outcomes of other actions. Time-invariant actions include many canonical manipulation tasks in the virtual environment, such as object selection, positioning, and static property modifications.
## 4 Method
We propose an efficient autonomous interaction framework in the VR environment that hierarchically uses language models as agents to extract and execute user commands with error prevention from the environment feedback. We break down the problem into two stages: (1) how an "action" is defined in VR; and (2) how our method executes actions in VR.
Figure 1: An overview of Voice2Action. The process consists of 5 steps that take place sequentially, where red refers to audio or text data, orange refers to textual prompts or examples, blue refers to audio recognition or language models, purple refers to structured arguments or configurations, and green indicates that the program successfully executes.

Action in the VR environment is defined by its task space, task parameters, and design dimension (LaViola Jr et al., 2017). The task space can be treated as the category of our dataset, which outlines the scope of interaction and the configuration of the VR environment. In our experiment, the task space is an urban engineering virtual scene where the users want to manipulate the building layouts for city planning. The users are usually equipped with means to do object selection and mesh manipulation, as implemented in Arkio, a popular building sketching VR tool. Hence, \(S_{action}=\{(select,\;select\;args),\;(mesh,\;mesh\;args)\}\).
Task parameters define the specific goals or objectives that the user is supposed to achieve within the task space. In the urban engineering example, task parameters might include locating specific items, navigating through the city, and moving or altering the properties of the items. i.e., \(select\;args=\{object\;type,\;location,\;direction,\;...\}\).
Design dimensions refer to the specific VR elements and mechanisms that are used to facilitate the accomplishment of the task. In our example, the design dimension would involve how buildings, roads, and vehicles are oriented. i.e., \(S_{entity}=\{building,\;road,\;vehicle\}\) and \(S_{atomic\;action}=\{range(start,end),\;locate(x,y,z),...\}\).
With these parameters defined, our method consists of the following 4 modules to create a hierarchical NLP system for interpreting and executing commands in a VR environment.
### Voice to Text Pre-processing
The voice commands go through a black-box speech recognition hardware, and the produced text is often misaligned with their pronunciation (e.g., trees \(\rightarrow\) we, building \(\rightarrow\) beauty). To clean and pre-process the text command, we first use an **agent LLM for pre-processing** with instruction prompting to reconstruct the tokens to the closest pronunciation in our task space. We sample the output from \(m_{pre}=|S_{action}|\) LLMs, where each of them is provided \(k_{pre}\) example commands with a particular action type. The outputs are prompted and structured as \((supposed,wrongly\;pronounced)\) token pairs. We collect and weight these pairs by dividing the number of occurrences of the \(supposed\) tokens in our instruction dataset, and select the top \(\alpha=25\%\;wrongly\;pronounced\) tokens to replace. This module vastly improves execution accuracy by mapping wrongly pronounced tokens to more frequent tokens in the dataset.
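A minimal sketch of this replacement step is given below; since the exact weighting formula is only loosely specified above, the normalization by the supposed-token frequency is one plausible reading, and all helper names are illustrative:

```python
from collections import Counter

def build_replacements(pairs, dataset_tokens, alpha=0.25):
    """pairs: (supposed, wrongly_pronounced) token pairs proposed by the pre-processing LLMs.
    The normalization by the supposed-token frequency is one plausible reading of the
    weighting described above, not a verbatim reimplementation."""
    freq = Counter(dataset_tokens)
    weight = Counter()
    for supposed, wrong in pairs:
        weight[(supposed, wrong)] += 1.0 / max(freq[supposed], 1)
    ranked = sorted(weight, key=weight.get, reverse=True)
    top = ranked[: max(1, int(alpha * len(ranked)))]        # keep the top 25% of pairs
    return {wrong: supposed for supposed, wrong in top}

def apply_replacements(text, table):
    return " ".join(table.get(tok, tok) for tok in text.split())

# e.g. apply_replacements("select the highest beauty", {"beauty": "building"})
```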
### Text Property Classification
The pre-processed text \(T\) needs to be categorized into one of several predefined action categories. Based on 3D interaction principles (LaViola Jr et al., 2017), these categories could include
* static entity property modification actions: including object selection, mesh manipulation, creation, and static state changes.
* dynamic entity property modification actions: object movements, mesh transformation (animations), dynamic state changes, and scripted sequences (e.g., a house catching fire).
* environment property modification actions: environment control, physics property alternations, and graphics effect changes.
For efficiency consideration, we limit \(S_{action}\) to include only selection and mesh manipulation in the urban engineering scenario. We provide the set of action categories to the **agent LLM for classification** and their corresponding action types, along with \(k_{cls}\) few-shot examples, chosen upon the complexities of the task parameters. We only use 1 LLM as this sub-task is simpler, and the output is structured as \(T_{1}\).
### Entity and Action Extraction
For each formalized \(T_{1}\) template consisting of the action type and its arguments in natural language format, we use an **agent LLM for extraction** to decompose the \((entity,action)\) pair into atomic function calls in the VR system. This is usually infeasible with the number of atomic actions defined in any game engine, so we restrict the set to the canonical manipulation tasks defined in (LaViola Jr et al., 2017). It consists of atomic actions used for object selection and mesh manipulations, which fits our design dimension of urban engineering. We use \(m_{ext}=|S_{atomic\;action}|\) LLM to do the extraction for each \(T_{1}\) with \(k_{ext}\) examples. As \(m_{ext}\) gets larger, this operation would be costly. We thus improve efficiency by pre-calculating the vector embeddings of the atomic actions' textual description and only extracting the closest atomic action that matches each \(T_{action\;arg_{i}}\) on the fly.
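This embedding shortcut can be sketched as follows; the `embed` call is a placeholder for whatever sentence-embedding model is used, and cosine similarity is one natural choice of "closest":

```python
import numpy as np

def embed(texts):
    # Placeholder for any sentence-embedding model (e.g. an embeddings API call).
    raise NotImplementedError

def build_index(atomic_actions):
    """atomic_actions: dict mapping atomic-action name -> textual description (API docs);
    embeddings are computed once when the game engine starts."""
    names = list(atomic_actions)
    vecs = np.asarray(embed([atomic_actions[n] for n in names]), dtype=float)
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    return names, vecs

def closest_atomic_action(action_arg_text, names, vecs):
    q = np.asarray(embed([action_arg_text]), dtype=float)[0]
    q /= np.linalg.norm(q)
    return names[int(np.argmax(vecs @ q))]          # highest cosine similarity wins
```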
### Command Execution
We have now gathered both the structured \(action\;type\) in \(T_{1}\) and \((entity,atomic\;action)\) pair in \(T_{2}\), the **agent LLM for execution** needs
to determine the order of execution and filter out unnecessary actions with feedback from the environment. We sample \(m_{exe}\) LLM to generate the execution orders and prompt them to freely filter out the actions that they consider unnecessary; then we put each of them in a simulated environment to execute the commands and receive the \(feedback\in\{pass,\ fail\}\). If it fails, we go back to the extraction step with an added negative example from the error message until it passes. Finally, we pick the execution LLM that takes the shortest time (which usually generates shorter texts) and successfully accomplishes the task. In practice, we stop other LLMs from running once the task is successfully accomplished and modify the environment state to the designated successful configuration, which significantly improves accuracy by preventing error occurrences from the environment. An example flow of the framework is listed in Table 1.
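A rough sketch of this execute-and-verify loop is shown below; the agent, environment and re-extraction objects are hypothetical stand-ins, and picking the fastest successful agent is simplified to returning the first success:

```python
def run_with_feedback(T_ext, extract_again, executors, env, max_trials=5):
    """executors: the m_exe candidate execution agents; env: a simulated VR environment
    returning 'pass'/'fail'. The real system additionally prefers the fastest successful
    agent, which is simplified here to the first success."""
    for _ in range(max_trials):
        for agent in executors:
            actions = agent.order_and_filter(T_ext)     # choose order, drop needless actions
            if env.simulate(actions) == "pass":
                env.commit(actions)                      # freeze the successful configuration
                return actions
        T_ext = extract_again(T_ext, env.error_message())  # add a negative example and retry
    return None
```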
## 5 Experimental Setup
We generate a synthetic dataset with 100 data samples. Each sample contains a textual command \(T_{0}\). We create this dataset with some modifications to the self-instruct framework Wang et al. (2022). Starting with \(S_{action},S_{entity}\), and \(S_{atomic\ action}\), we randomly sample one action type, \(d_{ent}=Uniform(1,3)\) entity types, and \(d_{atom}=Uniform(2,10)\) atomic actions, and provide them to the _gpt-4_ module from OpenAI API along with their textual description (API documentation), and some example use cases (manually written by human). We re-sample 10 times and ask the model to generate 10 \(T_{0}\) samples each round. \(d_{ent}\) and \(d_{atom}\) are chosen by a user study to mimic the number of operations that an urban engineer would do in a virtual city planning scenario. After all \(T_{0}\) is collected, we manually filter out the ones that are not satisfactory enough, which empirically turns out to be less than 10%. We re-sample a few more times and keep 100 of them in the experiment. The dataset is small because to run each sample, we need to manually speak out the sentence of \(T_{0}\) to go through the voice recognition SDK and wait for all agent LLMs and the environment \(E\) to finish its frame and provide feedback. We manually evaluate the correctness of each \((T_{0},E)\) pair by investigating the environment and the usage of entities and atomic actions. On average, running and evaluating each pair costs 20 seconds and 0.015$.
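The sampling loop behind this synthetic data generation can be sketched as follows; the `generate` callable and the prompt wording are placeholders, and the manual filtering step is omitted:

```python
import random

def sample_prompt_inputs(S_action, S_entity, S_atomic):
    """Draw one action type, d_ent ~ Uniform(1,3) entities and d_atom ~ Uniform(2,10)
    atomic actions for a single generation round."""
    action = random.choice(list(S_action))
    entities = random.sample(list(S_entity), k=random.randint(1, min(3, len(S_entity))))
    atoms = random.sample(list(S_atomic), k=random.randint(2, min(10, len(S_atomic))))
    return action, entities, atoms

def generate_dataset(generate, S_action, S_entity, S_atomic, rounds=10, per_round=10):
    """`generate` is a placeholder for the GPT-4 call and is assumed to return a list of
    commands; the manual filtering described above is not shown."""
    samples = []
    for _ in range(rounds):
        action, entities, atoms = sample_prompt_inputs(S_action, S_entity, S_atomic)
        prompt = (f"Write {per_round} user commands that use action '{action}', "
                  f"entities {entities} and atomic functions {atoms}.")
        samples.extend(generate(prompt))
    return samples
```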
For the agent LLMs, we use _text-davinci-003_.
\begin{table}
\begin{tabular}{l l l} \hline
**Notation** & **Example Input/Output** & **Example Prompts** \\ \hline \(T\) & select the highest beauty on mean sea & N/A \\ \(T_{0}\) & select the highest building on main street & find likely misspelled words in \{action type\} \\ \(T_{action\ type}\) & select & “select” refers to \{explanation\} \\ \(T_{entity\ type}\) & building & extract entity type from \(S_{entity}\) \\ \(T_{select\ arg_{1}}\) & height & if applicable, extract superlative degree of “building” \\ \(T_{select\ arg_{2}}\) & main street & if applicable, extract location of “building” \\ \(T_{atomic\ type}\) & scale getter & “scale getter” refers to \{function documentation\} \\ \(T_{atomic\ action\ arg_{1}}\) & y: inf & if applicable, execute action “getter(y=inf)” on entity “building” \\ \hline \end{tabular}
\end{table}
Table 1: Examples of input and outputs of the Voice2Action framework, voice inputs are omitted. When the game engine starts, available action and entity types will be loaded and filled in \(\{...\}\), along with the documentation of atomic functions.
\begin{table}
\begin{tabular}{l l l l l l l} \hline
**Model** & \(N_{0}\) & \(N_{1}\) & \(N_{2}\) & \(N_{3}\) & \(N_{trial}\) & \(N_{token}\) \\ \hline LLM-Exe & 0 & 0 & 0 & 368 & 5.4 & 1987 \\ LLM-Pre-Exe & 152 & 0 & 0 & 355 & 2.9 & 1182 \\ LLM-Pre-Ext-Exe & 152 & 0 & 402 & 133 & 1.3 & 848 \\ Voice2Action & 152 & 92 & 285 & 140 & **1.2** & **754** \\ \hline \end{tabular}
\end{table}
Table 2: Efficiency results on different model architectures by averaging over the synthetic dataset with 100 data samples. We see that the full model uses much fewer trials and generates fewer tokens than previous baselines.
For the pre-processing agent, we set the temperature to 0.9 to encourage diverse pronunciation interpretation; for the other agents, we set the temperature to 0 to discourage randomness. 80% confidence and 512 maximum generation length are set for all agents. For few-shot examples, \(k_{pre}=k_{ext}=k_{exe}=3,k_{cls}=|S_{action}|=2\) are provided for the agent LLMs.
We compare the following baselines and improved methods. In practice, we run them all paralleled in the simulated VR environment and evaluate them one by one.
* LLM-Exe: the model that takes in \(T\), with added prompts for choosing between extracting and execution, additionally provided with \(k_{ext}\) examples.
* LLM-Pre-Exe: the model that takes in pre-processed \(T_{0}\) and call LLM-Exe. This model mimics the decision-making LLMs proposed in related works, which lets the model choose between planning, reasoning, and executing.
* LLM-Pre-Ext-Exe: the full model but treats \(T_{action\;arg_{i}}\) as the pre-processed \(T_{0}\). This model essentially puts the task of classification to the extraction and execution module.
* Voice2Action (LLM-Pre-Cls-Ext-Exe): the full model.
## 6 Results
The accuracy is measured by the performance of \(A_{exe}\). We rate \(A_{exe}\) by 4 levels suggested by self-instruct (Wang et al., 2022), as shown in Figure 2. The first 3 levels correspond to the \(pass\) state from environment feedback, and the last level D corresponds to the \(fail\) state, i.e., an error has occurred in the game engine, usually due to illegal parse of the action arguments.
The efficiency is measured by the number of tokens \(N_{token}\). In particular,
\[N_{token}=N_{0}+N_{1}+N_{trial}*(N_{2}+N_{3})\]
\[N_{i}=|T_{i}|+|prompt(T_{i})|\]
where \(N_{i}\) is the total token length from both the input tokens from instruction prompting and the intermediate tokens generated from the agent LLMs, and \(N_{trial}\) is the number of runs for the model to receive the \(pass\) flag from the environment. Note that a \(pass\) simply means that the code can execute, which does not indicate correctness. Higher \(N_{token}\) means that the number of rendered frames \(f_{3}\) in the environment caused by the performed action is higher, which is of critical importance to the user's VR experience. The details are in Table 2.
**Analysis** For accuracy comparison, we see that the model without voice command pre-processing can barely execute any action correctly, because most wrongly-generated tokens are pronunciation-wise similar to the supposed token rather than semantically similar. The model with both the extraction and execution LLMs has much better decision-making accuracy compared to the ones with only one of these LLMs, because the extraction LLM "divides" the tasks into atomic blocks for the execution LLM to "conquer", which can be verified from the generated token lengths in Table 2. The classification LLM makes little improvement as we only evaluate 2 action types, which is a simpler task that is approachable without further "dividing".
For efficiency comparison, we see that models without "divide-and-conquer" have significantly larger \(N_{trial}\), which suggests that the model cannot successfully map the commands to the atomic actions defined in the game engine. We also see that the full model with the added classification LLM makes the generated token length \(N_{i}\) for each component more evenly distributed, which implies the effectiveness of the hierarchical framework in Voice2Action.
Figure 2: Accuracy results on different model architectures. Human evaluators rate the model’s action by 4 levels based on the VR environment feedback.
## 7 Conclusion
We build a framework to enable efficient real-time interaction in virtual reality built upon agent language models, called Voice2Action. By hierarchically dividing the users' commands into class-specific categories and matching them with atomic function calls in the game engine, this system shows exceptional interaction efficiency with minimal loss of execution accuracy. By incorporating environment changes and user feedback with models from other modalities, the system also demonstrates the potential to be applicable to more complex environments, where multi-turn and multi-agent interaction can be further developed. We open-source the framework and leave this as future work for the community to work on.
|
2309.07852 | ExpertQA: Expert-Curated Questions and Attributed Answers | As language models are adopted by a more sophisticated and diverse set of
users, the importance of guaranteeing that they provide factually correct
information supported by verifiable sources is critical across fields of study.
This is especially the case for high-stakes fields, such as medicine and law,
where the risk of propagating false information is high and can lead to
undesirable societal consequences. Previous work studying attribution and
factuality has not focused on analyzing these characteristics of language model
outputs in domain-specific scenarios. In this work, we conduct human evaluation
of responses from a few representative systems along various axes of
attribution and factuality, by bringing domain experts in the loop.
Specifically, we collect expert-curated questions from 484 participants across
32 fields of study, and then ask the same experts to evaluate generated
responses to their own questions. In addition, we ask experts to improve upon
responses from language models. The output of our analysis is ExpertQA, a
high-quality long-form QA dataset with 2177 questions spanning 32 fields, along
with verified answers and attributions for claims in the answers. | Chaitanya Malaviya, Subin Lee, Sihao Chen, Elizabeth Sieber, Mark Yatskar, Dan Roth | 2023-09-14T16:54:34Z | http://arxiv.org/abs/2309.07852v2 | # ExpertQA \(\boldsymbol{\measuredangle}\): Expert-Curated Questions and Attributed Answers
###### Abstract
As language models are adopted by a more sophisticated and diverse set of users, the importance of guaranteeing that they provide factually correct information supported by verifiable sources is critical across fields of study & professions. This is especially the case for high-stakes fields, such as medicine and law, where the risk of propagating false information is high and can lead to undesirable societal consequences. Previous work studying factuality and attribution has not focused on analyzing these characteristics of language model outputs in domain-specific scenarios. In this work, we present an evaluation study analyzing various axes of factuality and attribution provided in responses from a few systems, by bringing domain experts in the loop. Specifically, we first collect expert-curated questions from 484 participants across 32 fields of study, and then ask the same experts to evaluate generated responses to their own questions. We also ask experts to revise answers produced by language models, which leads to ExpertQA, a high-quality long-form QA dataset with 2177 questions spanning 32 fields, along with verified answers and attributions for claims in the answers.1
Footnote 1: Code and dataset is available at [https://github.com/chairanymalaviya/expertqa](https://github.com/chairanymalaviya/expertqa).
## 1 Introduction
As the influence of large language models (LLMs) grows beyond the computer science community, _experts_ from various fields are rapidly adapting LLMs for assistance in information-seeking scenarios. For example, medical professionals are using these systems for performing differential diagnosis (Lee et al., 2023) and researchers are using them for faster literature surveys (Krenn et al., 2022; Birhane et al., 2023; Owens, 2023). While the use of LLMs in specialized domains has many potential benefits, it also carries significant risks. False or hallucinated claims that are confidently phrased can potentially mislead experts and propagate societal harms, especially in high stakes domains such as medicine or law (Evans et al., 2021; Dash et al., 2023; Volokh, 2023). Such claims can cause frustration and distrust in AI tools among individual experts in the mildest case, and propagation of misinformation and unsafe practices, in the worst case.
Providing citations or attributions within generated responses is a promising direction for alleviating such concerns. However, the quality of these attributions in model-generated responses, as well as the factuality of responses, is understudied in domain-specific settings. This is partly because we do not completely understand the specific information-seeking needs of experts. Although ex
Figure 1: ExpertQA contains 2177 information-seeking questions formulated by experts spanning 32 fields, as well as expert-verified, model-generated answers to these questions. Each claim-evidence pair in an answer is judged by experts for various properties such as the claim’s informativeness, factuality, citeworthiness, whether the claim is supported by the evidence, and reliability of the evidence source. Further, experts revise the original claims to ensure they are factual and supported by trustworthy sources.
perts from different fields are naturally best suited to aid with such an evaluation, expert evaluations are rarely conducted, as bringing experts in the loop can be time-consuming and costly.
To bridge this gap, we construct ExpertQA, a benchmark of information-seeking questions curated by experts from 32 diverse fields. ExpertQA includes questions relevant to each field, as well as judgements from experts about how well several state-of-the-art systems perform along various axes of factuality and attribution. Having experts in the loop allows us to model a realistic information-seeking scenario that helps us understand how people in different fields use LLMs and where their capabilities fall short.
ExpertQA is constructed by asking qualified experts to formulate questions from their field that they are curious about or have encountered in their professional lives (§2.1). Responses to these questions are collected from a set of LLM-based systems that produce attributions for their answers (§3). These include purely generative, retrieval-augmented, and post-hoc attribution systems. We then ask experts to validate the claims and evidences found in the responses to their own questions (§2.2). Experts judge each claim for its informativeness to the question, its citeworthiness, and factuality. They are also asked to judge how faithful the claim is to an accompanying evidence and rate the reliability of the evidence source. Finally, experts revise each claim so it is faithful to a reliable source and make a best effort attempt at ensuring the claim is factual. This overall process is described in Figure 1.
We first use ExpertQA to evaluate representative systems from which responses are sampled (§4). Our findings suggest that:
1. _Retrieve-and-read systems often generate complete attributions compared to LLM prompting and post-hoc attribution, but struggle to produce citations for all cite-worthy claims._
2. _The retrieval source significantly impacts the quality of attribution and overall factuality._
3. _High-stakes domains such as medicine and law suffer from a large percentage of incomplete attributions (35% and 31% incomplete attributions respectively) and many attributions come from unreliable sources (51% attributions are rated as not reliable by experts)._
We also measure the extent to which existing automatic methods for attribution and factuality estimation (Bohnet et al., 2022; Min et al., 2023) correlate with expert judgements (§5). We find that these metrics fall short in correlating with reference judgements of attribution and factuality. However, adapting these metrics to our data through finetuning results in improvements across domains.
The revised answers we collect can be used for improving and evaluating future models on long-form question answering. While similar datasets have been proposed (Fan et al., 2019), examples in ExpertQA are more realistic and contain verified answers edited by experts. Furthermore, unlike existing datasets, ExpertQA contains few vague questions because it includes questions professionals have encountered in their practice. We establish several baselines and show that we can improve models by finetuning on ExpertQA, but that there is substantial room for improvement, both in terms of ROUGE and QAFactEval (§6).
## 2 ExpertQA: Annotation Tasks
The annotation is conducted in multiple longitudinal stages and we describe each of these below. In the first stage, we ask experts to write questions from their field (§2.1). In the next stage, we present responses sampled from various systems back to the same experts for analysis (§2.2). Further details about annotator backgrounds, annotation cost and screenshots of our interface are presented in Appendix A.
### Stage 1: Expert-Curated Questions
Participants are recruited through Prolific and are qualified as experts if they have attained a formal education in the field and have worked in the field for at least 3 years. They are first asked to write questions from their field of expertise. They are told that this question could be one they have encountered in their profession or one they are curious about. We ask them to formulate challenging technical questions, for which it may not be possible to find a single document on the web that answers them completely.
Each expert is asked to write 5 questions and to specify the question type(s) for each question (as shown in Table 2). We formulate these question types broadly based on existing work that attempts to classify information needs (Rose and Levinson, 2004). Because of their practical nature, at least
two of the questions are required to be scenario-based questions (Type V, Table 2). We collect more than 3000 questions this way from 524 experts across 32 fields. We manually filter all these questions for coherence and relevance to the field and our initial question corpus contains 2507 questions. Examples of these questions from different fields are presented in Table 1.
### Stage 2: Answer and Claim Annotation
Next, we generate responses for the questions from stage 1 by prompting six different systems that provide attributions with their answers. These systems are described in §3. We split each answer into claims, where claims are considered at the granularity of a sentence and extracted using the spaCy sentence tokenizer Honnibal and Montani (2017).2
Footnote 2: We also considered further increasing the atomicity of claims (like Kamoi et al. (2023)) but finer-grained atomic claims incur considerably higher annotation cost.
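A minimal sketch of the sentence-level claim splitting described above (the specific spaCy pipeline name is only an example; any pipeline with sentence segmentation works):

```python
import spacy

# Any spaCy pipeline with sentence segmentation works; "en_core_web_sm" is just an example
# and must be installed separately (python -m spacy download en_core_web_sm).
nlp = spacy.load("en_core_web_sm")

def split_into_claims(answer: str) -> list[str]:
    return [sent.text.strip() for sent in nlp(answer).sents]
```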
In this stage of annotation, experts validate responses to their own questions. This is beneficial as experts are best qualified to evaluate answers to their own questions. We noticed a low attrition rate in our study as around 92% of annotators from stage 1 validated at least 1 of their own questions in stage 2. Since this task is intensive, a single annotation task is broken down into 1-3 question-answer pairs. The following properties of answers and claims are evaluated, and are presented to annotators in the same order as below. Properties that judge answer quality are marked with (A) and those that judge evidence quality are marked with (E).
(A) Answer Usefulness.Participants are first asked to judge whether the complete answer is useful in answering the question. Usefulness is measured based on whether the answer is at least partially responding to the question. Usefulness is marked on a scale of {_useful_, _partially useful_, _not useful at all_}.
\begin{table}
\begin{tabular}{c c|c} \hline \hline \multicolumn{1}{c|}{**Field**} & **Question** & **Types** \\ \hline Anthropology & _Why is it that Africa’s representation is still a problem in modern day times regardless of the academic writings that state otherwise?_ & II,VII \\ Architecture & _Suppose an architect decides to reuse an existing foundation of a demolished building, what is to be considered to ensure success of the project?_ & IV \\ Biology & _Can you explain the mechanisms by which habitat fragmentation affects biodiversity and ecosystem functioning, and provide examples of effective strategies for mitigating these impacts?_ & III,VI \\ Chemistry & _Why does gallic acid have an affinity with trivalent iron ions?_ & I \\ Engineering \& Technology & _How different will licensing a small modular reactor be as compared to licensing traditional large nuclear power plants?_ & VII \\ Healthcare/Medicine & _If a 48 year old woman is found to have an esophageal carcinoma that invades the muscularis propria and has regional lymph node metastases but no distant metastasis, what is her stage of cancer and what are possible recommended treatments?_ & I \\ Law & _Can direct evidence in a case that has been obtained illegally be considered by the court in some cases if it directly points to the defendant’s guilt?_ & IV \\ Music & _What exercises would you do in a singing class with a teenager with puberphonia?_ & IV \\ Physics \& Astronomy & _Standard Model does not contain enough CP violating phenomena in order to explain baryon asymmetry. Suppose the existence of such phenomena. Can you propose a way to experimentally observe them?_ & V \\ Political Science & _Despite the fact that IPCC was formed in 1988, several studies have showed that arguably more than 50\% of all carbon emissions in history have been released since 1988. What does this show about IPCC and developed countries’ efforts?_ & VII \\ Visual Arts & _Tell me the step by step process of recycling a canvas._ & III \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples from ExpertQA. See Table 14 for a larger list showing an example from all fields.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline & **Question Type** & **Count** \\ \hline I & Directed question that has a single unambiguous answer & 444 \\ II & Open-ended question that is potentially ambiguous & 528 \\ III & Summarization of information on a topic & 371 \\ IV & Advice or suggestions on how to approach a problem & 251 \\ V & Question that describes a hypothetical scenario and asks a question based on this & 853 \\ VI & Request for a list of resources where one can find more information & 160 \\ VII & Request for opinion on a topic & 207 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Question types categorized according to various information needs that are part of ExpertQA.
(A + P) Claim / Evidence Attribution.Attribution is judged based on whether a claim is supported by its accompanying evidence, following a similar design as Rashkin et al. (2021); Bohnet et al. (2022). Support may be judged as _complete_, _partial_ or _incomplete_. If no evidence is provided, support needs to be marked _Missing_ and if the evidence is inaccessible, it needs to be marked _N/A_. Annotators are told that they can assume that certain common sense facts don't need to be explicitly stated in the evidence to judge support. If the evidence included multiple documents, annotators judge support for the claim collectively using all documents. Taking inspiration from Kamoi et al. (2023), if the claim is partially supported, annotators are asked to specify the unsupported span and the reason why it is unsupported.
Evidences can come in the form of URLs or passage evidences, depending on the system. Judging attribution can be difficult for passage evidences as relevant context might be missing, so we provide URLs along with attributed passages for context. Annotators are instructed to only use the passage for judging attribution in these cases.
(A) Claim Informativeness.To differentiate between the relevance of different claims for the question, we asked annotators to label informativeness of each claim. A claim may be judged as central to answering the question (_very relevant_), making a relevant point that is slightly important to answer the question (_a bit relevant_), making a relevant point that isn't too relevant to answering the question (_not too important_) or making a peripheral point that is not relevant to answering the question (_uninformative_).
(A) Claim Factuality.Next, we ask annotators to label their best estimate of the factual correctness of each claim. They are asked to judge factuality based on their own expertise, the evidence returned by the system, and minimal browsing on the internet if needed (lasting no longer than 2-3 minutes). This judgement is also collected on a Likert scale (_Definitely correct_, _Probably correct_, _Unsure_, _Likely incorrect_ and _Definitely incorrect_). We ask annotators to be conservative in their judgements of factual correctness, labeling _Definitely correct_ only if every word in the claim is correct.
(P) Reliability of Source Evidence.Experts may judge certain sources as more reliable than others. For example, webmd.com is a patient-facing website for medical information that may be considered reliable by non-medical experts, but medical experts may not put as much trust into the evidence from this website. Catering attributions to domain experts will require presenting attributions that they consider as credible. Therefore, we ask annotators if the accompanying evidence (if any) is found on a website they would consider reliable (on a scale of _Reliable_, _Somewhat Reliable_, _Not reliable at all_).
(A) Worthiness of Claim Citation.Previous work has highlighted that not all claims are equally worthy of citation (Bohnet et al., 2022; Alam et al., 2023). Certain claims might be considered too obvious or fundamental to a user's domain of expertise and hence, might not need a citation (for example, the definition of a cell for a biologist). Annotators are asked to simply state if the claim is necessary to be cited (as a binary choice). We note that the cite-worthiness of a given claim might depend heavily on a user's prior knowledge and needs, which can vary across experts from the same field.

Figure 2: The distribution of questions across different fields in ExpertQA.
(A + P) Claim/Evidence Revision.After labeling the above properties, annotators edit the claim and evidences to ensure that the claim is factually correct and the given references support the claim. For instance, if the claim is false or uninformative, annotators could delete it. Further, if the evidence is incorrect or insufficient, they could remove it. While doing so, we did not require them to replace missing/incorrect evidence with correct evidences because doing that can be too time-consuming.
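To make the annotation schema concrete, the following sketch shows one possible in-memory representation of the collected labels; the class and field names are our own shorthand rather than the dataset's actual keys, and the label strings simply mirror the scales described above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical shorthand for the label sets described above.
ATTRIBUTION = ["complete", "partial", "incomplete", "missing", "n/a"]
INFORMATIVENESS = ["very relevant", "a bit relevant", "not too important", "uninformative"]
FACTUALITY = ["definitely correct", "probably correct", "unsure",
              "likely incorrect", "definitely incorrect"]
RELIABILITY = ["reliable", "somewhat reliable", "not reliable at all"]

@dataclass
class AnnotatedClaim:
    text: str
    evidence_urls: List[str] = field(default_factory=list)
    attribution: Optional[str] = None         # one of ATTRIBUTION
    informativeness: Optional[str] = None     # one of INFORMATIVENESS
    factuality: Optional[str] = None          # one of FACTUALITY
    source_reliability: Optional[str] = None  # one of RELIABILITY
    citeworthy: Optional[bool] = None
    revised_text: Optional[str] = None        # expert-edited version of the claim

@dataclass
class AnnotatedAnswer:
    question: str
    answer_usefulness: Optional[str] = None   # useful / partially useful / not useful at all
    claims: List[AnnotatedClaim] = field(default_factory=list)
```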
## 3 Systems Evaluated
We now describe the classes of systems from which we sampled responses for evaluation (also see Figure 3). All systems produce an answer string and attributions in the form of in-line citations for the sentences in the answer. Attributions are either returned as URLs to webpages or passages along with URLs to webpages from where they are retrieved. Further experimental details and prompts are included in Appendix B.
LLM as generator + retriever.In this paradigm, we prompt large language models in a closed-book fashion Brown et al. (2020); OpenAI (2023) to generate an answer with in-line citations where they provide URLs for each citation. Note that this is unlikely to work as the model essentially has to generate a URL from its parametric memory. Nevertheless, we consider GPT-4 as the LLM from which we sample responses (gpt4). The prompt used is given in Table 9.
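As an illustration of this setup, a minimal sketch of the closed-book prompting loop is shown below; the prompt text is only a paraphrase of the idea (the exact prompt is given in Table 9), and the model name and OpenAI client usage are assumptions of this sketch.

```python
# Minimal sketch of the vanilla-prompting setup (gpt4): the model must produce
# both the answer and URL citations from parametric memory alone.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def closed_book_answer(question: str, model: str = "gpt-4") -> str:
    prompt = (
        "Answer the question below. After every sentence, add an in-line "
        "citation of the form [n], and list the URL for each [n] at the end. "
        "If you cannot answer faithfully, reply 'I cannot answer this question.'\n\n"
        f"Question: {question}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content
```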
Post-hoc retrieval.This system differs from the above, as we only prompt LLMs to generate answers without attribution, and perform retrieval of evidence for a claim as a post-hoc step. This renders the attributions naturally unfaithful, but we believe this is still a worthwhile approach to investigate because of the strength of LLMs as generators and retrievers independently. The attribution corpora we consider are Sphere Piktus et al. (2021) (post_hoc_sphere_gpt4), which is a large static dump of CommonCrawl, and Google search results (post_hoc_gs_gpt4).
Retrieve-and-read.In this class of systems, we first retrieve evidence for a question and then generate an answer by prompting a model to use the retrieved evidence to answer the question. As our attribution corpus, we again consider Sphere Piktus et al. (2021) (rr_sphere_gpt4) as well as Google search results (rr_gs_gpt4). We use sparse retrieval using BM25 Robertson et al. (2009) for retrieving from Sphere. We then generate an answer using GPT-4, by including the retrieved evidence as context in our prompt. The answer generator is instructed to also generate in-line citations for each sentence, which refer to the passages provided in the context. The prompt used is given in Table 10.
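A minimal sketch of this pipeline is shown below, with the rank_bm25 package standing in for the actual BM25 index over Sphere and the prompt paraphrasing the instruction in Table 10; it illustrates the approach rather than reproducing the exact implementation.

```python
# Sketch of the retrieve-and-read pipeline (rr_sphere_gpt4): BM25 retrieval over a
# passage collection, then answer generation conditioned on the retrieved passages.
from rank_bm25 import BM25Okapi
from openai import OpenAI

client = OpenAI()

def retrieve_and_read(question: str, passages: list[str], k: int = 5) -> str:
    bm25 = BM25Okapi([p.lower().split() for p in passages])
    top = bm25.get_top_n(question.lower().split(), passages, n=k)
    context = "\n".join(f"[{i+1}] {p}" for i, p in enumerate(top))
    prompt = (
        "Answer the question using only the passages below. Cite the passages "
        "you use as [n] after each sentence. If the passages are insufficient, "
        "say that you cannot answer.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )
    out = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return out.choices[0].message.content
```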
We limited the number of claims for each answer to be at most 10 for lowering annotation costs. During annotation, we noticed that attributions from gpt4 often pointed to broken links and evaluating such attributions is not meaningful, so we sampled fewer responses from that system. The number of examples included from each system and the abstention rate of the systems (how frequently a system responds with one of the predefined strings indicating it cannot provide an answer) is presented in Table 3.
## 4 Analysis
### Data Statistics
The total number of examples validated in stage 2 of our annotation is 2177. The average number of claims and average number of tokens across these examples are 5.79 and 152.12, respectively (the overall distributions are presented in Figure 4). The distribution of examples across fields is presented in Figure 2. A large percentage of examples in ExpertQA come from high-stakes fields such as Healthcare/Medicine and Law. Table 2 presents the distribution of questions across different question types shown to annotators in stage 1. The largest number of questions are Type V because participants were asked to write 2/5 questions of this type. After those, the largest number of questions are open-ended questions (Type II) and directed questions (Type I).
### Manual Analysis
To estimate the agreement of human labels in ExpertQA, we (the authors) labeled our agreement with the reference labels from two fields to which the authors collectively belong. Specifically, we sampled 60 questions from Engineering and Technology and another 60 from Healthcare / Medicine, where we included 10 questions from each system. For each claim in the responses to these 60 questions, we label binary agreement with the annotated properties from SS2.2.
Our analysis showed fairly high agreement (> 85%) for most labels in both fields. The agreement for attribution labels for medical claims was found to be slightly lower than engineering claims. We noticed that medical claims allow for more nuance in interpretation, compared to more objective claims in engineering.
### Results
We present the Likert distribution for claims across all systems and properties in Figure 6. Below we summarize our main conclusions from the analysis.
Majority of answers are useful, but answers from purely generative systems are considered more useful.We find that \(\sim\)87-89% of answers from gpt4 are marked useful. The retrieve-and-read systems (as well as Bing_chat, which also retrieves evidences first) are marked slightly less useful (73-80%), likely because retrieved evidences are not always highly relevant. Choosing relevant evidences using Google search results in more useful answers than with the smaller Sphere corpus and a sparse retriever.
Figure 4: Histogram of the number of claims and number of tokens across all examples in ExpertQA.
Figure 5: Percentage agreement on claim annotations for a random sample of 60 questions each from the Engineering & Technology and Medicine fields.
| **System** | **Count** | **Abstention Rate** |
| --- | --- | --- |
| gpt4 | 174 | 0% |
| bing_chat | 470 | 0.01% |
| rr_sphere_gpt4 | 279 | 37.89% |
| rr_gs_gpt4 | 452 | 22.69% |
| post_hoc_sphere_gpt4 | 403 | 0% |
| post_hoc_gs_gpt4 | 399 | 0% |

Table 3: Number of examples sampled from different systems and the abstention rates of different systems.
Retrieve-and-read systems often generate complete attributions, but struggle to produce citations for all cite-worthy claims.Retrieve-and-read systems have a stronger inductive bias to use the retrieved evidence to answer the question. However, they do not always produce attributions for cite-worthy claims (18% of these claims are missing attributions)3. On the other hand, post-hoc attribution systems return attributions for every single claim, by definition, but return more incomplete attributions. Lack of context during post-hoc retrieval can be an issue for retrieving valid attributions. For example, a claim such as _"Targeted therapies, such as PARP inhibitors or CDK4/6 inhibitors, may be useful depending on the specific genetic makeup or molecular features of the tumor [2]."_ does not actually contain any details about the fact that the question asked about breast cancer, which can make it hard to find relevant retrievals. Therefore, we might need approaches that can make claims standalone, such as Choi et al. (2021), to improve post-hoc citation quality.
Footnote 3: Figure 6 shows the Likert distribution of attribution labels on those claims deemed cite-worthy by experts.
Finally, without retrieval, we found that gpt4 often generates citations to URLs that link to plausible & trustworthy domains (for eg, nasa.gov for astronomy and nih.gov for medical claims), but the content on these webpages is often totally mismatched (more than 60% of the time).
Both vanilla prompting and retrieval-augmented systems generate mostly _very relevant_ claims to the question.At the same time, a significant percentage of claims (30-40%) are not very relevant. This may include void claims (that simply restate the question or state simplistic facts). This suggests that there is a lot of room for making answers concise and relevant. In addition, using Sphere as the retrieval corpus results in fewer informative claims than Google search for retrieve-and-read systems.
Just over half the claims are labeled as _definitely correct_ by experts.While a significant percentage of claims are labeled as correct (_probably_ or _definitely_), experts do not instill high confidence in the factual correctness of claims. This might be because it is hard to judge factuality with a high degree of confidence, in a short time frame. Once again, a smaller retrieval corpus and retriever (rr_sphere_gpt4) results in less factual claims as the model may be more likely to hallucinate.
Figure 6: The Likert distribution of labels for the different properties of answers / claims, annotated by experts. The top 3 properties (answer usefulness, claim informativeness and factuality) are judgements of answer quality and the bottom 3 (claim/evidence attribution, source reliability and claim cite-worthiness) are attribution quality.
The retrieval corpus has a significant effect on expert judgements of source reliability.Expert judgements of reliability are directly influenced by the credibility of the sources from which evidences are retrieved. Corpora such as Sphere, which do not account for source, often present evidences that are unreliable to experts (for both rr_sphere_gpt4 and post_hoc_sphere_gpt4). For example, in a question about breast cancer, evidence from a comment on a blog is retrieved and is naturally considered _Not reliable at all_ by the expert. Using Google search as the retrieval system, improves source reliability judged by experts.
Majority of claims are deemed cite-worthy across systems.Only around 17-22% of claims are judged not citeworthy by the experts. This suggests that most claims in responses to expert-curated questions warrant supporting evidence. Non-cite-worthy claims are mostly either uninformative or too basic for an expert from the field.
Effect of Retrieval Corpus.While sampling responses, we instruct models to abstain if they cannot answer the question faithfully and accurately. Among retrieve-and-read systems, we find that the abstention rate is significantly higher for systems that use Sphere as the retrieval corpus compared to Google search. Across claim properties, we find that the retrieval corpus has a significant impact on human judgements. Systems that use Sphere as the retrieval corpus instead of Google search result in more missing attributions, fewer correct and informative claims, attributions with more evidences from unreliable sources, and overall less useful answers.
Domain and Question Type Trends.Figure 10 shows the distribution of labels across all fields. There are few clearly discernible patterns from the trends across fields. The percentage of claims labeled as being _Definitely_ or _Probably correct_ is fairly high (>85%) for many fields. However, we note that across all annotated claims, **high-stakes domains such as medicine and law can suffer from a significant percentage of incomplete attributions** (around 35% of medical claims and 31% of legal claims worthy of citation are not supported with appropriate evidence). Further, **a large percentage of claims present evidences from unreliable sources for these domains** (for eg, \(\sim\)51% of medical claims have attributions from sources that are not labeled _Reliable_ by experts).
Across question types, systems clearly struggle with Type VI questions that request for a list of resources, as claims are less informative, factual, and supported by evidence. Other question types that appear to be challenging are Type IV and Type VII, which seek advice for a problem or opinions on a topic respectively.
## 5 Automatic Estimation of Attribution and Factuality
Next, we study the effectiveness of current automatic systems for _attribution_Honovich et al. (2022) and _factuality_ (correctness) estimation Min et al. (2023) in the context of ExpertQA. In both cases, we observe that **current systems show high precision but low recall when compared with human judgements of attribution and factuality**.
### Automatic Attribution Estimation
Under the framework of _attributable to identified sources_ (AIS) Rashkin et al. (2021), i.e. judging whether model generated content can be verified against given attributions, previous work has found NLI models to be effective in providing automated AIS (AutoAIS) estimations Gao et al. (2023); Bohnet et al. (2022).
To understand the effectiveness of AutoAIS on ExpertQA, we follow the settings in previous work and use the NLI classifier4 from Honovich et al. (2022) as an AutoAIS system to predict the attribution labels of claim-evidence pairs in ExpertQA. The model gives a binary decision for whether a hypothesis claim is entailed by the evidence as premise. For evidence longer than the maximum sequence length of the model (512), we apply the retrieve-and-rerank technique from Schuster et al. (2022), where we split evidence into sentences and take the top-2 sentences with the highest entailment confidence predicted by the NLI model as evidence.
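The retrieve-and-rerank step can be sketched as follows; here `entails` is a placeholder assumed to wrap the NLI classifier of Honovich et al. (2022) (or any model returning an entailment probability for a premise-hypothesis pair), and the word-count check is only a rough stand-in for the tokenizer-based length check.

```python
# Sketch of AutoAIS scoring with retrieve-and-rerank for long evidence.
from nltk.tokenize import sent_tokenize  # requires the NLTK 'punkt' tokenizer data

MAX_LEN = 512  # maximum sequence length of the NLI model (approximated in words here)

def entails(premise: str, hypothesis: str) -> float:
    raise NotImplementedError  # placeholder for the actual NLI model call

def auto_ais(claim: str, evidence: str) -> float:
    if len(evidence.split()) + len(claim.split()) < MAX_LEN:
        return entails(evidence, claim)
    # retrieve-and-rerank: score each evidence sentence, keep the top-2 as premise
    sentences = sent_tokenize(evidence)
    ranked = sorted(sentences, key=lambda s: entails(s, claim), reverse=True)
    premise = " ".join(ranked[:2])
    return entails(premise, claim)
```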
| **System** | **AutoAIS** | **Num. Claims** |
| --- | --- | --- |
| gpt4 | .156 | 149 |
| bing_chat | .320 | 992 |
| rr_sphere_gpt4 | .689 | 732 |
| rr_gs_gpt4 | .778 | 1415 |
| post_hoc_sphere_gpt4 | .281 | 1158 |
| post_hoc_gs_gpt4 | .241 | 1500 |

Table 4: AutoAIS score (more attributable\(\rightarrow\)1, less attributable\(\rightarrow\)0) of predicted responses by the systems. Only claims annotated as _citeworthy_ and _with complete support_ are considered.
Table 4 shows the macro-averaged AutoAIS scores for the set of claims marked as having attribution with complete support from each system. Compared to our human judgments, the AutoAIS estimations show a large variance in terms of the averaged completeness of attribution across systems. Notably, attributions generated by post-hoc retrieval receive much lower AutoAIS scores, whereas retrieve-and-read systems get higher scores.
We compare the per-claim AutoAIS predictions to human judgement of attribution completeness in Table 5. The results suggest that AutoAIS estimations with an NLI model overall have high precision yet low recall against human judgements of attribution completeness. To understand the discrepancy between NLI model behavior vs. human judgement of attribution completeness, we highlight a few typical examples of attribution(s) in Table 6. For NLI, every part of the hypothesis or claim is expected to be directly verifiable against the premise, while we observe that, during attribution verification, human judgements involve more implicit world knowledge, e.g. _calcium carbonate is an alkali_. Another common type of mistake from AutoAIS involves combining information from multiple pieces of evidence. We observe multi-source attributions to be particularly common among bing_chat and retrieve-and-read systems.
### Automatic Factuality Estimation
Prior work has proposed methods Manakul et al. (2023); Min et al. (2023) to automatically estimate the factuality of model-generated responses. In particular, we use and study FActScore Min et al. (2023) as an automatic estimator for the factuality of claims. We first break down each claim into finer-grained atomic claims using few-shot prompting with text-davinci-003, using the same prompt and setting as Min et al. (2023). We retrieve the top-k (k=3) relevant passages using Google search using an atomic claim as the query. The evidence paragraphs for each atomic claim are then included in a prompt sent to gpt-3.5-turbo, along with the atomic claim, where the model is instructed to say whether the atomic claim is _True_ or _False_. The final factuality score for a claim is then calculated by averaging the scores of all the atomic claims.
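A rough sketch of this pipeline is given below; the prompts are paraphrases, `google_search` is a hypothetical helper for the retrieval step, and the chat API is used for both steps even though the decomposition step in our setup used text-davinci-003.

```python
# FActScore-style factuality estimate: decompose a claim into atomic claims,
# retrieve top-k passages per atomic claim, ask for a True/False verdict, average.
from openai import OpenAI

client = OpenAI()

def google_search(query: str, k: int = 3) -> list[str]:
    raise NotImplementedError  # stand-in for the Google search retrieval step

def chat(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    out = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}], temperature=0
    )
    return out.choices[0].message.content

def fact_score(claim: str) -> float:
    atomic = chat("Break the following claim into independent atomic facts, "
                  f"one per line:\n{claim}").splitlines()
    atomic = [a.strip("- ").strip() for a in atomic if a.strip()]
    verdicts = []
    for fact in atomic:
        passages = "\n".join(google_search(fact, k=3))
        answer = chat(f"Context:\n{passages}\n\nIs the following statement true or "
                      f"false? Answer True or False only.\n{fact}")
        verdicts.append(answer.strip().lower().startswith("true"))
    return sum(verdicts) / max(len(verdicts), 1)
```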
In Table 7, we report the F1 scores of the factual (T) and non-factual (F) classes and the micro-averaged overall F1 scores, computed by comparing the FActScore predictions with the reference factuality labels. FActScore scores are thresholded into binary scores, and reference factuality labels are 1 if the claim's factuality is labeled as _Probably correct_ or _Definitely correct_, and 0 otherwise.
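For illustration, the comparison reported in Table 7 can be computed as in the sketch below; the 0.5 threshold is an assumption made for the example.

```python
# Threshold the continuous FActScore into a binary prediction and compute
# per-class and micro-averaged F1 against the binarized expert labels.
from sklearn.metrics import f1_score

def factuality_f1(fact_scores, expert_labels, threshold=0.5):
    preds = [int(s >= threshold) for s in fact_scores]
    return {
        "F1 (factual)": f1_score(expert_labels, preds, pos_label=1),
        "F1 (non-factual)": f1_score(expert_labels, preds, pos_label=0),
        "F1 (micro)": f1_score(expert_labels, preds, average="micro"),
    }
```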
We find that automatic factuality estimation struggles to identify non-factual claims in our dataset. In particular, predicted labels have low recall of non-factual claims. This is more often the case for retrieve-and-read systems, where the answer is generated based on retrieved evidences. The other systems use GPT4's parametric knowledge for answer generation, which might make it slightly easier for an evaluator like ChatGPT to judge factuality.
## 6 Long-form QA Evaluation
A beneficial output of our annotation pipeline is the revised answers produced by annotators. These answers are vetted by experts to contain factual information and compose a new long-form QA dataset, ExpertQA. We consider two types of splits for ExpertQA (both 80-10-10): a random split of the data and a domain-wise split, where 80% of a field's data is included in the training set and 10% is included in both validation and test sets.
### Evaluation Metrics
We consider standard metrics used for evaluating long-form question answering, that are based on similarity to a reference answer, i.e., ROUGE Lin (2004) and those focused on evaluating factual consistency through question-answer pairs generated on a reference answer, i.e., QAFactEval Fabbri et al. (2022).
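A minimal sketch of the ROUGE part of this evaluation, using the rouge-score package, is given below; QAFactEval is computed separately with its own toolkit and is not reproduced here.

```python
# ROUGE-1/2/L F-measures between a model answer and the expert-revised reference.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def rouge(prediction: str, reference: str) -> dict:
    scores = scorer.score(reference, prediction)
    return {name: s.fmeasure for name, s in scores.items()}
```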
### Baselines
We finetune the following open-source language models: FlanT5-11B Chung et al. (2022), Alpaca-7B Taori et al. (2023), Vicuna-7B Chiang et al. (2023) and LLaMa2-7B-Chat Touvron et al. (2023). We finetune these models with the same prompts as the ones used in their training (provided in Tables 11, 12). Further, we also report results with Llama2-70B-Chat without finetuning (marked *).
### Results
Our results are shown in Table 8. We find that both Llama2-7B and Vicuna-7B outperform FlanT5-11B despite the smaller model size, likely due to additional instruction finetuning for both those models. We observe that finetuning significantly improves performance (results without finetuning are in Table 13), and Llama2-70B performs worse than finetuned systems under zero-shot prompting.
## 7 Related Work
Attribution-Generating Methods.With the rapid adaptation of large language models, a few different classes of systems have been proposed for generating attributions for the responses they produce. The first class of systems is **vanilla LLM prompting**Tay et al. (2022); Weller et al. (2023), where LLMs such as GPT-4 OpenAI (2023) are prompted to return attributions (in the form of titles of references, optionally accompanied with URLs) along with their answers. Since these systems are not explicitly trained to produce attributions, they can often hallucinate the references they provide Agrawal et al. (2023). Another class of systems is **retrieve-and-read systems**Guu et al. (2020); Borgeaud et al. (2022); Izacard et al. (2022), which first retrieve evidence relevant for a query, and then generate an answer based on the retrieved evidence. These systems are sometimes trained on demonstrations of humans browsing for information Nakano et al. (2021); Thoppilan et al. (2022); Menick et al. (2022), which allows them to jointly generate answers and citations. Finally, **post-hoc retrieval**Gao et al. (2023); He et al. (2022) involves retrieving attributions after answering a query using both the query and response for retrieval, and optionally revising the answer based on the attribution. For sampling responses, we consider all three classes of systems described above.
Attribution AnalysisPrior work has conducted analysis of attributions produced by systems in response to queries from existing NLP datasets Rashkin et al. (2021); Bohnet et al. (2022); Dziri et al. (2022); Liu et al. (2023); Muller et al. (2023). Notably, Rashkin et al. (2021) propose the framework of Attributable to Identified Sources (AIS) for performing human evaluation of attributions.
| **Split** | **Model** | **R1** | **R2** | **RL** | **QAFactEval** |
| --- | --- | --- | --- | --- | --- |
| Random | FlanT5-11B | 0.335 | 0.114 | 0.215 | 2.068 |
| Random | Vicuna-7B | 0.351 | 0.119 | 0.212 | 1.068 |
| Random | Llama2-7B | 0.362 | 0.125 | 0.219 | 1.985 |
| Random | Llama2-70B* | 0.320 | 0.101 | 0.181 | 1.050 |
| Domain | FlanT5-11B | 0.324 | 0.107 | 0.210 | 1.538 |
| Domain | Vicuna-7B | 0.359 | 0.120 | 0.213 | 1.739 |
| Domain | Llama2-7B | 0.363 | 0.124 | 0.219 | 1.726 |
| Domain | Llama2-70B* | 0.328 | 0.104 | 0.187 | 0.979 |

Table 8: Long-form QA results after finetuning models on the random and domain splits of ExpertQA.
Analysis from these works suggests that systems are still far from providing precise attributions with sufficient recall for all citeworthy statements. In our work, we recognize that this is problematic in specific domains where attribution precision and recall are both critical.
Automatic methods to measure attribution have also been explored. Briefly, attribution has been automatically measured through the use of textual entailment models (Bohnet et al., 2022; Yue et al., 2023), by checking the consistency of answers produced to the same question using the evidence or the response (Wang et al., 2020) and prompting LLMs and finetuning LLMs on tasks such as NLI relevant for judging attributions (Yue et al., 2023).
Previous efforts at collecting gold attribution data have been conducted by repurposing Wikipedia citations (Kamoi et al., 2023; Petroni et al., 2022), or existing datasets like Natural Questions (Bohnet et al., 2022; Kwiatkowski et al., 2019), and relying on large-scale human annotation (Chen et al., 2022; Dziri et al., 2022; Kamalloo et al., 2023). We note that with regards to the information needs of experts, this data can be restricting in terms of the accompanying corpus, and contain limited complexity when it comes to expert-curated content (for instance, experts are likely to use and trust sources other than Wikipedia).
Factuality Analysis.Analysis of factuality or truthfulness of LM generations has been conducted extensively in prior work (Thorne et al., 2018; Evans et al., 2021; Maynez et al., 2020; Pagnoni et al., 2021; Lin et al., 2021; Muhlgay et al., 2023). Factuality is closely linked to work studying hallucinations (Ji et al., 2023), which includes closed-domain hallucination (Kryscinski et al., 2020; Maynez et al., 2020), where hallucinations are analyzed in reference to some context, as well as open-domain hallucination (for example, Manakul et al. (2023)). The factuality labels we collect as part of ExpertQA elicit a best-effort judgement of truthfulness of claims from experts.
Prior methods proposed to verify the factuality of statements include zero-resource approaches (Manakul et al., 2023; Kadavath et al., 2022; Agrawal et al., 2023; Azaria and Mitchell, 2023; Min et al., 2023), which operate mostly in a black-box fashion, and resource-enriched approaches (Thorne et al., 2018; Guo et al., 2022; Chen et al., 2023; Feng et al., 2023), that verify statements by checking their validity against external databases or resources. We use one such recent method called FActScore (Min et al., 2023) to evaluate how well the factuality labels in our dataset correlate with automatic factuality judgements. Min et al. (2023) split claims into finer-grained atomic claims and use LMs to verify these atomic claims. Dedicated benchmarks to study hallucination have also been proposed (Liu et al., 2022; Li et al., 2023), but these are often synthetically generated and not based on real user queries and responses to them.
Long-form QA.Existing long-form QA datasets are often created using naturally occurring sources on the web such as search queries (Nguyen et al., 2016; Stelmakh et al., 2022) and forums (Fan et al., 2019). Several issues have been pointed out with conducting robust evaluation for long-form QA (Krishna et al., 2021). Previous work has also suggested that evaluating finer-grained claims provides higher agreement of factuality among annotators (Krishna et al., 2023). Keeping this in mind, we construct ExpertQA to cover practical information needs of experts and collect fine-grained judgements of factuality along with vetted answers.
Xu et al. (2023) also conduct human evaluation of long-form answers with experts from 7 fields. Notably, they emphasize the importance of evaluating multiple aspects of long-form answers, which are also considered in our work.
Domain-specific QA.Several domain-specific datasets for question answering have also been proposed in prior work. Specifically, existing work has presented datasets for the medical domain (Tsatsaronis et al., 2015; Pampari et al., 2018; Jin et al., 2019, 2021; Pal et al., 2022), legal (Guha et al., 2023), technology (Dos Santos et al., 2015) and others. There is also work that proposes datasets with examples spanning multiple domains (Rogers et al., 2020; Reddy et al., 2019; Hendrycks et al., 2021). However, these datasets often have a very limited coverage of domains. Importantly, most of these are not created with experts-in-the-loop and hence may not always represent natural information seeking distributions, that go beyond factoid questions. Further, they do not contain long-form answers and attributions for the answers. Finally, because of large-scale pretraining, we are aware that many such datasets are under the risk of contamination (Sainz et al., 2023). Hence, we choose to crowdsource questions from experts, and further also get annotations for attribution and factuality.
## 8 Conclusion and Future Work
Our study suggests that although large language models show a lot of promise for aiding domain experts, there is large ground to cover in being able to provide expert-level guidance that is factual and also supported by reliable sources Metzler et al. (2021). While LLMs make information access and search substantially easier for domain experts, improving factuality and attribution of answers is necessary to improve trustworthiness in these systems. Repurposing general-purpose language models for domain-specific purposes requires having _experts-in-the-loop_, so we can understand their information needs and how models fall short in meeting them. We hope that our benchmark, ExpertQA, can benefit the community with improved methods for attribution & factuality estimation, and long-form question answering evaluation.
## 9 Limitations
Atomicity of Claims.In most cases, claims in our dataset are sentences that may not represent singular information units. This lack of atomicity in claims means that properties such as factuality and attribution need to be judged exhaustively for a claim. Collecting human judgements for finer-grained atomic claims can be significantly more expensive and is not explored in this work.
Claim Extraction.Extracting sentence-level claims from a generated answer for the purpose of evaluation is performed by using a sentence tokenizer. However, we note that existing tokenizers suffer from sentence tokenization errors (for example, when lists or tables are present in answers). This resulted in a small number of claims being excessively long and hard to evaluate.
Field Coverage.Even though we tried to cover a wide range of fields in our dataset, we missed covering questions from certain fields. Finding experts from rarer fields can be especially hard. We will consider further expanding ExpertQA to more domains, so that it can be more broadly useful. In addition, the examples in our dataset represent the information needs of English-speaking annotators primarily based in Europe, the Americas and Africa.
Subjectivity of labels.Some of the properties of claims can elicit more subjective judgements, which can vary between experts from the same field. This subjectivity is not inherently captured in our data through multiple judgements, but we do estimate agreement using claims from engineering and medicine through our own labels (SS4.2).
## Acknowledgements
First, we would like to thank the 484 annotators who took the time and effort out to help out with this study. We would also like to thank Artemis Panagopoulou, Alyssa Hwang, Nelson Liu and Chris Alberti for helpful comments and discussions.
|
2309.16787 | Surface plasmon-polaritons in graphene, embedded into medium with gain
and losses | The paper deals with the theoretical consideration of surface
plasmon-polaritons in the graphene monolayer, embedded into dielectric with
spatially separated gain and losses. It is demonstrated, that presence of gain
and losses in the system leads to the formation of additional mode of graphene
surface plasmon-polaritons, which does not have its counterpart in the
conservative system. When the gain and losses are mutually balanced, the
position of exceptional point -- transition point between unbroken and broken
$\mathcal{PT}$-symmetry -- can be effectively tuned by graphene's doping. In
the case of unbalanced gain and losses the spectrum of surface
plasmon-polaritons contains spectral singularity, whose frequency is also
adjustable through the electrostatic gating of graphene. | O. A. Zhernovnykova, O. V. Popova, G. V. Deynychenko, T. I. Deynichenko, Yuliy V. Bludov | 2023-09-28T18:24:18Z | http://arxiv.org/abs/2309.16787v1 | # Surface plasmon-polaritons in graphene, embedded into medium with gain and losses
###### Abstract
The paper deals with the theoretical consideration of surface plasmon-polaritons in the graphene monolayer, embedded into dielectric with spatially separated gain and losses. It is demonstrated, that presence of gain and losses in the system leads to the formation of additional mode of graphene surface plasmon-polaritons, which does not have its counterpart in the conservative system. When the gain and losses are mutually balanced, the position of exceptional point - transition point between unbroken and broken \(\mathcal{PT}\)-symmetry - can be effectively tuned by graphene's doping. In the case of unbalanced gain and losses the spectrum of surface plasmon-polaritons contains spectral singularity, whose frequency is also adjustable through the electrostatic gating of graphene.
graphene surface plasmon-polariton \(\mathcal{PT}\)-symmetry
## 1 Introduction
Physical systems which involve both media with gain and media with losses exhibit one specific and counter-intuitive property: being in general non-Hermitian systems, under a certain relation between gain and losses they can possess a real spectrum (similar to a Hermitian system). In the particular situation of \(\mathcal{PT}\)-symmetry [1] the gain and losses are perfectly balanced and the coordinate-dependent complex external potential \(V\left(\mathbf{r}\right)\) is characterized by the property \(V\left(\mathbf{r}\right)=V^{*}\left(-\mathbf{r}\right)\) [here the star stands for complex conjugation]. In a \(\mathcal{PT}\)-symmetric structure the spectrum is real (this situation is called unbroken \(\mathcal{PT}\)-symmetry) for gain/loss values below a certain threshold and is complex for gain/loss values above this threshold. The latter situation is referred to as broken \(\mathcal{PT}\)-symmetry and is characterized by the presence of two modes: one growing and another decaying. Although initially the concept of \(\mathcal{PT}\)-symmetry was introduced in the quantum-mechanical formalism, it remains unclear whether some real quantum object described by a \(\mathcal{PT}\)-symmetric Hamiltonian can be found in nature. Nevertheless, experimentally the existence of \(\mathcal{PT}\)-symmetry was demonstrated in a variety of other fields, namely mechanical systems [2], acoustics [3], LRC circuits [4], coupled optical waveguides [5, 6, 7], and whispering-gallery resonators [8].
A certain similarity between the Maxwell and Schrödinger equations opens the possibility of realizing \(\mathcal{PT}\)-symmetry in optical systems, where the spatial distribution of the dielectric permittivity obeys the relation \(\varepsilon\left(\mathbf{r}\right)=\varepsilon^{*}\left(-\mathbf{r}\right)\). At the same time, \(\mathcal{PT}\)-symmetric optical systems exhibit a series of unusual properties like nonreciprocal (nonsymmetrical) wave propagation [6], negative refraction [9, 10, 11], simultaneous lasing and coherent perfect absorption [12, 13, 14], and unidirectional visibility [15, 16, 17].
Optical systems whose operation principle is based on bulk electromagnetic waves possess a certain limit on the miniaturization of their components, called the diffraction limit. One possible way to overcome this diffraction limit is to build photonic systems which operate on surface waves (namely, surface plasmon-polaritons) instead of bulk ones.
Nevertheless, surface plasmon-polaritons in noble metals have relatively short lifetimes due to high losses. In connection with this, using \(\mathcal{PT}\)-symmetry in plasmonics [18, 19, 20, 21, 22, 23, 24] can be very promising, because it could compensate losses in the noble metals and enable lossless propagation of surface plasmon-polaritons. Another possibility to reduce losses in plasmonics is to use the two-dimensional carbon material graphene. Surface plasmon-polaritons sustained by graphene exhibit both a longer lifetime and a higher degree of localization [25, 26], if compared to surface plasmon-polaritons in noble metals. At the same time, graphene's conductivity can be dynamically varied through electrostatic gating [27]; the latter fact allows one to dynamically tune the wavelength [28, 29, 30, 31, 32] of graphene surface plasmon-polaritons (GSPPs) as well as to realize a tunable sensor [33] or a plasmonic modulator [34, 35]. Along with this, gated graphene embedded into \(\mathcal{PT}\)-symmetric structures allows one to achieve dynamical tunability of losses [36]. At the same time, optical pumping of graphene allows the realization of gain [37, 38, 39] and, as a consequence, amplification of GSPPs [40, 41]. Pumped graphene, being implemented into a lossy medium, allows the realization of \(\mathcal{PT}\)-symmetry for a series of purposes like sensing [42], waveguiding [43, 44], or diffraction gratings [45].
Nevertheless, \(\mathcal{PT}\)-symmetric structures are not the only systems with gain or losses that are characterized by a real spectrum. For example, a finite slab of an optical gain medium possesses a real spectrum at certain discrete frequencies [46]. These frequencies, called spectral singularities, behave like zero-width resonances. Also, a special relation between unbalanced gain and loss can give rise to the generalized \(\mathcal{PT}\)-symmetry [47], which exhibits the same properties as its \(\mathcal{PT}\)-symmetric counterpart. Along with this, systems with unbalanced gain and losses can exhibit a series of properties like perfect absorption [48], directional coupling [49] and lossless waveguiding [50].
In this paper we consider GSPPs in a graphene monolayer cladded between two dielectric layers: one with gain, another with loss. We show that, when lossless graphene is embedded into a \(\mathcal{PT}\)-symmetric dielectric surrounding, the positions of the exceptional points in the GSPP spectrum can be effectively tuned by changing graphene's Fermi energy. At the same time, the tunability of graphene's Fermi energy allows one to vary the positions of spectral singularities in the GSPP spectrum of lossy graphene inside a dielectric surrounding with unbalanced gain and losses.
## 2 Single layer graphene in the gain-loss surrounding
We consider the graphene layer [see Fig. 1] cladded between the two dielectric media of equal thickness \(d\), one of which is arranged at spatial domain \(-d<z<0\) and is characterized by losses (dielectric constant \(\varepsilon^{(l)}=\varepsilon+i\varepsilon_{l}\)), while another one occupies spatial domain \(0<z<d\) and is characterized by gain, \(\varepsilon^{(g)}=\varepsilon-i\varepsilon_{g}\). The half-spaces outside the described domains are filled with the lossless dielectric with permittivity \(\varepsilon\).
Since GSPPs are p-polarized waves, in this paper we restrict our consideration to the case of TM polarization, which is described by Maxwell equations
\[-\frac{\partial H_{y}}{\partial z}=-\frac{i\omega}{c}\varepsilon \left(z\right)E_{x}+\frac{4\pi}{c}\sigma\left(\omega\right)E_{x}\delta\left(z \right), \tag{1}\] \[ik_{x}H_{y}=-\frac{i\omega}{c}\varepsilon\left(z\right)E_{z},\] (2) \[\frac{\partial E_{x}}{\partial z}-ik_{x}E_{z}=\frac{i\omega}{c}H _{y}. \tag{3}\]
Here we assumed that the electric field of the p-polarized wave \(\mathbf{E}=\left(E_{x},0,E_{z}\right)\) possesses nonzero \(x\)- and \(z\)-components, while in its magnetic field only the \(y\)-component is nonzero, i.e. \(\mathbf{H}=\left(0,H_{y},0\right)\). Also in Maxwell equations (1)-(3) we take into account the uniformity of the structure in the \(y\)-direction (i.e. \(\partial/\partial y\equiv 0\)), the spatiotemporal dependence of the electromagnetic field \(\sim\exp\left(ik_{x}x-i\omega t\right)\) [where \(\omega\) is the cyclic frequency, \(k_{x}\) is the in-plane component of the wavevector, and \(c\) stands for the velocity of light in vacuum], as well as the spatial dependence of the dielectric constant
\[\varepsilon\left(z\right)=\left\{\begin{array}{cc}\varepsilon,&\left|z \right|>d\\ \varepsilon^{(l)},&-d<z<0\\ \varepsilon^{(g)},&0<z<d.\end{array}\right. \tag{4}\]
At the same time, in Eqs. (1)-(3) Dirac delta expresses the two-dimensional character of graphene's conductivity \(\sigma\left(\omega\right)\), which can be expressed in Drude form as
\[\sigma\left(\omega\right)=\frac{e^{2}}{4\hbar}\frac{4E_{F}}{\pi\hbar\left( \gamma-i\omega\right)}. \tag{5}\]
Here \(E_{F}\) is the Fermi energy and \(\gamma\) is the disorder-induced relaxation rate. In practice, a graphene monolayer is characterized by a finite thickness \(\approx 3\,\)Å. But further in the paper the thickness of graphene is supposed to be much less than the thicknesses of the gainy and lossy dielectric layers, so graphene is considered as a two-dimensional conductor.
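For illustration, Eq. (5) can be evaluated numerically as in the following minimal sketch (SI constants, energies given in eV purely for convenience); the parameter values are arbitrary examples.

```python
# Drude conductivity of graphene, Eq. (5): sigma(omega) = (e^2/4hbar) * 4 E_F / (pi hbar (gamma - i omega)).
# SI constants are used, so the result is in siemens; sigma_0 = e^2/(4 hbar) is the
# universal optical conductivity of graphene.
import numpy as np
from scipy.constants import e, hbar

def drude_sigma(omega_eV: float, E_F_eV: float = 0.2, gamma_eV: float = 0.0) -> complex:
    sigma0 = e**2 / (4 * hbar)        # universal conductivity, in S
    omega = omega_eV * e / hbar       # photon energy in eV -> angular frequency in rad/s
    gamma = gamma_eV * e / hbar       # disorder energy in eV -> relaxation rate in rad/s
    E_F = E_F_eV * e                  # Fermi energy in eV -> J
    return sigma0 * 4 * E_F / (np.pi * hbar * (gamma - 1j * omega))
```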
In the semiinfinite spatial domain \(z<-d\), the solution of Maxwell equations (1)-(3) can be represented as
\[H_{y}^{\left(-\right)}\left(z\right)=H_{y}^{\left(-\right)}\left(-d \right)\exp\left[p\left(z+d\right)\right], \tag{6}\] \[E_{x}^{\left(-\right)}\left(z\right)=\frac{cp}{i\omega\varepsilon }H_{y}^{\left(-\right)}\left(-d\right)\exp\left[p\left(z+d\right)\right],\] (7) \[E_{z}^{\left(-\right)}\left(z\right)=-\frac{ck_{x}}{\omega \varepsilon}H_{y}^{\left(-\right)}\left(-d\right)\exp\left[p\left(z+d\right) \right]. \tag{8}\]
Here \(H_{y}^{\left(-\right)}\left(-d\right)\) and \(p=\left[k_{x}^{2}-\left(\omega/c\right)^{2}\varepsilon\right]^{1/2}\) are the value of the magnetic field at \(z=-d\) and its decay constant, respectively. The electromagnetic wave, whose fields are described by Eqs. (6)-(8), can be either evanescent (when \(k_{x}>\omega\sqrt{\varepsilon}/c\), and \(p\) is a purely real value, whose sign is chosen to be positive in this case, \(\mathrm{Re}\left[p\right]>0\)), or propagating (when \(k_{x}<\omega\sqrt{\varepsilon}/c\), and \(p\) is purely imaginary with \(\mathrm{Im}\left[p\right]<0\)). Such a condition for the signs of the real and imaginary parts of \(p\), along with the positive sign of the argument of the exponent \(\exp\left[p\left(z+d\right)\right]\), describes the situation where the evanescent wave decays in the direction towards \(z\rightarrow-\infty\), while the propagating wave propagates in the negative direction of the \(z\)-axis.
In other semiinfinite domain \(z>d\) the solution of Maxwell equations (1)-(3) can be expressed in the form
\[H_{y}^{\left(+\right)}\left(z\right)=H_{y}^{\left(+\right)}\left( d\right)\exp\left[-p\left(z-d\right)\right], \tag{9}\] \[E_{x}^{\left(+\right)}\left(z\right)=-\frac{cp}{i\omega\varepsilon }H_{y}^{\left(+\right)}\left(d\right)\exp\left[-p\left(z-d\right)\right],\] (10) \[E_{z}^{\left(+\right)}\left(z\right)=-\frac{ck_{x}}{\omega \varepsilon}H_{y}^{\left(+\right)}\left(d\right)\exp\left[-p\left(z-d\right) \right], \tag{11}\]
Figure 1: The graphene layer, embedded into the dielectric with spatially separated gain and losses.
where \(H_{y}^{\left(+\right)}\left(d\right)\) is the value of the magnetic field at \(z=d\). Owing to the negative sign of the exponent's argument \(\exp\left[-p\left(z-d\right)\right]\), the wave either decays towards \(z\rightarrow\infty\), or propagates in the positive direction of the \(z\)-axis.
In the dielectric with losses the solutions of Maxwell equation will have form
\[H_{y}^{\left(l\right)}\left(z\right) = H_{y}^{\left(l\right)}\left(-d\right)\cos\left[k_{z}^{\left(l \right)}\left(z+d\right)\right]\] \[+\frac{i\omega\varepsilon^{\left(l\right)}}{ck_{z}^{\left(l \right)}}E_{x}^{\left(l\right)}\left(-d\right)\sin\left[k_{z}^{\left(l\right) }\left(z+d\right)\right],\] \[E_{x}^{\left(l\right)}\left(z\right) = E_{x}^{\left(l\right)}\left(-d\right)\cos\left[k_{z}^{\left(l \right)}\left(z+d\right)\right]\] \[-\frac{ck_{z}^{\left(l\right)}}{i\omega\varepsilon^{\left(l \right)}}H_{y}^{\left(l\right)}\left(-d\right)\sin\left[k_{z}^{\left(l\right) }\left(z+d\right)\right],\] \[E_{z}^{\left(l\right)}\left(z\right) = -\frac{ck_{x}}{\omega\varepsilon^{\left(l\right)}}\left\{H_{y}^{ \left(l\right)}\left(-d\right)\cos\left[k_{z}^{\left(l\right)}\left(z+d\right)\right]\right.\] \[\left.+\frac{i\omega\varepsilon^{\left(l\right)}}{ck_{z}^{\left( l\right)}}E_{x}^{\left(l\right)}\left(-d\right)\sin\left[k_{z}^{\left(l\right) }\left(z+d\right)\right]\right\},\]
where \(k_{z}^{\left(l\right)}=\left[\left(\omega/c\right)^{2}\varepsilon^{\left(l \right)}-k_{x}^{2}\right]^{1/2}\), \(E_{x}^{\left(l\right)}\left(-d\right)\) and \(H_{y}^{\left(l\right)}\left(-d\right)\) being the values of the tangential components of the electric and magnetic fields at the boundary of the medium with losses \(z=-d\). In the similar manner the electromagnetic field in the medium with gain can be represented as
\[H_{y}^{\left(g\right)}\left(z\right) = H_{y}^{\left(g\right)}\left(0\right)\cos\left[k_{z}^{\left(g \right)}z\right]\] \[+\frac{i\omega\varepsilon^{\left(g\right)}}{ck_{z}^{\left(g \right)}}E_{x}^{\left(g\right)}\left(0\right)\sin\left[k_{z}^{\left(g\right)}z \right],\] \[E_{x}^{\left(g\right)}\left(z\right) = E_{x}^{\left(g\right)}\left(0\right)\cos\left[k_{z}^{\left(g \right)}z\right]\] \[-\frac{ck_{z}^{\left(g\right)}}{i\omega\varepsilon^{\left(g \right)}}H_{y}^{\left(g\right)}\left(0\right)\sin\left[k_{z}^{\left(g\right)}z \right],\] \[E_{z}^{\left(g\right)}\left(z\right) = -\frac{ck_{x}}{\omega\varepsilon^{\left(g\right)}}\left\{H_{y}^{ \left(g\right)}\left(0\right)\cos\left[k_{z}^{\left(g\right)}z\right]\right.\] \[\left.+\frac{i\omega\varepsilon^{\left(g\right)}}{ck_{z}^{\left( g\right)}}E_{x}^{\left(g\right)}\left(0\right)\sin\left[k_{z}^{\left(g\right)}z \right]\right\},\]
Here \(k_{z}^{\left(g\right)}=\left[\left(\omega/c\right)^{2}\varepsilon^{\left(g \right)}-k_{x}^{2}\right]^{1/2}\), while \(E_{x}^{\left(g\right)}\left(0\right)\) and \(H_{y}^{\left(g\right)}\left(0\right)\) stand for the values of the tangential components of the electric and magnetic fields, correspondingly at the boundary of the medium with gain \(z=0\).
Boundary conditions for the tangential components of the electric and magnetic fields can be obtained directly from Maxwell equations (1)-(3). Thus, integrating (3) over the infinitesimal interval \(\left[d-0,d+0\right]\) gives the boundary condition
\[E_{x}^{\left(+\right)}\left(d\right)=E_{x}^{\left(g\right)}\left(d\right), \tag{18}\]
which couples the tangential component of the electric field across the boundary between the gainy and lossless dielectrics. In the similar manner, integration of (3) over the intervals \(\left[-0,+0\right]\) and \(\left[-d-0,-d+0\right]\) results in boundary conditions
\[E_{x}^{\left(l\right)}\left(0\right)=E_{x}^{\left(g\right)}\left(0\right), \tag{19}\]
\[E_{x}^{\left(l\right)}\left(-d\right)=E_{x}^{\left(-\right)}\left(-d\right) \tag{20}\]
for the electric field tangential component across the graphene layer and at the boundary between dissipative and lossless dielectrics, respectively.
Boundary conditions for the tangential component of the magnetic field can be obtained from integration of Eq. (1) over the same infinitesimal intervals as in the previous case. The final expressions can be represented in the form
\[H_{y}^{\left(+\right)}\left(d\right)=H_{y}^{\left(g\right)}\left(d\right), \tag{21}\]
\[H_{y}^{(g)}\left(0\right)=H_{y}^{(l)}\left(0\right)-\frac{4\pi}{c} \sigma\left(\omega\right)E_{x}^{(l)}\left(0\right), \tag{22}\] \[H_{y}^{(l)}\left(-d\right)=H_{y}^{(-)}\left(-d\right). \tag{23}\]
In other words, at the boundaries \(z=\pm d\) the magnetic field is continuous across the interface, while at the interface \(z=0\) the magnetic field is discontinuous across the graphene due to the presence of currents in it.
Substitution of Eqs. (6), (7), (12), and (13) into Eqs. (20) and (23) results into
\[H_{y}^{(l)}\left(-d\right)=H_{y}^{(-)}\left(-d\right), \tag{24}\] \[E_{x}^{(l)}\left(-d\right)=\frac{cp}{i\omega\varepsilon}H_{y}^{ (-)}\left(-d\right). \tag{25}\]
In similar manner, substitution of Eqs. (9), (10), (15), and (16) into boundary conditions (18) and (21) gives
\[H_{y}^{(+)}\left(d\right)=H_{y}^{(g)}\left(d\right), \tag{26}\] \[-\frac{cp}{i\omega\varepsilon}H_{y}^{(+)}\left(d\right)=E_{x}^{( g)}\left(d\right). \tag{27}\]
In the matrix form the above equations can be represented as
\[\hat{F}^{(l)}\left(-d\right)=\hat{\mathcal{S}}^{(-)}H_{y}^{(-)} \left(-d\right), \tag{28}\] \[\hat{F}^{(g)}\left(d\right)=\hat{\mathcal{S}}^{(+)}H_{y}^{(+)} \left(d\right), \tag{29}\]
where \(\hat{\mathcal{S}}^{(\pm)}\) and \(\hat{F}^{(f)}\left(z\right)\) are 2\(\times\)1 matrices
\[\hat{\mathcal{S}}^{(\pm)}=\left(\begin{array}{c}1\\ \mp\frac{cp}{i\omega\varepsilon}\end{array}\right), \tag{30}\] \[\hat{F}^{(f)}\left(z\right)=\left(\begin{array}{c}H_{y}^{(f)} \left(z\right)\\ E_{x}^{(f)}\left(z\right)\end{array}\right), \tag{31}\]
and index \(f=g,l\). Along with this, representation of boundary conditions (19) and (22) in the matrix form results in
\[\hat{F}^{(g)}\left(0\right)=\hat{\mathcal{G}}\hat{F}^{(l)}\left(0\right), \tag{32}\]
where \(\hat{\mathcal{G}}\) is the 2\(\times\)2 matrix
\[\hat{\mathcal{G}}=\left(\begin{array}{cc}1&-\frac{4\pi}{c}\sigma\left( \omega\right)\\ 0&1\end{array}\right). \tag{33}\]
Along with this, from Eqs. (12) and (13) it is possible to link the fields at \(z=-d\) and \(z=0\) as
\[\hat{F}^{(l)}\left(0\right)=\hat{\mathcal{T}}^{(l)}\hat{F}^{(l)}\left(-d \right). \tag{34}\]
In similar manner from Eqs. (16) and (15) one can obtain
\[\hat{F}^{(g)}\left(d\right)=\hat{\mathcal{T}}^{(g)}\hat{F}^{(g)}\left(0\right). \tag{35}\]
In the above equations the transfer matrices are
\[\hat{\mathcal{T}}^{(f)}=\left(\begin{array}{cc}\cos\left[k_{z}^{(f)}d \right]&\frac{i\omega\varepsilon^{(f)}}{ck_{z}^{(f)}}\sin\left[k_{z}^{(f)}d \right]\\ -\frac{ck_{z}^{(f)}}{i\omega\varepsilon^{(f)}}\sin\left[k_{z}^{(f)}d\right]& \cos\left[k_{z}^{(f)}d\right]\end{array}\right). \tag{36}\]
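For numerical work, the 2\(\times\)2 blocks of Eqs. (33) and (36) can be assembled directly, as in the sketch below (Gaussian units, with an illustrative value of \(c\) in cm/s); it is a helper for cross-checking the algebra rather than part of the derivation.

```python
# 2x2 building blocks of Eqs. (33) and (36): layer transfer matrix and graphene sheet matrix.
import numpy as np

def layer_matrix(kz: complex, eps: complex, d: float, omega: float, c: float = 3e10):
    """Transfer matrix T^(f) of Eq. (36) for one dielectric layer of thickness d."""
    s, co = np.sin(kz * d), np.cos(kz * d)
    return np.array([[co, 1j * omega * eps / (c * kz) * s],
                     [-c * kz / (1j * omega * eps) * s, co]], dtype=complex)

def graphene_matrix(sigma: complex, c: float = 3e10):
    """Matrix G of Eq. (33) describing the current sheet at z = 0."""
    return np.array([[1.0, -4 * np.pi * sigma / c],
                     [0.0, 1.0]], dtype=complex)
```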
Applying the boundary conditions (28), (32), and (29) consecutively [along with Eqs. (34) and (35)], we obtain
\[\hat{\mathcal{S}}^{(+)}H_{y}^{(+)}\left(d\right)=\hat{\mathcal{T}}^{(g)}\hat{ \mathcal{G}}\hat{\mathcal{T}}^{(l)}\hat{\mathcal{S}}^{(-)}H_{y}^{(-)}\left(-d \right). \tag{37}\]
This equation, being multiplied by row matrix
\[\left\{\hat{\mathcal{S}}^{(-)}\right\}^{-1}=\frac{1}{2}\left(\begin{array}{ cc}1&\frac{i\omega\varepsilon}{cp}\end{array}\right), \tag{38}\]
after taking into account orthogonality of matrices
\[\left\{\hat{\mathcal{S}}^{(-)}\right\}^{-1}\hat{\mathcal{S}}^{(+)}=0 \tag{39}\]
results in the dispersion relation of the waves in the graphene-based structure
\[\left\{\hat{\mathcal{S}}^{(-)}\right\}^{-1}\hat{\mathcal{T}}^{(g)}\hat{\mathcal{G }}\hat{\mathcal{T}}^{(l)}\hat{\mathcal{S}}^{(-)}=0, \tag{40}\]
which can be written in the explicit form as
\[\frac{\varepsilon^{(g)}}{k_{z}^{(g)}}\Phi^{(g)}+\frac{\varepsilon^{(l)}}{k_{z}^ {(l)}}\Phi^{(l)}+\frac{4\pi}{i\omega}\sigma\left(\omega\right)=0, \tag{41}\]
where
\[\Phi^{(f)}=\frac{\cos\left[k_{z}^{(f)}d\right]+\frac{\varepsilon^{(f)}p}{\varepsilon k_{z}^{(f)}}\sin\left[k_{z}^{(f)}d\right]}{\sin\left[k_{z}^{(f)}d\right]-\frac{\varepsilon^{(f)}p}{\varepsilon k_{z}^{(f)}}\cos\left[k_{z}^{(f)}d\right]}. \tag{42}\]
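The dispersion relation (41) can be solved numerically for a complex \(k_{x}\) at a given real \(\omega\), e.g. with a two-dimensional root search as sketched below; the units (Gaussian), the starting guess and the principal square-root branches are illustrative assumptions, and the sign conventions for \(p\) and \(k_{z}^{(f)}\) discussed above must be enforced in a careful implementation.

```python
# Numerical sketch for the dispersion relation (41)-(42): real omega, complex kx.
import numpy as np
from scipy.optimize import root

def dispersion(kx, omega, eps, eps_g, eps_l, d, sigma, c=3e10):
    p = np.sqrt(kx**2 - (omega / c)**2 * eps + 0j)  # principal branch; adjust sign if needed
    def phi(eps_f):
        kz = np.sqrt((omega / c)**2 * eps_f - kx**2 + 0j)
        r = eps_f * p / (eps * kz)
        val = (np.cos(kz * d) + r * np.sin(kz * d)) / (np.sin(kz * d) - r * np.cos(kz * d))
        return val, kz
    phi_g, kz_g = phi(eps_g)
    phi_l, kz_l = phi(eps_l)
    return eps_g / kz_g * phi_g + eps_l / kz_l * phi_l + 4 * np.pi * sigma / (1j * omega)

def solve_kx(omega, kx0, **params):
    """Find a complex root kx of Eq. (41) near the starting guess kx0."""
    f = lambda v: [dispersion(v[0] + 1j * v[1], omega, **params).real,
                   dispersion(v[0] + 1j * v[1], omega, **params).imag]
    sol = root(f, [kx0.real, kx0.imag])
    return sol.x[0] + 1j * sol.x[1]
```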
## 3 \(\mathcal{PT}\)-symmetric surface plasmon-polaritons
In the particular situation when graphene is considered to be lossless (\(\gamma=0\)) and the gain and losses in the surrounding media are perfectly balanced (\(\varepsilon_{g}=\varepsilon_{l}\)), the dielectric function (4) is characterized by the property \(\varepsilon\left(z\right)=\varepsilon^{*}\left(-z\right)\). In other words, the graphene-based structure possesses \(\mathcal{PT}\)-symmetry, whose distinctive properties are revealed in the dispersion relation of GSPPs [see Figs. 2(a) and 2(b)]. It should be noted that Figs. 2(a) and 2(b) represent
the solution of the dispersion relation (41), where the frequency \(\omega\) is supposed to be a purely real value, while the in-plane wavevector \(k_{x}=k_{x}^{\prime}+ik_{x}^{\prime\prime}\) is supposed to be a complex value, whose imaginary part characterizes the degree of exponential decay (when \(k_{x}^{\prime\prime}>0\)) or growth (when \(k_{x}^{\prime\prime}<0\)) of the wave's amplitude per unit length during the propagation along the \(x\)-axis.
In the conservative system (without gain/losses) the dispersion relation of GSPPs possesses one branch [for details see, e.g. Ref. [51]], which exists in the whole range of wavevectors and frequencies. The situation changes drastically in the case of \(\mathcal{PT}\)-symmetry. Thus, for lossless graphene in a \(\mathcal{PT}\)-symmetric dielectric surrounding there are two modes of GSPPs with unbroken \(\mathcal{PT}\)-symmetry - a low-frequency mode [red solid line A in Fig. 2(a)] and a high-frequency one [red solid line B in Fig. 2(a)]. Modes with unbroken \(\mathcal{PT}\)-symmetry are characterized by a zero imaginary part of their wavevectors [\(k_{x}^{\prime\prime}\equiv 0\), see Fig. 2(b)]. In other words, these modes propagate along the graphene layer without damping or growth. At the exceptional point, located at \(\omega\approx 2.025\) meV and \(k_{x}^{\prime}\approx 0.0218\,\mu\mathrm{m}^{-1}\), the low- and high-frequency modes merge together, and GSPPs with frequencies below this exceptional point are characterized by broken \(\mathcal{PT}\)-symmetry. Here the spectrum contains two modes [solid blue lines C and D in Figs. 2(a) and 2(b)] with complex wavevectors \(k_{x}\) such that, for a given frequency, the wavevector of one mode is the complex conjugate of the other mode's wavevector. As a result, one GSPP mode [mode C in Figs. 2(a) and 2(b)] grows exponentially during its propagation along the graphene, and the other mode decays exponentially [mode D in Figs. 2(a) and 2(b)]. At the same time, the high-frequency GSPP mode with unbroken \(\mathcal{PT}\)-symmetry [line B in Figs. 2(a) and 2(b)] at another exceptional point [\(\omega\approx 7.58\,\)meV and \(k_{x}\approx 0.0762\,\mu\mathrm{m}^{-1}\)] folds towards the light line \(\omega=ck_{x}/\sqrt{\varepsilon}\) and has the end-point of its spectrum lying on this light line [which is depicted by the dashed black line in Fig. 2(a)]. Along with this, at that exceptional point [see inset in Fig. 2(a)] the GSPP mode with unbroken \(\mathcal{PT}\)-symmetry transforms into a pair of modes with broken \(\mathcal{PT}\)-symmetry [green solid lines E and F in Figs. 2(a) and 2(b)], whose degree of growth/decay (imaginary part of the wavevector \(k_{x}^{\prime\prime}\)) increases monotonically with an increase of frequency.
One of the advantages of using graphene in plasmonics is the possibility to dynamically tune graphene's Fermi energy (and, consequently, the dispersion properties of GSPPs) in time simply by changing the gate voltage applied to graphene. The respective dependence between the Fermi energy and the bias voltage \(V_{b}\), applied to graphene, can be expressed as \(E_{F}\sim\left(V_{b}\right)^{1/2}\) [see, e.g., Refs. [52, 27, 53]]. In \(\mathcal{PT}\)-symmetric graphene-based structures it opens another possibility - to switch dynamically between the unbroken and broken \(\mathcal{PT}\)-symmetries. An example of such a situation is demonstrated in Figs. 2(c) and 2(d), where for the fixed frequency \(\omega=2\,\)meV graphene's Fermi energies above and below \(E_{F}\approx 0.429\,\)eV give rise to the broken and unbroken \(\mathcal{PT}\)-symmetries, respectively. Notice that the upper modes E and F [see Figs. 2(e) and 2(f)], for the chosen fixed frequency \(\omega=10\,\)meV and the dielectric constant's imaginary part \(\varepsilon_{g}=\varepsilon_{l}=1.9\), exhibit broken \(\mathcal{PT}\)-symmetry in the whole range of currently experimentally attainable [28] graphene Fermi energies \(E_{F}\lesssim 0.5\,\)eV. Such a mode can exist even when graphene is absent (the case \(E_{F}=0\)) - the respective modes were investigated in Ref. [54].
The physical origins of the reported phenomena can be understood from the spatial distributions of the electromagnetic field, which are shown in Fig. 3. In the case of unbroken \(\mathcal{PT}\)-symmetry [see Figs. 3(a) and 3(b) for the low- and high-frequency modes A and B, correspondingly] the in-plane component of the electric field possesses reflective symmetry (real part, depicted by solid red lines) or antisymmetry (imaginary part, depicted by dashed blue lines) with respect to the graphene layer, which is located at the boundary between the gainy and lossy media at \(z=0\). In other words, their field distributions obey the \(\mathcal{PT}\)-symmetric relations \(E_{x}^{(A)}\left(z\right)=\left\{E_{x}^{(A)}\left(-z\right)\right\}^{*}\), \(E_{x}^{(B)}\left(z\right)=\left\{E_{x}^{(B)}\left(-z\right)\right\}^{*}\). The equality of the field amplitudes in the gainy and lossy media provides a perfect balance between the gain and loss of energy during the propagation, which in its turn leads to the propagation of GSPPs with constant amplitude. For broken \(\mathcal{PT}\)-symmetry [see Figs. 3(c)-3(f) for modes C-F, respectively], the spatial profiles \(E_{x}\left(z\right)\) are asymmetric. Meanwhile, for modes C and E most of the field is concentrated in the medium with gain, while for modes D and F most of the field is concentrated in the lossy medium, which determines the respective growth or decay of energy during mode propagation along the graphene. At the same time, modes with broken \(\mathcal{PT}\)-symmetry possess the mutual symmetry \(E_{x}^{(C)}\left(z\right)=\left\{E_{x}^{(D)}\left(-z\right)\right\}^{*}\), \(E_{x}^{(E)}\left(z\right)=\left\{E_{x}^{(F)}\left(-z\right)\right\}^{*}\), which determines the equality of the absolute values of the imaginary parts of their wavevectors.
## 4 Graphene surface plasmon-polaritons in dielectric medium with unbalanced gain and losses
A natural question now arises: how do the dispersion properties change if the graphene monolayer is not lossless? To answer this question, Figs. 4(a) and 4(b) show the dispersion curves of the GSPPs for the case where gain and losses in the surrounding dielectric media are mutually balanced, but small losses are present in graphene [\(\gamma\neq 0\) in Eq. (5)]. Evidently, losses in graphene result in a situation where the GSPP spectrum ceases to be real. In more
detail, the low-frequency mode (solid red lines) is decaying (with positive \(k_{x}^{\prime\prime}>0\)) over the whole range of frequencies and wavevectors, while the high-frequency one (green solid lines) is growing (with negative \(k_{x}^{\prime\prime}<0\)). Nevertheless, if gain prevails over losses in the dielectric surrounding [see Figs. 4(c) and 4(d)], the low-frequency mode becomes growing at frequencies above a certain threshold frequency [\(\omega\approx 21.38\) meV for the parameters of Figs. 4(c) and 4(d)], and remains decaying at frequencies below this threshold. This threshold frequency [called a _spectral singularity_ and depicted in Figs. 4(c) and 4(d) by black dots] plays an important role - it is the only frequency in the whole frequency domain at which the GSPP spectrum is characterized by a purely real wavevector [\(k_{x}\approx 1.41\)\(\mu\)m\({}^{-1}\) for the parameters of Figs. 4(c) and 4(d)]. In other words, a GSPP at the frequency of the spectral singularity propagates along graphene without growth or damping, owing to its real wavevector. At the same time, when losses in the layered structure are higher than gain [Figs. 4(e) and 4(f)], the high-frequency mode contains the spectral singularity at the frequency \(\omega\approx 0.44\) meV.
In connection with this, the next question arises: is it possible to tune the position of the spectral singularity by varying the Fermi energy? The answer follows directly from Figs. 5(a) and 5(b): for a given Fermi energy \(E_{F}\) (horizontal axis) one can find the frequency \(\omega\) (left vertical axis) of the spectral singularity, at which the GSPP spectrum is characterized by a purely real wavevector \(k_{x}\) (right vertical axis). Moreover, a comparison of Figs. 5(a) and 5(b) shows that lowering the losses in the dielectric [Fig. 5(b)] leads to a red-shift of the spectral singularity. At the same time, for a fixed Fermi energy the losses in graphene exert a strong influence on the position of the spectral singularity [see Fig. 5(c)]: lowering the graphene disorder \(\gamma\) decreases the respective frequency and wavevector of the spectral singularity.
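The spectral-singularity search described above can be sketched numerically as follows, assuming a user-supplied routine `kx_of_omega(omega)` that solves the (complex) GSPP dispersion relation for a given frequency; the routine itself is not defined here, and the scan and root-finding parameters are illustrative.

```
import numpy as np
from scipy.optimize import brentq

def find_spectral_singularity(kx_of_omega, omega_min, omega_max, n_scan=2000):
    """Locate the frequency where Im k_x(omega) crosses zero.

    `kx_of_omega` is assumed to return the complex GSPP wavevector for a given
    frequency (obtained by solving the dispersion relation numerically); it is
    a placeholder here, not defined in this snippet.
    """
    omegas = np.linspace(omega_min, omega_max, n_scan)
    im_kx = np.array([kx_of_omega(w).imag for w in omegas])
    # look for a sign change of Im k_x between neighbouring scan points
    for i in range(n_scan - 1):
        if im_kx[i] * im_kx[i + 1] < 0:
            w_star = brentq(lambda w: kx_of_omega(w).imag, omegas[i], omegas[i + 1])
            return w_star, kx_of_omega(w_star).real
    return None  # no spectral singularity in the scanned window
```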
## 5 Conclusions
To conclude, we considered the spectrum of GSPPs in a structure where a graphene monolayer is cladded between two dielectric slabs of finite thickness - one slab with gain and another with losses. We demonstrated that in the case of a \(\mathcal{PT}\)-symmetric dielectric surrounding the spectrum consists of two modes, which coalesce at an exceptional point. In the frequency ranges below and above the exceptional point the \(\mathcal{PT}\)-symmetry is broken and unbroken, respectively. The
Figure 3: Spatial profiles of the electric field in-plane component \(E_{x}\left(z\right)\) for frequency \(\omega=3.684\) meV and wavevector \(k_{x}=0.05\,\mu\)m\({}^{-1}\) [panel (a)]; \(\omega=4.642\) meV, \(k_{x}=0.05\,\mu\)m\({}^{-1}\) [panel (b)]; \(\omega=2\) meV, \(k_{x}=(0.0215-i0.0006)\)\(\mu\)m\({}^{-1}\) [panel (c)]; \(\omega=2\) meV, \(k_{x}=(0.0215+i0.0006)\)\(\mu\)m\({}^{-1}\) [panel (d)]; \(\omega=10\) meV, \(k_{x}=(0.0957-i0.0051)\)\(\mu\)m\({}^{-1}\) [panel (e)]; \(\omega=10\) meV, \(k_{x}=(0.0957+i0.0051)\)\(\mu\)m\({}^{-1}\) [panel (f)]. Other parameters are the same as those in Fig. 2(a). The frequencies \(\omega\) and wavevectors \(k_{x}\) of the profiles in panels (a)–(f) are depicted in Fig. 2 by points A–F, respectively (and appear as superscripts in the y-axis titles). Real and imaginary parts of the electric field are depicted by solid red and dashed blue lines, respectively.
position of the exceptional point is sensitive to the Fermi energy of graphene. This fact opens the possibility of switching dynamically between broken and unbroken \(\mathcal{PT}\)-symmetry by means of electrostatic gating, i.e. by changing the gate voltage applied to graphene. When gain and losses in the dielectric slabs are not mutually balanced and graphene is lossy, the GSPP spectrum in such a system is characterized by the presence of a spectral singularity - at one particular frequency the GSPP propagation is lossless, i.e. the respective GSPP has an infinite mean free path and travels along the graphene monolayer with constant amplitude, without decay or growth. In turn, electrostatic gating of graphene (varying the Fermi energy) allows one to change the frequency of the spectral singularity.
## Acknowledgements
YVB acknowledges support from the European Commission through the project "Graphene-Driven Revolutions in ICT and Beyond" (Ref. No. 785219), and the Portuguese Foundation for Science and Technology (FCT) in the framework of the Strategic Financing UID/FIS/04650/2019. Additionally, YVB acknowledges financing from FEDER and the Portuguese Foundation for Science and Technology (FCT) through project PTDC/FIS-MAC/28887/2017.
|
2304.00119 | End-to-end deep learning-based framework for path planning and collision
checking: bin picking application | Real-time and efficient path planning is critical for all robotic systems. In
particular, it is of greater importance for industrial robots since the overall
planning and execution time directly impact the cycle time and automation
economics in production lines. While the problem may not be complex in static
environments, classical approaches are inefficient in high-dimensional
environments in terms of planning time and optimality. Collision checking poses
another challenge in obtaining a real-time solution for path planning in
complex environments. To address these issues, we propose an end-to-end
learning-based framework viz., Path Planning and Collision checking Network
(PPCNet). The PPCNet generates the path by computing waypoints sequentially
using two networks: the first network generates a waypoint, and the second one
determines whether the waypoint is on a collision-free segment of the path. The
end-to-end training process is based on imitation learning that uses data
aggregation from the experience of an expert planner to train the two networks,
simultaneously. We utilize two approaches for training a network that
efficiently approximates the exact geometrical collision checking function.
Finally, the PPCNet is evaluated in two different simulation environments and a
practical implementation on a robotic arm for a bin-picking application.
Compared to the state-of-the-art path planning methods, our results show
significant improvement in performance by greatly reducing the planning time
with comparable success rates and path lengths. | Mehran Ghafarian Tamizi, Homayoun Honari, Aleksey Nozdryn-Plotnicki, Homayoun Najjaran | 2023-03-31T20:28:28Z | http://arxiv.org/abs/2304.00119v1 | End-to-end deep learning-based framework for path planning and collision checking: bin picking application
###### Abstract
Real-time and efficient path planning is critical for all robotic systems. In particular, it is of greater importance for industrial robots since the overall planning and execution time directly impact the cycle time and automation economics in production lines. While the problem may not be complex in static environments, classical approaches are inefficient in high-dimensional environments in terms of planning time and optimality. Collision checking poses another challenge in obtaining a real-time solution for path planning in complex environments. To address these issues, we propose an end-to-end learning-based framework viz., Path Planning and Collision checking Network (PPCNet). The PPCNet generates the path by computing waypoints sequentially using two networks: the first network generates a waypoint, and the second one determines whether the waypoint is on a collision-free segment of the path. The end-to-end training process is based on imitation learning that uses data aggregation from the experience of an expert planner to train the two networks, simultaneously. We utilize two approaches for training a network that efficiently approximates the exact geometrical collision checking function. Finally, the PPCNet is evaluated in two different simulation environments and a practical implementation on a robotic arm for a bin-picking application. Compared to the state-of-the-art path planning methods, our results show significant improvement in performance by greatly reducing the planning time with comparable success rates and path lengths.
keywords: Path planning, Artificial Neural Network, Collision Checking, Bin Picking, Imitation Learning, Data Aggregation
## 1 Introduction
Bin-picking, which includes object detection, motion planning, and grasping, is a crucial part of automation in industry. Bin picking has been one of the trending research topics in recent years because of the challenges in image processing and path planning [1; 2]. Industrial bin-picking consists of a 2D/3D camera, which is used to collect the environmental information (object and obstacle detection), and a conveyor with the bins. The vision system is used to detect object positions and obstacle shapes, while the motion planning module calculates the path to the grasp position based on this information [3]. An efficient and real-time path for the manipulator can greatly improve the efficiency of mass production and assembly lines.
In a typical cycle of bin-picking operations, a combination of machine vision, inverse kinematics, and other algorithms are used to find the best grasping configuration for the robot. Once a configuration is found, path planning algorithms generate a path from the robot's fixed home state to the grasp, and then to a place state. Finally, the robot physically moves to complete the task. The serial and critical-path nature of path planning and movement means that planning time and execution time are essential factors that impact the overall cycle time and automation economics. It is crucial to ensure that robot paths are collision-free, considering both obstacles in the environment and self-collision.
Classical planners can be used for bin-picking, but they often fail to take advantage of the repetitive nature of the task. In a static environment, where a robot cell operates for extended periods, path planning is limited to fixed initial configurations and a fixed subspace of possible end-effector poses within a bin. This paper proposes a more efficient path planner in terms of the planning time for bin-picking tasks. We introduce the Path Planning and Collision checking
Network (PPCNet), a deep neural network-based tool specifically designed for repetitive tasks like bin-picking. The PPCNet generates waypoints along the path and estimates collisions faster, making it a reliable solution for autonomous systems such as robotic arms. By using PPCNet, we can generate a safe and efficient path in a relatively short amount of time. Our proposed method outperforms conventional approaches in terms of planning time and stays competitive in path quality and success rate, making it a viable solution for industrial bin-picking applications.
The following are the major features of this research:
* The proposed PPCNet end-to-end framework is an easy-to-implement and efficient approach in the context of path planning.
* The proposed post-processing dataset generation improves the quality of PPCNet paths.
* Collision detection is a critical and time-intensive aspect of path planning. PPCNet offers a faster solution, reducing planning time significantly through its ability to quickly detect collisions.
* We employ and compare two different methods for training the collision checking network to determine the best and safest approach for collision detection.
The paper is structured as follows: In Section 2, we provide a comprehensive overview of various path planning techniques, discussing their advantages, disadvantages, and limitations for this problem. Section 3 outlines the problem definition. We then introduce our framework, PPCNet, in Section 4, where we also discuss the training process. In Section 5, we evaluate and compare the performance of our approach against four state-of-the-art path planners. Finally, we conclude our study in Section 6.
## 2 Related work
### Path planning
Generating collision-free paths for robotic arms is a key area of research in robotics. Path planning methods can broadly be classified into two categories: classical and learning-based [4]. Classical path planning approaches encompass a wide range of techniques, including artificial potential field (APF) [5], bio-inspired heuristic methods [6], and sampling-based path planners [7]. In contrast, learning-based path planners primarily utilize various machine-learning techniques to plan for the robot's path. These methods have been gaining popularity in recent years due to their ability to handle complex and dynamic environments.
Sampling-based methods are widely used in path planning. In this regard, they can be divided into two categories: Single-query algorithms and Multi-query algorithms. While single-query approaches aim to find a path between a single pair of initial and goal configurations, the multi-query ones try to be efficient when there are multiple pairs [8]. The Rapidly-exploring Random Tree (RRT) [9] and Probabilistic Road Map (PRM) [10] are two of the most well-known algorithms for single-query and multi-query approaches, respectively. PRM is a roadmap algorithm which aims to build an exhaustive roadmap of the configuration space to be able to path plan between any two configurations given to it. While PRM has been used for path planning of robotic arms before [11], it has several limitations. Firstly, it generates paths that are far from the optimal solution. Secondly, generating the roadmap is computationally expensive [12], making it inefficient for path planning in relatively high-dimensional environments, even for static tasks such as bin-picking [13].
Furthermore, given an initial and goal configuration pair, RRT grows a tree of waypoints from the initial configuration in order to connect the two configurations. An important characteristic of RRT algorithm is its probabilistically completeness, meaning that it is guaranteed to find a path if there exists one [8]. Due to RRT's capability in non-convex high dimensional spaces, it has been extensively employed in many studies for path planning of robotic arms [14; 15; 16]. To increase the performance of RRT in different aspects, such as planning time and optimality, there are many variants of this method in the literature. In [17], the Bi-directional RRT (Bi-RRT) is introduced for path planning of the 7-DOF manipulator. Moreover, in [18] this algorithm used for distributed picking application. This method simultaneously builds two RRTs; one of them attempts to find the path from the initial configuration to the final configuration, and the other one attempts to find the path in the opposite direction and tries to connect these two trees in each iteration. Although this method is faster than RRT, they both often produce sub-optimal paths in practice and the quality of the path depends on the density of samples and the shape of the configuration space.
RRT* is the optimal version of the RRT and finds the asymptotically optimal solution if one exists [12]. While that is a strong feature of RRT*, it is inefficient in high-dimensional spaces and has a relatively slow convergence rate. As a result, it is not a suitable solution for the real-time path planning of a robotic arm. To overcome the problems mentioned above, biased sampling is one of the promising methods which can improve the performance of sampling-based path planners,
due to the adaptive sampling of the configuration space of the robot. Batch-informed trees (BIT*) [19] and informed RRT* [20] are other significant variants of the RRT* that employ an informed search strategy and batch-processing approach to find the optimal path in a more efficient way by focusing the search on promising areas of the tree. Although this method is able to find the optimal solution, it is not suitable for real-time path planning since it requires a large amount of computational resources, which can lead to delays in the system's response time.
As mentioned above, since collision-checking is computationally expensive and the configuration space gets exponentially bigger with the increase of dimensionality, most of the classical sampling-based methods suffer from high computation and low convergence rates, making them impractical for use in a complex environment with high dimensional configuration space and real-time applications such as bin picking [21]. Ellekilde et al. [13] present a new algorithm for planning an efficient path in a bin-picking scenario based on using lookup tables. Although this method is almost instantaneous and faster than sampling-based planners, it is memory inefficient.
To this end, learning-based path planners have emerged to deal with the limitations of classical methods. Learning-based techniques can solve the long run-time problem of sampling-based methods by providing biased sampling nodes, allowing them to run faster and more efficiently [22]. For instance, Qureshi et al. [23] proposed a deep neural network-based sampler to generate nodes for sampling-based path planners. This work shows a significant improvement in the performance of sampling-based path planners in terms of planning time. Reinforcement learning (RL) is a subcategory of learning-based techniques for manipulator path planning in an unknown environment [24; 25]. Although RL-based approaches illustrate promising performance in path planning research, they need exhaustive interaction with the environment to acquire experience, and this is not applicable in many practical cases. Moreover, imitation learning is an alternative to these kinds of cases. In imitation learning, the training dataset is provided by an expert during the execution of the task. For instance, in [26], a recurrent neural network is trained by using demonstration from the virtual environment to perform a manipulation task.
Neural planners are relatively new methods in the path planning context. These planners are based on deep neural networks, and their purpose is to solve the path planning problem as an efficient and fast alternative to sampling-based methods. Bency et al. [27] proposed a recurrent neural network to generate an end-to-end near-optimal path iteratively in a static environment. Moreover, in [28] and [29], motion planning networks (MPnet) is introduced. This method has a strong performance in unseen environments as the path can be learned directly.
### Collision Approximation
Collision detection is a vital component of path planning in robotics and can consume a significant amount of computation time. In fact, it has been reported that it can take up to 90% of the total path planning time [7]. There are several methods for performing collision checking in path planning. Analytical methods like GJK (Gilbert-Johnson-Keerthi) algorithm [30] use mathematical equations and geometric models to determine if two objects will collide. They are often fast and efficient but can struggle with complex shapes and interactions. Grid-based Methods like Voxel Grid [31] divide the environment into a grid of cells and check for collisions by looking at the occupied cells. This is often faster than analytical methods but can result in a loss of precision. Another simple method for collision detection is by approximating objects with overlapping spheres [32]. In this approach, the number of spheres used to represent each object is a crucial hyperparameter. Additionally, this method is more instantaneous than other traditional collision detection algorithms.
Recently, machine learning techniques have been utilized to overcome the limitations of traditional collision detection methods in robotics. For example, Support Vector Machines (SVM) [33] have been used to calculate precise collision boundaries in configuration space. Gaussian Mixture Models (GMM) were applied in another study [34], resulting in the path being generated five times faster than Bi-directional RRT. K-Nearest Neighbors (KNN) is another machine learning technique used in modeling configuration space for collision detection [35]. Fastron, a machine learning model, was proposed by Das et al. [36] to model configuration space as a proxy collision detector, decomposing it into subspaces and training a collision checker for each region. The majority of previous studies in this field rely on decomposing the configuration space, which requires significant engineering and simplifications.
## 3 Problem definition
In this section, we will define the path-planning problem for the bin-picking application. Besides, the notation that we use in this paper will be introduced.
The first important concept in path planning is the configuration space (\(C_{space}\)). \(C_{space}\) includes all positions and configurations of the robot. We can define \(q\) as the specific configuration for an n-joints robotic arm:
\[q=\left[\begin{array}{ccc}\theta_{1}&...&\theta_{n}\end{array}\right]^{ \mathrm{T}} \tag{1}\]
where \(\theta_{i}\) is the angular position of the i-th joint of the arm. \(C_{free}\) can be defined as the subspace of \(C_{space}\) in which the robot is collision-free. Let \(\tau\) be a trajectory and the first and last element of it be \(q_{initial}\) and \(q_{target}\), respectively. The path planning problem is to find all of the elements between \(q_{initial}\) and \(q_{target}\) in a way that all of the segments in the trajectory lie entirely in the \(C_{free}\) space. In the context of path planning, a segment between two configurations is the space of the configurations inside the linear combination of the two configurations. In other words:
\[\left\{\alpha q_{i}+(1-\alpha)q_{i+1}\mid\alpha\in[0,1]\right\}\subseteq C_{ free},\forall i\in\{1,...,N-1\} \tag{2}\]
where \(N\) is the number of waypoints.
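For illustration, the continuous condition of Eq. (2) is approximated in practice by sampling the segment at a finite resolution; a minimal sketch is given below, where `in_collision(q)` stands in for the exact geometrical collision checker and the resolution value is illustrative.

```
import numpy as np

def segment_is_free(q_a, q_b, in_collision, resolution=0.05):
    """Approximate the continuous condition of Eq. (2) by sampling the
    straight segment between configurations q_a and q_b.

    `in_collision(q) -> bool` stands in for the exact geometrical checker.
    """
    q_a, q_b = np.asarray(q_a, float), np.asarray(q_b, float)
    n_steps = max(2, int(np.ceil(np.linalg.norm(q_b - q_a) / resolution)) + 1)
    for alpha in np.linspace(0.0, 1.0, n_steps):
        if in_collision(alpha * q_a + (1.0 - alpha) * q_b):
            return False
    return True
```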
Bin-picking includes two different path-planning problems. The first one is finding the path between the initial configuration of the robot and the grasp configuration which is located inside the bin (\(Q_{pick}\)); the second one is finding the path between the grasp position and the place configuration (\(Q_{place}\)). Figure 1 shows the pick-and-place operation overview. Now the problem is to design a planner that can satisfy the following objectives:
* generating a safe and obstacle-free path.
* answering the queries in real time.
* generating a high-quality path with a high probability of success.
## 4 Path planning and Collision checking framework
In this section, we will delve into the details of the proposed framework for path planning in bin-picking tasks. As illustrated in Figure 2, the proposed method is comprised of two consecutive networks, namely the planning network and the collision checking network.
The first network is a modified version of MPNet [29]. The original version of the MPNet consists of two networks. The first one is the planning network which generates the waypoints in a sequential manner by imitating the behavior of an expert planner. The second one, the encoder network, encodes the environment to a fixed-length vector in order to give environment information to the planning network for better planning.
Figure 1: Path planning for pick and place operation.
The primary obstacles of concern in this paper, are the bin, other permanent structures, and the robot itself. These are considered to be fully known and encoded within the planning environment such that unlike the MPNet, our proposed planner does not use a depth map or other sensory readings of the environment to encode information for the planning network. This allows for faster planning times, which is a key focus of our research. In a full bin-picking solution, the process that determines the grasp configuration will select grasps that are at low risk of collision during grasping and robot path.
In our work, the planning network takes the current and goal configurations and generates the next joint space configuration to move into. Each configuration is represented by an n-dimensional column vector. Therefore, the path planning network's input is a 2n-dimensional vector which is obtained by the concatenation of the current and goal configuration. In each iteration of decision-making, after getting the next configuration from the path planning network, the feasibility of the segment is checked using the second network. The second network, namely the collision checking network, gets the next waypoint and estimates how likely it is that the segment between the two configurations will be inside \(C_{free}\). By following this approach, the process of decision-making (followed by collision-checking) in the configuration space can be done in a more efficient manner.
### Training and dataset generation
The main purpose of the path planning network is to clone the behavior of an "expert" planner. In cases like this, where it is feasible for an expert to demonstrate the desired behavior, imitation learning is a useful approach. Moreover, while RRT-based path planners are not themselves sequential decision-makers, they can be modeled as:
\[q_{t+1}=f(q_{t},q_{target}) \tag{3}\]
Figure 2: Path Planning and collision checking network (PPCNet).
The purpose of the neural planner is to learn the function \(f\). Behavior cloning, which focuses on using supervised learning to learn the expert's behavior, is the most basic type of imitation learning. The process of behavioral cloning is rather straightforward. The expert's demonstrations are broken down into state-action pairs, which are then used as independent and identically distributed (i.i.d.) training data for supervised learning. An important concern when using supervised learning for behavioral cloning is the risk of encountering out-of-distribution data. During the inference process, the neural planner may make decisions that were not previously observed in the dataset. Consequently, even minor inaccuracies in the replicated behavior can accumulate rapidly, resulting in a substantial failure rate in identifying viable paths. To mitigate this issue, we propose the data aggregation algorithm (DAGGER) in our study [37].
The training process of the neural planner and the collision checker is explained in Algorithm 1. At the beginning of the process, some initial demonstrations from the expert planner is required. To do so, \(RandomGoalConf(T)\) function samples T goal configurations in the collision-free space of the bin. The generated goal configurations are given to the expert planner to generate the paths using \(GeneratePath\) function which requires initial and goal configurations. The expert planner outputs two datasets: the neural planner dataset (\(D\)) and the collision checker network dataset (\(C\)). \(D\) only consists of the final path that the expert planner generates; however, \(C\) includes every segment which was checked by the expert planner using the geometrical collision checker. In the case of an RRT-based algorithm, the final path will be stored in \(D\) and the whole RRT tree and all the instances of its corresponding steer function are stored in \(C\).
To train the neural planner, the demonstration dataset generated by the expert planner should be of high quality. To generate a high-quality dataset, post-processing is applied to the paths generated by the expert planner. As shown in Figure 4, the expert planner generates a collision-free path for a random query. Followed by that, the Binary State Contraction (BSC) is utilized to remove redundant and unnecessary waypoints, resulting in a shorter overall path. The sparsity of waypoints is desirable as it simplifies the subsequent processing, such as collision checking, communication with the robot controller, and trajectory planning. While the output of BSC is similar to the lazy state contraction presented in [28], BSC is computationally more efficient and reduces the run time. After BSC, the resampling function is used to ensure the smoothness and efficiency of the planned path. This involves discretizing the waypoints at regular intervals, which are guaranteed to be collision-free as they already exist on the collision-free path. Importantly, these additional waypoints do not increase the overall length of the path but do serve to reduce the mean and variance of the target step magnitudes, \(\|q_{t+1}-q_{t}\|\). While larger steps require fewer neural network forward passes and collision checks during the inference phase, smaller steps offer several advantages. Firstly, they minimize the deviation from the planned path by reducing the magnitude of individual errors \(\|\hat{q}_{t+1}-q_{t+1}\|\). Secondly, they are more likely to be collision-free as they occupy less physical space. Finally, they offer a more normalized training target for the neural network, where the problem is reduced to predicting a direction in the limit. However, BSC is a crucial step before resampling, as it relies on dividing large steps into regular steps, and a dense path of short steps cannot be divided. Figure 3 illustrates this procedure in a 2D environment, highlighting how this method enhances the smoothness and quality of the path. The \(PostProcess\) procedure in Algorithm 1 presents the use of BSC and resampling on the generated paths.
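A minimal sketch of this post-processing step is shown below; the contraction is implemented here as a simple greedy shortcutting pass (an assumption about the precise BSC variant used in the paper), and `segment_is_free(q_a, q_b)` stands in for the geometrical collision check.

```
import numpy as np

def contract_path(path, segment_is_free):
    """Greedy shortcutting: a stand-in for Binary State Contraction
    (the exact BSC variant used in the paper may differ)."""
    path = [np.asarray(q, float) for q in path]
    out, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not segment_is_free(path[i], path[j]):
            j -= 1          # fall back to the closest reachable waypoint
        out.append(path[j])
        i = j
    return out

def resample_path(path, step=0.1):
    """Insert equally spaced waypoints on each (already collision-free) segment."""
    out = [np.asarray(path[0], float)]
    for q_a, q_b in zip(path[:-1], path[1:]):
        q_a, q_b = np.asarray(q_a, float), np.asarray(q_b, float)
        n = max(1, int(np.ceil(np.linalg.norm(q_b - q_a) / step)))
        for k in range(1, n + 1):
            out.append(q_a + (q_b - q_a) * k / n)
    return out
```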
#### 4.1.1 Path planner training formulation
Once the dataset has been generated, it is time to train the network. The goal of the network is to predict the next configuration, \(\hat{q}_{t+1}\), which should be as close as possible to the actual next configuration, \(q_{t+1}\), in the training dataset. The network uses the dataset to learn the underlying patterns and relationships between the input and output data.
Figure 3: Post processing procedure: a) The generated path by the planner, b) Path after binary state contraction, c) Path after resampling.
To achieve the goal of accurate prediction, the network must minimize the difference, or loss, between the predicted and actual configurations. This is done by comparing the predicted output \(\hat{q}_{t+1}\) to the actual output \(q_{t+1}\) and adjusting the network's parameters to minimize the difference. This process is known as backpropagation and it is typically done using optimization algorithms such as stochastic gradient descent.
\[L_{planner}=\frac{1}{N_{t}}\sum_{j=1}^{N_{d}}\sum_{i=1}^{T}\|\hat{q}_{j,i}-q_{j,i }\|^{2} \tag{4}\]
In this equation, \(N_{t}\) is the number of trajectories and \(N_{d}\) is the number of all individual waypoints in the training dataset.
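A minimal PyTorch sketch of one supervised training step under this loss is given below; the multilayer-perceptron architecture, layer sizes and learning rate are illustrative assumptions rather than the exact settings used in the paper.

```
import torch
import torch.nn as nn

n_joints = 6  # illustrative; the paper uses 6-DOF UR5 and 7-DOF Kinova arms

# A plain MLP stands in for the planning network; dropout is kept active at
# inference time in the paper to obtain stochastic re-planning.
planner = nn.Sequential(
    nn.Linear(2 * n_joints, 256), nn.ReLU(), nn.Dropout(0.1),
    nn.Linear(256, 256), nn.ReLU(), nn.Dropout(0.1),
    nn.Linear(256, n_joints),
)
optimizer = torch.optim.Adam(planner.parameters(), lr=1e-3)
mse = nn.MSELoss()

def training_step(q_t, q_target, q_next):
    """One gradient step on the squared-error loss of Eq. (4).

    q_t, q_target, q_next: (batch, n_joints) tensors drawn from the expert dataset D.
    """
    optimizer.zero_grad()
    pred = planner(torch.cat([q_t, q_target], dim=1))
    loss = mse(pred, q_next)
    loss.backward()
    optimizer.step()
    return loss.item()
```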
#### 4.1.2 Collision checker training formulation
To ensure the safety of the robot, we experimented with two different approaches for training the collision checker and evaluated their performance. As mentioned earlier, the collision checker network takes the current and next configurations as input and estimates the probability of the segment being in-collision. Therefore, the training set for this network should include segments and corresponding labels indicating whether they are in collision or not. The optimization function utilized to train the collision checker for both of the approaches is called binary cross-entropy loss:
\[L(\hat{p},p_{true})=-\left[p_{true}\log\hat{p}+(1-p_{true})\log\left(1-\hat{p}\right)\right] \tag{5}\]
where \(\hat{p}\) is the probability estimation made by the neural network and \(p_{true}\) is the true probability. The reason for using binary cross entropy is due to the fact that, in essence, the collision checker is a binary classification network. Therefore, the standard approach in the literature is to use cross entropy due to it giving a better probability distribution over the training data.
**Binary labels**: The most basic approach to tackle the optimization of the network is to use the true labels given by the analytical collision checker. In that case, \(p_{true}\) in Eq. 5 becomes either 0 (collision-free segment) or 1 (in-collision segment). In Figure 4 this approach has been named as \(L_{binary}\). The binary cross-entropy loss function measures the
Figure 4: End-end-training process of PPCNet: **Top row:** The imitation learning and data aggregation processes for training the planner network. **Bottom row:** Population-based probability estimation and collision checker network training processes.
dissimilarity between the network's predicted probability distribution and the true probability distribution in the binary classification problem.
**Population-based labels**: The sparsity of the binary labels hinders the performance of the collision checker since it skews the output of the network towards the more common label. The motivation for this method is to make the binary labels continuous. In addition to the mentioned reasoning, making the labels continuous can help the network be better optimized in corner cases. For example, a collision-free segment inside the bin is mostly surrounded by in-collision segments. Hence, the collision-free segment will not be of much help in optimizing the network compared to the massive existence of the in-collision segments. Making the labels continuous will amplify the effect of the collision-free segment. To make the binary labels continuous, we give a regional estimation of the probability of a specific segment being collision-free by retrieving the segments similar to it whose centers lie within the hypersphere (with a specific radius) surrounding the center of the segment. Finally, the population-based probability (denoted as \(\hat{p}_{ref}\) to indicate it being a reference probability) estimates the probability of a segment being collision-free by dividing the number of collision-free segments by the total number of segments:
\[\hat{p}_{ref}=\frac{\sum_{k=1}^{K}\mathds{1}[p_{true}^{(k)}==0]}{K} \tag{6}\]
where \(\mathds{1}\) is the identity-indication function. The advantage of the proposed approach is that the probability labels generated by this method can provide a well-behaving continuous function where near the obstacles, the probability gets near zero and the further it gets from them, it approaches one. In practice, we utilize KD-Tree method (built with the centers of the segments) to find similar segments. Moreover, the more data this approach gets, the better the estimation will be. Therefore, our proposed algorithm stores every segment that was checked by the analytical collision checker during the path-generation process with the expert planner. In the case of an RRT-based expert planner, this corresponds to storing the whole tree and all of the in-collision segments generated by it. While this approach may have higher computational cost compared to the binary labels approach, our results indicate that it can achieve better prediction performance. For this approach, \(\hat{p}_{ref}\) is used instead of \(p_{true}\) in Eq. 5 and the corresponding loss function is called \(L_{population}\).
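A minimal sketch of this population-based labeling step is shown below, using a KD-tree over the segment centers; the hypersphere radius is a tunable assumption, and `binary_free` denotes \(1-p_{true}\), i.e. one for collision-free segments.

```
import numpy as np
from scipy.spatial import cKDTree

def population_labels(centers, binary_free, radius):
    """Continuous collision-free probabilities following Eq. (6).

    centers:     (M, n) array of segment mid-point configurations
    binary_free: (M,) array, 1 if the segment is collision-free, else 0
    radius:      hypersphere radius used to collect similar segments
    """
    centers = np.asarray(centers, float)
    binary_free = np.asarray(binary_free, float)
    tree = cKDTree(centers)
    labels = np.empty(len(centers))
    for i, c in enumerate(centers):
        idx = tree.query_ball_point(c, radius)
        labels[i] = binary_free[idx].mean()  # fraction of free segments nearby
    return labels
```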
#### 4.1.3 End-to-end training process
In this section, an overview of the end-to-end process of training the integrated model which consists of both the planner network and the collision checker network will be discussed. The training procedure is outlined in Algorithm 4.1.
```
Require: expert planner \(\pi^{*}\), number of initial demonstrations \(T\), number of rollouts in each iteration \(T^{\prime}\), number of state samples \(S\), required policy success rate \(\zeta\)
Initialize planner and collision datasets: \(D,C\leftarrow\emptyset\)
Generate initial demonstrations:
    \(GoalConf\gets RandomGoalConf(T)\)
    \(D,C\gets GeneratePath(\pi^{*},HomeConf,GoalConf)\)
Apply post-processing: \(PostProcess(D)\)
\(SuccessRate\gets 0\)
for \(i=1,\ldots\) do
    \(\pi_{i}\gets TrainPolicy(D)\)                      \(\triangleright\) Start of DAGGER
    \(GoalConf\gets RandomGoalConf(T^{\prime})\)
    \(D_{\pi_{i}}\gets GeneratePath(\pi_{i},HomeConf,GoalConf)\)
    \(S_{i}\leftarrow SampleStates(D_{\pi_{i}},S)\)
    \(D_{i},C_{i}\gets GeneratePath(\pi^{*},S_{i},GoalConf)\)
    \(D_{i}\gets PostProcess(D_{i})\)
    \(D\gets D\cup D_{i}\)                               \(\triangleright\) End of DAGGER
    \(C\gets C\cup C_{i}\)
    \(\eta_{i}\gets TrainCollisionChecker(C)\)
    \(SuccessRate\gets TestPolicy(\pi_{i},\eta_{i})\)
    if \(SuccessRate>\zeta\) then
        return \(\pi_{i},\eta_{i}\)
```
**Algorithm 1** Training process of PPCNet
Initially, a set of trajectories are sampled using the expert planner as the initial demonstration, and the planner and collision dataset are filled. In each iteration, the path planner's policy is trained using the demonstrations from an expert planner. Consequently, DAGGER process (as explained in Section 4.1) is executed which results in the expansion of
both of the datasets. Finally, the collision checker network is trained on the collision dataset and the model's performance is evaluated in the \(TestPolicy\) function. The function uses some random goal configurations and attempts to path plan between them (following the process in Algorithm 2). If the success rate of the model is satisfactory, the path planner and the collision checker network weights are returned.
```
Require: collision checking network \(\eta\), safety threshold \(\bar{\eta}\)
\(segments\gets Discretize(q_{current},q_{next})\)
\(q_{final}\gets q_{current}\)
for \((q_{i},q_{e})\) in \(segments\) do
    if \(\eta(q_{i},q_{e})>\bar{\eta}\) then
        \(q_{final}\gets q_{e}\)
    else if \(q_{final}==q_{current}\) then
        return failure
    else
        return success, \(q_{final}\)
```
**Algorithm 2** Path generation process using PPCNet
#### 4.1.4 Path planning process
In this section, the end-to-end path planning process of the PPCNet is explained. As illustrated in Figure 2, the two networks, the planning network and collision checking network, do the decision-making sequentially. In the context of the bin-picking application, given the start, pick, and place configurations, the proposed model picks the object from the bin and places it in the place configuration. As outlined in Algorithm 2, in each iteration of the decision-making process the path planning network outputs the next configuration that the robotic arm must transition into. The current configuration and the next configuration are then checked with the collision-checking network. To do so, the \(Steer\) function is used. As explained in Algorithm 3, the path between the two waypoints is discretized into equal segments with lengths close to the resolution step size of the expert planner. This is done in order to have the segments be close to the collision checker training dataset's distribution. Furthermore, in each segment, the collision-free probability of the segment is checked. In order to decide whether a segment is collision-free or not, a safety threshold is needed. If the probability is greater than the threshold, the algorithm continues to the next segment for collision checking. If the probability is less than the threshold,
then the segment is deemed to be in-collision. In that case, if the in-collision segment is the first segment of the path, then the planner has failed to output a good waypoint. Otherwise, if the in-collision segment is not the first segment, the planning network was successful and the tail of the last collision-free segment will be the output of the _steer_ function. Moving on, if the _steer_ function outputs success, then the next waypoint toward the goal configuration is added to the path. In the case that the steer function outputs failure, the planner network attempts to generate a new waypoint. For that purpose, stochasticity is needed inside the network. Therefore, dropout layers are used during inference to have the network be able to recover from failure cases [28]. Furthermore, in each iteration, if it is possible to directly connect the current configuration to the target configuration, then the target configuration is added to the path and it will be the final path generated by the PPCNet. At this point, the feasibility of the path is checked with the geometrical collision checker using \(IsFeasible\) function. If the path was collision-free, it will be returned; otherwise, the algorithm will attempt to patch the in-collision segments using the expert planner. To this end, the _InCollision_ function outputs the segments in the path that are in-collision. It is important to note that if some consecutive segments were in-collision, all of the segments will be counted as one bigger in-collision segment and the head and tail of the bigger segment will be considered. This is done in order to have lesser expert planner calls and be more computationally efficient. Furthermore, for each segment, the expert planner attempts to generate an alternative path between the two configurations. If for all of the segments, the patching was done with success, then the proposed path planner would be successful and the path would be returned.
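A minimal sketch of the neural steer step described above is given below; `collision_net(q_a, q_b)` is assumed to return the estimated collision-free probability of a segment, and the safety threshold and discretization resolution are illustrative values.

```
import numpy as np

def steer(q_current, q_next, collision_net, threshold=0.9, resolution=0.05):
    """Neural steer step: advance from q_current towards q_next as far as the
    learned collision checker allows.

    `collision_net(q_a, q_b)` is assumed to return the estimated probability
    that the segment (q_a, q_b) is collision-free.
    """
    q_current, q_next = np.asarray(q_current, float), np.asarray(q_next, float)
    n = max(1, int(np.ceil(np.linalg.norm(q_next - q_current) / resolution)))
    waypoints = [q_current + (q_next - q_current) * k / n for k in range(n + 1)]
    q_final = q_current
    for q_a, q_b in zip(waypoints[:-1], waypoints[1:]):
        if collision_net(q_a, q_b) > threshold:
            q_final = q_b
        else:
            break
    if np.allclose(q_final, q_current):
        return False, q_current   # the planner failed to produce a usable waypoint
    return True, q_final
```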
## 5 Experimental results
In this section, we will discuss the performance of our proposed algorithm, which employs PyTorch [38] to implement the planner network and collision checker network. The simulation environment we used is Pybullet planning [39; 40]. To evaluate the effectiveness of our approach compared to the state-of-the-art path planning techniques, we compared it against Bi-RRT [17], informed RRT* [20], BIT* [19], and MPNet [29] on three different environments (Figure 5). The first environment consists of a universal robot UR5 and a bin, the second one involves a UR5 robot and a wall as an obstacle, and the third one features a Kinova Gen3 robotic arm, which we implemented on both simulation and practical settings. The system used for training and testing has a 3.700 GHz AMD Ryzen 9 processor with 96 GB RAM and NVIDIA TITAN RTX GPU. Moreover, Table 1 shows the hyperparameter selection for each of the planners.
Table 2 presents a comparative analysis of the performance of PPCNet against other planning methods. Our experiments demonstrate that PPCNet exhibits remarkable performance in terms of planning time in all three environments. Figure 6 provides a visual illustration of the planning time comparison. The hyperparameters for the competing RRT-based algorithms were tuned such that they produce paths of comparable length to PPCNet, while still being able to
Figure 5: Experimental environments: a) UR5 scene, b) UR5 scene with a wall as an obstacle, c) Real-world implementation on Kinova Gen3.
find a path for each planning task in less than 10 seconds and 1000 iterations. However, the results show that BIT* and Informed RRT* methods take longer to plan and have lower success rates than PPCNet.
because the traditional algorithms need to compute the path in its entirety every time the target changes. Further, most existing algorithms lack the ability to learn from past experiences. The proposed PPCNet utilizes deep neural networks to find a real-time solution for path-planning and collision-checking problems. Specifically, we propose an end-to-end training algorithm that leverages the data aggregation algorithm to generate i.i.d. data for supervised learning. Using an expert planner in the training process, data aggregation can assist the agent in encountering unseen states. Another novel component of the proposed PPCNet algorithm includes two variations of the collision-checking network for approximating the exact geometrical collision function. We evaluate our proposed framework, PPCNet, by conducting simulation on two robotic arms namely UR5 and Kinova Gen3 in a bin-picking application. We have also examined the algorithm in a real-world scenario using Kinova Gen3 robot. Our results indicate that our approach significantly reduced planning time while producing paths with path lengths and success rates comparable to those of the state-of-the-art path planning methods.
## Acknowledgment
We would like to acknowledge the financial support of Apera AI and Mathematics of Information Technology and Complex Systems (MITACS) under IT16412 Mitacs Accelerate.
|
2309.04383 | Quantum Ising model on two dimensional anti-de Sitter space | This paper investigates the transverse Ising model on a discretization of
two-dimensional anti-de Sitter space. We use classical and quantum algorithms
to simulate real-time evolution and measure out-of-time-ordered correlators
(OTOC). The latter can probe thermalization and scrambling of quantum
information under time evolution. We compared tensor network-based methods both
with simulation on gate-based superconducting quantum devices and analog
quantum simulation using Rydberg arrays. While studying this system's
thermalization properties, we observed different regimes depending on the
radius of curvature of the space. In particular, we find a region of parameter
space where the thermalization time depends only logarithmically on the number
of degrees of freedom. | Muhammad Asaduzzaman, Simon Catterall, Yannick Meurice, Goksu Can Toga | 2023-09-08T15:25:50Z | http://arxiv.org/abs/2309.04383v2 | # Quantum Ising model on two dimensional anti-de Sitter space
###### Abstract
This paper investigates the transverse Ising model on a discretization of two-dimensional anti-de Sitter space. We use classical and quantum algorithms to simulate real-time evolution and measure out-of-time-ordered correlators (OTOC). The latter can probe thermalization and scrambling of quantum information under time evolution. We compared tensor network-based methods both with simulation on gate-based superconducting quantum devices and analog quantum simulation using Rydberg arrays. While studying this system's thermalization properties, we observed different regimes depending on the radius of curvature of the space. In particular, we find a region of parameter space where the thermalization time depends only logarithmically on the number of degrees of freedom.
## I Introduction
One of the most fruitful ideas in theoretical physics developed over the last twenty-five years has been the concept of holographic duality - that the physical content of a gravitational theory in anti-de Sitter space can be captured by a non-gravitational conformal field theory (CFT) living on the boundary of that space. Since the duality maps strong coupling to weak coupling, it has frequently been used to probe the strong coupling dynamics of a CFT living at the boundary by solving a classical gravity problem in the bulk [1; 2]. To gain insight into quantum gravity, one would like to invert the direction of this logic and use the non-perturbative quantum dynamics of the CFT to infer aspects of bulk quantum gravity.
As a first step in this direction, one performs a Wick rotation on anti-de Sitter space to obtain hyperbolic space, followed by a discretization of the latter to obtain a lattice theory.
There have been recent efforts to perform classical simulations of such theories using Monte Carlo methods [3; 4; 5; 6], tensor network methods [7; 8; 9; 10] and other numerical techniques [11]. However such studies cannot probe the real-time dynamics of such systems, and in this manuscript, we return to a simple toy model that can be _quantum_ simulated directly in anti-de Sitter space - the transverse Ising model formulated in two-dimensional anti-de Sitter space (AdS\({}_{2}\)).
This paper will study this model using exact diagonalization, tensor network methods, noiseless quantum simulators, and simulation on superconducting quantum devices. Since the boundary theory is conformal quantum mechanics, a prime focus of our work will be time-dependent correlation functions and, in particular, so-called "out-of-time-ordered" correlators (OTOCs). These provide information on how fast quantum information can propagate through the lattice and how long thermalization takes in such an interacting quantum system.
Contrary to naive expectation it is possible for a quantum mechanical system to undergo thermalization _locally_[12; 13]. Indeed such thermalization has also been observed experimentally [14].
The key idea is that one needs to focus on a subset \(A\) of the composite system comprising \(A\) and its environment \(B\). If \(A\) is entangled with \(B\) then one naturally obtains a density matrix for \(A\) by tracing out the degrees of freedom in the Hilbert space of \(B\). If \(\ket{\psi}\bra{\psi}\) denotes a pure state of the combined system, the density matrix of \(A\) is given by
\[\rho_{A}=\mathrm{Tr}_{\mathcal{H}_{B}}\ket{\psi}\bra{\psi}. \tag{1}\]
This density matrix corresponds to a mixed state if there is entanglement between \(A\) and \(B\), and this is manifested by a non-zero entanglement entropy given by the von Neumann formula:
\[S=-\mathrm{Tr}_{\mathcal{H}_{A}}\,\rho_{A}\ln\rho_{A}. \tag{2}\]
In this paper, we are particularly interested in mixed states corresponding to thermal systems. One simple way to construct a thermal density matrix for \(A\) is to start from a composite system comprising two identical copies of \(A\)
\[\ket{\Psi}=\frac{1}{Z^{\frac{1}{2}}}\sum_{n}e^{-\frac{\beta}{2}E_{n}}\ket{n_ {A}}\ket{n_{B}}. \tag{3}\]
In this case, tracing out \(B\) yields
\[\rho_{A}=\frac{1}{Z}\sum_{n}e^{-\beta E_{n}}\ket{n}\bra{n}. \tag{4}\]
In the case where the quantum mechanical system corresponds to a conformal field theory, there is a holographic interpretation of the density matrix as describing a black hole in a dual geometry which contains the CFT on its boundary. Indeed, the entanglement entropy in this case can then be shown to correspond to the Bekenstein-Hawking entropy associated with the area of the event horizon of the black hole [15; 16; 17; 18].
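As a small numerical illustration of Eqs. (1)-(4), the following sketch builds the thermofield-double state for a given Hamiltonian matrix (obtained, e.g., by exact diagonalization of a small spin chain), traces out the second copy, and evaluates the von Neumann entropy; it is a minimal example rather than the production code used later in this work.

```
import numpy as np

def thermal_entropy_from_tfd(H, beta):
    """Build the thermofield-double state of Eq. (3) for a Hamiltonian matrix H,
    trace out copy B, and return the von Neumann entropy of Eq. (2)."""
    E, U = np.linalg.eigh(H)
    w = np.exp(-0.5 * beta * (E - E.min()))   # shift for numerical stability
    w /= np.linalg.norm(w)                    # normalised TFD amplitudes
    # |Psi> = sum_n w_n |n>_A |n>_B  =>  rho_A = sum_n w_n^2 |n><n| (energy basis)
    rho_A = (U * w**2) @ U.conj().T
    p = np.linalg.eigvalsh(rho_A)
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

# e.g. S = thermal_entropy_from_tfd(H_small, beta=1.0) for a small-chain Hamiltonian
```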
The next most obvious question that arises is how long it takes to realize this density matrix under Hamiltonian evolution starting from some pure non-generic state \(|\psi\rangle\). In general, this process resembles classical chaotic dynamics with initial states that differ only by small perturbations yielding radically different states at large times. This thermalization process is called scrambling and has been the focus of many previous studies [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. The scrambling time \(\tau_{S}\) is determined by the speed at which information can propagate across the system under time evolution and is related to the dimensionality of the system and the locality of the Hamiltonian. There are theoretical bounds on the scrambling time \(\tau_{S}\) which is bounded from below by
\[\tau_{S}\sim\beta\ln V,\]
where \(V\) counts the number of microscopic degrees of freedom. Attaining this bound depends on an exponentially fast spread of information through the system [30; 31; 32; 33; 34].
It has been conjectured that CFTs with black hole duals provide one example of a system capable of such "fast scrambling" [35; 36]. Systems that show fast scrambling typically involve non-local Hamiltonians and all-to-all interactions such as the SYK model [37; 38; 39; 40; 41]. In this paper we will show that in certain regions of the parameter space the transverse quantum Ising model with nearest neighbor interactions living on a discretization of two dimensional anti-de Sitter space appears to exhibit similar behavior. However one should be careful with this interpretation - the spatial boundary of our system is just points and our quantum spins populate the bulk space as well as the boundary. So we are primarily looking at information spread in the bulk. To understand the thermalization properties better one would need to extend the model to three dimensional anti-de Sitter space which possesses a non-trivial spatial boundary.
We have performed both classical and quantum simulations of this system. In Sec. II, we find the ground state of this model using the density matrix renormalization (DMRG) algorithm [42; 43; 44] and time-evolve it with the time evolving block decimation (TEBD) algorithm using the ITensor library [45; 46; 47; 48]. In Sec. III, real time evolution of the magnetization is discussed and implemented for a thirteen qubit system and compared to the tensor method results. We discuss the information propagation in this model in Sec. IV. To study the scrambling properties of the model we have used matrix product operator (MPO) methods to calculate the OTOCs [49; 50] in Sec. IV.1. In the next subsection IV.2, the computation of OTOCS using a protocol developed by Vermersch et al. [51] is discussed and implemented for a model with seven qubits. Successful implementation of the model on quantum devices required applying some additional error mitigation techniques. We discuss the influence of the mitigation techniques on the results and other numerical aspects of the digital quantum simulation in Appendix A. We also sketch out how to implement this Hamiltonian via analog quantum devices like Rydberg arrays and perform simulations of the system on the Bloqade simulator developed by QuEra in Appendix B. In Appendix C, we include some details of the protocol used for the computation of the OTOC using a quantum computer.
## II Transverse Ising model on a hyperbolic space
In this section, we describe the Transverse Field Ising (TFI) model formulated on a one dimensional hyperbolic space. The model is an analogue of the classical Ising model on a two dimensional tessellation of hyperbolic space [52; 53]. The Hamiltonian that describes this Ising chain can be represented as a sum of local terms [54; 55; 56]
\[\hat{H} = \frac{-J}{4}\sum_{<ij>}\frac{\cosh(l_{i})+\cosh(l_{j})}{2}\sigma_{ i}^{z}\sigma_{j}^{z}+\frac{h}{2}\sum_{i}\cosh(l_{i})\sigma_{i}^{x} \tag{5}\] \[+ \frac{m}{2}\sum_{i}\cosh(l_{i})\sigma_{i}^{z}.\]
Here, \(\sigma_{i}^{p}\) is a local Pauli operator at site \(i\) with \(p=\{x,y,z\}\). The first term corresponds to a nearest neighbour interaction term coupling site \(i\) and site \(j=i+1\). The deformation factors \(\cosh l_{i}\) arise from the metric of Euclidean AdS\({}_{2}\) given in Eq. (6) and give rise to a site-dependent coupling for the Ising chain
\[ds^{2}=\ell^{2}(\cosh^{2}(\rho)dt^{2}+d\rho^{2}). \tag{6}\]
For an \(N\) site lattice the site-dependent deformation factor \(l_{i}\) is given by
\[l_{i}=-l_{\rm max}+i\frac{2l_{\rm max}}{N-1}, \tag{7}\]
where \(l_{\rm max}\) denotes a length scale that determines the degree of deformation. In the limit of \(l_{\rm max}\to 0\), the usual transverse Ising model is recovered.
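For small chains, the Hamiltonian of Eq. (5) with the deformation factors of Eq. (7) can be built explicitly as a dense matrix and diagonalized exactly; a minimal sketch is given below (a cross-check of the tensor-network results rather than the ITensor code actually used in this work).

```
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, i, N):
    """Embed a single-site Pauli operator at site i of an N-site chain."""
    mats = [I2] * N
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def hyperbolic_tfi_hamiltonian(N, J, h, m, l_max):
    l = np.array([-l_max + i * 2 * l_max / (N - 1) for i in range(N)])  # Eq. (7)
    c = np.cosh(l)
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N - 1):   # nearest-neighbour ZZ term of Eq. (5)
        H += -J / 4 * (c[i] + c[i + 1]) / 2 * site_op(sz, i, N) @ site_op(sz, i + 1, N)
    for i in range(N):       # transverse and longitudinal field terms
        H += h / 2 * c[i] * site_op(sx, i, N) + m / 2 * c[i] * site_op(sz, i, N)
    return H

# e.g. H = hyperbolic_tfi_hamiltonian(N=7, J=2.0, h=2.0, m=0.25, l_max=3.0)
```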
We start the discussion of our numerical results with the von Neumann entropy Eq. (2) calculated from the reduced density matrix obtained by tracing out one half of the spins. Fig. 1 shows a plot of the half chain entropy and Fig. 2 shows the magnetic susceptibility at \(l_{\rm max}=3.0\), \(h=3.0\), and \(m=0.25\) using \(N=37\) spins, as a function of \(J\).
For our DMRG calculation we used 50 sweeps of the chain with a cutoff of order \(\epsilon=10^{-12}\) which resulted in a bond dimension of order \(\chi=10\) on average. We see that there are peaks in the entropy and the susceptibility signaling a possible phase transition in the model. In our later work on OTOCs we will always tune our couplings to be close to their critical values.
## III Time evolution of the magnetization
In this section, we show results on the time evolution of the magnetization \(\langle S^{z}\rangle=\frac{1}{2}\langle\sigma^{z}\rangle\) computed using tensor methods compared with simulation on quantum devices.
We start by time evolving the system using the Time Evolving Blocked Decimation (TEBD) algorithm [57]. Historically, TEBD was adapted from the Suzuki-Trotter approximation for the Matrix Product State (MPS) [58]. In Fig. 3, the Trotter evolution of the magnetization \(\langle S^{z}_{i}(t)\rangle\) is plotted at each lattice site \(i\) for a lattice chain with \(N=37\) sites, and \(l_{\rm max}=3.0\), \(h=2.0\), \(J=2.0\), and \(m=0.25\) starting with all spins in the down state. Clearly, the dynamics of the magnetization shows warping effects in the bulk due to the curved background. One can think of this warping effect as due to time dilation effects in the bulk.
Next, we attempt to investigate the model using a quantum platform - namely the IBM Guadalupe machine. Currently, quantum devices experience both large coherent and incoherent noise in any given computation. Thus, we have attempted to investigate a system with modest system size of \(N=13\) spins where there is limited device noise and the warping effects can be observed. Using the quantum circuit representation of operations given in Fig. 4, we have computed the time-dependent expectation value of the magnetization \(\langle S^{z}_{i}(t)\rangle\) at site \(i\) using a first order Trotter approximation. Different orderings of the operators can be used for this approximation, see the discussion in the Appendix A.2. In this section, all the results presented use what we denote as 'odd-even' ordering in the Appendix.
Local magnetization results are shown for three different sites in Fig. 5 and compared against classical simulation results obtained from TEBD. The parameters used were \(J=2.0\), \(h=1.05\) and \(l_{\rm max}=3.0\). The gate cost of such a circuit is similar to that of the Ising spin chain on a flat lattice [59]. The difference in our Trotter evolution of the deformed Hamiltonian lies in the site-dependent phase factors of the rotation and entangling gates. This brings an inherent complication to the problem of selecting the optimal Trotter step \(\delta t\). Previous studies have shown that theoretical bounds of the first-order Trotter approximation can be relaxed for observing time evolution with current NISQ-era machines [59; 60; 61]. The phases (\(\theta_{i}\), \(\phi_{i}\)) of the rotation and entangling gates are of the form \(C_{i}\times\delta t\), and the optimal choice of Trotter step is different for local operators \(\langle S_{i}^{z}\rangle\) at different sites. Thus, one constraint for choosing the optimal Trotter step \((\delta t)_{\rm optimal}\) comes from the local couplings \(C_{i}\).
Figure 1: von Neumann Entropy versus \(J\) for \(N=37\), \(l_{\rm max}=3.0,h=3.0,m=0.25\).
Figure 4: Trotter evolution circuit for 5-Qubit deformed hyperbolic spin chain. Here, \(\theta_{i}\) parameters are symmetric to the center of the lattice chain, for example, in the above circuit \(\theta_{0}=\theta_{4}\).
In NISQ-era devices, the other constraint comes from the maximum evolution time \(t_{\rm max}=n\times\delta t\), or equivalently the maximum circuit depth, that can be simulated before the noise swamps the signal. The accumulation of gate errors effectively limits our ability to go beyond a certain number of Trotter steps. We found that a compromise value of \((\delta t)_{\rm optimal}\sim 0.2\) and \(t_{\rm max}\sim 1.2\) is a good choice for time evolution of the system. For the computation of the local magnetization, the number of shots used is \(N_{\rm shots}=1000\). See Appendix A.1 for a discussion of the statistical noise associated with different \(N_{\rm shots}\).
In Fig. 5, classical simulation results of the local magnetization with the TEBD algorithm are compared with the mitigated results obtained from the Guadalupe machine. The error bars in the figures represent statistical errors associated with six different measurements. The measurements were performed on different days to obtain a reliable estimate of the systematic error of the current devices. Various error mitigation techniques were applied to obtain the results. Dynamical Decoupling (DD) [62; 63] was applied to reduce the coherent noise and the Mthree (M3) method [64] was used to reduce readout errors. We also created noise-scaled circuits with three-fold and five-fold amplification of the noise in comparison to the original circuit and applied the Zero Noise Extrapolation (ZNE) [65; 66] mitigation technique to reduce the incoherent noise. We used the built-in features of the IBM runtime system to apply DD and M3, while noise-scaled circuits were created by inserting an appropriate number of identity operators for each CNOT gate. This choice is justified for current IBM devices, where two-qubit gates have significantly larger errors than single-qubit rotation gates1. See Appendix A.3 for a discussion of how the different error mitigation techniques improved our results.
Footnote 1: For the Guadalupe machine, the ratio of the median-errors for the two-qubit and single-qubit gates is \(\frac{\epsilon_{\rm CNOT}}{\epsilon_{\rm qubit-gate}}\sim 25\).
After post-processing the data with different error mitigation techniques, we found that the magnetization results obtained from the Guadalupe machine (Fig. 7) show the warping effects. For comparison purposes, the TEBD results are plotted in Fig. 6. The CNOT gate cost for
Figure 7: Trotter evolution of local magnetization \(\langle S_{i}^{z}(t)\rangle\) computed using the Guadalupe Quantum Processing Unit (QPU). Parameters: \(N=13\), \(J=2.0\), \(h=1.05\), \(l_{\rm max}=3.0\). Magnetization data on the edges of the lattice chain are omitted due to the large Trotter error associated with them. Note that the deformation strength is strongest at the edges.
computing the time evolution of an \(N\)-qubit quantum spin chain with a first-order Trotter approximation is \(2(N-1)\) per Trotter step, and the circuit depth at Trotter step \(n=6\) is \(d=48\). The results from the QPU track the peak of the local magnetization quite well. The QPU results also demonstrate that the initial state with all-down spins is disrupted by the boundary at a slower rate as we move from the edge to the center of the lattice chain. While the quantum simulation results align qualitatively with tensor methods, it is clear that larger numbers of qubits would be needed to identify the warping effects in greater detail. We have also explored a possible implementation of the real-time magnetization evolution on QuEra's analog quantum computers based on Rydberg arrays. See Appendix B for a discussion of the analog computation of the local magnetization.
## IV Out-of-time-ordered correlators
We now turn to the question of how information spreads in the model. To answer this, we computed an out-of-time-ordered correlator (OTOC). This observable is known to capture information spread and scrambling in quantum systems [67; 68; 69] and can be thought of as a quantum-mechanical counterpart of the Loschmidt echo [70]. To construct the OTOC, we use two operators \(W_{i}(t)\) and \(V_{j}\), where \(W(t)=e^{iHt}W(0)e^{-iHt}\). From these we construct the norm-squared commutator
\[C(t)=\langle||[W_{i}(t),V_{j}]||^{2}\rangle=2(1-\text{Re}[F_{ij}(t)]), \tag{8}\]
where \(F_{ij}(t)\) is the required out-of-time-ordered correlator (OTOC)
\[F_{ij}(t)=\langle W_{i}(t)^{\dagger}V_{j}(0)^{\dagger}W_{i}(t)\,V_{j}(0)\rangle. \tag{9}\]
This equality is obtained under the assumption that \(W\) and \(V\) are unitary and that terms that correspond to local observables thermalize to a constant after a short time and hence can be omitted. The connection between \(F_{ij}(t)\) and the information spread can be made clear by considering \(W\) as a simple local perturbation. Under time evolution this Heisenberg operator becomes more and more non-local. The growth of these non-local effects can be captured by calculating the commutator of \(W(t)\) with another local operator \(V\). When the operators commute, \(C\) vanishes and \(F\) is one. So by measuring the double commutator or the OTOC we can track the propagation of \(W(t)\) along the system.
The relationship between the double commutator and operator growth can be made clear by considering a simpler setup. Let us start by building the unitary time-evolution operator out of local two-qubit unitaries. Using this representation we can obtain the Heisenberg time evolution of a local operator \(A(t)=U^{\dagger}AU\), as depicted in Fig. 8, where blue and red boxes represent \(U^{\dagger}\) and \(U\) while the green circle represents the operator \(A\). One can clearly see from the figure that any contraction that does not involve the operator \(A\) reduces to the identity, so we can ignore those and focus on the contractions that involve the operator. This shows the lightcone for operator growth in the Heisenberg picture and demonstrates that OTOCs capture the characteristics of operator spreading in the system.
However, this general form of the OTOC is not the easiest to deal with in our simulations. Instead, we choose the form of the OTOC given in Eq. (10) [51; 71]
\[O_{i}(t)=\frac{\text{Tr}(\rho W(t)_{\frac{N+1}{2}}V_{i}^{\dagger}W(t)_{\frac{N +1}{2}}V_{i})}{\text{Tr}(\rho W(t)^{2}V^{\dagger}V)}. \tag{10}\]
In our calculations, we take \(W(t)=\sigma^{z}(t)\), \(V=\sigma^{z}\) and fix the position of the \(W(t)\) operator at the center of the lattice chain. To see the effect of the interaction of two local operators, we then place the operator \(V\) at different lattice sites \(i\). We have focused on the infinite-temperature limit, which corresponds to taking a density matrix \(\rho\sim\mathbf{I}\) in Eq. (10). Infinite-temperature OTOCs bear the signature of entanglement growth after a quench is applied to an energy eigenstate [72] and are easier to compute. Furthermore, many of the protocols used for finite-temperature OTOCs can be developed from the corresponding protocols used in the infinite-temperature case [51; 73]. Additionally, the exponents computed from the infinite-temperature OTOCs are insensitive to slightly different OTOC definitions that exist in the literature, see the appendix in [73].
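As a cross-check of the definitions, the infinite-temperature OTOC of Eq. (10) can be evaluated directly by exact diagonalization for a small chain. The sketch below reuses the `embed` and `hyperbolic_tfi` helpers (and Pauli matrices) defined in Sec. II, and is only meant to illustrate the trace formula; the parameter values are illustrative.

```python
import numpy as np
from scipy.linalg import expm

N = 7
H = hyperbolic_tfi(N=N, J=6.0, h=3.05, m=0.25, l_max=3.0)
W0 = embed(sz, (N - 1) // 2, N)        # W = sigma^z at the centre of the chain

def otoc(t, i):
    U = expm(-1j * H * t)
    Wt = U.conj().T @ W0 @ U           # Heisenberg-evolved W(t)
    V = embed(sz, i, N)
    num = np.trace(Wt @ V.conj().T @ Wt @ V)   # rho ~ identity (infinite temperature)
    den = np.trace(Wt @ Wt @ V.conj().T @ V)
    return (num / den).real
```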
### Classical Simulations of OTOCs
For the classical computation of the OTOC the time evolution for \(W(t)\) is obtained by expressing \(W\) and \(V\) as matrix product operators (MPO) and applying Heisenberg time evolution to \(W\) via the TEBD algorithm. In Fig. 9, we show how to apply Heisenberg time evolution to a generic operator \(W\) for one Trotter step.
Then the resulting time evolved operator \(W(t)\) can be plugged into the OTOC calculation.
Figure 8: Heisenberg time evolution for a local operator.
Initially we show results for the flat space transverse Ising model and as expected we see a linear light cone.
If we turn on the hyperbolic deformation and set \(l_{\rm max}=3.0\), we see that the system develops a warped lightcone and faster-than-linear propagation of information. In Fig. 10, Fig. 11 and the rest of the OTOC plots, the red dots represent the times at which the OTOC at a given lattice site first deviates from \(1.0\) by some amount \(\epsilon\); here \(\epsilon=0.25\). These points trace out the lightcone shown in the plot. The purple line, shown to guide the eye, corresponds to a curve of the form
\[t=\log\Big{|}x-\frac{N+1}{2}\Big{|}+B,\]
where \(B\) is a constant.
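A minimal sketch of this post-processing step, the threshold criterion for the arrival times and the offset of the logarithmic guide curve, is shown below.

```python
import numpy as np

def lightcone(otoc_data, times, N, eps=0.25):
    """otoc_data[site, step]: OTOC values on the time grid `times`."""
    t_arrival = np.full(N, np.nan)
    for site in range(N):
        hits = np.where(np.abs(otoc_data[site] - 1.0) > eps)[0]
        if hits.size:
            t_arrival[site] = times[hits[0]]   # first deviation from 1 by more than eps
    return t_arrival

def fit_offset(t_arrival, N):
    # least-squares offset B for t = log|x - (N+1)/2| + B
    x = np.arange(1, N + 1)
    d = np.abs(x - (N + 1) / 2)
    mask = np.isfinite(t_arrival) & (d > 0)
    return np.mean(t_arrival[mask] - np.log(d[mask]))
```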
We found that to access the logarithmic regime of the model, the physical couplings \(J\) and \(h\) need to be tuned close to their critical values. The remaining physical coupling \(m\) then controls the thermalization dynamics. In Fig. 13 we plot the time evolution of the half-chain von Neumann entropy, which shows how \(m\) controls the thermalization. We can also look at the site-averaged OTOCs, which are plotted in Fig. 12. These clearly show a power-law dependence on \(t\) as the system thermalizes.
Note that the value of \(m\) does not affect the structure of the light cone and only controls the thermalization time. In fact, the shape of the lightcone is determined by the value of \(l_{\rm max}\). For \(N=37\) we found four distinct behaviors of the lightcone. For \(0.0<l_{\rm max}<1.0\) we find a linear lightcone. Then for \(1.0<l_{\rm max}<2.0\) we see a power-law behavior, while for \(2.0<l_{\rm max}\leq 3.0\) the light cone takes on a logarithmic behavior. Finally, for \(l_{\rm max}>4.0\) the system confines and a deformation that has been
Figure 12: Site averaged OTOC for \(J=6,h=3.05,m=0.25,l_{\rm max}=3.0\)
Figure 13: von Neumann entropy for \(J=6,h=3.05,l_{\rm max}=3.0\)
initialized in the bulk never reaches the boundaries of the chain. We summarize this structure in the cartoon OTOC phase diagram of the model for \(N=37\) shown in Fig. 14.
The dependence on \(l_{\rm max}\) can be clearly seen in Fig. 15, where we plot the local light-cone time obtained from the OTOC calculations versus the lattice site, starting from the middle of the chain and ending at the first site. The black curves show the logarithmic fits for \(l_{\rm max}\geq 3.0\). Error bars on the points are obtained by taking multiple cutoff values and averaging over them.
Even though we focused solely on the choice of \(W(t)=\sigma^{z}(t)\) and \(V=\sigma^{z}\) for the OTOC calculations, it is possible to choose other combinations of operators. One such choice is to take \(\sigma^{x}\) operators for both \(W(t)\) and \(V\), which results in the plot shown in Fig. 16. This behavior of the \(XX\)-OTOC is consistent with previous computations of OTOCs in the flat-space transverse Ising model by Lin and Motrunich [68], who found a shell-like structure for the \(XX\)-OTOC operator. As can be seen from Fig. 16, the points inside the light cone are significantly less prominent compared to their \(ZZ\) counterparts.
### Quantum Simulation of OTOCs
In this subsection, the computation of the OTOC with digital quantum computers is discussed. First, let us write down an alternative definition of the OTOC for an \(N\)-qubit system
\[O_{i}^{\rm eig}(t)=\frac{\left\langle\psi\right|\left(W(t)_{\frac{N+1}{2}}V_ {i}^{\dagger}W(t)_{\frac{N+1}{2}}V_{i}\right)\left|\psi\right\rangle}{\left \langle\psi\right|\left(W_{\frac{N+1}{2}}(t)^{2}V_{i}^{\dagger}V_{i}\right) \left|\psi\right\rangle}, \tag{11}\]
where \(\left|\psi\right\rangle\) represents an arbitrary state. The schematic circuit diagram to compute this quantity is shown in Fig. 17. From this schematic diagram and the discussion of the Trotter evolution in the previous section, it is evident that computing the OTOC with the Trotterized evolution operator requires \(8n(N-1)\) CNOT gates at the \(n\)-th Trotter step. In our work we considered a spin chain of length \(N=7\), and used a Trotter step \(\delta t=0.5\)
Figure 17: Schematic circuit diagram of OTOC using the definition at Eq. (11).
Figure 14: OTOC Phase Diagram for \(N=37\)
Figure 15: Curvature dependence of the propagation behavior of OTOC for \(N=37,J=6.0,h=3.05,m=0.25\)
up to a maximum time \(t_{\max}=3.5\). This indicates that a quantum computation of the OTOC with a quantum circuit like that of Fig. 17 would require more than 200 CNOT gates within just four Trotter steps. Hence, extracting any useful results would become impossible already at early times due to coherent and incoherent noise in the device. Using a weaved Trotterization technique, similar circuits were implemented to compute OTOCs for a small system of four qubits in [74].
Our goal in this section is to investigate whether we can extract the scrambling time at infinite temperature (\(\rho\propto\mathbf{I}\)) with the current IBM devices for a system of 7 spins. As in the tensor network simulations, we position the \(W\) operator at the center of the lattice chain and vary the position \(i\) of the \(V\) operator. Our choice of the \(W\) and \(V\) operators remains the same as in the previous section. We would also like to see whether the quantum simulation can resolve the difference in the scrambling time as we vary the position of the \(V\) operator. Many protocols for computing OTOCs have been proposed [75; 76; 77; 78; 51; 79], and many authors have also
Figure 18: Modified OTOCs are computed from the correlation of the measurement of two different operators (a) \(\langle W(t)\rangle\) and (b) \(\langle V^{\dagger}W(t)V\rangle\). The same set of unitaries are required to find the correlation between the measurements. The process is repeated for many different sets of unitaries.
suggested some modified quantities that also contain scrambling information [74; 77]. For example, to reduce the computational cost, the magnitude-squared of the OTOC (\(|F|^{2}\), see Eq. (9) for the definition of \(F\)) can be computed, ignoring the phases [77]. In this paper, we have used the protocol proposed by Vermersch _et al._ to compute both the OTOC and the modified OTOC [80]. The gate cost per circuit for computing the zeroth-order modified OTOC using this protocol is \(\sim 2n(N-1)\), which is significantly lower than the gate count needed in the straightforward evaluation presented in Fig. 17. Moreover, the protocol we have chosen does not require any ancilla qubits, unlike some other OTOC computation protocols.
Vermersch _et al._ [81] discussed a 'global protocol' to compute the OTOC and a 'local protocol' for computing modified OTOCs. Both protocols require the preparation of random states created from random unitary operators. The idea is to sample enough random states to mimic a thermalized scenario for the computation of the OTOC. Mathematically, the global protocol relies upon the following equation
\[\mathrm{Tr}\left[W(t)V^{\dagger}W(t)V\right]\] \[=\frac{1}{\mathcal{D}(\mathcal{D}+1)}\overline{\langle W(t) \rangle_{u,k_{0}}\left\langle V^{\dagger}W(t)V\right\rangle_{u,k_{0}}}, \tag{12}\]
where \(\mathcal{D}\) is the dimension of the Hilbert space. On the right-hand side, the overline denotes an ensemble average of measurements over a set \(U=\{u_{0},u_{1},\cdots u_{N_{U}}\}\) of random unitary operators, and \(k_{0}\) is an arbitrary initial state. Each unitary in the set \(U\) is an \(N\)-qubit unitary. Implementation of the global protocol requires creating an \(N\)-qubit random unitary operator that is applied to an input state of \(N\) qubits. Decomposition of an \(N\)-qubit unitary is costly in terms of entangling gates. Moreover, for a given precision, the local protocol needs a smaller number of measurements [51]. As a result, we have found it convenient to implement the local protocol of Fig. 18, which requires just \(N\) random unitaries per run. Depending on the number of initial states \(|k_{i}\rangle=\{k_{0},k_{1},\cdots\)\(k_{2^{n}}\}\) being used, modified OTOCs of different orders \(n\) can be computed. The larger the order \(n\) of the modified OTOC, the better it approximates the original OTOC, while the specific \(n\) needed is model dependent. Indeed, there is evidence that the modified OTOCs contain the needed information on entanglement spreading [51]. For a numerical justification of Eq. (12) and for the connection between the different OTOC definitions, readers are advised to consult Appendix C. Here, for completeness, we outline the steps to compute the zeroth-order modified OTOC:
* We prepare an arbitrary initial state \(|k_{0}\rangle\) (position 1 in Fig. 18a). The initial state preparation step can be avoided if the all-zero state \(|\mathbf{0}\rangle=|0000000\rangle\) is chosen as the starting quantum state. Then, a set of unitary gates \(u^{i}=\{U_{0}^{i},U_{1}^{i},\cdots,U_{N}^{i}\}\) is applied, one gate to each qubit, which results in a random state \(|\psi_{1}\rangle=U_{0}^{i}\otimes U_{1}^{i}\otimes\cdots\otimes U_{N}^{i}\,|\mathbf{0}\rangle\) at position 2 in Fig. 18a.
* Next the time evolution of the random state is computed using the Trotterized evolution operator \(U(n)=\Big{[}\exp\!\left(-i\hat{H}\delta t\right)\Big{]}^{n}\). This yields \(|\psi_{2}\rangle=U(n)|\psi_{1}\rangle\) at position 3 in the Fig. 18a.
* The necessary gates are then applied to compute the observable \(W\) in the computational basis. In our case, since \(W=\sigma_{i}^{z}\), projective measurements of qubit \(i\) allow us to compute \(\langle W(t)\rangle=p_{0}-p_{1}\), where \(p_{0(1)}\) is the probability of measuring the qubit in the zero (one) state. We use \(N_{\mathrm{shots}}=200\) for computing the expectation value of the operator.
* In a similar fashion, if we include the \(V\) operator after creating the random state \(|\psi_{1}\rangle\) (Fig. 18b), the previous two steps can be applied to compute \(\langle V^{\dagger}W(t)V\rangle\).
* These steps are repeated for many different sets of random unitaries, with each single-qubit gate drawn from the Haar measure of the unitary group.
* Finally, an ensemble average of the quantity \(\overline{\langle W(t)\rangle_{u,\mathbf{k}_{0}}\langle VW(t)V\rangle_{u, \mathbf{k}_{0}}}\) is computed which is a measure of the modified OTOC of the zeroth order.
With the proper normalization, the zeroth-order modified OTOC \(O_{0}(t)\) can be described by the following equation
\[O_{0}(t)=\frac{\overline{\langle W(t)\rangle_{u,\mathbf{k}_{0}}\langle VW(t)V\rangle _{u,\mathbf{k}_{0}}}}{\overline{\langle W(t)\rangle_{u,\mathbf{k}_{0}}\langle W (t)\rangle_{u,\mathbf{k}_{0}}}}. \tag{13}\]
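A minimal statevector sketch of these steps is given below (random single-qubit unitaries built from the QR decomposition of complex Gaussian matrices, expectation values computed exactly rather than from a finite number of shots). It is only meant to illustrate Eq. (13), not the hardware protocol itself.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

rng = np.random.default_rng(0)

def haar_1q():
    # Haar-random 2x2 unitary from the QR decomposition of a complex Gaussian matrix
    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix the phases of the columns

def modified_otoc0(H, W, V, t, N, n_sets=200):
    U = expm(-1j * H * t)
    Wt = U.conj().T @ W @ U
    WV = V.conj().T @ Wt @ V
    e0 = np.zeros(2**N); e0[0] = 1.0                # |000...0>
    a = np.empty(n_sets); b = np.empty(n_sets)
    for k in range(n_sets):                         # one set of random unitaries per run
        u = reduce(np.kron, [haar_1q() for _ in range(N)])
        psi = u @ e0
        a[k] = np.real(psi.conj() @ Wt @ psi)       # <W(t)>
        b[k] = np.real(psi.conj() @ WV @ psi)       # <V^dag W(t) V>
    return np.mean(a * b) / np.mean(a * a)          # Eq. (13)
```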
Using the steps described above, the operator expectation values \(\langle W(t)\rangle\) and \(\langle VW(t)V\rangle\) are computed with the same set of unitaries. Fig. 19 shows measurements of these operators. Initially the operators are correlated (Fig. 19a), while over time, due to operator spreading, they become decorrelated (Fig. 19c), which signifies a loss of memory of the initial state. As the resources required for the computation of higher-order OTOCs are large, we have only computed the zeroth-order OTOC in this study, corresponding to the plot in Fig. 20. \(N_{U}=180\times N\) unitaries were used for this simulation and each measurement required \(N_{M}=200\) shots. These numbers were chosen carefully using a noise-model simulation so as to minimize the overall cost of implementing the protocol on current quantum devices. From the figure, it is seen that the mitigated results from the IBM Sherbrooke machine compare well with results from exact diagonalization. Dynamical decoupling (DD) was used to compensate coherent noise and M3 was used for readout error mitigation. Our studies show that applying noise mitigation techniques is important for recovering scrambling information with current NISQ-era devices.
The dependence of the speed of information spread on the position of the \(V\) operator can be seen in Fig. 21, where it is compared with classical Python-Trotter simulations. The error bars in the simulation indicate the jackknife error due to the choice of different sets of random unitaries. For a fixed number of unitaries \(N_{U}\), the error can be reduced at the expense of increased computational resources, that is, by increasing the number of shots \(N_{\text{shots}}\). On the other hand, increasing the number of unitaries \(N_{U}\) also reduces the error, allowing us to better approximate the trace in Eq. (12) with the ensemble average on the right-hand side. Clearly, the measured values obtained with the IBM device without mitigation deviate from the ideal Python-Trotter results, indicating the presence of different sources of noise in the device. The mitigated results agree rather well and capture the difference in speed due to the varying distance \(d=|j-i|\) between the \(W_{i}\) and \(V_{j}\) operators. It would be intriguing to see in the future whether we can use more computational resources to compute higher-order modified OTOCs.
Additionally, investigating the scrambling time and quantum Lyapunov exponents with quantum computers could be an exciting avenue for the future research.
## V Conclusion
In this paper, we have investigated the transverse quantum Ising model discretized on two-dimensional anti-de Sitter space. In practice, this is implemented by using site-dependent couplings that mimic the metric factors corresponding to a one-dimensional hyperbolic space. We computed the time evolution and OTOCs of the model using both tensor network methods and quantum simulations, the latter with both gate-based quantum computers and analog quantum computers that use Rydberg arrays. We showed that the time evolution and OTOCs obtained from the quantum simulations agree well with the tensor network calculations.
The use of new publicly available universal quantum computers and new mitigation techniques allowed reliable time-evolution calculations with up to 13 qubits. In previous work on related real time evolution of systems of comparable difficulty [83; 84; 59; 61], reliable 4 qubit calculations were reported but extensions to 8 qubits were unsuccessful. Additionally, to the authors' best knowledge, this is the first time a protocol to compute OTOCs has been implemented for a seven qubit system using an IBM QPU, superseding a previous attempt with four qubits. From this perspective, the results presented here give a sense of progress in quantum hardware and software in the last few years. Nevertheless, this remains a relatively small number of qubits and the boundary effects are significant. These boundary effects are of potential interest [85; 86; 87] and could be studied in more detail for their own sake.
We found that, depending on the parameters of the model, it is possible to have different profiles for the light cones that describe the propagation of information in the system. Perhaps most intriguingly, we find a regime of the critical system where the light cones in global coordinates display a logarithmic dependence on the bulk distance. This behavior implies that the scrambling time characterizing thermalization in this system depends only logarithmically on the number of degrees of freedom. Such behavior is usually seen in models with long-range or even infinite-range interactions, while our model has only nearest-neighbor interactions. We believe that this makes the model a very interesting candidate for future studies of scrambling in quantum spin models.
## Acknowledgements
We acknowledge useful discussions with the members of the QuLat collaboration, Richard Brower and Evan Owen. We thank the IBM-Q hub at Brookhaven National Laboratory for providing access to the IBMQ quantum computers. S.C., M.A. and G.T. were supported under U.S. Department of Energy grants DE-SC0009998 and DE-SC0019139. Y.M. is supported under U.S. Department of Energy grant DE-SC0019139.
## Appendix A Digital Quantum Simulation of the Magnetization
In this appendix, we present some observations from the simulations on digital quantum processors that may be useful to readers interested in investigating quantum field theories with quantum computers.
### Statistical error
In this subsection, we discuss the statistical errors associated with different numbers of shots.
Figure 23: Comparison of Trotter evolution of magnetization results with different operator ordering. Parameters: \(J=2.0\), \(h=1.05\), \(l_{\max}=3.0\).
Figure 22: Shot noise analysis at Guadalupe machine is presented with local magnetization data. Shot noise associated for each trotter step is demonstrated in the bottom panel for the better visualization. The number in the labels denote the number of shots applied for measurements. Gap between the corresponding classical TEBD simulation results and QPU results indicate the presence of other coherent and incoherent sources of noise. Parameters: \(J=2.0\), \(h=1.05\), \(l_{\max}=3.0\).
Fig. 22 shows magnetization results obtained from the Guadalupe quantum computer with 200, 500 and 1000 shots. With our choice of parameters, we find that the information about the magnetization is completely lost after \(\sim 7\) Trotter steps in some of the cases. As a result, for the follow-up discussion, we considered data up to the sixth Trotter step. Increasing the number of shots (\(N_{\text{shots}}\)) reduces the statistical error (\(\epsilon_{\text{stat}}\)), roughly consistently with the relation \(\epsilon_{\text{stat}}\propto 1/\sqrt{N_{\text{shots}}}\). Statistical errors were computed from data obtained in the six measurement sessions at different times. It is noteworthy that we do not see significant differences in the central value of the measurements; the central value stabilizes as the number of sessions increases. From our analysis, we find that the systematic error is much larger than the shot-noise error. Hence, it is necessary to develop error-correction routines to recover correct results. With NISQ-era devices, fault-tolerant computation is not feasible due to
Figure 26: (a-b) Example of the extraction of the zero noise extrapolated data (red cross) at the second and the sixth trotter step, obtained from the measurements of the noise-scaled-circuits at quadalupe machine. (c) Extrapolated values are obtained for all trotter steps to plot ZNE data of the local magnetization. Parameters: \(J=2.0\), \(h=1.05\), \(l_{\text{max}}=3.0\), \(\delta t=0.2\).
Figure 24: Comparison of magnetization results with different mitigation techniques and their combinations. Parameters: \(J=2.0\), \(h=1.05\), \(l_{\text{max}}=3.0\).
Figure 25: Comparison of Trotter evolution of magnetization results in different noise scaled circuits. Noise scale=\(n\) indicates \(n\)-fold noise compared to the original circuit for the Trotter evolution of the local magnetization. Parameters: \(J=2.0\), \(h=1.05\), \(l_{\text{max}}=3.0\).
the conflicting requirements set by the low fidelity of the qubits and the large qubit overhead of error-correction protocols. However, different error mitigation techniques can be applied to scale up the number of qubits that can be usefully simulated on current NISQ devices. In the following subsections, we discuss the application of different error mitigation techniques to improve the results obtained from the quantum processing units.
### Operator ordering
Fig. 23 demonstrates how the local magnetization of the \(N=13\) qubit lattice chain obtained from the Guadalupe QPU compares for different operator orderings. To address the question of operator ordering we exclude mitigation and circuit optimization techniques. Each data point was obtained from the average of six experiments, each with 200 shots. Here, the label 'sequential' implies that the continuum evolution operator is approximated as
\[U_{\text{seq}}=\prod_{i}h_{x}\prod_{\left\langle ij\right\rangle}h_{\text{int} }^{ij}, \tag{10}\]
whereas the following ordering of operators denotes the 'odd-even' ordering
\[U_{\text{odd-even}}=\prod_{i}h_{x}\prod_{\left\langle ij\right\rangle,i\text{ even}}h_{\text{int}}^{ij}\prod_{\left\langle ij\right\rangle,i\text{ odd}}h_{\text{int}}^{ij}. \tag{11}\]
Local operators are defined as \(h_{x}=\exp\!\left(i\frac{h}{2}\,\eta_{i}\,\sigma_{i}^{x}\,\delta t\right)\) and \(h_{\text{int}}^{ij}=\exp\!\left(i\frac{J}{4}\,\eta_{ij}\,\sigma_{i}^{z}\,\sigma_{i+1}^{z}\,\delta t\right)\), where \(\eta_{i}\) and \(\eta_{ij}\) denote the site- and bond-dependent deformation factors of Eq. (5).
We did not find a particular choice of operator ordering to be an important factor in the noisy Guadalupe device. Indeed, it is likely that the systematic errors are much larger than the differences in measurements associated with different choices of operator ordering.
### Error mitigation
In this subsection, we discuss the importance of different error mitigation techniques in the context of computations of the real time evolution of the magnetization of our model. We first analyze results obtained with dynamical decoupling (DD), then with a combination of dynamical decoupling and M3 (DD+M3) mitigation techniques, and finally with a combination of dynamical decoupling, M3 and Zero Noise Extrapolation (DD+M3+ZNE) techniques.
Further inspection of the combined error mitigation cases revealed that in some cases, such as \(\left\langle S_{3}^{z}(t)\right\rangle\) in Fig. 24a, the mitigation did not improve the local magnetization results much. In contrast, in other cases such as Fig. 24b the results were significantly improved, and for the rest (Fig. 24c) the results improved only at large Trotter steps.
On top of the dynamical decoupling and readout error mitigation techniques, we applied Zero Noise Extrapolation (ZNE) to mitigate incoherent noise. The first step in the process is to scale up the noise systematically by unitary gate-folding or pulse-stretching 2. We used unitary folding, mapping a two-qubit operator \(U\to UU^{\dagger}U\); adding one pair of CX gates for each original CX gate increases the noise level by a factor of three. The second step is to perform measurements on the folded circuits, and finally to use these measurements at different noise levels to extrapolate the zero-noise limit of the observables. Fig. 25 clearly demonstrates that increasing the noise by adding more unitaries causes the experimental values to deviate further from the classically computed TEBD results. The noise-scaled values obtained for the local magnetization \(\left\langle S_{i}^{z}\right\rangle\) at a time \(t_{0}\) are then used to extrapolate the zero-noise value by linear extrapolation (Figs. 26a, 26b). Extrapolated values obtained at different Trotter steps are then combined to produce the time-dependent magnetization curve (Fig. 26c).
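A sketch of the final extrapolation step is shown below; the noise scales match the three-fold and five-fold amplified circuits described above, while the magnetization values are purely illustrative placeholders.

```python
import numpy as np

noise_scales = np.array([1.0, 3.0, 5.0])        # original, 3-fold and 5-fold circuits
measured_sz = np.array([-0.42, -0.31, -0.22])   # hypothetical <S_z> at one Trotter step

slope, intercept = np.polyfit(noise_scales, measured_sz, 1)
zne_estimate = intercept                        # linear extrapolation to zero noise
```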
Footnote 2: Pulse stretching needs pulse level access to device where the amount of noise introduced is controlled by the duration of the pulse applied to implement different gates.
## Appendix B Analog Quantum simulation of Magnetization
In this appendix, we report on quantum simulations of this model using Rydberg arrays. The Hamiltonian that governs the Rydberg simulator can be written as,
\[\hat{H}_{R}(t) =\sum_{j}\frac{\Omega_{j}(t)}{2}(e^{i\phi_{j}(t)}\left|g_{j} \right\rangle\left\langle r_{j}\right|+e^{-i\phi_{j}(t)}\left|r_{j}\right\rangle \left\langle g_{j}\right|)\] \[-\sum_{j}\Delta_{j}(t)\hat{n}_{j}+\sum_{j<k}V_{jk}\hat{n}_{j}\hat{ n}_{k}, \tag{12}\]
where \(\Omega_{j}(t)\) is the Rabi frequency, \(\phi_{j}(t)\) denotes the laser phase, and \(\Delta_{j}(t)\) the detuning parameter at site \(j\).
Figure 27: Time evolution of the Rydberg density
The Van der Waals interaction \(V_{jk}=C_{6}/|r_{j}-r_{k}|^{6}\) is known as the Rydberg interaction term, with \(C_{6}=2\pi\times 862690\ \mathrm{MHz}\,\mu\mathrm{m}^{6}\) [88; 89; 90].
Different operators in the hyperbolic Ising Hamiltonian can be mapped to different operators of the Rydberg Hamiltonian with the choice of zero laser phase \(\phi_{j}(t)\) at all sites,
\[\hat{H}_{R}(t) =\sum_{j}\frac{\Omega_{j}(t)}{2}\underbrace{\left(\left|g_{j}\right\rangle\left\langle r_{j}\right|+\left|r_{j}\right\rangle\left\langle g_{j}\right|\right)}_{\sigma_{j}^{x}}\] \[-\sum_{j}\Delta_{j}(t)\underbrace{\hat{n}_{j}}_{(1-\sigma_{j}^{z})/2}+\sum_{j<k}V_{jk}\underbrace{\hat{n}_{j}\hat{n}_{k}}_{(1-\sigma_{j}^{z})(1-\sigma_{k}^{z})/4}. \tag{10}\]
The Rydberg interaction potential \(V_{jk}\) determines the positions of the atoms needed to quantum simulate the hyperbolic Hamiltonian. Due to the hyperbolic deformation, it is expected that we need to position the atoms non-uniformly. This is achieved by placing the atoms starting at location \((0,0)\) and using Eq. (10) to find the distances between successive spins:
\[r_{i+1}=(A/\eta_{i})^{1/6}+r_{i}. \tag{11}\]
This equation is just the rearranged form of \(\frac{A}{(r_{i+1}-r_{i})^{6}}=\cosh l_{i}\), which is the form of the Rydberg potential. Here, \(A=2\pi\times 512\) is a constant for adjusting the scale, \(\eta_{i}=J\cosh(l_{i})\) is the hyperbolic deformation and \(r_{i}\) is the location of the \(i\)-th site. We set \(J=1\) for the rest of our discussion of Rydberg simulations.
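A short sketch of this placement procedure is given below; the prefactor \(A\) and the chain length follow the values quoted in the text.

```python
import numpy as np

N, l_max, A, J = 13, 3.0, 2 * np.pi * 512, 1.0
l = np.array([-l_max + i * 2 * l_max / (N - 1) for i in range(N)])
eta = J * np.cosh(l)                    # hyperbolic deformation

r = np.zeros(N)
for i in range(N - 1):                  # successive atom positions from Eq. (11)
    r[i + 1] = r[i] + (A / eta[i]) ** (1.0 / 6.0)
```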
Using this procedure, we obtain the locations of the Rydberg atoms for \(l_{\mathrm{max}}=3.0\): the resulting distances between atoms range from \(12.13\,\mu\mathrm{m}\) to \(17.72\,\mu\mathrm{m}\), with the furthest atom located \(180.77\,\mu\mathrm{m}\) from the origin.
The form of \(\Delta_{j}\) and \(\Omega_{j}\) is then given by equating the coefficients to the form of the Rydberg potential between the atoms
\[\Delta_{j},\Omega_{j}=\frac{10\times C_{6}}{(r_{j+1}-r_{j})^{6}}. \tag{12}\]
However, currently commercially available Rydberg machines are constrained to have only global laser parameters. Hence, we have turned to the Bloqade simulator developed by QuEra to perform simulations [91]. Fig. 27 shows the time evolution of the Rydberg density (essentially \(\left\langle S_{z}\right\rangle\)). Notice that Fig. 27 exhibits warping effects similar to those seen in the TEBD simulations of the model. This shows that our model can be simulated with Rydberg arrays. We hope that in the future, with advancements in Rydberg array technologies, we will be able to probe information propagation in this model with Rydberg simulators. However, even with a local detuning it might not be possible to probe the whole spectrum of the model due to limitations in the chain length and the magnitude of the Rabi and detuning terms.
## Appendix C Digital Quantum Simulation: OTOC
In this section of the appendix, we will discuss some of the details of the OTOC computation with quantum simulators and quantum processing units.
Just like for the magnetization, we need to pick a suitable Trotter step to observe the physics with current NISQ-era machines. Fig. 28 demonstrates that \(\delta t=0.5\) is a suitable choice. As the OTOC drops from one to zero in four Trotter steps, the entangling gate cost for the measurements of \(\left\langle W\right\rangle\) and \(\left\langle VW(t)V\right\rangle\) (see Fig. 18 in the main text) is manageable with current NISQ devices, and a comparison of the Trotterized results (without shot noise) with the exact-diagonalization results reveals that the Trotter error associated with this Trotter step is not large enough to obscure the physics we are interested in (Fig. 28).
We conclude this section of the appendix by justifying Eq. (12) numerically. In Fig. 29, we compare the OTOC computed from the trace definition with the results obtained numerically from the global protocol developed by Vermersch _et al._ [51]. For a mathematical proof of the identity, please see the appendix in [92]. Higher-order modified OTOCs computed from the local protocol yield the same result as the global protocol [81].
Figure 28: A Trotter step of \(\delta t\sim 0.5\) is a good choice for the OTOC computation with our choice of parameters.
Figure 29: The OTOC computed with the global-unitary protocol matches the traced data of products of operators. |
2310.00086 | FPGA-based feedback control of quantum optics experiments with the open
source software package PyRPL | We present PyRPL, an open source software package that allows the
implementation of automatic digital feedback controllers for quantum optics
experiments on commercially available, affordable FPGA boards. Our software
implements the digital generation of various types of error signals, from an
analog input through the application of loop filters of high complexity and
real-time gain adjustment for multiple analog output signals, including
different algorithms for resonance search, lock acquisition sequences and
in-loop gain optimization. Furthermore, all necessary diagnostic instruments
such as an oscilloscope, a network analyzer and a spectrum analyzer are
integrated into our software. Apart from providing a quickly scalable,
automatic feedback controller, the lock performance that can be achieved by
using PyRPL with imperfect equipment such as piezoelectric transducers and
noisy amplifiers is better than the one achievable with standard analog
controllers due to the higher complexity of implementable filters and
possibilities of nonlinear operations in the FPGA. This drastically reduces the
cost of added complexity when introducing additional feedback loops to an
experiment. The open-source character also distinguishes PyRPL from commercial
solutions, as it allows users to customize functionalities at various levels,
ranging from the easy integration of PyRPL-based feedback controllers into
existing setups to the modification of the FPGA functionality. A community of
developers provides fast and efficient implementation and testing of software
modifications. | Leonhard Neuhaus, Michaël Croquette, Rémi Metzdorff, Sheon Chua, Pierre-Edouard Jacquet, Alexandre Journeaux, Antoine Heidmann, Tristan Briant, Thibaut Jacqmin, Pierre-François Cohadon, Samuel Deléglise | 2023-09-29T18:53:51Z | http://arxiv.org/abs/2310.00086v1 | FPGA-based feedback control of quantum optics experiments with the open source software package PyRPL
###### Abstract
We present PyRPL, an open source software package that allows the implementation of automatic digital feedback controllers for quantum optics experiments on commercially available, affordable FPGA boards. Our software implements the digital generation of various types of error signals, from an analog input through the application of loop filters of high complexity and real-time gain adjustment for multiple analog output signals, including different algorithms for resonance search, lock acquisition sequences and in-loop gain optimization. Furthermore, all necessary diagnostic instruments such as an oscilloscope, a network analyzer and a spectrum analyzer are integrated into our software. Apart from providing a quickly scalable, automatic feedback controller, the lock performance that can be achieved by using PyRPL with imperfect equipment such as piezoelectric transducers and noisy amplifiers is better than the one achievable with standard analog controllers due to the higher complexity of implementable filters and possibilities of nonlinear operations in the FPGA. This drastically reduces the cost of added complexity when introducing additional feedback loops to an experiment. The open-source character also distinguishes PyRPL from commercial solutions, as it allows users to customize functionalities at various levels, ranging from the easy integration of PyRPL-based feedback controllers into existing setups to the modification of the FPGA functionality. A community of developers provides fast and efficient implementation and testing of software modifications.
## I Introduction
Feedback control is ubiquitous in scientific experiments to minimize the effects of fluctuations and environment conditions. The theory behind feedback control, from the definition of "plant" and "controller", to transfer functions, phase delays and stability conditions... is well known [1; 2]. So are the challenges implied: a properly designed controller has to diminish the effect of fluctuations with a large gain at Fourier frequencies where delay is insignificant, while the gain remains low enough at frequencies where delay is significant to avoid instabilities, without the controller introducing excessive delay itself. These challenges create a trade-off between the benefits that arise from introducing feedback into an experiment (stability, reproducibility) to the increase in costs and effort to achieve those controls.
The implementation of feedback controllers has long been performed with analog electronics and circuitry. However, cutting-edge experiments can be dynamic and complex, with changing requirements between measurements due to even minor differences in initial input parameters. This has motivated the movement towards digital feedback control [3; 4; 5; 6; 7], using digital signal processing (DSP), software modules/modular structures, and re-programmable electronics. This allows flexibility and customization, as well as lowering costs and multiplying control loop circuits through duplication of software instead of hardware electronics. These developments are enabled by the growing availability of Micro-Controller Units (MCUs) and Field Programmable Gate Arrays (FPGAs).
Owing to their sequential logic, MCUs can be programmed and reconfigured easily in common programming languages, such as C, making them a good choice when bandwidth requirements are not too tight. For instance, MCUs have been successfully used to stabilize laser-cavity systems with various locking algorithms [4; 5; 6]. On the other hand, FPGAs are advantageous when high locking bandwidth is required. They have been used in quantum optics experiments to implement complex control loops based on cascaded integrator comb filters [3], Finite Impulse Response (FIR) [8] or Infinite Impulse Response (IIR) [9] filters, or phase-lock loops [10]. Furthermore, fast digital modulation and demodulation can be used to perform direct Pound-Drever-Hall signal generation and self-diagnose the loop with a digital network analyzer [3]. In contrast with MCUs, FPGAs demand a more specialized hardware description language and a lengthier, more intricate compilation process because of their parallel architecture. The technical subtleties involved in programming digital servo-loops with FPGAs [11] are beyond the reach of many small research groups, who often struggle to maintain the necessary level of expertise over time. One way to leverage the power of FPGAs without the challenges of managing the hardware description language is to design a versatile system that can be reconfigured in real-time to suit diverse requirements, without recompiling the FPGA code. Furthermore, to maximize flexibility and customizability, and keep costs low, a digital controller should be built with mostly an open-source architecture. One such platform has recently been
developed with atomic spectroscopy applications in mind [12].
In this article, we present the Python Red Pitaya Lockbox open-source software package, or _PyRPL_, based on the widely-available Red Pitaya field-programmable gate array (FPGA) platform, for digital feedback control. PyRPL has been available in various evolving forms since 2016 and has already found its way into physics experiments in fields such as optomechanics [13; 14; 15], quantum optics [16; 17; 18; 19], atomic physics [20; 21] and frequency combs [22].
The idea behind the architecture of the program is to allow highly flexible reconfiguration of DSP module connections, without the need for FPGA recompilation. This enables the user to switch instantly between different configurations, and to efficiently diagnose the control loop characteristics by connecting to acquisition modules (digital oscilloscope, spectrum analyzer, network analyzer - all available as parts of PyRPL). A Python program and Graphical User Interface (GUI), running on a separate computer, complete the Lockbox set to allow the monitoring of various signals and control of parameters and FPGA registers (via a TCP-IP connection) in real time, for full flexibility. This paper is divided into three sections. The first section presents an overview of the backbone architecture of PyRPL: the board, the client-computer communication and the DSP multiplexer. The second section presents the various DSP modules within PyRPL. The last section presents example uses that call upon and arrange the various modules: Pound-Drever-Hall error signal generation, auto-feedback control sequences, and complex loop filters. The first section is mostly here for completeness and reference for the advanced reader. A reader only interested in a quick set-up of a given experimental platform is welcome to directly jump to the two latter sections.
## II Python Red Pitaya Lockbox (PyRPL) Architecture
A simplified schematic of the PyRPL architecture, with its key components, is shown in Fig. 1. The Red Pitaya [23] is an affordable FPGA board, with fast analog inputs and outputs, that has recently gained wide recognition in the scientific and electronics communities. PyRPL encompasses a user interface and high-level functionality written in the Python language, a client-computer communication protocol written in the C language, and a background custom FPGA design in the Verilog hardware description language for DSP multiplexing. The client computer hosts the first component for user flexibility and readout, while the Red Pitaya board hosts the latter two components for control loop implementation.
### Red Pitaya FPGA Board
The Red Pitaya board combines a Xilinx Zynq-7010 System-on-chip (SoC) [24] with two-channel analog-to-digital (ADC) and digital-to-analog (DAC) converters with 14-bit resolution and 125 MHz sampling rate. The Zynq-7010 combines an FPGA allowing programmable logic (PL) with a dual-core processing system (PS), composed of an ARM Cortex-A9 processor, and a high-bandwidth connection to the FPGA. The PS allows a Linux operating system (OS) to run on the board and thereby to use, interface, and program it like a standard personal computer. The PL is defined by a hardware description language, in this case Verilog, whose compiler generates a bitfile for the FPGA that can be loaded via the PS. The manufacturer of the Red Pitaya provides a sample Verilog source code that includes an example interface to the ADC and DAC, and an implementation of
Figure 1: **Overview of the PyRPL architecture**. It depicts the system components using different colors: blue blocks represent software written in Python, red blocks represent software in C running on the Red Pitaya’s CPU, and green blocks represent Verilog code running on the FPGA chip. The system is controlled by a ”Client PC” that incorporates a high-level programming layer, which may include an optional Graphical User Interface (GUI). During program startup, the Client PC establishes an SSH session with the Red Pitaya’s operating system. It then proceeds to flash the FPGA board with the most recent version of the FPGA bin file, which contains the hardware description of the Digital Signal Processing (DSP) modules. Simultaneously, the Client PC initiates a ”Monitor Server” on the Red Pitaya’s programmable system. This server operates continuously, listening to the Client PC and ensuring synchronization between the DSP registers and attributes of Python objects that mirror the FPGA structure.
a bus between PS and PL that allows a Linux user connected to the board through an ethernet connection to read and write registers of the FPGA.
The PS clock rate is 1 GHz and the FPGA clock rate is 125 MHz. The FPGA clock is derived from an on-board quartz oscillator by default, but can be synchronized to an external clock. The DAC and ADC are clocked synchronously with the FPGA. However, feedback control loops are limited in bandwidth to below 2 MHz, due to significant delay caused mainly by the employed ADC and DAC converter architecture.
The resolution of the ADC and DAC is 14 bits, and the differential nonlinearity and transition noise of both components are of the order of the least significant bit. However, the implementation of these components on the Red Pitaya board results in slightly worse performance. In agreement with previous work [25], we have measured effective resolutions of 12 bits for the ADC and 11 bits for the DAC.
### Client-computer communication
Startup and connection to the Red Pitaya (upon starting a PyRPL instance) begin with a link to the PS through a secure shell (SSH) connection. The client then uploads a bitfile which contains the compiled PL design and loads this file into the FPGA through a Linux command. Next, the client uploads and executes the C program "monitor_server". The server program allows the client to connect to it through the TCP/IP protocol and listens for commands from the client. The possible commands are 'read' and 'write', followed by an address and the number of 32-bit packets of data to read or write from or to the FPGA, starting from the given address. If the address lies within the address space that the OS of the Red Pitaya has assigned to the FPGA bus, the read or write request becomes available to the PL on the various bus signals.
The requested command is interpreted by the logic implemented in the Verilog code, which essentially maps a number of the registers of each DSP module to specific addresses within an address subspace reserved for the module. By knowing the names and addresses of all registers of a DSP module, a Python object corresponding to the DSP module can be created on the client computer, such that the properties of the object correspond to the FPGA registers of the DSP module. When these properties are read or written, the Python value is automatically synchronized with the content of the corresponding registers through the communication chain. The end result is that a Python programmer can assign or read the FPGA registers in the same way as local Python variables. This provides rapid flexibility of the controller.
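As a minimal illustration (module and attribute names follow the PyRPL documentation; the hostname and configuration name are placeholders), a register can be accessed as follows:

```python
from pyrpl import Pyrpl

p = Pyrpl(config='my_setup', hostname='192.168.1.100')   # connects and flashes the FPGA bitfile
r = p.rp                                                  # Python object mirroring the FPGA modules

r.pid0.p = 0.1        # assigning the attribute writes the corresponding FPGA register
print(r.pid0.p)       # reading it queries the register content over the network
```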
Communication speed has been a major concern in the development of PyRPL. Without excessive code optimization, we achieve typical read and write delays for a single 32-bit register of the order of 300 \(\mu\)s. When a block of registers with adjacent addresses are read or written, for example the memory storing an oscilloscope trace, the read/write delay scales sublinearly. For reading or writing \(2^{14}\) 32-bit values, a time of the order of 7-10 ms is required. The delay is thus limited by the speed of the ethernet connection, and could be further improved by, for example, compressing the data sent over the network.
### DSP multiplexer
The DSP multiplexer serves as a router between the analog signals and the various DSP modules of PyRPL. Its Verilog implementation is shown in Fig. 2. The module definition fits into a few lines of code, but is nevertheless very resource-intensive. The multiplexer can accommodate up to 16 DSP modules that each provide a port for a 14-bit input signal ("input_signal"), a 14-bit output signal ("output_signal"), and an additional 14-bit signal called "output_direct" which can be routed directly towards the analog outputs. The value of one register for each module called "input_select" determines which other module's output signal is connected to the former module's input port. Another 2-bit register "output_select" per DSP module is used to select whether the module's signal "output_direct" is sent to the analog output 1, output 2, to both outputs or to none. As shown at the bottom of Fig. 2, the sum of the selected "signal_direct" of all DSP modules is computed for each of the two analog outputs and then sent to the DACs. The analog input signals from the ADCs and the output signals that are sent to the DACs are also represented in the form of DSP modules which only provide an "output_signal" port in the DSP multiplexer in order to keep the interface simple. Some DSP modules have by definition only inputs or only outputs, and for others, the signals "output_direct" and "output_signal" are identical.
By combining a large number of DSP modules in series, or by rapidly (\(<1\) ms) modifying the connections between modules in real time, the DSP multiplexer allows the implementation of highly complex functionalities. Furthermore, it allows analysis modules (oscilloscope and analyzers) to be connected to any DSP module without the need of signal conversion to the analog domain, which has proven a highly practical tool for debugging, setup and in keeping analog cross-talk low.
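Continuing the snippet above, a typical routing configuration might look as follows (a sketch only; the exact attribute names and setup arguments should be checked against the PyRPL documentation of the installed version):

```python
r.iq0.setup(frequency=25e6, bandwidth=1e4,  # demodulate analog input 1 at 25 MHz
            input='in1', output_direct='off')
r.pid0.input = 'iq0'                        # sets the 'input_select' register of pid0
r.pid0.output_direct = 'out1'               # sets the 'output_select' register of pid0
r.scope.input1 = 'iq0'                      # monitor internal signals digitally,
r.scope.input2 = 'pid0'                     # without conversion to the analog domain
```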
## III Basic operation of the FPGA modules
As explained in the section above, the core of PyRPL is a set of FPGA modules that perform several DSP tasks, and that can be interconnected in various configurations by a DSP multiplexer without changing the FPGA bitfile. The various FPGA registers controlling the behavior of each FPGA module can be inspected and controlled transparently via a Python object mirroring each DSP module on the client computer. The state of the various
modules (i.e. the value of the corresponding registers) is stored in a yml configuration file that is kept in sync with the FPGA registers at all times. The state of each module can also be controlled via a GUI, and archived at any point for future usage.
The FPGA architecture is fully parallel and the modules are thus operating simultaneously on the digital signals sampled at the FPGA clock rate of 125 MHz. The default configuration of PyRPL instantiates several copies of the following modules:
* dual-channel oscilloscope
* Arbitrary Signal Generator (ASG, \(\times 2\))
* Proportional Integrator Differential filter (PID, \(\times 3\))
* IQ modulator/demodulator (\(\times 3\))
* Infinite Impulse Response filter (IIR)
This section discusses the principle of the various FPGA modules with emphasis on the underlying theory of DSP. Furthermore, we provide basic usage examples, together with full configuration instructions using the Python API [26].
### Data acquisition modules
The possibility to monitor digital signals originating either from the input analog-to-digital converters of the Red Pitaya board or from any of the internal DSP modules is one of the key features of PyRPL, enabling fast debugging and prototyping of DSP setups. While reading or writing an FPGA register can be done in a synchronous manner--i.e., the program execution on the computer is halted until the value has been successfully updated or read from the board--launching a data acquisition task, waiting for the data to be ready, and then displaying it on screen cannot be done by a blocking program; otherwise, the GUI or other critical program components would be blocked for a long and potentially unpredictable time.
In PyRPL, we have decided to use the Qt event-loop framework to deal with asynchronous tasks. Compared to multithreading, an event-based approach avoids potential problems with race conditions and simultaneous memory access by concurrent threads. Thanks to the external library quamash, this approach is also compatible with the asynchronous framework _asyncio_ introduced natively in the Python syntax since version 3.3, thus greatly improving the readability of asynchronous codes. Since PyRPL has been primarily designed for quick prototyping and testing, we have made sure that an interactive IPython shell (such as Jupyter notebook) can be used in conjunction with the GUI without blocking the data acquisition tasks.
#### ii.1.1 Oscilloscope
The FPGA implementation of the two-channel oscilloscope is based on the example provided by Red Pitaya. The module records 2 synchronized time-traces of any DSP signals with \(2^{14}\) consecutive points. An integer decimation register of the form \(\{2^{n}\}_{0\leq n\leq 16}\) controls the number of consecutive samples averaged in each point, thus changing the total trace duration between 131 \(\mu\)s and 8.59 s. We have extended the triggering functionality and added various time-stamps which allows the user to keep track of the exact time of different traces. Furthermore, for trace durations above 0.1 s, a "rolling-mode" is available: in this mode, data are acquired continuously in a circular buffer and displayed at the maximum refresh rate on screen. This mode of operation is typically used for optical alignments or for the monitoring of slow drifts over time.
Figure 2: **Signal routing architecture of PyRPL.** The DSP modules are represented as triangles. Each module, with the exception of the scope, has an output signal. A DSP multiplexer functions as a signal router for the 14-bit signals traveling between the various DSP modules and the analog inputs (In1, In2) and outputs (Out1, Out2). Furthermore, each module (excluding the scope) has an additional ”output_direct” signal that can be directly routed to one of the analog outputs: ”Out1”, ”Out2”, or both. All signals directed to the same analog output undergo summation before being dispatched to the Red Pitaya board’s Digital-to-Analog Converter (DAC).
Apart from mirroring the various registers necessary to control the operation of the FPGA oscilloscope (such as the trace duration, input channels, trigger source and thresholds...), the corresponding Python module exposes some high-level methods to acquire individual or averaged traces. The fast communication interface presented above allows the display of both oscilloscope channels in real time on the client computer with a typical refresh rate exceeding 20 frames per second (see the GUI in Fig. 3).
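As a minimal illustration of this workflow, the sketch below acquires a single scope trace through the Python API. The module layout (`p.rp.scope`) and the register names mirror those mentioned above, but the exact attribute and method names (in particular `curve()`, and its return value) are assumptions that may differ between PyRPL versions.

```python
from pyrpl import Pyrpl

# Hostname and configuration name are placeholders for the user's own setup.
p = Pyrpl(config="my_setup", hostname="192.168.1.100")
scope = p.rp.scope

scope.input1 = "in1"            # monitor the first analog input...
scope.input2 = "iq0"            # ...and any internal DSP signal
scope.decimation = 64           # average 64 samples per point (trace of ~8.4 ms)
scope.trigger_source = "immediately"

ch1, ch2 = scope.curve()        # blocking acquisition of the two 2**14-point traces
```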
#### iii.1.2 Spectrum analyzer
By performing the FFT of individual oscilloscope traces, the spectrum analyzer can display in real-time the spectrum of the ADC inputs or internal DSP signals. The spectrum is estimated by implementing the Welch method [27]: the user can pick a filter window among several options: 'blackman', 'flattop', 'boxcar', 'hamming', or 'gaussian'. The span, given by the scope sampling rate \(1/T_{\text{scope}}\), is selected among a list of predefined values. Equivalently, the residual bandwidth RBW can be chosen from a predefined list, via the relation
\[\text{RBW}=\text{ENBW}/T_{\text{scope}}, \tag{1}\]
where the equivalent noise bandwidth (ENBW) depends on the filter window samples \(\{f_{n}\}_{0\leq n<N}\) via \(\text{ENBW}=N\left(\sum_{n=0}^{N-1}f_{n}^{2}\right)/\left(\sum_{n=0}^{N-1}f_{n}\right)^{2}\). The spectrum analyzer has two modes of operation: in "baseband" mode, the FFT operations are performed on the 2 real channels of the oscilloscope, allowing us to extract the individual real spectra and the complex cross-spectrum of the 2 input signals. In "IQ-mode", a single input signal is demodulated around a central frequency \(f_{0}\) by an IQ-module and the 2 slowly varying quadratures are merged to form a complex time series \(z_{n}=I_{n}+jQ_{n}\). By performing the FFT of this complex time series, we extract the spectrum of the input signal in a narrow window around the central frequency \(f_{0}\).
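The relation between window shape, ENBW and residual bandwidth can be reproduced offline with standard tools. The following sketch (plain NumPy/SciPy, independent of the FPGA) computes the ENBW of a flat-top window and a Welch spectrum with the same parameters; the sampling rate is an example value, not the one used on the board.

```python
import numpy as np
from scipy import signal

fs = 125e6 / 64                 # effective sampling rate after decimation (example)
N = 2**14                       # points per scope trace
w = signal.get_window("flattop", N)

# Equivalent noise bandwidth (in bins) and resulting residual bandwidth
enbw = N * np.sum(w**2) / np.sum(w)**2
rbw = enbw / (N / fs)           # RBW = ENBW / T_scope

# Welch estimate of the spectrum of a noisy test tone
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 1e5 * t) + 1e-2 * np.random.randn(N)
f, psd = signal.welch(x, fs=fs, window=w, nperseg=N)
```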
### Arbitrary signal generator module
The arbitrary signal generator (ASG) module is an adapted version of the two-channel ASG provided by the Red Pitaya manufacturer. Our Python interface makes it possible to slightly reduce the FPGA resources required by the ASG in order to reserve more FPGA space for other modules. Waveforms defined by \(2^{14}\) values are easily loaded through the Python interface. The ASG supports frequencies from 0.1 Hz to 62.5 MHz and various burst and pulse modes. We have added an extra triggering functionality to the ASG in order to allow arbitrary delays for its turn-on or turn-off with respect to the arrival time of an external trigger signal. Furthermore, we have implemented a pseudo-random noise generator (PRNG) based on a Lehmer PRNG [28]. While the current generator has a period of the order of 2 s, true random number generation is possible with FPGAs [29] and might be implemented in the future.
### Proportional-Integrator-Differential module
The Proportional-Integrator-Differential (PID) module is an extended version of a Red Pitaya example. The output signal \(s_{\text{out}}\) computed by a standard PID module with integral gain \(g_{\text{i}}\), proportional gain \(g_{\text{p}}\) and derivative gain \(g_{\text{d}}\) can be written as
\[s_{\text{out}}(t)=g_{\text{i}}\int_{-\infty}^{t}e(\tau)\text{d}\tau+g_{p}e(t) +g_{\text{d}}\frac{\text{d}}{\text{d}t}e(t)\,, \tag{2}\]
where \(e=s_{\text{in}}(t)-s_{0}\) is the difference between the input signal \(s_{\text{in}}\) and the setpoint \(s_{0}\). A very practical modification of the PID module is the possibility to modify the current value of the integral in the above equation through a register termed "ival" that is accessible in the Python interface. This way, the PID integrator can be re-set to arbitrary output voltages, or voltage ramps can be easily implemented as for-loops in Python that periodically change the value of the integral.
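For instance, a slow output ramp can be generated by periodically rewriting the "ival" register from Python, as sketched below. The module name `pid0` and the gain attributes are assumptions based on the register names quoted in the text; the voltage step and timing are placeholders.

```python
import time
from pyrpl import Pyrpl

p = Pyrpl(config="my_setup")    # placeholder configuration name
pid = p.rp.pid0

pid.input = "in1"
pid.output_direct = "out1"
pid.p = 0                       # disable proportional...
pid.i = 0                       # ...and integral action: the output is just "ival"

# Ramp the output from -1 V to +1 V in 10 ms steps by resetting the integrator value
for k in range(101):
    pid.ival = -1.0 + 0.02 * k
    time.sleep(0.01)
```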
In addition to this, we have added 4 first-order filters with selectable low-pass or high-pass cut-off frequencies in series to pre-filter the input signal of the PID module. An additional saturation stage allows the user to define arbitrary
Figure 3: **Graphical user interface of the oscilloscope module**. The FPGA registers controlling the scope operation can be modified either from the controls located on the top panel or via an interactive python shell. In the latter case, the graphical displays are automatically synchronized with the updated value.
maximum and minimum voltages for the PID output signal. We note that, while the saturation stage is often practical to protect sensitive components connected to a Red Pitaya output from undesired voltages, it is always preferable to build an analog attenuator circuit instead of using digital saturation when possible, in order to benefit from a maximum dynamic range and a better analog noise performance. Similarly, it is always preferable to replace digital low-pass filters acting on the output signal of a Red Pitaya by analog ones in order to improve the averaging of the DAC output noise.
### IQ module
The linear response of a system subjected to a sinusoidal perturbation \(V_{\mathrm{exc}}(t)=A_{\mathrm{exc}}\cos(\omega_{0}t)\) is a signal at the same frequency: \(V_{\mathrm{meas}}(t)=H(\omega_{0})A_{\mathrm{exc}}\cos(\omega_{0}t+\phi(\omega_{0}))\) that can be decomposed into two orthogonal components \(V_{\mathrm{meas}}(t)=I(\omega_{0})\cos(\omega_{0}t)+Q(\omega_{0})\sin(\omega_{0}t)\). In practice, the \(I\) (resp. \(Q\)) quadrature can be retrieved by applying a low-pass filter on the product \(V_{\mathrm{meas}}(t)\cos(\omega_{0}t)\) (resp. \(V_{\mathrm{meas}}(t)\sin(\omega_{0}t)\)). This so-called IQ demodulation is the core operating principle of vector network analyzers and forms the basis of various sensitive measurement techniques such as lock-in detection or the Pound-Drever-Hall error signal generation widely used to stabilize a laser frequency to an optical cavity [30]. While this vector demodulation is routinely achieved with analog electronic components, the IQ module of PyRPL makes it possible to perform this operation using the digital signal processing capabilities of the FPGA board, provided the excitation frequency \(\omega_{0}/2\pi\) doesn't exceed the Nyquist frequency of the analog-to-digital converters (\(\omega_{N}/2\pi=62.5\) MHz). Generating these error signals numerically is advantageous as analog mixers usually suffer from imperfections such as DC-offsets that directly translate into the generated error signal. Furthermore, in phase-locked loop applications, it can be advantageous to extract an error signal proportional to the phase of the slowly evolving quadrature signal. This phase can be estimated and unwrapped in real time thanks to a CORDIC phase detection algorithm implemented in the IQ module of PyRPL.
Finally, for an incoherent input signal \(V_{\mathrm{meas}}(t)=\mathrm{Re}\left(\int_{0}^{\infty}V(\omega)e^{i\omega t}d \omega/2\pi\right)\), the IQ demodulation procedure provides a frequency-shifted version of the signal:
\[I(t)+jQ(t)\approx\int_{0}^{\infty}V(\omega-\omega_{0})e^{i\omega t}H_{f}( \omega)d\omega/2\pi, \tag{3}\]
where the transfer function \(H_{f}(\omega)\) of the low-pass filter is assumed to have a cutoff frequency \(\omega_{f}\ll\omega_{0}\). A bandpass filtered version of the input signal can then be generated by applying the reverse IQ modulation:
\[V_{out}(t) =I(t)\cos(\omega_{0}t)+Q(t)\sin(\omega_{0}t)\] \[=\int_{0}^{\infty}V(\omega)H_{f}(\omega-\omega_{0})e^{i\omega t} d\omega/2\pi.\]
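The band-pass action of this demodulation/remodulation chain can be checked numerically. The following NumPy sketch reproduces the operations above on a synthetic signal; the frequencies and filter cutoff are illustrative, and the factor of 2 restores the amplitude lost in the demodulation products.

```python
import numpy as np
from scipy import signal

fs = 125e6                              # ADC sampling rate
f0 = 10e6                               # demodulation frequency
t = np.arange(2**16) / fs
x = np.cos(2 * np.pi * (f0 + 5e3) * t)  # test tone detuned 5 kHz from f0

# Demodulation: multiply by the two quadratures and low-pass filter
lp = signal.butter(2, 50e3, fs=fs, output="sos")        # 50 kHz low-pass
i_t = signal.sosfilt(lp, x * np.cos(2 * np.pi * f0 * t))
q_t = signal.sosfilt(lp, x * np.sin(2 * np.pi * f0 * t))

# Remodulation yields a band-pass filtered copy of the input signal
x_bp = 2 * (i_t * np.cos(2 * np.pi * f0 * t) + q_t * np.sin(2 * np.pi * f0 * t))
```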
The design of the IQ module is schematized in Fig. 4. The four phase-shifted sine functions required for each IQ-module are generated from a look-up table (LUT) of \(2^{11}\) 17-bit values stored in read-only memory (ROM) of the FPGA. For minimum resource requirements per IQ module, only a quarter period of the sine function is stored in the LUT. The phase of the sines is given by the most significant bits of a 32-bit register that is incremented each clock cycle by the value of the register that defines the frequency.
Even though the wiring is fixed by design in the FPGA code, the module can be instantly reconfigured for various applications, by setting some of the registers "gain", "amplitude", and "quadrature_factor" to zero, effectively disabling the corresponding signal paths. An output multiplexer "output_signal" is also present to afford more flexibility. In the following, we provide examples and practical configuration instructions for various modes of operation: as a vector network analyzer, as a tunable bandpass filter and for Pound-Drever-Hall error signal generation.
#### iii.4.1 Network analyzer
In order to use an IQ module as a network analyzer, the register "gain" is set to zero, effectively decoupling the demodulation and modulation stages of the module
Figure 4: **IQ module layout**. The input signal is first high-pass filtered with a cutoff frequency defined by the register ”ac_bandwidth”. The signal is then demodulated at a frequency defined by the register ”frequency”, with a phase defined by the ”phase” register. The slowly varying quadratures are then obtained by applying second order low-pass filters on the multiplier’s outputs (cutoff controlled by the register ”bandwidth”). The quadrature signals can then be used for various applications: they are sent to a CORDIC phase estimator to extract a phase-lock error signal, accumulated in high-precision registers for the network analyzer, or directly used as the module’s ”output_signal” depending on the value of the register ”output_signal”.
(see Fig. 5). The "amplitude" register is set to a non-zero value governing the amplitude of the sinusoidal excitation at the "output_direct" of the module. This excitation can be routed either to an analog device-under-test (such as an electro-optic modulator or a piezo-actuator) by setting "output_direct" to "out1" or "out2", or to an internal DSP module, as demonstrated in III.4.2.
After going through the device-under-test, the signal is connected to "input_signal", and optionally high-pass filtered at the beginning of the IQ module. The signal is then demodulated at the modulation frequency. One or two subsequent low-pass filters remove the demodulation component at twice the modulation frequency before the quadratures are accumulated in two registers corresponding to the real and imaginary parts of the transfer function. The sweep over a list of frequencies is performed by a for-loop on the Python client. Updating the frequency register of the FPGA module triggers a counter of a pre-defined number of cycles specified by a register "sleep_cycles". This is done to allow for the settling of the system under test before the acquisition of its transfer function begins. Next, during a number of cycles defined by a register "na_cycles", the values of both demodulated quadratures are additively accumulated inside the two 62-bit accumulators shown in Fig. 4. Once the "na_cycles" counter reaches zero, the Python client reads the values of the two accumulators as 62-bit numbers and computes the averaged complex transfer function.
Footnote 1: In zero-span mode, the frequency register is updated with its current value, which also triggers the acquisition of a new network analyzer value.
The configuration of the IQ module and the data-acquisition loop are handled by a pure software module that exposes high-level functionalities such as setting the start and stop frequencies of the scan, or the IF bandwidth of the measurement, as well as curve acquisition functions identical to those of the scope and spectrum analyzer. The GUI of the network analyzer is represented in Fig. 5.
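A typical measurement can then be scripted in a few lines, as sketched below. The keyword names follow the registers shown in Fig. 5, but the `networkanalyzer` attribute, the exact `setup()` signature and the `single()` acquisition call are assumptions about the software API and may differ between versions.

```python
from pyrpl import Pyrpl

p = Pyrpl(config="my_setup")            # placeholder configuration name
na = p.networkanalyzer                  # software module wrapping an IQ module

na.setup(start_freq=1e3, stop_freq=1e6, points=201, rbw=100,
         amplitude=0.1, input="in1", output_direct="out1", logscale=True)
tf = na.single()                        # complex transfer function, one value per point
```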
While the analog performance of the network analyzer does not compete with commercial network analyzers, a number of custom adjustments often provide functionality that commercial instruments lack. For example, by iterating over the list of frequencies in reversed order, frequency sweeps can be performed in both directions. This feature is commonly unavailable in commercial network analyzers, and can be used for instance to probe hysteretic behaviours such as Duffing non-linearities in mechanical systems.
Another practical customization involves the stabilization of the magnitude of the input signal by adjusting the amplitude of the output tone, which was found to yield better measurement results in systems with a high dynamic range and a nonlinear response, such as high-\(Q\) mechanical resonances detected with a Fabry-Perot
Figure 5: **Graphical User Interface of the network analyzer**. The user can select the input and output of the network analyzer with the corresponding registers (within the ”Channels” group). The frequency sweep is defined by the parameters “start_freq”, ”stop_freq” and ”points” within the ”Frequency” group. When the logscale checkbox is ticked, the frequency-bins are logarithmically distributed. The ”amplitude” and ”acbandwidth” control output voltage and high-pass filter cutoff at the input of the iq-module. The ”rbw” and ”average_per_point” registers determine the averaging time for each point. When “auto_bandwidth” is ticked, the residual bandwidth is optimized at each frequency bin. ”auto_amplitude” is used to stabilize the input amplitude based on the value of the transfer function measured on previous points.
cavity. Finally, the simultaneous availability of network analyzer and feedback control modules inside a single DSP system makes PyRPL a convenient tool to probe the in-loop transfer function of a closed-loop system, as demonstrated in section IV.2. This ability is particularly useful when working with optical Fabry-Perot cavities, where a stable lock is mandatory to measure any optical response.
#### iii.4.2 High-Q band-pass filter
In this example, an IQ module is employed to realize a band-pass filter. The transfer function of the implemented filter is measured in-situ by the integrated network analyzer of PyRPL, without passing through the analog domain. Since this example doesn't require any external hardware, it can be seen as a basic tutorial for new users to gain hands-on experience with PyRPL.
For the band-pass filter operation mode of the IQ module, the value of the "amplitude" register is set to zero and instead the "gain" register is set to the desired filter gain. An incoming signal is then demodulated with a phase-shifted sin/cos-pair, low-pass filtered, and modulated with the un-shifted sin/cos-pair (see Fig. 6a). The center frequency of the filter is thus determined by the frequency register of the IQ module, while the phase lag can be tuned by the relative phase-shift between the demodulation and modulation oscillators [31]. The filter bandwidth is defined by the cut-off frequencies of the low-pass filters acting on the demodulated quadratures. If only one low-pass filter is used, the shape of the band-pass filter is Lorentzian.
By connecting the implemented filter to the integrated network analyzer of PyRPL (internally employing another IQ module), it is possible to measure the filter's transfer function. Figure 6b shows the transfer function measured for a "gain" register of 1, a first-order filter of bandwidth 2.3 kHz and a set of three different phases \(\phi=0^{\circ}\), \(120^{\circ}\) and \(240^{\circ}\).
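A possible configuration script for this example is sketched below. The keyword names mirror the registers discussed above ("frequency", "bandwidth", "gain", "phase", "amplitude", "quadrature_factor", "output_signal"), but the exact `setup()` signature and the allowed option strings are assumptions.

```python
from pyrpl import Pyrpl

p = Pyrpl(config="my_setup")            # placeholder configuration name
iq = p.rp.iq2                           # keep another IQ module free for the network analyzer

# Band-pass filter: no excitation, unity gain, 15 MHz center, 2.3 kHz bandwidth
iq.setup(frequency=15e6, bandwidth=2.3e3, gain=1.0, phase=0,
         amplitude=0, quadrature_factor=0,
         input="in1", output_signal="output_direct", output_direct="off")
```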
By combining up to three (limited by the number of IQ modules) band-pass filters of the described type in parallel, complex transfer functions can be realized. The digital quadratures of narrow band-pass filters can benefit from significant reduction of the ADC noise through averaging and are therefore internally represented with 24 instead of 14 bits. This advantage is easily lost through the noise added by the DACs. For optimal performance, it is therefore crucial to adjust the filter gain such that the output signal nearly saturates the available voltage range, and to use analog attenuation afterwards, which attenuates both the signal and the DAC noise.
#### iii.4.3 Pound-Drever-Hall error signal generation
In order to use the IQ module for the generation of a Pound-Drever-Hall (PDH) error signal, the register "gain" is set to zero, the registers "amplitude" and "quadrature_factor" to non-zero values and the output multiplexer "output_signal" is set to "quadrature" (see Fig. 7b). With this setting, the signal "output_direct" features a sinusoidal modulation that can be sent through one of the analog outputs to a modulator such as an EOM or a piezoelectric actuator. In the example depicted in Fig. 7, a phase modulation is imprinted on the laser via an EOM. The detected analog signal is digitized and connected to "input_signal". The signal is optionally high-pass filtered at the beginning of the IQ module and then demodulated. One or two subsequent low-pass filters remove the demodulation component at twice the modulation frequency before an amplified version of one demodulated quadrature is available as "output_signal" of the IQ module.
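The corresponding configuration, using the register values quoted in Fig. 7, could look as follows. The list passed to "bandwidth" to request a second-order filter, the exact `setup()` signature and the quadrature amplification factor are assumptions or placeholders.

```python
from pyrpl import Pyrpl

p = Pyrpl(config="my_setup")            # placeholder configuration name
iq = p.rp.iq0

# Pound-Drever-Hall error signal: 50 MHz / 1 V modulation sent to out1 (EOM),
# demodulation of the photodiode signal on in1, 3 MHz low-pass on the quadratures
iq.setup(frequency=50e6, amplitude=1.0, phase=35,
         bandwidth=[3e6, 3e6],          # two cascaded filters -> second order
         gain=0, quadrature_factor=10,
         input="in1", output_signal="quadrature", output_direct="out1")
```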
Figure 6: **IQ module as bandpass filter**. (a) Schematic of the IQ module configured for bandpass filter implementation. Signal paths that are deactivated by setting the register values ”quadrature_factor” and ”amplitude” to 0 are dimmed in the diagram. (b) Bode plot of the bandpass filter measured with the internal network analyzer of PyRPL for 3 values of the phase register (\(0^{\circ}\), \(120^{\circ}\), \(240^{\circ}\)), a demodulation frequency of 15 MHz, and a first order low-pass filter of bandwidth 2.3 kHz.
Fig. 7c shows the generated PDH error signal, together with the reflection as the cavity length is swept across the resonance via a piezo-electric actuator. The figure also shows how the error signal amplitude is optimized by varying the demodulation phase.
### Infinite Impulse Response module
A common problem in practical control applications is to compensate for the complicated transfer function of an imperfect actuator. A prominent example from optical cavity stabilization is given by the sharp resonant features arising from mechanical resonances of a piezoelectric actuator. While simple filters can be realized by combining several PID and IQ modules, the shape and complexity of the filters that can be constructed with a small number of modules is limited. In contrast, digital infinite impulse response (IIR) filters can be used to generate quite complex transfer functions with a relatively small number of operations per clock cycle. We present here the implementation of an IIR filter in PyRPL. A more detailed theoretical description of the IIR filter is given together with its practical implementation in Appendix A.
A linear shift-invariant digital filter can be characterized by its impulse response \(h(n)\):
\[y(n)=\sum_{i=0}^{\infty}h(i)x(n-i), \tag{4}\]
where \(y(n)\) is the output signal and \(x(n)\) the input signal. All time-dependent signals are here referenced to the discrete sampling times \(t=nT\), with \(T\) the inverse sampling rate. A useful representation of the impulse response \(h\) is given by its Z-transform:
\[H(z)=\sum_{n=0}^{\infty}h(n)z^{-n}. \tag{5}\]
For all practical filters, the summation in Eq. (4) is limited to a few cycles \(i\) (typically 2 for this article), corresponding to a _Finite Impulse Response_ (FIR) filter.
A more general IIR filter, which corresponds to an infinite number of non-zero terms, can be mimicked with a few cycles and an additional dependence on the former values of the output:
\[y_{j}(n)= b_{0}x(n)+b_{1}x(n-1)\] \[-a_{1}y_{j}(n-1)-a_{2}y_{j}(n-2), \tag{6}\]
with the corresponding transfer function:
\[H(z)=D+\sum_{j=1}^{N^{\prime}}\frac{b_{0j}+b_{1j}z^{-1}}{1+a_{1j}z^{-1}+a_{2j} z^{-2}}, \tag{7}\]
where \(D\) and all \(a\) and \(b\) coefficients are real. The overall filter therefore appears as a sum of simple filters with zeroes and poles.
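The frequency response corresponding to Eq. (7) is easy to evaluate offline when designing such a filter. A minimal NumPy sketch, with purely illustrative coefficients, is shown below.

```python
import numpy as np

def iir_response(freqs, fs, sections, d=0.0):
    """H(z) = D + sum_j (b0 + b1/z) / (1 + a1/z + a2/z**2), with z = exp(2j*pi*f/fs)."""
    z = np.exp(2j * np.pi * freqs / fs)
    h = np.full(len(freqs), d, dtype=complex)
    for b0, b1, a1, a2 in sections:
        h += (b0 + b1 / z) / (1 + a1 / z + a2 / z**2)
    return h

# One illustrative resonant second-order section, evaluated up to 1 MHz
sections = [(1e-2, 0.0, -1.9, 0.95)]
freqs = np.linspace(1e3, 1e6, 1000)
h = iir_response(freqs, fs=125e6, sections=sections)
```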
Fig. 8 shows the GUI of the IIR module. A graphical representation of the total transfer function (IIR filter + analog plant) helps the user to iteratively optimize the controller by adding poles and zeros to the IIR filter design. Typically, transfer function
Figure 7: **Pound-Drever-Hall error signal generation**. (a) Experimental setup: a laser is sent into an optical cavity. An electro-optic modulator (EOM) connected to the output ”out1” of the Red Pitaya generates a phase modulation. When the laser is detuned from the cavity resonance, this creates an intensity modulation, that is detected by a fast photodiode (PD), digitized by the input ”in1”, and subsequently demodulated to extract a dispersive error signal. (b) The phase modulation tone is generated by the ”output_direct” of the IQ module, with an amplitude of 1 V, and a frequency of 50 MHz. The PD signal is demodulated at the same frequency, and low-pass filtered with a second-order cutoff frequency of 3 MHz. (c) Error signal obtained by sweeping the cavity length for various IQ demodulation phases. The optimal demodulation phase, for which the slope of the error signal is maximum, is 35 \({}^{\circ}\).
zeros and poles can be implemented with a negligible delay up to several hundred kilohertz. We provide a concrete example of loop optimization using the IIR filter in section IV.3.
## IV Applications of PyRPL in Quantum Optics Experiments
In this section, we provide several example applications, where the capabilities of PyRPL are exploited to solve real-life laboratory problems. The applications, sorted from the simplest to the most complex, cover a wide range of experimental problems that are ubiquitous in laser physics and quantum optics laboratories:
* Configuration of an automated lock-acquisition sequence for an optical Fabry-Perot cavity.
* Milliradian-precision stabilization of two independent lasers with a digital phase-lock loop.
* Measurement of the transfer-function of a feedback-loop used to stabilize a high-finesse cavity up to 500 kHz.
* Active compensation of the acoustic resonances of a piezo-electric actuator using PyRPL's IIR filter.
In order to enhance the reproducibility of these results and broaden the adoption of PyRPL in the community, we provide, for each application, a Jupyter notebook demonstrating the step-by-step configuration of PyRPL towards the desired goal [26].
### Fabry-Perot lock acquisition
With analog electronic systems, the lock acquisition sequence is often a tedious manual process, involving several steps, such as manually approaching the resonance, switching on the integrator gain, increasing the proportional gain... The situation is even more dramatic when several optical elements are placed in series: then a loss-of-lock of a single element requires a manual intervention to relock all subsequent elements in the chain. In contrast, the possibility to control and reconfigure the DSP modules remotely with PyRPL makes it possible to fully automate the lock-acquisition sequence.
In order to quickly configure and optimize lock-acquisition sequences, we have developed a pure software module (with no direct counterpart in the FPGA board): the Lockbox module. A specific lockbox type, containing a list of inputs and outputs, is defined for each type of optical system by subclassing the Lockbox class. For instance, the FabryPerot Lockbox contains "reflection", "transmission", and "pdh" error signals, all displayed in Fig. 9.
Since the expected signal shape \(x_{e}(\Theta)\) is explicitly provided by the model, it is possible to stabilize the system at different setpoints \(\Theta\) (in the example of the FabryPerot Lockbox, \(\Theta\) is the cavity detuning). Furthermore, since optical elements are often subject to misalignments and loss of contrast, we typically allow for a vertical offset \(y_{0}\) and scaling factor \(g\) in the actual error signal \(\langle x_{e}\rangle(\Theta)\). These parameters are experimentally determined by recording the error signal while sweeping the actuator over the relevant range. This calibration procedure is fully automated such that it can be repeated whenever an alignment change is suspected.
Footnote 2: The overall gain of the loop is specified in dimensionless units, such that the controller gain can be adjusted to compensate for variations of the slope \(\frac{\partial x_{e}(\Theta)}{\partial\Theta}|_{0}\).
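The calibration itself amounts to a linear least-squares fit of \(y_{0}\) and \(g\) against the model shape. A minimal offline sketch with a synthetic reflection dip is given below; the model shape and noise level are purely illustrative.

```python
import numpy as np

def calibrate(measured, expected):
    """Fit offset y0 and scale g such that measured ~ y0 + g * expected."""
    design = np.vstack([np.ones_like(expected), expected]).T
    (y0, g), *_ = np.linalg.lstsq(design, measured, rcond=None)
    return y0, g

theta = np.linspace(-5, 5, 500)              # detuning in cavity bandwidths
expected = 1 - 1 / (1 + theta**2)            # ideal reflection error-signal shape
measured = 0.12 + 0.8 * expected + 0.01 * np.random.randn(theta.size)

y0, g = calibrate(measured, expected)
```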
We have found that a simple yet robust lock-acquisition could be achieved on most optical systems thanks to a generic sequential process: the lock-acquisition sequence is divided into successive stages, where each stage activates a specific feedback loop (see Fig. 10). For each stage in the sequence, the user specifies the error signal \(x_{e}(\Theta)\) to stabilize, and the specific setpoint \(\Theta_{0}\). Switching between various error signals proves to be useful since some error signals commonly feature multiple fixed points \(\{\Theta_{i}\}_{0\leq i<N}\) such that \(\langle x_{e}\rangle(\Theta_{i})=\langle x_{e}\rangle(\Theta_{0})\). On the other hand, by choosing \(\Theta_{0}\) on a per-stage basis, the user can avoid singular points where \(\partial\langle x_{e}\rangle/\partial\Theta=0\).
This example demonstrates how the extra abstraction layer introduced by the Lockbox module can be used to configure a sequence to approach the resonance and acquire the lock reproducibly on a high-finesse optical cavity. Fig. 10a is a screenshot of the widget used to define the various stages in the lock-acquisition sequence. We record and display in Fig. 10b the evolution of the various error signals (reflection in red, pdh in black) as well as the piezo output (in blue) during the sequence:
* Stage 0: The integral register of the piezo output is set to 1 V in order to reproducibly approach the cavity resonance from the high-voltage side.
* Stage 1: The controller attempts to stabilize the "reflection" signal to a detuning equal to 3 cavity bandwidths. As a consequence, the integrator of the piezo output immediately starts to drift towards lower voltages until the cavity resonance is nearly reached (at a voltage of 0.36 V). In order to avoid overshooting over the resonance, the gain factor is reduced to 0.001 during this stage. At the end of Stage 1, the cavity is thus safely stabilized close to resonance, in a region where the sign of the PDH error signal matches that of the laser detuning.
* Stage 2: After 630 ms in the previous state, the controller switches to a stabilization on the PDH
error signal. The system thus quickly converges towards the stable attractor at 0-detuning. This is clearly visible on the reflection error signal which reaches its minimal level (see Fig. 10c).
The Lockbox even has an "auto-lock" option that monitors the error signals continuously, detects a potential loss-of-lock and automatically launches the re-locking sequence.
### Loop transfer-function measurement
While a simple optimization based on trial and error is often sufficient to tune the parameters of a PID controller, the knowledge of the total loop transfer function is of utmost importance in order to assess the true limitations and optimize the system. However, in many cases, some elements in the loop, such as piezoelectric actuators, have a convoluted frequency response that is hard to characterize separately. In this example, we use the network analyzer embedded in PyRPL to probe the closed-loop transfer function of a laser-cavity lock, and deduce the complex frequency response of the piezoelectric actuator.
To this end, we set the "output_direct" of the network analyzer to the Red Pitaya output "out2" that is also used for feedback control (see the web page for full configuration instructions [26]). Owing to the output summation stage of PyRPL, the signal fed to the piezo-electric actuator is thus the sum of the network analyzer modulation
Figure 8: **Graphical User Interface of the IIR module**. To help with the design of the filter, the measured actuator response curve can be selected in the upper panel. The actuator response (pink), current filter response (blue), and their product (cyan) are represented in the Bode plot below. A response as flat as possible in both amplitude and phase is usually achieved by adding zeros and poles in the lists at the bottom of the screen. The rightmost panel allows to choose the input, optional filter, “output_direct”, the downsampling factor (loops), and overall gain of the filter.
and the feedback signal derived from the PDH error signal (see Fig. 11a). The network analyzer input is set to "out2" as well, such that the measured transfer function \(G_{\text{closed-loop}}\) approaches 0 at low frequency, where the feedback signal nearly compensates the modulation, and 1 at high frequency, where the gain of the controller is negligible. The closed-loop and open-loop transfer functions are linked via the relation:
\[G_{\text{open-loop}}=1-1/G_{\text{closed-loop}}. \tag{8}\]
Fig. 11c shows the measured transfer function in orange and the deduced open-loop transfer function in blue. In order to accurately deduce the high-frequency open-loop behavior, where \(G_{\text{closed-loop}}\) closely approaches 1, a careful characterization of the background signal has to be performed (see the webpage for details [26]).
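In post-processing, the open-loop response and the stability margins discussed below can be obtained with a few lines of NumPy. The snippet uses a synthetic integrator loop rather than measured data, and follows the sign convention of the text, where the critical point of the Nyquist diagram is (1, 0).

```python
import numpy as np

f = np.logspace(1, 5, 1000)                     # frequency axis in Hz
g_open_true = 1e3 / (1j * f)                    # synthetic integrator, unit gain at 1 kHz
g_closed = 1 / (1 - g_open_true)                # what the network analyzer would measure

g_open = 1 - 1 / g_closed                       # Eq. (8)

i_ug = np.argmin(np.abs(np.abs(g_open) - 1))    # index of the unit-gain frequency
f_unit_gain = f[i_ug]
phase_margin = np.degrees(np.abs(np.angle(g_open[i_ug])))   # distance from the (1, 0) point
```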
The open-loop transfer function can be decomposed as the product of three elements: the PID controller, \(G_{\text{controller}}\), an analog low-pass filter \(G_{\text{RC}}\) with a cutoff frequency of 50 Hz, and the electro-mechanical response of the piezoelectric actuator itself, \(G_{\text{piezo}}\). The total open-loop transfer function matches the product \(G_{\text{RC}}G_{\text{controller}}\) very well below 10 kHz. However, at higher frequencies, mechanical resonances of the piezoelectric actuator manifest themselves as sharp peaks in the transfer function.
The stability of the loop can be assessed in a straightforward manner from the Nyquist diagram represented in Fig. 11b. In this plot, the total open-loop gain is represented as a parametric curve in the complex plane. The Nyquist stability criterion states that the loop is stable as long as the point (1, 0) is not encircled by the response curve. We thus materialize the phase margin--the complex phase of the open-loop transfer-function at the unit-gain frequency--as a green dot. In this example, we have chosen to match the PI corner frequency of the
Figure 10: **Complex lock acquisition sequence**. (a) As part of the Lockbox module, a locking sequence is defined as a succession of steps, which can use different locking schemes configurable via the GUI. (b) Evolution of the reflection and PDH error signals, together with the piezoelectric actuation signal as a function of time, during the lock-acquisition sequence. Stage 1 consists of a ”side-of-the-fringe” lock with a detuning of -3 cavity bandwidths, during which an integrator is drifting until the cavity resonance condition is reached. At stage 2, the controller switches to PDH stabilization at the cavity resonance. (c) Reflection and PDH error signal shapes acquired during a sweep of the cavity length (with the same y-scale). (d) Zoom on the central part of trace (b).
Figure 9: **GUI panel describing the various inputs of the FabryPerot Lockbox**. In this example, the transmission signal is connected to the analog input “in2” of the Red Pitaya, the reflection signal is connected to “in1”, and the PDH error signal is generated thanks to an EOM connected to “out1” (see Fig. 7). The parameters for the modulation/demodulation, directly editable in the “pdh” input panel are passed transparently to the underlying IQ module. The button “calibrate” below each plot can be used to determine the offset and amplitude of the corresponding error signal experimentally.
PID controller with the analog low-pass filter cutoff, such that the phase margin approaches \(90^{\circ}\), as expected for a perfect integrator. On the other hand, the gain margin (the maximum gain increase allowed by the magnitude of the transfer function at the 0-phase crossings), materialized here by a red dot, is directly limited by the large piezoelectric resonances, which manifest themselves as large circles in this diagram. In the next section, we show how to use the IIR filter to compensate for the piezo-electric resonances and significantly increase the gain of the feedback loop without any unstable behaviour.
### Lock optimization with IIR module
The example of the previous section illustrates how the convoluted response of mechanical actuators tends to strongly impede the performance of control loops in optical systems. However, this limitation can be overcome: in fact, as long as the plant's response doesn't have any zero in the right-half plane--such a system is also called minimum-phase--it admits an inverse that is causal and bounded, and that could thus be implemented in a specifically designed controller [1]. However, the complexity of the required filter tends to divert the efforts of experimentalists towards mechanical engineering solutions, either attempting to push the mechanical modes towards higher frequencies or to damp the most problematic ones. Yet, digital control systems open an avenue for in-situ tuning of active compensation filters. Earlier work has demonstrated the compensation of the 10 main acoustic resonances of a piezoelectric actuator using a 25600-coefficient FIR filter [32]. Here, we demonstrate a similar result with the IIR module of PyRPL. As for the other examples discussed in this section, detailed instructions on how to tune the filter for the specific system to be stabilized are provided [26]. We think that, together with the ability to characterize the actuator's transfer function in-situ (see section IV.2), this functionality should broadly popularize active cancellation techniques among the community.
Fig. 8 shows the GUI of the IIR module, once fully configured to compensate the main resonances of the piezoelectric actuator characterized in section IV.2. To help with the filter design, the actuator's transfer function is displayed in the plot window (pink), together with the current filter's transfer function (in blue) and the product of these 2 curves (in cyan). The poles and zeros of the filter's transfer function are added and easily modified with a user-friendly graphical interface--zeros and poles are represented as dots and crosses respectively in the graph, and can be selected by a mouse click--until the amplitude response of the product curve is approximately flat. The final IIR filter is composed of 10 pairs of zeros and poles in the complex plane between 25 kHz and 90 kHz and is implemented with a delay of 10 cycles (see Appendix A for details). The \(\sim 200\) ns delay of the controller (\(\sim 100\) ns for the DAC and ADC, and \(\sim 100\) ns due to the IIR filter pipeline itself) enables, in practice, the compensation of mechanical resonances up to approximately 500 kHz. As visible in the second graph of Fig. 8, the phase response of the product curve is also approximately flat as a function of frequency, indicating that the actuator is indeed close to a minimum-phase system.
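The same filter can be configured without the GUI. The sketch below shows the expected structure of such a script, with placeholder zeros and poles (given as continuous-time Laplace frequencies, see Appendix A) rather than the values used in this section; the exact `setup()` signature is an assumption.

```python
from pyrpl import Pyrpl

p = Pyrpl(config="cavity_lock")                 # placeholder configuration name
iir = p.rp.iir

# Complex-conjugate pairs describing one resonance/anti-resonance to compensate;
# the numerical values are placeholders, not those of the actuator measured here.
zeros = [-1e3 + 30e3j, -1e3 - 30e3j]
poles = [-2e3 + 32e3j, -2e3 - 32e3j]

iir.setup(zeros=zeros, poles=poles, gain=1.0,
          input="in1", output_direct="off")     # route the output via the DSP multiplexer
```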
We then proceed to close the loop with the designed filter and test the resulting performances. The total controller setup is composed of a PI controller and the IIR filter in series. In order to keep a similar phase margin as previously, we keep the same PI corner frequency, but increase the overall gain of the loop by a factor \(\sim 14\). The transfer function of the loop is measured as previously and displayed in Fig. 12. As expected, the overall loop gain has been increased by 23 dB, while maintaining a gain margin of 2 (limited by a higher-frequency mode of the piezo-electric actuator at 60 kHz that is not compensated for). Accordingly, the unit-gain frequency is pushed by more than one decade from 500 Hz to 10 kHz thanks to the active compensation.
Figure 11: **Measuring a loop transfer-function**. Generic feedback scheme (a) where the controller (here the Red Pitaya) is acting on the plant (here an optical cavity). The network analyzer adds a perturbation to the system and measures the total response including perturbation and counter-action from the controller. This effectively measures the closed-loop transfer function \(G_{\text{closed-loop}}\) (orange on (c)) of the system. The open-loop transfer function \(G_{\text{open-loop}}\) (blue on (c)) is deduced using Eq. (8) of the main text. The black dotted line corresponds to the expected transfer function of the loop, given by the product of the PID-controller and analog low-pass filter transfer functions. (b) The open-loop transfer function is represented on a Nyquist diagram where the phase margin and gain margin are represented by a green and red dot, respectively. The phase margin is limited by the phase-lag of the low-pass analog filter while the gain margin is limited by the piezo-electric actuator resonances.
### Phase-locked loop
Synchronizing two independent laser beams is a common need in various areas of experimental physics, including frequency comb stabilization, Doppler cancellation in optical fiber links, and optical time transfer. Generally, this is achieved by an optical phase-lock loop (PLL), where the difference frequency between a slave and a master laser is compared to a radio-frequency local oscillator. In practice, the local oscillator and the optical beatnote are sent into a phase detector, which makes it possible to stabilize the phase difference between these two signals.
In the simplest form, a mixer can generate an error signal proportional to the sine of the phase difference. Such an "analog phase detector" can in principle detect arbitrarily small phase drifts. However, the monotonic range doesn't exceed \(\pi\), which, for one thing, limits the lock capture range (i.e. the maximum frequency difference for which lock can be acquired) and, for another, can lead to "phase slips" whenever the lock fails to stabilize the phase below this critical value.
A phase-frequency detector [33], consisting of a counter incremented upwards by the optical beatnote and downwards by the radiofrequency local oscillator, has a much larger monotonic range (determined by the saturation value of the counter), but this comes at the expense of a limited precision, as it is only sensitive to the edges of the input signals rather than their phase integrated over a full period [34] (the situation is even worse in a digital signal processing environment, where edge detection is time-binned by an external clock).
All-digital PLLs [35], on the other hand, proceed by explicitly evaluating the phase difference between the 2 input signals. Since this signal is inherently defined on a \(2\pi\) interval, it needs to be phase-unwrapped in order to extend the range of the phase signal. As a consequence, such phase detectors feature a precise and linear behavior over a large range.
In this example, we use the CORDIC phase estimator of PyRPL to phase-lock 2 Mephisto YAG lasers. The algorithm is based on a binary estimation of the phase knowing the I and Q coordinates of the demodulated signal. First, the signs of I and Q give the quadrant the phase lies in. The implementation in PyRPL includes an overflow procedure that, when the phase jumps from the last quadrant to the first (resp. from the first to the last), increments (resp. decrements) a turn counter. The IQ vector is then rotated into the \([-\frac{\pi}{4},\frac{\pi}{4}]\) range and the test condition of the binary search is the sign of the vertical coordinate Q of the vector. The algorithm recursively applies pseudo-rotation matrices of decreasing magnitude, in the clockwise or counter-clockwise direction depending on the sign of Q, and accumulates the corresponding angle. The matrix coefficients are of the form \(2^{-n}\), allowing a single-cycle computation with additions and bit-shifts. The phase is coded on a 14-bit register: the 4 most significant bits consist of a 2-bit turn counter, allowing phases between \(-4\pi\) and \(4\pi\) to be encoded, and 2 bits for the quadrant. Nine stages of recursive search then complete the 10 least significant bits, with a quantization error of \(0.11^{\circ}\). With 14 bits over an \(8\pi\) range, the theoretical quantization is \(8\pi/2^{15}=0.044^{\circ}\). This could be approached by a tenth stage in the algorithm, but would require extra bits for all registers and a careful truncation of the result. The precision is also degraded by a factor \(4/\pi\) due to the use of the non-optimal rotation angles \(\tan^{-1}(2^{-k})\) instead of \((2^{-k})\frac{\pi}{4}\).
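A floating-point version of this vectoring scheme is sketched below to illustrate the principle: it handles the quadrant explicitly and accumulates the applied pseudo-rotation angles, but omits the fixed-point details and the turn counter of the FPGA implementation.

```python
import numpy as np

def cordic_phase(i, q, stages=9):
    """Estimate atan2(q, i) by driving q to zero with +/- atan(2**-k) pseudo-rotations."""
    phase = 0.0
    if i < 0:                          # pre-rotation into the right half-plane
        phase = np.pi if q >= 0 else -np.pi
        i, q = -i, -q
    for k in range(stages):
        d = -1.0 if q > 0 else 1.0     # rotate towards the positive-i axis
        i, q = i - d * q * 2.0**-k, q + d * i * 2.0**-k
        phase -= d * np.arctan(2.0**-k)
    return phase

print(cordic_phase(np.cos(0.7), np.sin(0.7)))   # ~0.7 rad
```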
The beatnote between the 2 lasers is directly fed into an analog input of the Red Pitaya and demodulated at 9 MHz by an IQ module. The output multiplexer of the IQ module is set to the CORDIC phase estimator and fed to the actuators of the slave laser via several PID modules (see Fig. 13 and the notebooks [26] for configuration details). In order to simultaneously achieve a large actuation
Figure 12: **Optimized lock using the IIR filter**. (a) Open-loop transfer-function of the unoptimized lock described in Fig. 11 (blue curve) together with that of the optimized lock (orange) where the piezoelectric resonances have been numerically compensated with the IIR filter. The zeros and poles used for this particular filter implementation are listed in Fig. 8. The \(\sim 23\) dB increase in overall loop gain corresponds to the unit-gain frequency increasing from \(\sim 500\) Hz to \(\sim 10\) kHz. (b) Noise spectrum of the PDH error signal with (blue) and without (red) IIR filter. The variance (obtained by integrating the noise spectrum) decreases from \(31.8\) mV\({}^{2}\) to \(6.9\) mV\({}^{2}\) with the IIR filter.
Figure 13: **Implementation of a Phase-Lock Loop**. (a) Schematics of the optical and analog electronic setup: out1 and out2 are connected to fast and slow inputs of the piezoelectric laser actuator respectively. The pulse-width modulation output PWM0 is connected to the crystal temperature controller. The beatnote signal obtained by superimposing the two laser beams is acquired on a photodiode connected to in1. (b) Schematic of the IQ module used to compute the phase difference between the two lasers. The signal is demodulated at the desired beatnote frequency, and the angle in the rotating frame is estimated from the two quadratures via the CORDIC phase estimator. The error signal is then sent to cascaded PID filters, that control the laser actuators: the output of PID1, used for the fast piezoelectric actuator is maintained in the middle of its range by being fed to the slow piezoelectric controller (PID2). In turn, the output of PID2 is fed to the input of the laser temperature controller (PID3) for long term stabilization. (c) Spectrogram of the beatnote signal during lock acquisition: as the lock is switched on, the beatnote frequency quickly drifts towards the desired value of 9 MHz. (d) Error signals during the lock acquisition sequence: the lock is enabled at \(t=0\), after a short transient, the CORDIC error signal, as well as all outputs except for the laser temperature stabilize around 0 V. At \(t=18\) s, the phase setpoint is varied stepwise by increments of \(60^{\circ}\) to demonstrate the ability of the setup to stabilize the beatnote around an arbitrary phase. (e) Zoom on the lock-acquisition sequence around \(t=0\). Curve colors are identical to those of panel (d). In addition, we estimate the unsaturated evolution of the beatnote phase by unwrapping the CORDIC error signal (dashed curve). (f) IQ diagram of the demodulated beatnote amplitude for the various setpoints. The standard deviation of the phase in each step is approximately \(0.5^{\circ}\).
bandwidth and dynamic range while minimizing the electronic and digitization noise, we use three distinct actuators: the output "fast piezo" is directly connected to one of the terminals of the piezoelectric actuator, yielding a bandwidth of 100 kHz; the output "slow piezo" is connected to the other piezo terminal via a high-voltage amplifier and an analog low-pass filter (10 Hz). Finally, the output "temperature" controls the temperature of the YAG crystal via a pulse-width modulation output of the Red Pitaya (see Fig. 13a). The PID modules are connected in series, such that the outputs of the faster actuators are maintained close to the middle of their range by the action of the slower actuators.
Fig. 13d shows the time evolution of the various error signals during the lock acquisition sequence. During the first part of the sequence, for times \(-1\) s \(\leq t\leq\) 0 s, only the fast piezo-lock is enabled. The beatnote frequency is still far from that of the local oscillator, such that the demodulated signal rotates quickly in the IQ plane. Even though the CORDIC error signal saturates at \(\pm 4\pi\), the signal is reset to \(\pm 2\pi\) upon overflow. As a consequence, the error signal has a correct sign during this stage, with an average value of \(\pm 3\pi\), and the fast-piezo signal is saturated at -1 V. At time \(t=0\), the other lock-channels are activated simultaneously to acquire the lock. Fig. 13c shows the evolution of the beatnote spectrum during this stage. As the frequency \(\nu_{2}\) of laser 2 continuously drifts towards \(\nu_{1}+9\) MHz, the frequency of the beatnote \(|\nu_{2}-\nu_{1}|\) undergoes a sharp direction change when \(\nu_{2}=\nu_{1}\). After 0.6 s, the beatnote is stabilized to the desired value, with all error signals centered in the middle of their range, except for the laser temperature. Fig. 13e shows a zoom around the lock-acquisition moment. Even though the CORDIC error signal has a discontinuous behavior upon saturation, it can be unwrapped in post-processing to yield the continuous phase-evolution of the beatnote (dashed line).
At time \(t=18\) s, the phase setpoint of the lock is incremented by steps of \(60^{\circ}\) to demonstrate the ability of the loop to stabilize the beatnote at arbitrary phases. Fig. 13f shows the evolution of the two quadratures as a parametric plot in the complex plane. The standard deviation of the individual blobs indicates a precision of \(0.5^{\circ}\) for the stabilization of the beatnote.
## V Conclusion
In conclusion, the open-source software package PyRPL is a streamlined and pragmatic tool for implementing automatic digital feedback controllers in quantum optics experiments. The transition from traditional analog controllers to this digital approach presents several advantages. Significant cost reductions are achieved as typically a single Red Pitaya board can replace an array of equipment including signal generators, sweep generators, PID controllers, and analog mixers. Importantly, digital operation does not result in any detectable performance loss due to digitization noise. Furthermore, the approach effectively bypasses the issue of fluctuating offsets that can plague systems reliant on analog demodulation. A standout feature of PyRPL is its modular architecture which, when paired with a high-level programming interface and GUI, enables the agile development and debugging of complex feedback systems. Once a feedback loop has been configured, it is typically operated in a push-button manner owing to the lock-acquisition logic included in PyRPL. Finally, PyRPL's open-source nature allows it to be easily tailored to specific needs and enables constant refinement by a growing user community. This adaptability and continual evolution ensure PyRPL remains a dynamic and valuable tool in the field of quantum optics experiments.
###### Acknowledgements.
The authors thank Pierre Clade for extended discussions. They acknowledge support from ANR projects ExSqueez (ANR-15-CE30-0014) and QFilters (ANR-18-JSTQ-0002). L. N. acknowledges support from the FP7-cQOM Initial Training Network. S. C. acknowledges support from a Marie Skłodowska-Curie post-doctoral fellowship (project 660941 - SQZOMS). M. C. and P.-E. J. acknowledge support from a PhD grant of 'Region Ile-de-France' (SIRTEQ projects OSLO and QuBeat).
## Data Availability Statement
The data that support the findings of this study are openly available in [https://github.com/lneuhaus/pyrpl/tree/main/docs/example-notebooks/article_examples](https://github.com/lneuhaus/pyrpl/tree/main/docs/example-notebooks/article_examples), reference number [26].
## Appendix A Infinite Impulse Response filter
The following theoretical discussion closely follows Ref. [36]. Causality ensures that the output of a filter at cycle \(n=n_{0}\) may only depend on the input signal for \(n\leq n_{0}\):
\[y(n)=\sum_{i=0}^{\infty}x(n-i)h(i). \tag{12}\]
A useful representation of the impulse response \(h\) is given by its Z-transform:
\[H(z)=\sum_{n=0}^{\infty}h(n)z^{-n}, \tag{13}\]
which is defined within a region of convergence \(|z|>r\). Restricting our attention to stable filters, where bounded input signals lead to bounded outputs, we have \(r<1\)
and the system can be fully described equivalently by its frequency response:
\[H(e^{j\omega_{r}})=\sum_{n=0}^{\infty}h(n)e^{-j\omega_{r}n}. \tag{10}\]
In the above formula, the radian frequency \(\omega_{r}\) takes values in the interval \([-\pi,\pi]\), and it is linked to the usual continuous-time angular frequency \(\omega\) by the relation \(\omega_{r}=\omega T\).
Most practical filters can be represented by a rational Z-transform over the region of convergence:
\[H(z)=\frac{P(z)}{Q(z)}, \tag{11}\]
where \(P(z)\) and \(Q(z)\) are polynomials in \(z\). We denote \(z_{i}\) and \(p_{i}\) the zeros of the numerator and denominator respectively, hence obtaining:
\[H(z)=k\frac{\prod_{i=1}^{M}(1-z_{i}z^{-1})}{\prod_{j=1}^{N}\left(1-p_{j}z^{-1}\right)}. \tag{12}\]
For feedback applications, we will restrict our discussion to proper IIR filters that obey \(M\leq N\), since improper filters with \(M>N\) are expected to result in an unstable closed-loop behavior at high-frequencies where unavoidable delays occur (e.g. from the ADCs and DACs), thus impacting the phase margin. Moreover, to ensure a real time-domain representation \(h(n)\in\mathds{R}\), any non-real pole or zero must be accompanied by its complex conjugate in Eq. (12).
In summary, a practical IIR filter can be defined unambiguously by the lists of zeros and poles \(\{z_{i}\}_{1\leq i\leq M}\) and \(\{p_{j}\}_{1\leq j\leq N}\) with \(M\leq N\) and the overall gain factor \(k\). In the next section, we describe how one can implement an arbitrary filter, with a transfer function described by Eq. (12) in a real-time signal processing environment.
### IIR filter implementation
When designing an IIR filter, we specify the desired transfer function as a list of poles and zeros, together with a prefactor \(C\) corresponding to the gain at zero-frequency. In order to facilitate the interpretation in terms of actual frequency, the user specifies the zeros and poles \(\{\tilde{z}_{i}\}_{1\leq i\leq M}\) and \(\{\tilde{p}_{j}\}_{1\leq j\leq N}\) in terms of continuous-time Laplace frequencies \(s=i\omega+\gamma\). Starting from these inputs, the arithmetic manipulations necessary for the practical implementation of the filter on the FPGA board are described in a public Python file. The main steps are summarized here; a small numerical sketch illustrating some of them is given at the end of this appendix:
Footnote 5: [https://github.com/lneuhaus/pyrpl/blob/master/pyrpl/hardware_modules/iir/iir_theory.py](https://github.com/lneuhaus/pyrpl/blob/master/pyrpl/hardware_modules/iir/iir_theory.py)
1. The function IirFilter.proper_sys ensures that all non-real zeros/poles have a conjugate partner. Otherwise, the missing items are appended to the list. Once this operation has been completed, if the number \(M\) of zeros exceeds the number \(N\) of poles, extra-poles with a large real Laplace frequency are appended to the list to make the system strictly proper. Contrary to most other DSP operations in PyRPL, the calculation of the IIR filter output \(y(n)\) is too complex to be carried out in a single FPGA cycle. Instead, a number of cycles \(N_{\text{loops}}=\lceil N/2\rceil\) is required. Since the effective sampling time is given by \(T_{\text{IIR}}=N_{\text{loops}}T\), it is important to determine \(N_{\text{loops}}\) at such an early stage in the process.
2. The function IirFilter.rescaled_sys rescales the zeros and poles in terms of angular frequency, and calculates the prefactor \(k\) of Eq. (12) as a function of the specified DC-gain \(C\).
3. Before implementing the filter, we convert the continuous time zeros and poles into discrete-time zeros and poles of the Z-transform via the mapping: \[z_{i} =e^{\tilde{z}_{i}T_{\text{IIR}}}\] \[p_{i} =e^{\tilde{p}_{i}T_{\text{IIR}}}.\]
4. The function IirFilter.residues performs the partial fraction expansion of the desired transfer function using the Heaviside cover-up method: \[H(z) =k\frac{\prod_{i=1}^{M}(1-z_{i}z^{-1})}{\prod_{j=1}^{N}(1-p_{j}z^{-1})}\] (13) \[H(z) =D+\sum_{j=1}^{N}\frac{r_{j}}{1-p_{j}z^{-1}}\,,\] (14) where \(D\) is a constant term that vanishes for strictly proper filters (\(M<N\)) and is non-zero only when \(M=N\). The residues \(\{r_{j}\}_{1\leq j\leq N}\), poles \(\{p_{j}\}_{1\leq j\leq N}\), and constant feed-through \(D\) are an unambiguous representation of the filter. Furthermore, contrary to the factored representation (13) involving the poles, zeros, and gain \(k\), an expanded form such as Eq. (14) lends itself naturally to a modular implementation where the filter output is obtained by summing the outputs of different modules, provided each term in the Z-transform can be implemented by relatively simple DSP operations.
5. In order to implement a discrete-time filter with a Z-transform of the expanded form above, we combine pairs of terms after the summation symbol into second-order sections: \[H(z) =D+\sum_{j=1}^{N^{\prime}}\frac{b_{0}+b_{1}z^{-1}}{1+a_{1}z^{-1}+a_{2}z^{-2}},\] (11) with \[b_{0} =r_{j}+r_{j}^{\prime},\;\;b_{1}=-p_{j}r_{j}^{\prime}-p_{j}^{\prime}r_{j},\] (12) \[a_{1} =-p_{j}-p_{j}^{\prime},\;\;a_{2}=p_{j}p_{j}^{\prime}.\] (13) By making sure that complex poles are paired with their complex conjugate (\(p_{j}^{\prime}=\bar{p}_{j}\)), we ensure that all coefficients \(b_{0},b_{1},a_{1},a_{2}\) are real, as the residues \(r_{j}\) and \(r_{j}^{\prime}\) are also complex conjugates. Each term in this sum is a simple IIR filter that can be physically implemented by calculating the output at cycle \(n\) from the current and previous inputs \(x(n)\) and \(x(n-1)\) and previous output values, \(y_{j}(n-1)\) and \(y_{j}(n-2)\): \[y_{j}(n)= b_{0}x(n)+b_{1}x(n-1)\] \[-a_{1}y_{j}(n-1)-a_{2}y_{j}(n-2).\] Indeed, by moving the last two terms to the left-hand side and by taking the Z-transform of the equation, we get: \[Y_{j}(z)[1+a_{1}z^{-1}+a_{2}z^{-2}]=X(z)[b_{0}+b_{1}z^{-1}].\] (14) Hence, the Z-transform of the second-order section \(y_{j}\) is given by: \[H_{j}(z)=\frac{Y_{j}(z)}{X(z)}=\frac{b_{0}+b_{1}z^{-1}}{1+a_{1}z^{-1}+a_{2}z^{-2}}.\] (15) The constant feedthrough term \(D\) is implemented as a separate second-order section with \(b_{0}=D\), and \(a_{1}=a_{2}=b_{1}=0\). All the coefficients \(b_{0}\), \(b_{1}\), \(a_{1}\) and \(a_{2}\) for the individual biquad filters are calculated by the function IirFilter.rp2coefficients.
6. Since the second-order sections are sequentially implemented in the FPGA, we rearrange the different biquads by ascending frequency in order to minimize the delay experienced by the sections with the largest frequencies in the function IirFilter.minimize_delay.
7. In the function IirFilter.coefficients_rounded, we convert the floating number coefficients calculated above into fixed-point precision coefficients, and check for the magnitude of rounding errors. An error message is prompted when the relative error is too large.
In the Verilog code, we only implement a single biquad filter module, which computes the output for the coefficients of each second-order section in subsequent FPGA clock cycles. The cumulative sum of all second-order section outputs is the filter output. As mentioned above, this implementation leads to a reduced effective sampling rate equal to the FPGA clock rate divided by the number of second-order sections, i.e. by half the filter order. Filter coefficients are represented as fixed-point numbers with 3 (resp. 29) bits before (resp. after) the radix point. As a compromise between filter complexity and FPGA resources devoted to the IIR filter, we allow for a maximum filter order of 28, i.e. 14 second-order sections. The filter is preceded by a first-order low-pass filter to avoid aliasing when the sampling rate is significantly lower than the FPGA clock frequency.
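The following NumPy sketch illustrates steps 3 and 5 above for a single conjugate pole pair with an arbitrarily chosen residue: it maps a continuous-time pole to its discrete-time counterpart and builds the real coefficients of the corresponding second-order section. All numerical values are illustrative only.

```python
import numpy as np

fs = 125e6 / 2                          # effective IIR sampling rate for N_loops = 2
T = 1.0 / fs

p_s = -2e3 + 2j * np.pi * 30e3          # continuous-time pole (damped 30 kHz resonance)
r = 0.5 + 0.1j                          # residue associated with p_s (illustrative)

# Step 3: continuous-time -> discrete-time pole
p = np.exp(p_s * T)

# Step 5: combine the pole with its complex conjugate into one real biquad
b0 = 2 * r.real
b1 = -2 * (r * np.conj(p)).real
a1 = -2 * p.real
a2 = abs(p) ** 2

def biquad(x, b0, b1, a1, a2):
    """y(n) = b0 x(n) + b1 x(n-1) - a1 y(n-1) - a2 y(n-2)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = (b0 * x[n]
                + (b1 * x[n - 1] if n >= 1 else 0.0)
                - (a1 * y[n - 1] if n >= 1 else 0.0)
                - (a2 * y[n - 2] if n >= 2 else 0.0))
    return y
```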
|
2303.18097 | Phase noise analysis of mutually synchronized spin Hall nano-oscillators | The reduction of phase noise in electronic systems is of utmost importance in
modern communication and signal processing applications and requires an
understanding of the underlying physical processes. Here, we systematically
study the phase noise in mutually synchronized chains of nano-constriction spin
Hall nano-oscillators (SHNOs). We find that longer chains have improved phase
noise figures at low offset frequencies (1/f noise), where chains of two and
ten mutually synchronized SHNOs have 2.8 and 6.2 dB lower phase noise than
single SHNOs. This is close to the theoretical values of 3 and 10 dB, and the
deviation is ascribed to process variations between nano-constrictions.
However, at higher offset frequencies (thermal noise), the phase noise
unexpectedly increases with chain length, which we ascribe to process
variations, a higher operating temperature in the long chains at the same drive
current and phase delays in the coupling between nano-constrictions. | Artem Litvinenko, Akash Kumar, Mona Rajabali, Ahmad A. Awad, Roman Khymyn, Johan Akerman | 2023-03-31T14:40:40Z | http://arxiv.org/abs/2303.18097v1 | # Phase noise analysis of mutually synchronized spin Hall nano-oscillators
###### Abstract
The reduction of phase noise in electronic systems is of utmost importance in modern communication and signal processing applications and requires an understanding of the underlying physical processes. Here, we systematically study the phase noise in mutually synchronized chains of nano-constriction spin Hall nano-oscillators (SHNOs). We find that longer chains have improved phase noise figures at low offset frequencies (\(1/f\) noise), where chains of two and ten mutually synchronized SHNOs have 2.8 and 6.2 dB lower phase noise than single SHNOs. This is close to the theoretical values of 3 and 10 dB, and the deviation is ascribed to process variations between nano-constrictions. However, at higher offset frequencies (thermal noise), the phase noise unexpectedly increases with chain length, which we ascribe to process variations, a higher operating temperature in the long chains at the same drive current and phase delays in the coupling between nano-constrictions.
Spin transfer and spin-orbit torque provide means to drive nanomagnetic systems into current tunable high-frequency precession [1; 2; 3]. The resulting microwave voltage signal can be used for communication applications [4; 5; 6; 7; 8] and spectral analysis [9; 10], where the small footprint, ready integration with CMOS technology, and wide frequency tunability make these oscillators particularly interesting. While STNOs, comprising ferromagnetic/non-magnetic/ferromagnetic structures, requires a somewhat complex fabrication process due to the current flowing out-of-plane, spin-orbit torque-driven spin Hall nano-oscillators (SHNOs) utilize in-plane currents in simple ferromagnetic/heavy metal bilayer systems [11; 12; 13; 14; 15; 16; 17; 18; 19; 20], where heavy metals (e.g. Pt [21], Ta [22; 23], and W [24; 15; 25]) produce pure spin currents through the spin Hall effect [26]. The simple geometry and in-plane current flow allow ease of fabrication, direct optical access, and the ability to synchronize multiple oscillators in chains and two-dimensional arrays [16; 27; 28], making such SHNOs promising candidates for emerging spintronic applications including Ising machines [29; 30; 31] and neuromorphic computing [32; 33; 34; 35]. For communication applications, the phase noise plays a crucial role, as it directly determines the performance of the system. To evaluate the potential of nano-oscillators for conventional signal processing applications, it is hence essential to characterize their phase noise performance [36; 37; 38; 39; 40; 41; 42], understand its physical origin, and suggest methods [43] for its improvement.
Here, we perform a comprehensive analysis of phase noise in single NC SHNOs as well as short (2 NCs) and longer (10 NCs) chains of mutually synchronized NC SHNOs and demonstrate that it can be significantly reduced compared to single SHNOs. We find that in the case of two NC SHNOs the mutual synchronization leads to an improvement of 2.8 dB in the \(1/f\) flicker frequency phase noise, which is very close to the theoretical prediction of 3 dB. In the longer chain of 10 NCs, the \(1/f\) noise improves by 6.2 dB, which is substantial but further (3.8 dB) from the theoretical expectation of 10 dB. We argue that this deviation originates from process variations between individual NCs, since the theoretical value assumes identical intrinsic frequencies of all oscillators. Somewhat unexpectedly, the white (thermal) frequency phase noise at higher noise frequencies is found to increase with chain length, being 2.1 dB worse for two NCs and 3.1 dB worse for 10 NCs, compared to single SHNOs. In addition to process variations, the longer chains also operate at higher temperature, due to the higher power dissipation, which may further increase the phase noise, in particular in the thermal region. As the measured linewidth improves substantially with chain length, we conclude that it is governed primarily by the \(1/f\) noise.
Single NC SHNOs and chains were fabricated from DC/RF magnetron sputtered W(5 nm)/NiFe(5 nm)/Al\({}_{2}\)O\({}_{3}\)(4 nm) stacks. The large spin Hall angle of W (\(|\theta_{SH}|>\)0.44) reduces the threshold current [15; 25] and the anisotropic magnetoresistance (AMR) of NiFe (0.65 %) provides a reasonable output power [21; 44]. The devices were patterned into 150 nm NCs with 200 nm center-to-center separation (in chains) using e-beam lithography followed by Ar-ion etching (for details see \(e.g.\)[45]). Figure 1a shows a schematic of a 10 NC chain.
Phase noise measurements were performed at fixed current and magnetic field and analyzed using a Hilbert transform technique [8]. Analysis of close-in phase noise at low offset frequencies of sub-Hz range requires that
the experimental time traces are accumulated over time scales of seconds, which would require Terabytes of data to be processed with direct signal sampling at a 40 GS/s rate. In order to reduce the amount of data to be processed for such a long time series, we performed SHNO signal down-sampling using a frequency mixer as shown in Fig. 1(b). The NC SHNO signal is downconverted to 10 MHz by adjusting the local oscillator (LO) frequency and captured with a real-time oscilloscope at a low sampling rate of 50 MS/s. We process the captured signal in several steps. First, the captured SHNO signal is filtered with an FIR digital band-pass filter with a central frequency of 10 MHz, a bandwidth of 12 MHz and a stopband attenuation of 60 dB to improve the signal-to-noise ratio (SNR), so that the amplitude of the SHNO signal is sufficiently higher than the RMS amplitude of the thermal noise floor. Note that even though the bandpass filter removes the thermal noise floor, it still allows the analysis of the close-in phase noise of the signal. Then, the instantaneous phase is extracted with a Hilbert transform [46; 47; 48] of the signal time traces. In the next step, the instantaneous phase signal is detrended. Finally, a power spectral density is calculated from the detrended instantaneous phase signal using an FFT.
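For illustration, the processing chain just described can be sketched in a few lines of Python; the filter parameters follow the text, whereas the input trace, the number of taps and the PSD settings are placeholders rather than the authors' actual analysis script.
```
# Sketch of the down-converted signal processing chain (placeholder values marked).
import numpy as np
from scipy.signal import firwin, filtfilt, hilbert, detrend, welch

fs = 50e6                                      # sampling rate of the oscilloscope, 50 MS/s
taps = firwin(1001, [4e6, 16e6], pass_zero=False, fs=fs)   # ~12 MHz band around 10 MHz
x = np.random.randn(2_000_000)                 # placeholder for the captured time trace

x_bp = filtfilt(taps, [1.0], x)                # FIR band-pass filtering to improve SNR
phase = np.unwrap(np.angle(hilbert(x_bp)))     # instantaneous phase via Hilbert transform
phase = detrend(phase)                         # remove the linear 2*pi*f0*t phase ramp
f, psd = welch(phase, fs=fs, nperseg=2**18)    # phase-noise power spectral density
```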
The free-running auto-oscillations with varying DC current (I\({}_{DC}\)) are shown in Fig. 2(a, b and c) for single (1 NC), two (2 NC) and ten (10 NC) mutually synchronized SHNOs, respectively. Figure 2d summarizes their operating frequency, where it can be observed that mutual synchronization of SHNOs leads to higher frequency tunability. This could be understood as an absolute increase in their magneto-dynamical region. Figures 2e and 2f show the linewidth and integrated output power for the oscillator chains. It is clear from the observed parameters that mutual synchronization leads to larger output power and lower linewidth for a larger number of synchronized oscillators in a chain, consistent with earlier work [27; 28].
The results of phase noise measurement are presented in Fig. 3(a-e) for a single and mutually synchronized 2 NC and 10 NC oscillators in a chain. Phase noise measurements are performed at I\({}_{DC}\) = 2.53 mA, 2.53 mA and 2.35 mA (also indicated by the yellow dashed line in Fig. 2a-c) for single and mutually synchronized 2 NC and 10 NC oscillators, respectively. As can be seen from Fig. 3(a) all devices demonstrate regions with \(1/f\) and white (thermal) frequency phase noise. Interestingly, in our SHNOs, the \(1/f\) phase noise corner appears at a much lower offset frequency of 50 kHz compared to the GHz frequencies for MTJ STNOs [40]. A lower value of the \(1/f\) corner leads to an improved linewidth at laboratory time scales as the \(1/f\) noise has a steep slope and a much higher contribution to the integrated power of phase noise.
A single SHNO exhibits a phase noise of 0, \(-17\), and \(-67\) dBc/Hz at the offset frequencies 100 Hz, 10 kHz, and 1 MHz, respectively. In Fig. 3(b) it can be seen that two mutually synchronized SHNOs demonstrate a 2.8 dB improvement in the \(1/f\) region, which is in good agreement with the theoretical expectation of 3 dB improvement for each doubling in the number of synchronized identical oscillators [49; 50; 51]. In case of 10 NC SHNOs, we expect to see a 10 dB improvement, but the experiment only showed a reduction of 6.2 dB. This may be attributed to process variations in the NC width, which lead to variations in the intrinsic frequency of the nano-constrictions in the chain. Process variations naturally become more noticeable in longer chains as the probability to find \(N\) identical oscillators decreases rapidly with \(N\). Another factor could be attributed to the geometry of the chain and its associated thermal effects. A chain of two identical nano-constrictions will retain the same zero difference in their relative frequencies even when the temperature and its gradient increase. However, in the case of more than two coupled NC-SHNOs, the inner and outer NCs will heat up differently, leading to a varying intrinsic frequency as a function of position in the chain.
Unexpectedly, in the region of white frequency phase noise, the 2 NC and 10 NC SHNO, instead of an im
Figure 1: (a) Schematic of a SHNO chain with 10 nano-constrictions in series. (b) The phase noise measurement setup, where LO is a local oscillator tuned around 18 GHz to down-convert the SHNO signal to 10 MHz. The mixer is a ZMDB-44H-K+ double-balanced mixer, and the LFP is a lumped-element lowpass filter with a cutoff frequency of 30 MHz and stopband attenuation of 40dB. The sampling rate of the oscilloscope is 50 MS/s.
provement, show an _increase_ of the phase noise by 2.1 and 3.1 dB, respectively, as compared to a single NC SHNO (see Fig. 3(d)). A possible explanation could be that process variations affect thermal noise much more than the \(1/f\) noise. From Fig. 2(d) we can deduce from the increase of the frequency variation with current that the
Figure 3: Phase noise spectrum plot for a single and mutually synchronized 2 NC and 10 NC SHNOs in a chain. The dashed vertical line represents the \(1/f\)–corner frequency of 50 kHz and separates regions with flicker frequency noise and white frequency phase noise. The steep reduction in phase noise above 6 MHz is associated with the applied bandpass filter used to improve SNR.
Figure 2: Free running properties of single and mutually synchronized 2 NC and 10 NC SHNO chains. Power spectral density (PSD) of the auto-oscillation for (a) single NC, (b) 2NC, and (c) 10 NC, respectively. Extracted (d) auto-oscillation frequency, (e) linewidth, and (f) integrated power. The dashed yellow line represents the current used during phase noise measurements.
nonlinearity of NC SHNO chains increases with the number of oscillators. This may lead to a significant shift in the corner of white frequency phase noise. Additionally, the temperature of the NC SHNO is higher for longer chains, which contributes to the region of up-converted thermal noise. From the inset of Fig. 3(a), where we plot frequency noise, it is more evident that for 2 and 10 NC the level of white frequency noise, which corresponds to the flicker frequency type of phase noise, increases with chain length.
To understand the extent of the temperature gradient in long chains, we performed COMSOL simulations of a 10 NC SHNO chain. We used the COMSOL model Electric Currents (ec) to simulate the current density variation in the nanoconstrictions together with the Heat Transfer in Solids (ht) module. Multiphysics simulations were performed using the Electromagnetic Heating (emh1) module. In our simulation, we took into account the 2 nm silicon oxide layer on top of the silicon wafer which has a significantly lower thermal conductivity of 1.4 W/(m*K). The base silicon wafer has a thermal conductivity of 34 W/(m*K). The simulations are performed using the measured resistivity for the thin films i.e. W (300 \(\mu\Omega\)-cm) and NiFe (40 \(\mu\Omega\)-cm). In order to reduce the simulation time and resources we simulate a limited chip area of 1.5x1.5x0.5 mm. Temperature boundary conditions of 293.15 K are applied at the edges of the simulated area. The top panel in Fig. 4 shows a thermal map for an applied DC current of 2.35 mA flowing through the chain. In order to visualize the temperature gradients we have plotted a temperature profile along x-axis in bottom panel of Fig. 4. It can be seen that the temperature gradient exponentially increases to the edge of an array. The temperature deviation \(\Delta\)T between the central and the outer NC SHNOs is 13 K. In our previous studies [52], we have experimentally observed a large change in operating frequency due to thermal effects. In our present work, we estimate that the temperature gradient contributes a 20 MHz change in intrinsic frequency between the oscillators. However, since a deviation of 20 MHz unequivocally falls within the broad locking range of SHNOs [27] it cannot be the main factor of the sufficient increase in phase noise. Another reason that can lead to an increase of phase noise in mutually synchronized chains of oscillators with primarily nearest-neighbour coupling is the phase delay in the coupling. In the paper [53] it has been shown that the total phase noise can sufficiently increase in a chain of oscillators with nearest-neighbour coupling. Since NC SHNO chains demonstrate positive nonlinearity the coupling between oscillators most likely happens through propagating spinwaves which may lead to a large delay. The phase delay of the coupling between NC SHNOs has to be explored further in order to fully understand its contribution to the phase noise increase in both flicker frequency and white frequency regions of the phase noise.
In summary, we have analyzed the phase noise for single, double and ten nano-constriction SHNOs. Two mutually synchronized SHNOs demonstrate a 2.8 dB reduction in phase noise, which corresponds well to the theoretical estimation of 3 dB. The longer chains of 10 nano-constrictions demonstrate an improvement of 6.2 dB, which is further from the theoretical value of 10 dB and can be associated with several factors such as _i_) process variation of the nano-constrictions, _ii_) temperature gradients within the chain making the NC SHNOs non-identical and increasing the overall temperature, and _iii_) phase delays in the coupling between nano-constrictions, which may lead to decoherence in the chain and elevated noise levels. Further phase noise measurements and analysis will be required for a more complete understanding of these different mechanisms and ways to mitigate their impact.
## Acknowledgement
This work was partially supported by the Horizon 2020 Research and Innovation Program (ERC Advanced Grant No. 835068 "TOPSPIN" and Grant No. 899559 "SpinAge",DOI 10.3030/899559) and the Swedish Research Council (VR; Dnr 2016-05980).
## Author declarations
### Conflict of Interest
The authors have no conflicts to disclose.
Figure 4: COMSOL simulation of 10 NC SHNOs in a chain. Top panel: a thermal map for an applied DC current of 2.35 mA. Bottom panel: a temperature profile along the x-axis, depicted as a dashed green line in the top panel. The temperature difference \(\Delta\)T between the central and the edge NC SHNOs is 13 K.
## Data Availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
|
2309.10490 | Comparative study of low-temperature opacities with GARSTEC models | We present a comparative study of the effect of low-temperature opacities on
stellar models up to the Red Giant branch (RGB), computed with the GARching
STellar Evolution Code. We have used two sets of low-temperature opacities;
ÆSOPUS (Æ) from the University of Padova and those from the Wichita
State University group (F05). In the relevant range of temperatures for this
study, log κ_Æ < log κ_F05. Therefore, to compare stellar
evolutionary tracks, we performed a solar calibration of the α_mlt, for
each set of low-temperature opacities. After carrying such a calibration, we
find that stellar evolutionary tracks are almost unaffected by the choice of
low-temperature opacities, with largest variations of 25-30 K at the latest
evolutionary stages of the RGB phase. | Pedro Diaz Reeve, Aldo Serenelli | 2023-09-19T09:57:37Z | http://arxiv.org/abs/2309.10490v1 | # Comparative study of low-temperature opacities with GARSTEC models
###### Abstract
We present a comparative study of the effect of low-temperature opacities on stellar models up to the Red Giant branch (RGB), computed with the GARching STellar Evolution Code. We have used two sets of low-temperature opacities: ÆSOPUS (Æ) from the University of Padova and those from the Wichita State University group (F05). In the relevant range of temperatures for this study, \(\log\kappa^{\rm E}<\log\kappa^{\rm F05}\). Therefore, to compare stellar evolutionary tracks, we performed a solar calibration of the \(\alpha_{mlt}\) for each set of low-temperature opacities. After carrying out such a calibration, we find that stellar evolutionary tracks are almost unaffected by the choice of low-temperature opacities, with the largest variations of 25-30 K at the latest evolutionary stages of the RGB phase.
## 1 Introduction
Several sets of Rosseland mean opacities for stellar interiors, with different physical inputs and conditions, have been developed, such as The Opacity Project OP [Badnell et al. (2005)] and OPAL [Iglesias & Rogers (1996)] among many others. In this research note we compared two sets of low-temperature opacities in the range \(3.50\leq\log T\leq 4.50\), which include molecules and dust as opacity sources in addition to atoms, and the impact they have on stellar evolutionary tracks for different masses, from the MS up to the RGB phase. The scope of the comparison encompasses the ÆSOPUS1 web interface (Accurate Equation of State and OPacity Unit Software) [Marigo & Aringer (2009)] and [Marigo et al. (2022)] and the database set up by the Wichita State University group [Ferguson et al. (2005)], hereafter F05. A recent study of ÆSOPUS opacities on the AGB phase has been presented in [Cinquegrana & Joyce (2022)]. Results presented here use the MB22 [Magg et al. (2022)] solar mixture, but we tested that other chemical compositions such as GS98 [Grevesse & Sauval (1998)] and AGSS09 [Asplund et al. (2009)] show similar results [Diaz Reeve & Serenelli (2023)]. Stellar models were computed using GARSTEC [Weiss & Schlattl (2008)] version 20.1.
Footnote 1: Available at [http://stev.oapd.inaf.it/cgi-bin/aesopus](http://stev.oapd.inaf.it/cgi-bin/aesopus)
## 2 Comparative study
### Preliminary Overview of the Opacity Data Sets
Left panel in Figure 1 shows the differences in the Rosseland mean opacity between \(\Lambda\)ESOPUS and F05, \(\Delta\log\kappa=\log\kappa^{\rm E}-\log\kappa^{\rm F05}\), for MB22 chemical composition and for the cases X = 0.70 and Z = 0.004, 0.02 and 0.04, for temperatures between \(3.50\leq\log T\leq 4.50\), throughout the range -8.00 \(\leq\log R\leq\) 1.00, where \(\log R=\log\rho-3\log T+18\). Such differences, in the range \(3.50\leq\log T\leq 4.00\) lie between \(+0.05/-0.08\) dex, except for a peak at \(\log T=3.60\) where differences reach up to \(-0.10\) dex. For \(4.00\leq\log T\leq 4.50\) differences seem to be in a wider range between \(\pm\) 0.10 dex and reaching at most \(-0.14\) dex at \(\log T=4.25\) for the case Z = 0.04. Differences appear to be larger with the increasing metallicity. Gray shaded regions show the differences for the whole \(\log R\) range, while maroon, yellow and purple lines show differences for the cases \(\log R\) = 0.00, -1.00 and -2.00 respectively, which span the \(\log R\) values that better reproduce the approximate conditions in 0.70 - 1.50 \(M_{\odot}\) stellar envelopes, up to the RGB phase.
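For reference, the variable log R used above is straightforward to evaluate; the helper below only restates the definition, with arbitrary example values.
```
# log R = log rho - 3 log T + 18 (rho in g/cm^3, T in K); example values are arbitrary.
def log_R(log_rho, log_T):
    return log_rho - 3.0 * log_T + 18.0

print(log_R(log_rho=-7.0, log_T=3.7))   # -> -0.1, inside the tabulated -8.00 <= log R <= 1.00 range
```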
### Solar Calibration
We carried out solar calibrations using the \(\Lambda\)ESOPUS and F05 opacities. The chemical composition is almost independent of the choice of low-temperature opacities. Initial abundances are \(X_{\odot}\) = 0.70988 (0.70982), \(Y_{\odot}\) = 0.27190
(0.27196) and \(Z_{\odot}\) = 0.01822 (0.01822) for the _AE_SOPUS (respectively F05) solar model. A more relevant difference appeared on the value of the \(\alpha_{mlt}\) parameter, as its calibration in a solar model is sensitive to the choice of low-temperature opacities. We found \(\alpha_{mlt}^{\rm E}\) = 2.0530 for _AE_SOPUS while for F05 the value was \(\alpha_{mlt}^{F05}\) = 2.1487, i.e. \(\alpha_{mlt}^{\rm E}<\alpha_{mlt}^{F05}\).
Near the solar surface at \(\log T\simeq 3.76\), _AE_SOPUS shows smaller opacities than F05 (see left panel in Figure 1), making the star more luminous (less opaque) and as a consequence, increasing its effective temperature. To compensate this change in temperature the mixing length parameter \(\alpha_{mlt}\) decreases for less opaque models decreasing the effective temperature of the model star so that the solar effective temperature and luminosity are matched.
### Comparative Study with GARSTEC models
We computed stellar evolution models for masses \(M/M_{\odot}\) = 0.70, 1.00 and 1.50 and \(Z=0.01998\), close to the solar calibrated \(Z_{\odot}\). The \(\alpha_{mlt}\) was used consistently with the low-temperature opacities as described above. Models extend up to the RGB phase. Right panel in Figure 1 shows the computed evolutionary tracks.
Both sets of models were computed using low-temperature opacities for \(3.20\leq\log T\leq 4.00\), so the effect of molecules and dust, besides the atomic effects, are included in stellar evolutionary models, and OP opacities for \(\log T\geq 4.10\).
Differences between models along the MS and the SGB phases are almost negligible for stellar evolutionary tracks for all masses, with differences of 10-15 K in the effective temperature of the model stars. Differences increase slightly at the latest stages of the RGB, reaching maximum values of 25-30 K at the RGB-tip. Such differences are originated mainly due to the fact that near \(\log T\simeq 3.50\), \(\log\kappa^{\rm E}>\log\kappa^{F05}\), i.e. there is a sign reversal in the opacity difference, so the effect of the solar calibration on \(\alpha_{mlt}\) is no longer able to compensate for the opacity differences and effective temperature differences appear.
## 3 Discussion
From the preliminary overview of the opacity data sets we found that differences between _AE_SOPUS and F05 increase with metallicity and that the range of variation was wider for temperatures above \(\log T=4.00\). However, mean differences between data sets were in the range \(\pm\) 0.10 dex for temperatures \(3.50\leq\log\leq 4.50\) and for - 8.00 \(\leq\log R\leq 1.00\). Near the solar surface _AE_SOPUS shows lower opacities than F05, which in a solar calibration was compensated by reducing the efficiency of the energy transport in near-surface convection, decreasing the value of the \(\alpha_{mlt}\) in less opaque models. Results are presented for MB22 chemical composition, however GS98 and AGSS09 solar compositions show similar results.
Stellar models for masses \(M/M_{\odot}\) = 0.70, 1.00 and 1.50 and Z = 0.01998, showed differences around 10-15 K along the MS and SGB phases, while differences around 25-30 K appeared on the latest evolutionary phases of the RGB. We conclude that stellar models computed with low-temperature opacities either from _AE_SOPUS or F05, are in agreement
Figure 1: Left: Comparison of the Rosseland mean opacity for _AE_SOPUS and F05 for X = 0.70, Z = 0.004, 0.02 and 0.04. Gray vertical line shows the solar effective temperature \(\log T\simeq\) 3.76, and gray shaded profile shows the range of variation for all \(\log R\) values. Right: Evolutionary tracks for stellar models computed with GARSTEC with Z = 0.01998, and for \(M/M_{\odot}\) = 0.70 (red track), 1.00 (blue track) and 1.50 (black track).
with each other when a solar calibration is carried out for \(\alpha_{mlt}\). Results presented here are valid for stars on the MS, SGB and RGB evolutionary phases and should not be extrapolated to other evolutionary phases e.g. the AGB.
|
2309.14806 | 3D printed realistic finger vein phantoms | Finger vein pattern recognition is an emerging biometric with a good
resistance to presentation attacks and low error rates. One problem is that it
is hard to obtain ground truth finger vein patterns from live fingers. In this
paper we propose an advanced method to create finger vein phantoms using 3D
printing where we mimic the optical properties of the various tissues inside
the fingers, like bone, veins and soft tissues using different printing
materials and parameters. We demonstrate that we are able to create finger
phantoms that result in realistic finger vein images and precisely known vein
patterns. These phantoms can be used to develop and evaluate finger vein
extraction and recognition methods. In addition, we show that the finger vein
phantoms can be used to spoof a finger vein recognition system. This paper is
based on the Master's thesis of Rasmus van der Grift. | Luuk Spreeuwers, Rasmus van der Grift, Pesigrihastamadya Normakristagaluh | 2023-09-26T10:03:57Z | http://arxiv.org/abs/2309.14806v1 | # 3D printed realistic finger vein phantoms
###### Abstract
Finger vein pattern recognition is an emerging biometric with a good resistance to presentation attacks and low error rates. One problem is that it is hard to obtain ground truth finger vein patterns from live fingers. In this paper we propose an advanced method to create finger vein phantoms using 3D printing where we mimic the optical properties of the various tissues inside the fingers, like bone, veins and soft tissues using different printing materials and parameters. We demonstrate that we are able to create finger phantoms that result in realistic finger vein images and precisely known vein patterns. These phantoms can be used to develop and evaluate finger vein extraction and recognition methods. In addition, we show that the finger vein phantoms can be used to spoof a finger vein recognition system. This paper is based on the Master's thesis of Rasmus van der Grift.
Finger vein recognition, finger vein phantom, biometrics, presentation attack, spoofing
## I Introduction
Fingerprint or face recognition is increasingly becoming an integral part of personal devices. Another emerging biometric technology is blood vessel based recognition and identification like e.g. finger vein pattern recognition. Advantages of finger vein pattern based biometric recognition are good resistance to presentation attacks, very low error rates and user convenience comparable to fingerprint recognition [1]. Face and fingerprint recognition systems are often tested and optimized using data with their ground truths. These ground truths are relatively easy to obtain because this information is present on the surface of the body. Since blood vessels are present inside the human tissue, obtaining the ground truth is not as easy. In earlier work we proposed synthesised finger vein images [2], that allow us to obtain ground truth vein patterns, however, these images are not as realistic as images obtained using 3D phantoms. In our first attempts to create 3D finger phantoms, we used soap, silicon and wires and 3D printed 'bones' [3, 4]. However, construction was complicated and did not allow for complex vein patterns. In this paper we show that using 3D printing, a complete phantom of a finger can be created where the properties of all important tissues are mimicked with complex vein structures and where the ground truth is known. This technique can be used to create a data set that can be used to help design finger vein acquisition devices and improve finger vein recognition systems e.g. for cross device finger vein recognition, see [5]. This paper is based on the Master's report of Rasmus van der Grift [6].
## II Related work
### _Acquisition of finger vein patterns_
The 'handbook of Vascular Biometrics' [1] describes different types of sensors for finger vein recognition. Because haemoglobin in veins has a higher absorption of Near-Infrared (NIR) light than the surrounding tissue, the vascular pattern inside a finger can be captured by a device sensitive to NIR light. There are several ways to illuminate the finger to extract blood vessels. The main types found in existing devices are shown in Figure 1.
Irrespective of the type of illumination, the veins will be visible as dark, rather unsharp lines against a brighter background, see Figure 2. A more in-depth analysis of the imaging process and the optical behaviour of the tissues in the finger is presented in [3, 7].
From these studies, we know that the bone acts as a diffuser, scattering the light in all directions, while absorbing only part
Fig. 1: Acquisition of finger vein patterns
Fig. 2: NIR image of a finger with veins
of the NIR light. The softer tissues (fat etc.) absorb the NIR more than the bone and the blood in the veins absorbs the NIR even more. Because the bone near the joints is thicker there is less soft tissue and, consequently, the image appears brighter at the joints.
In this paper we used the finger vein scanning device that was developed at the University of Twente and provides high quality vein images [8, 9].
### _Depth of blood vessels_
The contrast, size and sharpness of the depicted blood vessels depend on their size and depth, i.e. distance to the skin. This is described in [10]. If the blood vessels are further away from the skin, their contrast and sharpness decrease. The veins at the joints are closest to the surface and, hence, generally more sharply visible. So at the joints, the image is both brighter and the veins are sharper. This can be observed in Figure 2.
### _Finger vein recognition_
A biometric system for identifying individuals by the pattern of veins in a finger using the maximum curvature was proposed by Miura [11]. It is based on simple correlation of binary vein patterns. Even though many other methods for vein recognition have been proposed, among others based on deep learning networks, see e.g [12], the method proposed by Miura performs not much worse than more modern methods and is still used as a good baseline. An important step in finger vein recognition is proper alignment of the finger vein patterns. This alignment method is used to deal with variations in finger pose. In [13] we proposed an ICP alignment approach that improves the recognition results considerably.
## III Methods
In order to create realistic finger vein phantoms using 3D printing, the following steps were taken:
* Select PLA printing material with optical properties similar to the tissues in the finger
* Define shape of 3D model of bones and soft tissues
* Define 3D vein patterns
* Combine into complete finger phantom and create 3D model for printing
The different steps are detailed in the sections below.
### _PLA 3D printing material properties_
Since blood, tissue, and bones have different NIR light properties, we investigated the properties of different colours and densities of polylactic acid (PLA). There are many different brands that produce PLA, and each has several colours to choose from. We used PLA printing materials from Rankforce [14].
We first created a cylinder in Solidworks [15] and using the slicer software Simplify3D [16] varied the density of the printing material. The 3D printed cylinder was scanned using our fingervein sensor [9] and the cylinder and resulting image are shown in Figure 3.
In the figure 0% means the cylinder is hollow, while 100% means it is solid. It is clear that using a varying printing density it is easy to mimic different absorption from the tissues in the finger. We also investigated the absorption for 3D printing materials with different colours for various illumination strengths of the scanner and compared it with the absorption of NIR light of a real finger. In Figure 4 the set of 3D printed cylinders with different colours is shown, each of them with varying densities as in Figure 3.
The resulting NIR scans of all the cylinders compared to scans of a real finger for varying illumination strength are shown in Figure 5. The optical properties of the Green, Blue and White materials seem closest to those of the real fingers while the Gold and Grey material are closest to the properties of the veins. Therefore, we chose Green for the colour of the bones and soft tissues and Grey for the veins.
### _Shape of the bones and soft tissue_
The fingers that are used in finger vein recognition (index, middle and ring fingers) have 3 bones. These bones are thicker near the joints. We recorded a NIR image of human finger bones to investigate their properties in NIR light in [3], see Figure 6. From this figure it can be observed that the bone indeed scatters the NIR light in all direction and only absorbs part of the light. At the thicker ends of the bones more light is absorbed, but still quite much of the light passes through and is scattered,
We mimicked the basic properties of the bones by creating joints from thicker cylindric parts as illustrated at the top
Fig. 4: 3D printed coloured cylinders
Fig. 3: NIR scan of 3D printed cylinder with varying densities
of Figure 7. The bone is created using a low density of printing material and the soft tissue that absorbs more NIR light is printed using a higher density. Both use the Green PLA material. As can be observed the soft tissue near the joints is thinner in the model which will result in in a brighter image even though the bones are thicker at the joints and absorb more NIR light, because the absorption of the soft tissue is higher than that of the bone. The result of a NIR recording of the phantom with the bone model and soft tissue is shown at the bottom of Figure 7. It shows the same dark and bright bands as the real finger in Figure 2.
### _Defining 3D vein patterns_
We first investigated the impact of the depth of the blood vessels on the image. To this end, we prepared a phantom with vessel like structures with a constant size at various depths from the surface (skin). The 3D model and the NIR image are shown in Figure 8. We can see similar effects as in the image of the real finger in Figure 2: the veins closer to the surface, near the joints in the brighter areas are sharper, the other, deeper veins are less sharp.
To create realistic vein patterns, we used vein patterns extracted using maximum curvature from real finger vein images. The thickness of the veins is determined from the original finger vein image as shown in Figure 9.
Now that the skeleton of the veins has been determined, the veins must be converted into a 3D model. This is done by projection of the 2D pattern on a cylinder with a radius that ensures the veins are at a constant distance to the surface of
Fig. 5: 3D printed coloured cylinders in NIR light. The numbers at the top signify illumination strength
Fig. 6: NIR recording of human finger bones
Fig. 7: 3D model of bones with joints and NIR scan
the finger. This means in our phantom, we do not take varying depths into account, but only the widths of the bloodvessels. To import the images of the vessels into Solidworks, they need to be transformed into so-called sketches. The individual points on the vessels are first converted into contours and then into a DXF file using the Python library exdxf [17]. The DXF files can be imported into Solidworks as sketches.
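A minimal sketch of this conversion step is given below; it assumes the library in question is ezdxf and that the vessel outlines are already available as lists of 2D points (the contour extraction itself is not shown).
```
# Hedged sketch: write vessel contours to a DXF file that Solidworks can import.
import ezdxf

def contours_to_dxf(contours, filename="veins.dxf"):
    """contours: list of [(x, y), ...] point lists describing vessel outlines."""
    doc = ezdxf.new()
    msp = doc.modelspace()
    for contour in contours:
        polyline = msp.add_lwpolyline(contour)   # one polyline per vessel outline
        polyline.closed = True
    doc.saveas(filename)

contours_to_dxf([[(0.0, 0.0), (10.0, 0.0), (10.0, 2.0), (0.0, 2.0)]])   # toy rectangle
```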
### _Combining into a complete phantom finger_
The 3D vein pattern DXF file is combined with the 3D model of the bones and soft tissues to form a complete phantom finger. The resulting model is shown in Figure 10 at the top. The 3D model is printed on a consumer 3D printer with 2 printing heads that can simultaneously print 2 different materials, where Green PLA was used for the bones and soft tissues and Grey for the veins. A NIR image of the resulting phantom is shown at the bottom in Figure 10.
By comparing Figures 10 and 2 we can observe that the phantom vein image shows all the main characteristics of the real finger vein image: it has a similar brightness distribution including the dark and bright bands at the joints, the veins also look similar, although the variation in vein widths is less in the phantoms and there is no variation in depth of the veins. Especially very thin veins are missing. This is due to the limitations of the used 3D printer that can only print structures with a thickness of 0.4 mm and larger.
## IV Experiments
In order to investigate if the phantom finger vein images are similar to real finger vein images for a finger vein recognition system, we set up the following experiment.
### _Dataset_
First we acquired a small data set of 6 real fingers (of one person). Each finger was recorded 5 times and each time the finger was placed slightly differently on the scanner, i.e. with a different position and rotation. We used rotations up to about 10 degrees. Next we created 6 phantom fingers using a different vein scan of the same 6 fingers and using the procedure described in this paper. The phantom fingers were also scanned 5 times with differen placement and rotation. We therefore have now two data sets of 30 images in total. For each of the data sets we can create:
\[N_{m}=\frac{5\cdot 4}{2}\cdot 6=60\]
mated pairs and
\[N_{nm}=\frac{5\cdot 5\cdot 5}{2}\cdot 6=375\]
non mated pairs.
If we compare phantoms to real fingers, we can create:
\[N_{mm}=5\cdot 5\cdot 6=150\]
mixed mated pairs and
\[N_{mnm}=\frac{10\cdot 10\cdot 5}{2}\cdot 6=1500\]
non mated pairs.
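These counts can be checked with a few lines of arithmetic (6 fingers, 5 real and 5 phantom images per finger):
```
fingers, n_img = 6, 5
mated          = fingers * n_img * (n_img - 1) // 2          # 60 mated pairs within one set
non_mated      = (fingers * (fingers - 1) // 2) * n_img**2   # 375 non mated pairs within one set
mixed_mated    = fingers * n_img * n_img                     # 150 real-vs-phantom mated pairs
total          = (2 * fingers * n_img) * (2 * fingers * n_img - 1) // 2   # 1770 pairs in the combined set
mixed_nonmated = total - 2 * mated - mixed_mated             # 1500 non mated pairs in the combined set
print(mated, non_mated, mixed_mated, mixed_nonmated)
```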
### _Finger vein comparison_
We used the Miura finger vein comparison approach [11] with maximum curvature to extract the veins and ICP alignment [13] to compare finger vein pairs. The comparison score is based on the maximum normalised correlation of the binary vein images. We mapped the comparison scores to a range of 0-100, where 0 means no similarity and 100 exactly the same veins.
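As an illustration of this type of scoring (a simplified stand-in, not the exact Miura matcher used here), a correlation-style comparison of two binary vein masks could look as follows:
```
# Simplified, hypothetical comparison of two aligned binary vein masks.
import numpy as np

def match_score(probe, gallery, max_shift=10):
    """probe, gallery: 2-D binary arrays of equal shape; returns a 0-100 score."""
    best = 0.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(probe, dy, axis=0), dx, axis=1)
            overlap = np.logical_and(shifted, gallery).sum()
            denom = shifted.sum() + gallery.sum()
            if denom > 0:
                best = max(best, 2.0 * overlap / denom)   # normalised overlap in [0, 1]
    return 100.0 * best
```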
Fig. 8: 3D vein structures at various depths and NIR image
Fig. 10: Complete 3D finger phantom model and NIR scan
Fig. 9: Vein thickness extraction from a real vein image
### _Results_
In Figure 11 the comparison score histograms of mated and non mated finger vein images are shown for the real fingers. We can observe that the non mated scores are all below about 30. The distribution of the mated scores is quite wide, which is caused mainly by the relatively high rotation of the fingers of up to 10 degrees that we allowed. Most mated scores are above 30, though. From experience we know that if the rotation decreases, all mated scores will be above 30 and the overlap between mated and non mated scores disappears. The mated score histograms are not so smooth, because the number of mated pairs is small.
In Figure 12 the comparison score histograms of mated and non mated finger vein images are shown for the phantom fingers. Again, we see that all non mated comparison scores are below 30, but the average non mated score is somewhat higher than for the real fingers. We think this is mostly to be attributed to the missing smaller veins in the phantoms. Normally the small veins cause a large disagreement resulting in a lower comparison score. The mated scores are all above 30 and most of them are higher than for the real fingers. We think this may be caused by the missing small veins, which are harder to detect and for real fingers can reduce the score. Also, in real fingers the muscle tension can create variation of the visibility of the veins, while this is not present in the phantom fingers, of course. Nevertheless, the main behaviour of the comparison scores is similar to that of the real fingers.
Finally, we compared the phantom fingers with the real fingers. The mated scores are now obtained by picking a real finger vein image and a phantom finger vein image that was created using the vein pattern of the real finger. The non mated scores are a combination of all possible non mated scores, i.e. comparisons of real vs real, phantom vs phantom and real vs phantom. The score histograms for the phantom vs real comparisons are shown in Figure 13. We can observe that the non mated scores are again all below 30 and the distribution is similar to those of the real and phantom fingers. The mated scores are mostly above 30 again and the distribution is between those of the real and fake fingers. The scores are somewhat higher than those of the real fingers, probably because in one of the images the smaller veins are missing, but lower than those of the phantom fingers, because in one of the images (the real finger) the smaller veins are present and lead to lower similarity.
### _Presentation Attack_
The fact that most of the mated comparisons between phantom and real fingers result in scores above 30 and in the same range as real finger vein comparisons, means that using the phantom, it is possible to spoof the finger vein sensor. The phantoms exhibit the same properties as real fingers, i.e. they show the same grey level distribution and vein patterns as real fingers in NIR illumination. Of course visual inspection of the phantoms would immediately reveal that they are cylindric and have no natural shape, not to mention that they are green!
## V Conclusion
The aim of the presented research was to investigate if using 3D printing techniques it is possible to create finger vein phantoms that mimic the behaviour of real fingers in NIR illumination closely. We found that the absorption and
Fig. 11: Score histograms of real finger veins
Fig. 12: Score histograms of phantom finger veins
Fig. 13: Score histograms of real vs phantom finger veins
scattering behaviour of bones, soft tissues and veins can be approximated by using the proper 3D printing materials and varying the density of the printing material in the printing process. For bones and soft tissues we chose green PLA where for bones we used less dense and for soft tissues denser printing, because bones absorb NIR less than soft tissues. For the veins we used grey PLA and used solid printing. This resulted in very high absorption of NIR red light like for real veins. We also took into account the shape of the bones that are thicker near the joints and were able to replicate the brighter bands around the joints in finger vein images, caused by a thinner layer of soft tissues at that location. To create realistic vein patterns, we used 2D scans of real vein patterns and transformed them into 3D and combined these with a cylindric model of the bones and soft tissues. The complete phantoms could be 3D printed using a 3D printer with two heads to print two materials (green and grey) simultaneously. It turned out that the thus created phantom fingers produced convincing finger vein images when scanned using NIR light. We collected a small dataset with 6 fingers and showed that the score distributions of mated and non mated pairs when compared using Miura's finger vein recognition method show similar behaviour and that phantoms created using the vein pattern of a finger can be used successfully for Presentation Attacks for that finger.
We use the finger vein phantoms currently for improving the design of finger vein scanner devices and for investigating the behaviour of methods for finger vein pattern extraction and recognition. The current vein phantoms don't have very thin veins due to limitations of the 3D printer, which has a nozzle of 0.4 mm. We intend to replace it with a finer nozzle of 0.1 mm. Furthermore, the finger phantoms can be made more realistic by using an actual finger shape instead of a cylinder. Also we plan to use synthesized finger vein patterns instead of scanned real finger vein patterns and vary the depth of the veins in the patterns to further approximate real fingers.
|
2301.00089 | Autonomous Driving Simulator based on Neurorobotics Platform | There are many artificial intelligence algorithms for autonomous driving, but
directly installing these algorithms on vehicles is unrealistic and expensive.
At the same time, many of these algorithms need an environment to train and
optimize. Simulation is a valuable and meaningful solution with training and
testing functions, and it can say that simulation is a critical link in the
autonomous driving world. There are also many different applications or systems
of simulation from companies or academies such as SVL and Carla. These
simulators flaunt that they have the closest real-world simulation, but their
environment objects, such as pedestrians and other vehicles around the
agent-vehicle, are already fixed programmed. They can only move along the
pre-setting trajectory, or random numbers determine their movements. What is
the situation when all environmental objects are also installed by Artificial
Intelligence, or their behaviors are like real people or natural reactions of
other drivers? This problem is a blind spot for most of the simulation
applications, or these applications cannot be easy to solve this problem. The
Neurorobotics Platform from the TUM team of Prof. Alois Knoll has the idea
about "Engines" and "Transceiver Functions" to solve the multi-agents problem.
This report will start with a little research on the Neurorobotics Platform and
analyze the potential and possibility of developing a new simulator to achieve
the true real-world simulation goal. Then based on the NRP-Core Platform, this
initial development aims to construct an initial demo experiment. The consist
of this report starts with the basic knowledge of NRP-Core and its
installation, then focus on the explanation of the necessary components for a
simulation experiment, at last, about the details of constructions for the
autonomous driving system, which is integrated object detection and autonomous
control. | Wei Cao, Liguo Zhou, Yuhong Huang, Alois Knoll | 2022-12-31T01:12:27Z | http://arxiv.org/abs/2301.00089v1 | # Autonomous Driving Simulator based on Neurorobotics Platform
###### Abstract
There are many artificial intelligence algorithms for autonomous driving in the present market, but directly installing these algorithms on vehicles is unrealistic and expensive. At the same time, many of these algorithms need an environment to train and optimize. Simulation is a valuable and meaningful solution with training and testing functions, and it can say that simulation is a critical link in the autonomous driving world. There are also many different applications or systems of simulation from companies or academies such as SVL and Carla. These simulators flaunt that they have the closest real-world simulation, but their environment objects, such as pedestrians and other vehicles around the agent-vehicle, are already fixed programmed. They can only move along the pre-setting trajectory, or random numbers determine their movements. What is the situation when all environmental objects are also installed by Artificial Intelligence, or their behaviors are like real people or natural reactions of other drivers? This problem is a blind spot for most of the simulation applications, or these applications cannot be easy to solve this problem. The Neurorobotics Platform from the TUM team of Prof. Alois Knoll has the idea about "Engines" and "Transceiver Functions" to solve the multi-agents problem. This report will start with a little research on the Neurorobotics Platform and analyze the potential and possibility of developing a new simulator to achieve the true real-world simulation goal. Then based on the NRP-Core Platform, this initial development aims to construct an initial demo experiment. The consist of this report starts with the basic knowledge of NRP-Core and its installation, then focus on the explanation of the necessary components for a simulation experiment, at last, about the details of constructions for the autonomous driving system, which is integrated object detection function and autonomous driving control function. At the end will discuss the existing disadvantages and improvements of this autonomous driving system.
Simulation, Neurorobotics Platform, NRP-Core, Engines, Transceiver Functions, Autonomous Driving, Object Detection, PID Trajectory Control
## 1 Introduction
### Motivation
At present, there are many different Artificial Intelligence (AI) algorithms used for autonomous driving. Some algorithms are used to perceive the environment, such as object detection and semantic/instance segmentation. Some algorithms are dedicated to making the best trajectory strategy and control decisions based on the road environment. Others contribute to many different applications, e.g. path planning and parking. Simulation is the best cost-performance way to develop these algorithms before they are truly deployed to actual vehicles or robots. So, the performance of a simulation platform is influencing the performance of the AI algorithms. In the present market or business world, there are already a lot of different "real-world" simulation applications such as CARLA [1] for simulating the algorithm for autonomous driving, AirSim [2] from Microsoft for autonomous vehicle and quadrotor and PTV Vissim [3] from Germany PTV Group for flexible traffic simulation.
Although these simulators are dedicated to the "real world" simulation, they have more or less "unreal" problems on some sides in the process of simulation. For example, besides the problem about the unreal 3-D models and environment, these simulators have an obvious feature, these AI algorithms are only deployed to target experimental subjects, vehicles, or robots, and the environment such as other vehicles, motorbikes, and pedestrian looks very close to the "real" environment but actually these environmental subjects are already in advance pre-programmed and have a fix motion trail. The core problem of most of them focuses on basic information transmission. They only transfer the essential or necessary traffic information to the agent subject in the simulation. This transmission is one-way direction. Considering this situation, can let other subjects in this simulation have their own different AI algorithms at the same time that they can react to the agent's behavior? In the future world, there would be not only one vehicle owning one algorithm from one company, but they must also have much interaction with other agents. The interaction between different algorithms can take which influence back on these algorithms, and this problem is also a blind point for many simulators.
This large range of interaction between lots of agents is the main problem that these applications should pay attention to and these existing applications do not have an efficient way to solve this problem. A simulation platform that is truly
like the real world, whose environment is not only a fixed pre-definition program, the objects in the environment can make a relative objective interaction with vehicles with the testing autonomous driving algorithms and they can influence each other, the goal and concept is an intractable problem for the construction of a simulation platform. There is a platform called The Neurorobotics Platform (NRP) from the TUM team of Prof. Alois Knoll that provides a potential idea to solve this interaction problem. This research project focuses on preliminary implementation and searches for the possibility of solving the previously mentioned interaction problem.
### 1.2 Neurorobotics Platform (NRP)
Neurorobotics Platform [4] is an open-source integrative simulation framework platform developed by the group of the chair of Robotics, Artificial Intelligence and Real-Time Systems of the Technical University of Munich in the context of the Human Brain Project - a FET Flagship funded by the European Commission. The basic starting point of this platform enables to choose and test of different brain models (ranging from spiking neural networks to deep networks) for robots. This platform builds an efficient information transmission framework to let simulated agents interact with their virtual environment.
The new Version of NRP called NRP Core provides a new idea, which regards all the Participator in the Simulation-system as "Engines", just like the object in the programming language C++/python, the properties of the simulation participator such as the robot, autonomous-driving car, weather, or pedestrian and their "behaviors" would be completely constructed in their own "Engine"-object and let all the participates become a "real" object and can each other influence in the simulation world and they would not be a fix definite "Program". And the NRP-Platform is the most important transport median between these engines and they are called the Transceiver Function. It transmits the "Information" such as the image from the camera and sends the image to an autonomous-driving car and the same time would send other information to other engines by different transfer protocols such as JSON or ROS system. That means the transmission of information is highly real-time and lets the simulation world very close to the real world and it has high simulation potency, e.g. the platform sends the image information to the autonomous-driving car and lets the car computes the situation and makes the right strategy and rational decision, and at the same moment the environment-cars or "drivers" also get the location information from the autonomous-driving car and make their own decisions such like drive further or change velocity and lanes, and the same time these cars are influenced by the situation of the weather, e.g. in raining days the brake time of the car would be longer and let the decision making and object detection more significant.
NRP-core is mostly written in C++, with the Transceiver Function framework relying on Python for better usability. It guarantees a fully deterministic execution of the simulation, provided every simulator used is itself deterministic and works on the basis of controlled progression through time steps. Users should thus take note that event-based simulators may not be suitable for integration in NRP-core (to be analyzed on a case-by-case basis). Communications to and from NRP-core are indeed synchronous, and function calls are blocking; as such, the actual execution time of a simulation based on NRP-core will critically depend on the slowest simulator integrated therein. The aforementioned feature of the NRP-Core platform is significant to build multi-object which interact with other agencies in the simulation progress and lets the simulation be close to the real world.
## 2 NRP-Core configurations for simulation progress
NRP-Core has many application scenarios for different demands of simulation situations. For a specific purpose, the model of NRP-Core can be widely different. This development for the Autonomous-driving benchmark focuses on the actual suggested development progress. It concentrates on the construction of the simulation application, the details of
Figure 1.1: The base model of Neurorobotics Platform (NRP)
the operation mechanism of NRP-Core would not be discussed, and deep research in this development documentation, the principle of the operation mechanism can be found on the homepage of NRP-Core.
### Installation of NRP-Core and setting environment
For the complete installation, refer to the homepage of the NRP-Core Platform by "Getting Started" under the page "Installation Instructions." This section lists only all the requirements for applying the autonomous driving simulator and benchmark.
**WARNING**: Previous versions of the NRP install forked versions of several libraries, notably NEST and Gazebo. Installing NRP-Core on a system where a previous version of NRP is installed is known to cause conflicts; it is therefore strongly recommended not to have both versions installed at the same time.
**Operating System**: Ubuntu 20.04 is recommended.
Setting the installation environment: to properly set up the environment for running experiments with NRP-Core, make sure the lines below are added to your ~/.bashrc file.
```
# Start setting environment
export NRP_INSTALL_DIR="/home/${USER}/.local/nrp"        # The installation directory, which was given before
export NRP_DEPS_INSTALL_DIR="/home/${USER}/.local/nrp_deps"
export PYTHONPATH=...
```
**Dependencies installation:**
```
sudo apt install python3-restrictedpython uwsgi-core uwsgi-plugin-python3
pip install flask_cors mpi4py docopt

# required by nrp-server, which uses gRPC python bindings
pip install grpcio-tools pytest psutil docker
# Required for using docker with ssh
pip install paramiko

# ROS, when not needed, can jump to the next step
# Install ROS: follow the installation instructions: http://wiki.ros.org/noetic/Installation/Ubuntu.
# To enable ROS support in nrp, 'ros-noetic-ros-base' is required.
# Tell nrp-core where your catkin workspace is located: export a variable CATKIN_WS pointing to an
# existing catkin workspace root folder. If the variable does not exist, a new catkin workspace
# will be created at '${HOME}/catkin_ws'.

# MQTT, if needed, see the homepage of NRP-Core

# End of dependencies installation
```
**NRP installation:**
```
# Start of installation
git clone https://bitbucket.org/hbpneurorobotics/nrp-core.git
cd nrp-core
mkdir build
cd build
# See the section "Common NRP-core CMake options" in the documentation for additional ways to configure the project with CMake
cmake .. -DCMAKE_INSTALL_PREFIX="${NRP_INSTALL_DIR}" -DNRP_DEP_CMAKE_INSTALL_PREFIX="${NRP_DEPS_INSTALL_DIR}"
mkdir -p "${NRP_INSTALL_DIR}"
# the installation process might take some time, as it downloads and compiles NEST as well
# If you haven't installed the MQTT libraries, add the ENABLE_MQTT=OFF definition to cmake (-DENABLE_MQTT=OFF)
make
make install
# just in case of wanting to build the documentation. Documentation can then be found in a new doxygen folder
make nrp_doxygen
# End of installation
```
**Common NRP-core CMake options**: Here is a list of CMake options that can help modify the project configuration (turning support for some components and features on and off).
* the NRPCoreSim is always built regardless of any of the flags values.
* if ENABLE_SIMULATOR is set to OFF:
* the related simulator won't be assumed to be installed in the system, i.e. make won't fail if it isn't. Also it won't be installed in the compilation process even if this possibility is available (as in the case of NEST)
* The engines connected with this simulator won't be built (neither client nor server components)
* tests that would fail if the related simulator is not available won't be built
* if the ENABLE_SIMULATOR is set to ON and BUILD_SIMULATOR_ENGINE_SERVER is set to OFF: Same as above, but:
* the engine clients connected to this simulator will be built. This means that they should not depend on or link to any specific simulator
* the engine server-side components might or might not be built, depending on if the related simulator is required at compilation time
* if both flags are set to ON the simulator is assumed to be installed or it will be installed from the source if this option is available. All targets connected with this simulator will be built.
This flag system allows configuring the resulting NRP-Core depending on which simulators are available on the system, both for avoiding potential dependency conflicts between simulators and enforcing modularity, opening the possibility of having specific engine servers running on a different machine or inside containers.
### Introduction of basic components of simulation by NRP
Some important elements for constructing a simulation example on the NRP platform are Engines, Transceiver Functions (TF) and Preprocessing Functions (PF), the simulation configuration JSON file, the simulation model file, and DataPacks; these are the basic components of the simulation workflow. This section lists them and describes their definition, content and implementation.
#### 2.2.1 Engine
Engines are a core aspect of the NRP-core framework. They run the actual simulation software (which can be comprised of any number of heterogeneous modules), with the Simulation Loop and TransceiverFunctions merely being a way to synchronize and exchange data between them. The data exchange is carried out through an engine client (see paragraph below). An Engine can run any type of software, from physics engines to brain simulators. The only requirement is that they should be able to manage progressing through time with fixed-duration time steps.
There are different engines already implemented in NRP-Core:
* Nest: two different implementations that integrate the NEST Simulator into NRP-core.
* Gazebo: engine implementation for the Gazebo physics simulator.
* PySim: engine implementation based on the Python JSON Engine wrapping different simulators (Mujoco, Opensim, and OpenAI) with a python API.
* The Virtual Brain: engine implementation based on the Python JSON Engine and TVB Python API.
These and others are provided by NRP, with spiking neural networks and similar models being the engines of first interest for research; each of these implementations is tied to a specific simulator. The platform also provides the **Python JSON Engine**. This versatile engine enables users to execute a user-defined Python script as an engine server, thus ensuring synchronization and enabling DataPack data transfer with the Simulation Loop process. It can be used to integrate any simulator with a Python API into an NRP-core experiment. This feature allows users to develop experiment agents modularly in the constructed simulation world and offers the flexibility to manage several objects with different behaviors and characteristics.
#### 2.2.2 DataPack and Construction format
The carrier of information that is transported between engines, and that lets engines communicate with each other, is the DataPack. NRP supports three types of DataPack, all of which are simple objects wrapping arbitrary data structures: the **JSON DataPack**, the **Protobuf DataPack** and the **ROS msg DataPack**. They provide the necessary abstract interface, which is understood by all components of NRP-Core, while still allowing data to be passed in various formats. A DataPack is also an important property of a specific Engine, meaning that the parameters and data format of a specific DataPack are declared in the Engine (for an example see section 3.4.2).
A DataPack consists of two parts:
* DataPack ID: allows the object to be uniquely identified.
* DataPack data: the data stored by the DataPack, which can in principle be of any type.
DataPacks are mainly used by Transceiver functions to relay data between engines. Each engine type is designed to accept only datapacks of a certain type and structure.
Every DataPack contains a DataPackIdentifier, which uniquely identifies the datapack object and allows for the routing of the data between transceiver functions, engine clients and engine servers. A datapack identifier consists of three fields:
* the name of the DataPack. It must be unique.
* string representation of the DataPack data type. This field will most probably be of no concern for the users. It is set and used internally and is not in human-readable form.
* the name of the engine to which the DataPack is bound.
DataPack is a template class with a single template parameter, which specifies the type of data contained by the DataPack. This DataPack data can be in the principle of any type. In practice, there are some limitations though, since DataPacks, which are C++ objects, must be accessible from TransceiverFunctions, which are written in Python. Therefore the only DataPack data types which can be actually used in NRP-core are those for which Python bindings are provided. It is possible for a DataPack to contain no data. This is useful, for example, when an Engine is asked for a certain DataPack but it is not able to provide it. In this case, an Engine can return an empty DataPack. This type of Datapack contains only a Datapack identifier and no data. Attempting to retrieve the data from an empty DataPack will result in an exception. A method "isEmpty" is provided to check whether a DataPack is empty or not before attempting to access its data:
```
if(not datapack.isEmpty()):
    # It's safe to get the data
    print(datapack.data)
else:
    # This will raise an exception
    print(datapack.data)
```
* The format for getting a DataPack from a particular Engine:
```
# Declare a datapack with name "datapack_name" from engine "engine_name" as input using the @EngineDataPack decorator
# The transceiver function must accept an argument with the same name as "keyword" in the datapack decorator

@EngineDataPack(keyword="datapack", id=DataPackIdentifier("datapack_name", "engine_name"))
@TransceiverFunction("engine_name")
def transceiver_function(datapack):
    print(datapack.data)

# Multiple input datapacks from different engines can be declared
@EngineDataPack(keyword="datapack1", id=DataPackIdentifier("datapack_name1", "engine_name1"))
@EngineDataPack(keyword="datapack2", id=DataPackIdentifier("datapack_name2", "engine_name2"))
@TransceiverFunction("engine_name1")
def transceiver_function(datapack1, datapack2):
    print(datapack1.data)
    print(datapack2.data)
```
PS: Details of the two Transceiver Function decorators are given below in section 2.2.3.
* The format for setting information in a DataPack and sending it to a particular Engine:
```
# NRP-Core expects transceiver functions to always return a list of datapacks
@TransceiverFunction("engine_name")
def transceiver_function():
    datapack = JsonDataPack("datapack_name", "engine_name")
    return [datapack]

# Multiple datapacks can be returned
@TransceiverFunction("engine_name")
def transceiver_function():
    datapack1 = JsonDataPack("datapack_name1", "engine_name")
    datapack2 = JsonDataPack("datapack_name2", "engine_name")
    return [datapack1, datapack2]
```
#### 2.2.3 Transceiver Function and Preprocessing Function
**1. Transceiver Function**
Transceiver Functions are user-defined Python functions that take the role of transmitting DataPacks between engines. They are used in the architecture to convert, transform or combine data from one or multiple engines and relay it to another.
The definition of a Transceiver Function must use a decorator placed before the user-defined function; it specifies the engine to which the returned DataPacks are sent:
```
@TransceiverFunction("engine_name")
```
To request datapacks from engines, additional decorators can be prepended to the Transceiver Function, with the following form (attention: the receiving decorators must be placed above @TransceiverFunction):
```
@EngineDataPack(keyword_datapack, id_datapack)
```
* keyword_datapack: user-defined name for the DataPack; this keyword is used as the input argument of the Transceiver Function.
* id_datapack: the ID of the DataPack received from the particular Engine, where "DataPack ID" = "DataPack Name" + "Engine Name" (for examples see section 2.2.2).
**2. Preprocessing Function**
A Preprocessing Function is very similar to a Transceiver Function but has a different usage. Preprocessing Functions are introduced to optimize expensive computations on DataPacks attached to a single engine. In some cases it might be necessary to apply the same operations on a particular DataPack in multiple Transceiver Functions; an example of this might be applying a filter to a DataPack containing an image from a physics simulator. In order to execute this operation just once and let other TFs access the processed DataPack data, Preprocessing Functions (PFs) are introduced.
They show two main differences with respect to Transceiver Functions:
* Their output datapacks are not sent to the corresponding Engines, they are kept in a local datapack cache and can be used as input in TransceiverFunctions
* PFs can only take input DataPacks from the Engine they are linked to
The format of Preprocessing Function is similar to Transceiver Function:
```
@PreprocessingFunction("engine_name")
@PreprocessedDataPack(keyword_datapack, id_datapack)
```
The decorators "@PreprocessingFunction" and "@PreprocessedDataPack" must be used in Preprocessing Functions. Since the output of a Preprocessing Function is stored in the local cache and does not need to be processed on the Engine Server side, a Preprocessing Function can return any type of DataPack without restrictions.
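For illustration, a minimal Preprocessing Function could look like the following sketch. It reuses the placeholder names "datapack_name" and "engine_name" from the examples above; the halving of a "value" field is purely hypothetical and only meant to show the decorator usage and the locally cached output DataPack.

```
# Minimal sketch of a Preprocessing Function (placeholder names, hypothetical data).
# The returned DataPack is kept in the local datapack cache and is NOT sent to the engine;
# other Transceiver Functions can use "filtered_datapack" as input instead of repeating the work.
@PreprocessingFunction("engine_name")
@PreprocessedDataPack(keyword="raw", id=DataPackIdentifier("datapack_name", "engine_name"))
def preprocess(raw):
    filtered = JsonDataPack("filtered_datapack", "engine_name")
    # a cheap stand-in for the expensive operation that several TFs would otherwise repeat
    filtered.data["value"] = 0 if raw.isEmpty() else raw.data["value"] * 0.5
    return [filtered]
```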
#### 2.2.4 Simulation Configuration Json file
The configuration of a simulation, with its Engines and Transceiver Functions, is stored in a single JSON file. This file contains the engine and Transceiver Function objects as well as the parameters needed to initialize and execute the simulation; it is usually named something like "example_simulation.json".
The format used here is a JSON schema, which is highly readable and offers capabilities similar to XML Schema. Its composability and inheritance allow the simulation to use reference keywords to define agents and to validate inheritance by referring to other schemas. This means that the same engine base can be used to create several agents or objects at the same time, differing only in their identifying IDs.
**1. Simulation Parameters**
For details, see appendix Table A.1: Simulation configuration parameter.
**2. Example form**
```
{
    "SimulationName": "example_simulation",
    "SimulationDescription": "Launch two python engines.",
    "SimulationTimeout": 1,
    "EngineConfigs":
    [
        {
            "EngineType": "python_json",
            "EngineName": "python_1",
            "PythonFileName": "engine_1.py"
        },
        {
            "EngineType": "python_json",
            "EngineName": "python_2",
            "PythonFileName": "engine_2.py"
        }
    ],
    "DataPackProcessingFunctions":
    [
        {
            "Name": "tf_1",
            "FileName": "tf_1.py"
        }
    ]
}
```
* EngineConfigs: this section lists all the engines participating in the simulation. Some important parameters must be declared:
* EngineType: the type of engine used for this engine instance, e.g. gazebo engine or python JSON engine
* EngineName: user-defined unique identification name for the engine instance
* Other parameters: these parameters should be declared according to the engine type (for details see appendix Table A.2: Engine Base Parameter)
* Python JSON engine: "PythonFileName" - the base Python script referenced by the engine instance
* Gazebo engine: see in section
* DataPackProcessingFunctions: this section lists all the Transceiver Functions used in the simulation. Usually two parameters must be declared:
* Name: user-defined identification name for the Transceiver Function
* FileName: the Python script file used as the basis of the Transceiver Function
* Launch a simulation: this simulation configuration JSON file is also the launch file; a simulation experiment is started with the following command:
```
NRPCoreSim -c user_defined_simulation_config.json
```
Tip: the folder of a user-defined simulation can contain several differently named configuration JSON files at the same time. This is very useful for configuring which engines or Transceiver Functions the user wants to launch and test; to start the desired simulation experiment, simply pass the corresponding configuration file.
#### 2.2.5 Simulation model file
In this experiment for autonomous driving on the NRP platform, the Gazebo physics simulator [5] describes the simulated world. The simulation world is constructed with an "SDF" file, an XML-based format that describes all the necessary information about the 3D models in one file, e.g. sunlight, environment, friction, wind, landform, robots, vehicles and other physical objects. This file can describe in detail the static or dynamic information of the robot, relative positions and motion information, the declaration of sensor or control plugins, and so on. Gazebo is closely related to the ROS system and provides simulation components for ROS, so the ROS documentation describes many similar details about the construction of SDF files [6].
The components of the simulation world are described with XML-format labels, which also establish the dependency relationships between these components:
* World Label
```
<sdf version='1.7'>
  <world name='default'>
    ...
  </world>
</sdf>
```
All components and their labels should be placed under the <world> label.
* Model Labels
```
<model name='model_name'>
  <pose>0 0 0 0 -0 0</pose>
  <link name='roadmap'>
    ...
  </link>
  <plugin name='link_plugin' filename='NRPGazeboGrpcLinkControllerPlugin.so'/>
</model>
```
The description goes under the <model> label. Importantly, if the user wants to use a plugin such as a control plugin or a sensor plugin (camera or lidar), the <plugin> label must be set under the corresponding <model> label. The <link> label describes the model's physical features such as <collision>, <visual>, <joint>, and so on.
* mesh files
Gazebo requires that mesh files be formatted as STL, Collada or OBJ, with Collada and OBJ being the preferred formats. The file suffixes for each mesh format are:

Collada - .dae, OBJ - .obj, STL - .stl
Tip: Collada and OBJ file formats allow users to attach materials to the meshes. Use this mechanism to improve the visual appearance of meshes.
A mesh file should be declared under the required label, such as <visual> or <collision>, using the layer structure <geometry> - <mesh> - <uri> (the URI can be an absolute or relative file path):
```
<geometry>
  <mesh>
    <uri>xxxx/xxxx.dae</uri>
  </mesh>
</geometry>
```
## 3 Simulation Construction on NRP-Core
Based on the steps for configuring a simulation on the NRP-Core platform, the autonomous driving benchmark can now be implemented with the components mentioned above, from 3D models to communication mechanisms. This section first introduces the requirements of the autonomous driving application, then analyzes the corresponding components and their functions, and finally describes the concrete implementation of these requirements.
In addition, this project investigates the possibility of modular development for multiple agents on the NRP platform, compares it with other existing and widely used systems, and analyzes the simulation performance according to the results.
### Analysis of requirements for autonomous driving application
An application for testing the performance of autonomous driving algorithms can address different aspects, because autonomous driving integrates different algorithms such as computer vision, object detection, decision making and trajectory planning, vehicle control, or simultaneous localization and mapping. The concept and final goal of the application is to build a real-world simulation that integrates multiple agents, different algorithms and corresponding systems for evaluating the performance of the autonomous vehicle. This, however, first requires many available, mature and feasible algorithms; second, the construction of the 3D world models is a large project in itself; and last, the evaluation system depends on the successful operation of the simulation. The initial construction of the application therefore focuses on the basic communication mechanism, establishing communication between a single agent and an object-detection algorithm within the NRP-Core workflow. A vehicle control algorithm that reacts logically to the object detection and generates feasible control commands is skipped in this project; instead a fixed trajectory is given, along which the vehicle moves.
Requirements of implementation:
* Construction of the base model frame for communication between the Gazebo simulator, object-detection algorithm, and control unit.
* Selection of feasible object-detection algorithm
* Simple control system for autonomous movement of high accuracy physical vehicle model
### Object detection algorithm and YOLO v5 Detector Python Class
According to the above analysis of requirements, an appropriate existing object detection algorithm should be chosen as the example with which to verify the communication mechanism of the NRP platform and to optimize its performance.
Looking at existing object detection algorithms, from the basic AlexNet for image classification [7] and CNNs (convolutional neural networks) for image recognition [8], over the optimized ResNet [9] and the SSD multi-box detector [10], to the YOLOv5 network [11], YOLOv5 offers high detection performance, and its efficient handling of frame images in real time also makes it a meaningful reference for testing other object-detection algorithms. Considering the requirements of autonomous driving, YOLOv5 is therefore a suitable choice as the experimental object-detection algorithm to integrate into the NRP platform.
Table notes (for the YOLOv5 checkpoint comparison table in the YOLOv5 documentation):

* All checkpoints are trained to 300 epochs with default settings and hyperparameters.
* mAP values are for single-model single-scale on the COCO val2017 dataset. Reproduce with: python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65
* Speed averaged over COCO val images using an AWS p3.2xlarge instance. NMS times (~1 ms/img) not included. Reproduce with: python val.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45
* TTA (Test Time Augmentation) includes reflection and scale augmentations. Reproduce with: python val.py --data coco.yaml --img 1536 --iou 0.7 --augment
Requirements and Environment for YOLOv5:
* Quick link for YOLOv5 documentation : YOLOv5 Docs [12]
* Environment requirements: Python >= 3.7.0 version and PyTorch [13] >= 1.7
* The originally trained YOLOv5 network parameters are integrated unchanged; the main backbone is not modified compared to the initial version
Based on the original executable Python file "detect.py", another Python file "Yolov5Detector.py" with a self-defined Yolov5Detector class interface is written inside the "YOLOv5" package. To use YOLOv5, the main program first instantiates the Yolov5Detector class, then calls the warm-up function "**detectorWarmUp()**" to initialize the neural network. "**detectImage()**" is the function that passes the frame image to the main detection routine and finally returns the detected image, with bounding boxes, in NumPy format.
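The following sketch illustrates how the Yolov5Detector class is intended to be called; the function and return names follow the description above and section 3.4.7, but the exact signatures in "Yolov5Detector.py" may differ, and the dummy input frame is only an assumption for illustration.

```
import numpy as np
import cv2
import Yolov5   # package containing the self-defined Yolov5Detector class (name assumed)

detector = Yolov5.Yolov5Detector()
stride, names, pt, jit, onnx, engine, imgsz, device = detector.detectorInit()
detector.detectorWarmUp()                    # initialize the neural network once

# A dummy 480x736 RGB frame stands in for a Gazebo camera image here
cv_image = np.zeros((480, 736, 3), dtype=np.uint8)
np_image = cv_image.transpose(2, 0, 1)       # CHW layout for the network

# returns the frame with bounding boxes drawn, in NumPy/OpenCV format
cv_ImgRet, detections, _ = detector.detectImage(np_image, cv_image, needProcess=True)
cv2.imshow('detected image', cv_ImgRet)
cv2.waitKey(1)
```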
### 3D-Models for Gazebo simulation world
Given the performance of Gazebo, it is not advisable to use a very large map as the base environment world. Based on tests with different sizes of the map of the Garching area, the recommended environment world model encircles the area of the Parkring in Garching-Hochbruck. This map model is generated from high-accuracy satellite data and is very close to the original location. During the simulation, the experimental vehicle drives around the main road of the Parkring.
The experimental vehicle is likewise a highly detailed model with independently controllable steering for the two front wheels, freely rotating front and rear wheels, and a high-definition camera. To rebuild these models, the relationships between the parts must be declared in the SDF file: base chassis, steering elements, wheels and camera are included as "links" of the car "model" under the <model> label, each with a user-defined unique name. Note that the names of models and links must be specific and must not collide with the names of other objects. The following describes the basic structure of the SDF file expressing the physical relationships of the whole vehicle:
**1. Description of Labels [6]:**

The mesh file "vehicle_body.dae" (shown in Fig. 3.1b as the blue car body) is used for the base chassis of the experimental vehicle under the <link name="base_link"> label, and the mesh file "wheel.dae" is used for the rotatable vehicle wheels under <link name="front_left_wheel_link"> and the three similar link labels. For the steering models, <cylinder> labels are used to generate simple cylinders (length 0.01 m, radius 0.1 m) as joint elements between the wheels and the chassis.
**2. Sensor Label:**
To activate the camera function in the Gazebo simulator, the camera model must declare, under the "camera link" label, a new nested sensor label <sensor> with "name" and "type=camera" attributes. The detailed construction of the camera sensor is shown in the script below:
```
<sensor name='camera' type='camera'>
  <pose>0 0 0.132 0 -0.174 0</pose>
  <topic>/smart/camera</topic>
  <camera>
    <horizontal_fov>1.57</horizontal_fov>
    <image>
      <width>736</width>
      <height>480</height>
    </image>
    <clip>
      <near>0.1</near>
      <far>100</far>
    </clip>
    <noise>
      <type>gaussian</type>
      <mean>0</mean>
      <stddev>0.007</stddev>
    </noise>
  </camera>
  <always_on>1</always_on>
  <update_rate>30</update_rate>
  <visualize>1</visualize>
</sensor>
```
* <image> -- this label defines the camera resolution, which is also the size of the frame image that is sent to the YOLO detector engine. According to the requirements of the YOLO detection algorithm, the width and height of the camera image should be integer multiples of 32.
### Construction of Engines and Transceiver Functions
The whole project is organized as an experiment on the NRP platform; accordingly, the whole package of the autonomous driving benchmark lives under the "nrp-core" path in the examples folder. Following the NRP components for a simulation experiment introduced before, the application is developed modularly according to the requirements of the autonomous driving benchmark. The whole system is shown in Fig. 3.2. The construction of the simulation primarily comprises two branches:

Figure 3.2: the system of autonomous driving on NRP
* A closed loop: the vehicle's location information is obtained from the Gazebo engine and sent to the vehicle control engine using Gazebo DataPacks (Protobuf DataPacks); the joint control commands are then sent back to the Gazebo engine.
* An open loop: the camera information is obtained from the Gazebo engine and sent to the YOLO detector engine; OpenCV is finally used to show the detected frame image in a monitor window.
#### 3.4.1 Gazebo plugins
Before the different pieces of information can be acquired, the corresponding plugins must be declared in the SDF file. These plugin labels tell Gazebo which information and parameters should be sent, received and assigned. A set of plugins is provided to integrate Gazebo into an NRP-Core simulation. The NRPGazeboCommunicationPlugin registers the engine with the SimulationManager and handles control requests for advancing the Gazebo simulation or shutting it down; its use is mandatory in order to run the engine. Two implementations of the Gazebo engine are provided, one based on JSON over REST and another on Protobuf over gRPC. The latter performs much better and is recommended. The gRPC implementation uses protobuf objects to encapsulate data exchanged between the Engine and TFs, whereas the JSON implementation uses nlohmann::json objects. Apart from this, both engines are very similar in their configuration and behavior. The rest of the documentation below implicitly refers to the gRPC implementation, even though in most cases the JSON implementation shows no differences. The corresponding plugins are also based on Protobuf over the gRPC protocol. Four plugins are applied in the SDF model world file:
* NRPGazeboGrpcCommunicationPlugin: this is the main communication plugin; it sets up a gRPC server and waits for NRP commands. It must be declared under the <world> label in the SDF file.
```
<world name='default'>
  ...
  <plugin name="nrp_world_plugin" filename="NRPGazeboGrpcWorldPlugin.so"/>
  ...
</world>
```
* NRPGazeboGrpcCameraPlugin: this plugin is used to add a GazeboCameraDataPack datapack. In the SDF file the plugin is named "smart_camera" (user-defined); this name can be used by Transceiver Functions to access the corresponding information. The plugin must be declared under the <sensor> label, in this application under the camera sensor label:
```
<sensor name='camera' type='camera'>
  ...
  <plugin name='smart_camera' filename='NRPGazeboGrpcCameraControllerPlugin.so'/>
  ...
</sensor>
```
* NRPGazeboGrpcJointPlugin: this plugin is used to register GazeboJointDataPack DataPacks. In this case only those joints that are explicitly named in the plugin are registered and made controllable from NRP; each joint name must be unique and must be declared once again inside the plugin. In contrast to the other plugins described above and below, when using NRPGazeboGrpcJointPlugin the DataPacks can be used to set a target state for the referenced joint: the plugin integrates a PID controller, which can be tuned per joint for better control performance. The plugin must be declared under the corresponding <model> label, at the same level as the <joint> labels. Four joints are chosen to be controlled: the rear left and right wheel joints and the front left and right steering joints. Based on small tests with the physical model of the experimental vehicle in Gazebo, the PID controller parameters are listed in the block below:
```
<model name='smart_car'>
  ...
  <joint name="rear_left_wheel_joint">...</joint>
  <joint name="rear_right_wheel_joint">...</joint>
  <joint name="front_left_steering_joint">...</joint>
  <joint name="front_right_steering_joint">...</joint>
  ...
  <plugin name='smart_car_joint_plugin' filename='NRPGazeboGrpcJointControllerPlugin.so'>
    <rear_left_wheel_joint P='10' I='0' D='0' Type='velocity' Target='0' IMax='0' IMin='0'/>
    <rear_right_wheel_joint P='10' I='0' D='0' Type='velocity' Target='0' IMax='0' IMin='0'/>
    <front_left_steering_joint P='40000.0' I='200.0' D='1.0' Type='position' Target='0' IMax='0' IMin='0'/>
    <front_right_steering_joint P='40000.0' I='200.0' D='1.0' Type='position' Target='0' IMax='0' IMin='0'/>
  </plugin>
  ...
</model>
```
**Attention:** two target types are supported in Gazebo: Position and Velocity. For the rear left and right wheels of the vehicle the recommended type is "velocity", and for the front left and right steering joints the recommended type is "position", because the rear wheels are best controlled through their rotational velocity while the front steering uses an angle to describe the turning control.
* NRPGazeboGrpcLinkPlugin: this plugin is used to register GazeboLinkDataPack DataPacks for each link of the experimental vehicle. Similar to the sensor plugin, it must be declared under the <model> label, at the same level as the <link> labels, and only needs to be declared once:
```
<model name='smart_car'>
  ...
  <plugin name='smart_car_link_plugin' filename='NRPGazeboGrpcLinkControllerPlugin.so'/>
  ...
  <link name='base_link'>...</link>
  <link name='eye_vision_camera'>...</link>
  <link name='front_left_steering_link'>...</link>
  <link name='front_left_wheel_link'>...</link>
  <link name='front_right_steering_link'>...</link>
  <link name='front_right_wheel_link'>...</link>
  <link name='rear_left_wheel_link'>...</link>
  <link name='rear_right_wheel_link'>...</link>
  ...
</model>
```
#### 3.4.2 State Transceiver Function "state_tf.py"
The State Transceiver Function acquires the location information from the Gazebo engine and transmits it to the vehicle control engine, which computes the next control commands. Receiving the location coordinates of the vehicle is based on the DataPack from Gazebo; this DataPack is already encapsulated in NRP, so the decorator only needs to indicate which link's information should be loaded into the DataPack.
```
@EngineDataPack(keyword='state_gazebo', id=DataPackIdentifier('smart_car_link_plugin::base_link', 'gazebo'))
@TransceiverFunction("car_ctl_engine")
def car_control(state_gazebo):
```
In this experiment the location coordinates are taken from the base chassis "base_link"; it is referenced with the scope notation combining the plugin name declared in the SDF file with the link name. The received DataPack, under the user-defined keyword "state_gazebo", is passed into the Transceiver Function "car_control()".
**Attention:** to guarantee that link information can be obtained from Gazebo, it is recommended to add the following declaration at the top of the script:
```
from nrp_core.data.nrp_protobuf import GazeboLinkDataPack
```
This import lets NRP communicate accurately with Gazebo.
The link-information DataPack in NRP is called GazeboLinkDataPack; its attributes are listed in Table 3.1. In this project the "position" and "rotation" information is selected, written into the JSON DataPack defined by the "car_ctl_engine" engine, and finally returned to "car_ctl_engine". The "JsonDataPack" function is used to obtain the DataPack defined by the other engine, in its own format, and to assign the corresponding parameters with the information received from Gazebo.
```
car_state = JsonDataPack("state_location", "car_ctl_engine")

car_state.data['location_x'] = state_gazebo.data.position[0]
car_state.data['location_y'] = state_gazebo.data.position[1]
car_state.data['qtn_x'] = state_gazebo.data.rotation[0]
car_state.data['qtn_y'] = state_gazebo.data.rotation[1]
car_state.data['qtn_z'] = state_gazebo.data.rotation[2]
car_state.data['qtn_w'] = state_gazebo.data.rotation[3]
```
**Tip:** the z-coordinate is not needed, so only the x- and y-coordinates are included in the DataPack; this keeps the JSON DataPack smaller and makes the transmission more efficient.
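Putting the fragments above together, the whole "state_tf.py" roughly takes the following shape; the import lines (other than the GazeboLinkDataPack import given above) and the final return statement are assumptions based on the Transceiver Function format of section 2.2.2.

```
from nrp_core import *                                   # decorators (assumed import path)
from nrp_core.data.nrp_json import JsonDataPack          # assumed import path
from nrp_core.data.nrp_protobuf import GazeboLinkDataPack

@EngineDataPack(keyword='state_gazebo', id=DataPackIdentifier('smart_car_link_plugin::base_link', 'gazebo'))
@TransceiverFunction("car_ctl_engine")
def car_control(state_gazebo):
    car_state = JsonDataPack("state_location", "car_ctl_engine")

    car_state.data['location_x'] = state_gazebo.data.position[0]
    car_state.data['location_y'] = state_gazebo.data.position[1]
    car_state.data['qtn_x'] = state_gazebo.data.rotation[0]
    car_state.data['qtn_y'] = state_gazebo.data.rotation[1]
    car_state.data['qtn_z'] = state_gazebo.data.rotation[2]
    car_state.data['qtn_w'] = state_gazebo.data.rotation[3]

    return [car_state]
```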
#### 3.4.3 Vehicle Control Engine "car_ctl_engine.py"
The vehicle control engine is written as a Python JSON Engine. The construction of a Python JSON Engine is similar to the definition of a Python class file, including attributes such as parameters, their initialization and the class functions. The script must declare that it inherits from the class "EngineScript" so that NRP recognizes the file as a Python JSON Engine it can execute. A Python JSON Engine is mostly divided into three main blocks, implemented as the functions def initialize(self), def runLoop(self, timestep_ns) and def shutdown(self); a minimal skeleton is sketched below.
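The sketch below shows such a skeleton; apart from the EngineScript base class and the three functions named in the text, the import path and the class name are assumptions.

```
from nrp_core.engines.python_json import EngineScript    # import path assumed

class Script(EngineScript):
    def initialize(self):
        # register the DataPacks owned by this engine and set their initial values
        self._registerDataPack("actors")
        self._setDataPack("actors", {"angular_L": 0, "angular_R": 0, "linear_L": 0, "linear_R": 0})

    def runLoop(self, timestep_ns):
        # called once per simulation time step: read inputs, compute, write outputs
        pass

    def shutdown(self):
        # called once when the simulation ends or the engine fails
        pass
```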
* In the **initialize** block the initial parameters and functions for the simulation are defined. In this block the DataPacks belonging to this specific Engine must also be defined, using the "self._registerDataPack()" and "self._setDataPack()" functions:
```
self._registerDataPack("actors")
self._setDataPack("actors", {"angular_L": 0, "angular_R": 0, "linear_L": 0, "linear_R": 0})
self._registerDataPack("state_location")
self._setDataPack("state_location", {"location_x": 0, "location_y": 0, "qtn_x": 0, "qtn_y": 0, "qtn_z": 0, "qtn_w": 0})
```
  - _registerDataPack(): registers the user-defined DataPack in the corresponding Engine.
  - _setDataPack(): given the name of a DataPack, sets its parameters, form and values.

  In this project the generated actor control commands and the location coordinates of the vehicle are properties of DataPacks belonging to the "car_ctl_engine" Engine.
* The **runLoop** block is the main block that is looped continuously during the simulation, which means that all time-dependent computation that needs to be updated continuously is written in this block. In the "car_ctl_engine" Engine, the information from the Gazebo engine is fetched in every step with the function "self._getDataPack()":
```
state = self._getDataPack("state_location")
```
  - _getDataPack(): takes the user-defined name of the DataPack. Attention: the name must be the same as the name of the DataPack that the Transceiver Function sends back to the Engine.
| Attribute | Description | Python Type | C Type |
| --- | --- | --- | --- |
| pos | Link Position | numpy.array(3, numpy.float32) | std::array<float,3> |
| rot | Link Rotation as quaternion | numpy.array(4, numpy.float32) | std::array<float,4> |
| lin_vel | Link Linear Velocity | numpy.array(3, numpy.float32) | std::array<float,3> |
| ang_vel | Link Angular Velocity | numpy.array(3, numpy.float32) | std::array<float,3> |

Table 3.1: GazeboLinkDataPack attributes. **Tip:** the rotation information from Gazebo is a quaternion and its four parameters are ordered "x, y, z, w".
After the computation of the corresponding vehicle control commands, the function "_setDataPack()" is called again to store the command information in the corresponding "actors" DataPack, where it waits for another Transceiver Function to pick it up:
```
self._setDataPack("actors", {"angular_L": steerL_angle, "angular_R": steerR_angle, "linear_L": rearL_omiga, "linear_R": rearR_omiga})
```
* The **shutdown** block is only called when the simulation is shutting down or when the Engine raises an error during execution.
#### 3.4.4 Package of Euler-angle-quaternion Transform and Trajectory
* Euler-angle and quaternion transform: the rotation information received from Gazebo is a quaternion. It has to be converted into Euler angles so that the desired steering angle can be computed conveniently with respect to the predefined trajectory. This package is called "euler_from_quaternion.py" and is imported into the "car_ctl_engine" Engine.
* Trajectory and computation of the target steering angle: the predefined trajectory consists of many equally spaced waypoint coordinates. By comparing the present location with the target coordinate, the package computes the remaining distance and the steering angle and decides whether the vehicle has arrived at the target. If the vehicle comes within a radius of 0.8 m of the target point, it is considered to have reached the current destination, and the index jumps to the next waypoint until the final destination is reached. This package is called "relateAngle_computation.py". A sketch of both helpers is given after this list.
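A minimal sketch of the logic in these two helper packages is given below; the function names mirror the package names above, while the exact formulas and return values are simplified assumptions (only the yaw angle is extracted from the quaternion, and the 0.8 m radius is the threshold mentioned above).

```
import math

def euler_from_quaternion(x, y, z, w):
    # yaw (rotation around z) from a Gazebo quaternion; roll and pitch are ignored here
    siny_cosp = 2.0 * (w * z + x * y)
    cosy_cosp = 1.0 - 2.0 * (y * y + z * z)
    return math.atan2(siny_cosp, cosy_cosp)

def relate_angle_computation(car_x, car_y, yaw, trajectory, index):
    # distance and relative steering angle towards the current target waypoint;
    # the index advances once the car is within 0.8 m of the waypoint
    target_x, target_y = trajectory[index]
    distance = math.hypot(target_x - car_x, target_y - car_y)
    if distance < 0.8 and index < len(trajectory) - 1:
        index += 1
        target_x, target_y = trajectory[index]
    steering_angle = math.atan2(target_y - car_y, target_x - car_x) - yaw
    return distance, steering_angle, index
```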
#### 3.4.5 Actors "Motor" Setting Transceiver Function "motor_set_tf.py"
This Transceiver Function is a communication medium similar to the state Transceiver Function, but the direction of the data is now from the "car_ctl_engine" Engine to the Gazebo engine. The data acquired from the "car_ctl_engine" Engine is the DataPack "actors", passed in under the keyword "actors":
```
@EngineDataPack(keyword='actors', id=DataPackIdentifier('actors', 'car_ctl_engine'))
@TransceiverFunction("gazebo")
def car_control(actors):
```
The DataPacks for the Gazebo joints must be created in this Transceiver Function with the "GazeboJointDataPack()" function. This function is specifically provided to control Gazebo joints; its parameters are the corresponding joint name (composed of the plugin name declared with NRPGazeboGrpcJointPlugin in the SDF file and the joint name) and the target Gazebo engine ("gazebo"). Attention: each joint should be registered as its own joint DataPack:
```
rear_left_wheel_joint = GazeboJointDataPack("smart_car_joint_plugin::rear_left_wheel_joint", "gazebo")
rear_right_wheel_joint = GazeboJointDataPack("smart_car_joint_plugin::rear_right_wheel_joint", "gazebo")
front_left_steering_joint = GazeboJointDataPack("smart_car_joint_plugin::front_left_steering_joint", "gazebo")
front_right_steering_joint = GazeboJointDataPack("smart_car_joint_plugin::front_right_steering_joint", "gazebo")
```
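After the joint DataPacks are created, their target values can be filled from the received "actors" commands and the DataPacks returned to the Gazebo engine. The continuation below is a sketch: the use of the velocity/position attributes follows Table 3.2 below and the PID target types configured in section 3.4.1, but the exact mapping of the "actors" fields is an assumption.

```
    # (continuing inside car_control(actors) from the fragment above)
    # rear wheels use the Velocity target type, front steerings use Position
    rear_left_wheel_joint.data.velocity = actors.data["linear_L"]
    rear_right_wheel_joint.data.velocity = actors.data["linear_R"]
    front_left_steering_joint.data.position = actors.data["angular_L"]
    front_right_steering_joint.data.position = actors.data["angular_R"]

    return [rear_left_wheel_joint, rear_right_wheel_joint,
            front_left_steering_joint, front_right_steering_joint]
```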
The joint control DataPack is GazeboJointDataPack and its attributes are listed in Table 3.2:
**Attention:** to guarantee that joint information can be sent to Gazebo, it is recommended to add the following declaration at the top of the script:
```
from nrp_core.data.nrp_protobuf import GazeboJointDataPack
```

| Attribute | Description | Python Type | C Type |
| --- | --- | --- | --- |
| position | Joint angle position (in rad) | float | float |
| velocity | Joint angle velocity (in rad/s) | float | float |
| effort | Joint angle effort (in N) | float | float |

Table 3.2: GazeboJointDataPack attributes.
#### 3.4.6 Camera Frame-Image Transceiver Function "camera_tf.py"
The camera frame-image Transceiver Function acquires the single frame image gathered by the camera plugin installed in Gazebo and sends this frame image to the YOLOv5 engine "yolo_detector". Receiving the camera image is based on the camera DataPack from Gazebo, called "GazeboCameraDataPack". To get the data, the decorator must declare the corresponding sensor name as registered in the SDF file, indicate the "gazebo" engine, and assign a new keyword for the Transceiver Function:
```
@EngineDataPack(keyword="camera", id=DataPackIdentifier('smart_camera::camera', 'gazebo'))
@TransceiverFunction("yolo_detector")
def detect_img(camera):
```
**Attention:** to guarantee that camera information can be acquired from Gazebo, it is recommended to add the following declaration at the top of the script, which imports GazeboCameraDataPack:
```
from nrp_core.data.nrp_protobuf import GazeboCameraDataPack
```
The received image information consists of four parameters: height, width, depth and the image data. The attributes of the GazeboCameraDataPack are listed in Table 3.3.
The image data received from Gazebo is a 1-D array of unsigned 8-bit pixels with 3 interleaved channels. The Transceiver Function therefore pre-processes it with NumPy's "frombuffer()" function, which converts the raw buffer into a NumPy array:
```
imgData = np.frombuffer(trans_imgData_bytes, np.uint8)
```
Finally, the JSON DataPack of the YOLOv5 engine is created, all the information is written into it, and it is returned to the YOLOv5 engine:
```
processed_image = JsonDataPack("camera_img", "yolo_detector")

processed_image.data['c_imageHeight'] = trans_imgHeight
processed_image.data['c_imageWidth'] = trans_imgWidth
processed_image.data['current_image_frame'] = imgData
```
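Assembled, the whole "camera_tf.py" is roughly the sketch below; the import lines, the isEmpty() guard, the conversion to a plain list and the return statement are assumptions that follow the Transceiver Function format of section 2.2.2.

```
from nrp_core import *                                        # decorators (assumed import path)
from nrp_core.data.nrp_json import JsonDataPack               # assumed import path
from nrp_core.data.nrp_protobuf import GazeboCameraDataPack
import numpy as np

@EngineDataPack(keyword="camera", id=DataPackIdentifier('smart_camera::camera', 'gazebo'))
@TransceiverFunction("yolo_detector")
def detect_img(camera):
    processed_image = JsonDataPack("camera_img", "yolo_detector")
    if camera.isEmpty():
        return [processed_image]            # nothing to forward in this step

    trans_imgHeight = camera.data.image_height
    trans_imgWidth = camera.data.image_width
    imgData = np.frombuffer(camera.data.image_data, np.uint8)

    processed_image.data['c_imageHeight'] = int(trans_imgHeight)
    processed_image.data['c_imageWidth'] = int(trans_imgWidth)
    # converted to a plain list here for the JSON DataPack (the original script assigns the NumPy array directly)
    processed_image.data['current_image_frame'] = imgData.tolist()
    return [processed_image]
```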
#### 3.4.7 YOLO v5 Engine for Detection of the Objects "yolo_detector_engine.py"
The YOLOv5 engine receives the camera frame image from Gazebo via the camera Transceiver Function and detects the objects in the current frame. In the end, the result is shown in a separate window through the OpenCV package. The YOLOv5 engine is also based on the Python JSON Engine model and is similar to the vehicle control engine of section 3.4.3. The whole structure is divided into the three main blocks, with an additional step to import the YOLOv5 package.
* Initialization of the engine: the "camera_img" DataPack is registered and the YOLOv5 object is created and prepared with "detectorInit()" and "detectorWarmUp()":
```
self._registerDataPack("camera_img")
self._setDataPack("camera_img", {"c_imageHeight": 0, "c_imageWidth": 0, "current_image_frame": [240, 320, 3]})
self.image_np = 0

self.detector = Yolov5.Yolov5Detector()
stride, names, pt, jit, onnx, engine, imgsz, device = self.detector.detectorInit()
self.detector.detectorWarmUp()
```

| Attribute | Description | Python Type | C Type |
| --- | --- | --- | --- |
| image_height | Camera Image height | uint32 | uint32 |
| image_width | Camera Image width | uint32 | uint32 |
| image_depth | Camera Image depth. Number of bytes per pixel | uint8 | uint32 |
| image_data | Camera Image data. 1-D array of pixel data | numpy.array(image_height * image_width * image_depth, numpy.uint8) | std::vector<unsigned char> |

Table 3.3: GazeboCameraDataPack attributes.
* In the main loop function, the first step is to acquire the camera image with the "_getDataPack()" function. The image data extracted from the JSON DataPack filled by the camera Transceiver Function arrives as a 1-D "list", so it has to be reshaped into a form suitable for OpenCV. First the 1-D list is converted into a NumPy ndarray and reshaped according to the received height and width. OpenCV expects images in "BGR" order while the image from Gazebo is "RGB", so an extra conversion step reorders the channels of the NumPy ndarray [14]. Finally, the network-shaped NumPy array and the OpenCV-shaped image are passed together into the detect function, which returns an OpenCV-shaped image with the object bounding boxes; this ndarray can be shown directly in a window using OpenCV:
```
# Image conversion
img_frame = np.array(img_list, dtype=np.uint8)
cv_image = img_frame.reshape((img_height, img_width, 3))
cv_image = cv_image[:, :, ::-1] - np.zeros_like(cv_image)
np_image = cv_image.transpose(2, 0, 1)

# Image detection by Yolov5
cv_ImgRet, detect, _ = self.detector.detectImage(np_image, cv_image, needProcess=True)

# Show the detected image through OpenCV
cv2.imshow('detected image', cv_ImgRet)
cv2.waitKey(1)
```
**Figure 4.1** Object-detection by Yolo v5 on NRP platform (right: another frame)
The final goal of the autonomous driving benchmark platform is to build a real-world simulation platform that can train, research, test and validate different AI algorithms integrated into vehicles, then derive benchmarks and evaluations from their performance in order to adjust the algorithms, and finally deploy these algorithms on a real vehicle. The project "Autonomous Driving Simulator and Benchmark on Neurorobotics Platform" is a basic, tentative concept and a foundation for researching the feasibility of a multi-agent simulator on the NRP-Core platform. With the construction of the single vehicle agent described above, the autonomous driving simulation experiment has been completed. This section discusses the results and gives suggestions based on the performance of the simulation on the NRP-Core platform and the Gazebo simulator.
### Simulation Result of Object-detection and Autonomous Driving
#### 4.1.1 Object Detection through YOLOv5 on NRP
The object detection is based on the virtual camera in the Gazebo simulator and the YOLOv5 algorithm; NRP-Core is the transmission medium between Gazebo and the YOLOv5 detector. The simulation result is shown in Fig. 4.1.
Regarding object detection, the result meets the expected standard and performs well: most of the objects in the camera frame image are detected, although in some frames the detected objects are not stable and temporarily become "undetected". On the other hand, although most objects are correctly detected with a high confidence (for a person, for example, between 80% and 93%), there are a few detection errors: flowering shrubs are detected as a car or a potted plant, a bush is detected as an umbrella, and the bus in front of the vehicle is detected as a suitcase. Finally, even though YOLO works well on the NRP platform, the performance is not smooth: the frame rate in the Gazebo simulator is very low, only around 10-13 frames per second, and in more complex situations it drops to about 5 frames per second. This makes the simulation in Gazebo very slow and creates a stuttering impression, which gets worse when the size and resolution of the camera image are increased.
#### 4.1.2 Autonomous Driving along pre-defined Trajectory
Autonomous driving along a predefined trajectory works well; the simulation runs smoothly and the frame rate stays between 20 and 40 fps, which is within the tolerance of a real-world simulation. Part of the trajectory of the experimental vehicle is shown in Fig. 4.2; the vehicle drives around the Parkring and completes a full lap. In the original vision of the experiment, the vehicle would make decisions based on the detection result, accelerating, braking and turning to evade obstacles. Since no suitable autonomous driving algorithm is available for this project at present, a predefined trajectory consisting of many waypoint coordinates is used instead; the speed of the vehicle is fixed, and a PID controller is used to achieve the simulated autonomous driving.
On the other hand, all 3D models are scaled to the real size of the objects. After many tests with different world map sizes, the size of the Parkring turns out to be close to the limit of what Gazebo can handle, even though the complexity of the map is not high. For larger maps, the frame rate drops noticeably and the simulation eventually becomes jerky and loses its sense of immersion.
#### 4.1.3 Multi-Engines united Simulation
The final experiment starts the YOLOv5 engine and the autonomous driving control engine together. The experiments above load only one engine at a time, and in that setting they react well and show relatively good performance. The goal of this project, however, is also to investigate the possibility of multi-agent simulation.
The multi-engine simulation does work: the YOLOv5 engine detects the image and shows it in a window while the vehicle simultaneously drives automatically along the trajectory. But the simulation performance is poor: the frame rate only stays between 9 and 11 fps, the vehicle in Gazebo moves very slowly and not smoothly, and the simulation time shows an enormous error compared to real time.
Figure 4.2: Simulation trajectory of autonomous driving
### Analysis of Simulation Performance and Discussion
#### 4.2.1 YOLOv5 Detection ratio and Accuracy
Most of the objects near the vehicle in the field of view of the camera are detected with high confidence, but some errors also appear during detection: some objects are detected as the wrong class, some far objects are detected while some obvious close objects are not. The reasons can be summarized in the following aspects:
1. The integrated YOLOv5 algorithm is the original version; it is not aimed at the specific purpose of this autonomous driving project and has not been trained for this specific usage. Its network parameters and object classes are the original ones, and no project-specific data set was used, which creates a considerable gap between the detected result and the expected performance. This explains the detection errors described in section 4.1.1.
2. The accuracy and realism of the 3D models and the environment. Object detection algorithms depend heavily on the quality of the input image, where quality does not refer to the resolution but to the "realism" of the objects in the image. The original YOLOv5 network was trained on real-world images, but the camera images from Gazebo are far from real-world images. The 3D models and the environment in the Gazebo simulator are relatively rough, almost cartoon-like, and differ strongly from real-world objects in terms of lighting, surface texture and reflection, and model accuracy. For example, the bus in Gazebo has such poor texture and reflection that it looks like a black box and is hard to recognize; the YOLO engine detects it as a suitcase. The environment in Gazebo is also not finely built: the shrubs and bushes on the roadside have a rough appearance with coarse triangles and obvious polygon shapes. This introduces large errors and influences the accuracy of the algorithms under test.
3. The properties of the Gazebo simulator. Gazebo is perhaps suitable for small scene simulations, such as a room, a petrol station or a factory. Compared to other simulators on the market such as Unity or Unreal, the advantage of Gazebo is the quick start-up for reproducing a situation and environment. But the upper limit of Gazebo's rendering quality is not very close to the real world; one immediately recognizes it as a virtual simulation, which strongly affects the training and testing of object-detection algorithms. Constructing a virtual world in Gazebo is also difficult and requires supporting applications such as Blender [15]. Even if the world looks very realistic in Blender, the rendering quality after the transfer to Gazebo becomes poor.
In fact, although the detection shows some mistakes and errors, the overall result and performance are in line with the forecast that the YOLOv5 algorithm performs very well.
#### 4.2.2 Multi-Engines Situation and Non-smooth Simulation Phenomenon
The simulation with the YOLO engine alone, as well as the combined multi-engine operation, shows poor performance in terms of vehicle movement and overall simulation frame rate, whereas the simulation with only the vehicle control engine works well and runs smoothly. Comparison experiments show that the main reason for the poor performance is the transmission mechanism between the Python JSON Engine and the NRP platform.

Figure 4.3: Distance between real-world and visual camera image

In the simulation with only the vehicle control engine, the transmission from Gazebo is based on the Protobuf-gRPC protocol and the transmission back to Gazebo uses the JSON protocol; the transmitted information is very small, since the data consists only of control commands such as "linear velocity" and "angular velocity", which take little transmission capacity, so the JSON protocol performs with negligible difference to the Protobuf protocol. The image transmission from Gazebo to the Transceiver Function is also based on the Protobuf-gRPC method. However, the transmission of an image from the Transceiver Function to the YOLO engine via the JSON protocol is very slow, because the image is a very large payload; according to the simulation loop in NRP, this blocks the simulation and forces the system to wait for the image transfer to finish. The transfer efficiency of the JSON protocol is therefore the choke point of the transmission, and according to the tests the only way to meet the simulation speed requirements was to reduce the resolution of the camera.
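A rough back-of-the-envelope estimate, using the camera resolution configured in section 3.3, illustrates why the per-frame image payload overwhelms the JSON path:

```
# raw camera payload per frame for the 736x480 RGB camera configured in section 3.3
width, height, channels = 736, 480, 3
bytes_per_frame = width * height * channels      # 1,059,840 bytes, about 1 MB of raw pixels
mb_per_second = bytes_per_frame * 30 / 1e6       # about 31.8 MB/s at the 30 fps update rate
print(bytes_per_frame, mb_per_second)
# JSON encoding inflates this further, since every pixel value becomes a multi-character number plus separator
```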
### Improvement Advice and Prospect
The autonomous driving simulator and application on NRP-Core achieve the first goal of building a concept and foundation for multi-agent simulation. At the same time, this model is still imperfect and has shortcomings that should be improved. The possibility of a real-world simulator on the NRP-Core platform has also been discussed; NRP-Core has large potential for complete simulation and online cooperation with other platforms. Some directions and advice for improving the present application on NRP are given below.
#### 4.3.1 Unhindered simulation with other communication protocol
As mentioned before, with the JSON protocol the communication is currently not smooth and the simulation performance with the YOLO engine is poor. The transmission of information through the Protobuf protocol, used between Gazebo and the Transceiver Functions, performs far better than the JSON protocol. The NRP-Core development group has been developing and integrating the Protobuf-gRPC [16] communication mechanism into the NRP-Core platform to solve the problem of transmitting large data. In order to use YOLO or other object-detection engines, it is therefore recommended to switch the existing communication protocol to Protobuf-gRPC. Protobuf is a free, open-source, cross-platform data format used to serialize structured data, developed by Google; details can be found on the official website [16].
#### 4.3.2 Selection of Basic Simulator with better performance
Because of the limited performance and functionality of Gazebo, many features are not easy to realize in it, such as weather and its changes, and the accuracy and realism of the 3D models are also limited. Using high-accuracy models makes the load on Gazebo heavier because of its dated optimization. There are many excellent simulators that also provide application development packages which can shorten the development period, such as Unity3D [17] or the Unreal Engine simulator [18]. In the autonomous driving simulator and benchmark team there is an application demo on the Unity3D simulator, and Fig. 4.4 shows the difference between Gazebo and Unity3D.
The world constructed and simulated in Unity3D has a much better, close-to-real-world rendering quality than Gazebo, and the simulation can be maintained above 30 or even 60 fps. Regarding the YoloV5 detection results, according to the analysis in section 4.2.1, the results obtained with Unity3D are better than those with the Gazebo simulator because of the more precise 3-D models and their better rendering quality (for an example see Fig. 4.5). For further development it is therefore recommended to use Unity3D or another game engine as the basic simulator and world representation. NRP-Core will also release a new version that integrates interfaces with Unity3D and can use the Protobuf protocol to ensure better performance for a real-world simulation.
#### 4.3.3 Comparison with other communication systems and frameworks
Many communication frameworks and systems are widely used in academia and industry for robot development; in particular, ROS (Robot Operating System) already has many applications and a large development community. ROS is widely used for robot development with different algorithms: detection and computer vision, SLAM (Simultaneous Localization and Mapping), motion control, and so on. It provides relatively mature and stable mechanisms for transmitting the necessary data from the sensors to the robot's algorithms and for sending the corresponding control commands to the robot body or actuators. The reason NRP-Core was chosen as the communication system, however, lies in its concepts of Engines and Transceiver Functions. Compared with ROS or other frameworks, the NRP platform has several advantages: it is very easy to build multi-agent simulations and to conveniently add agents to or remove them from the simulation configuration; the management of information is easier to follow than in the ROS topic system; and the transmission of information is theoretically more efficient and more modular. The platform can also run ROS in parallel as an additional transmission method, in order to match and adapt to other systems or simulations. From this viewpoint, the NRP platform generalizes the transmission of data and extends the boundary of robot development, making development more modular and efficient. The ROS system can also realize a joint multi-agent simulation, but it is not convenient to manage on the basis of the topic system; ROS is currently better suited to a single-agent simulation together with its simulation environment. As mentioned before, a truly interacting environment is not easy to realize there, but NRP-Core has this potential, because it can run the ROS system at the same time and lets agents developed on ROS easily join the simulation. This makes further development on the NRP-Core platform worthwhile.
## 5 Conclusion and Epilogue
This project focuses on the first construction of a basic framework on the Neurorobotics Platform for an autonomous driving simulator and benchmark. Most of the functions, including the template of the autonomous driving function and the object-detection functions, have been realized. The benchmark part is left for future work, because there are as yet no suitable standards and its further development is a large project that is regarded as the next step towards a complete application.
Figure 4.4: Construction of simulation world in Unity3D with weather application
Figure 4.5: Comparison of the detection results on different platforms
The project started with researching the basic characteristics needed to build a simulation experiment on the NRP-Core Platform. The requirements for constructing the simulation were then listed, and each necessary component and object of NRP-Core was given a basic, key explanation. The next step, following the NRP-Core framework, was the construction of the autonomous driving simulator application: first establishing the physical model of the vehicle and the corresponding environment in the SDF file, then building the "closed loop" - autonomous driving based on PID control along a pre-defined trajectory - and finally the "open loop" - object detection based on the YoloV5 algorithm - which successfully achieved the goal of displaying the detected current camera frame in a window, operating as a camera monitor. Lastly, the current problems and the points for improvement have been listed and discussed in this development document.
At the same time, many problems remain to be optimized and solved. At present the simulation application can only be regarded as a feasibility study for multi-agent simulation. The performance of the scripts still has much room for improvement, and it is recommended to select a high-performance simulator as the carrier of the real-world simulation. Nevertheless, the NRP-Core platform has shown enormous potential for constructing a simulation world in which every object has an interacting function, and for controlling and managing the whole simulation project efficiently. In conclusion, the NRP-Core platform has great potential to realize a multi-agent simulation world.
|
2309.16977 | Reliability Quantification of Deep Reinforcement Learning-based Control | Reliability quantification of deep reinforcement learning (DRL)-based control
is a significant challenge for the practical application of artificial
intelligence (AI) in safety-critical systems. This study proposes a method for
quantifying the reliability of DRL-based control. First, an existing method,
random noise distillation, was applied to the reliability evaluation to clarify
the issues to be solved. Second, a novel method for reliability quantification
was proposed to solve these issues. The reliability is quantified using two
neural networks: reference and evaluator. They have the same structure with the
same initial parameters. The outputs of the two networks were the same before
training. During training, the evaluator network parameters were updated to
maximize the difference between the reference and evaluator networks for
trained data. Thus, the reliability of the DRL-based control for a state can be
evaluated based on the difference in output between the two networks. The
proposed method was applied to DQN-based control as an example of a simple
task, and its effectiveness was demonstrated. Finally, the proposed method was
applied to the problem of switching trained models depending on the state.
Consequently, the performance of the DRL-based control was improved by
switching the trained models according to their reliability. | Hitoshi Yoshioka, Hirotada Hashimoto | 2023-09-29T04:49:49Z | http://arxiv.org/abs/2309.16977v2 | # Reliability Quantification of Deep Reinforcement Learning-based Control
###### Abstract
Reliability quantification of deep reinforcement learning (DRL)-based control is a significant challenge for the practical application of artificial intelligence (AI) in safety-critical systems. This study proposes a method for quantifying the reliability of DRL-based control. First, an existing method, random noise distillation, was applied to the reliability evaluation to clarify the issues to be solved. Second, a novel method for reliability quantification was proposed to solve these issues. The reliability is quantified using two neural networks: reference and evaluator. They have the same structure with the same initial parameters. The outputs of the two networks were the same before training. During training, the evaluator network parameters were updated to maximize the difference between the reference and evaluator networks for trained data. Thus, the reliability of the DRL-based control for a state can be evaluated based on the difference in output between the two networks. The proposed method was applied to DQN-based control as an example of a simple task, and its effectiveness was demonstrated. Finally, the proposed method was applied to the problem of switching trained models depending on the state. Consequently, the performance of the DRL-based control was improved by switching the trained models according to their reliability.
Machine Learning, Reinforcement Learning, DRL, Reinforcement Learning-based Control
## 1 Introduction
Deep reinforcement learning (DRL) has attracted considerable attention as a method for realizing autonomous control. It achieves high performance even for highly complicated tasks. However, the black-box problem prevents the practical application of DRL-based controls in safety-critical systems. In DRL, the agent, which is a subject of action, autonomously learns the action that maximizes the cumulative discounted reward using multi-layer neural networks. Therefore, the intentions and decision-making processes of DRL-based control remain unclear. It is difficult for human users to accept DRL-based control outputs without showing the reliability of the DRL-based control.
Furthermore, the causes of inappropriate DRL-based control decisions must be explained. Recently, the explainability of artificial intelligence (AI) has become crucial and has been studied as an explainable AI (XAI). One approach is to analyze the change in outputs by eliminating part of the input data or by adding noise. [1, 2]. It can analyze the contributions of data/area in the input data and can be applied to AIs of any structure, such as random forest and neural networks. However, output changes due to many patterns of change for a specific input data/area should be analyzed to calculate the contribution. Therefore, the analysis could not be performed in real time because it required several hours. In another approach, the contribution of the input data is explained by an analysis using backpropagation. This method explains the contribution of the input data by differentiating the output of the AI with respect to the input data [3, 4, 5]. This approach can be applied to any model if it is differentiable. Furthermore, several methods suitable for image recognition have also been proposed [6, 7, 8, 9, 10]. These methods can visualize contributions of input data as a heatmap. Although they have excellent visibility, they can only be applied to convolutional neural networks (CNNs). Furthermore, AI learns the complex relationship between input and output data that humans cannot understand; therefore, visualized contributions are often unconvincing to humans. Consequently, many XAI methods have been proposed thus far [11, 12, 13, 14]; nevertheless, they cannot sufficiently explain the AI's intentions, and evaluation methods for XAI themselves have not been proposed. An XAI method that can be applied to the AI of any structure in real-time and that provides convincing outputs for humans needs to be studied.
In addition to explainability, reliability is another important factor in XAI. During the operation of a system, it is necessary to confirm that the system is working properly. Automatic control based on model predictive control (MPC) can be evaluated based on whether the iteration of the optimization calculation is convergent. However, no method for evaluating the adequacy of AI-based controls has yet been proposed. In DRL, an agent learns a policy on how to act through experience; therefore, the reliability of DRL-based control relies exhaustively on whether the agent experiences the possible states in a learning environment. Because a dataset is sampled according to the agent's action and training data are randomly chosen from the sampled dataset, the sampled state has a bias caused by the agent's policy. Because it is difficult to learn all the states, there is a lack of learned states. Therefore, identifying the operational design domain (ODD) of AI requires numerous simulations covering all states in the learning environment. To solve this problem, a method that prioritizes the sampled data according to the learning loss was proposed. This can reduce training data bias when selecting the training data from the sampled dataset. However, this does not reduce the bias in the sampled dataset. To reduce this bias, improved exploration methods are necessary to enable the agents to experience various states. One approach to improving exploration is to provide an intrinsic reward to the agent when the agent experiences an unknown state. In random network distillation (RND) [15], two randomized neural networks, the target network and predictor network, are introduced to calculate the
intrinsic reward. In the learning process, the parameters of the target network were fixed, and those of the predictor network were updated to minimize the error between the target and predictor networks. Because the error is minimized for well-experienced data, it can be used as an intrinsic reward. Therefore, curiosity can be implemented in a DRL agent through the RND. RND improves learning efficiency and is especially effective in cases with a sparse reward. Although several methods for reducing bias in training data have been studied, they cannot be eliminated. Therefore, quantitatively evaluating the bias of training data for the practical application of AI-based systems is crucial.
The simplest method for assessing whether the AI has learned the state well is to count the data used [16]. Because a state is defined in a continuous multidimensional space, the number of state patterns is infinite. Therefore, determining the number of counts necessary for learning in every situation is difficult. Moreover, the same idea as the RND can be used to evaluate proficiency. As RND induces the exploration of unknown states through intrinsic reward, the value of the intrinsic reward can describe the proficiency of the state. However, the applicability of the RND to proficiency evaluation is unclear.
In this study, a novel method for reliability quantification of DRL-based controls is proposed. Reliability is evaluated by the fact that learning has been performed sufficiently for the given states. First, RND was applied to the reliability evaluation to clarify the issues to be solved. Second, a reliability quantification method is proposed to solve these issues. The reliability is quantified using reference and evaluator networks, which have the same structure and initial parameters. During the training, the parameters of the evaluator networks were updated to maximize the difference between the reference and evaluator networks. Thus, the reliability of the DRL-based control for states can be evaluated based on the difference in outputs between the two networks. For example, it was applied to deep q-network (DQN)-based control for a simple task, and its effectiveness was demonstrated. Finally, the switching of the trained DQN models is demonstrated as an example of the application of the proposed reliability quantification.
## 2 Deep Reinforcement Learning
### Reinforcement Learning
Reinforcement learning (RL) consists of an agent and environment. In the learning process, the agent observes the current state of the environment, \(s_{t}\). The agent then takes action according to its policy, \(\pi\), from an observed state. The environment transits to the next state, \(s_{t+1}\), and returns a reward, \(r_{t+1}\), to the agent. Finally, the agent updates its policy to maximize the cumulative reward in the future using the received reward. The cumulative reward \(R_{t}\), is calculated using Eq. 1, where \(\gamma\) is a discount rate and shows an uncertain amount of reward obtained in the future.
\[R_{t}=r_{t+1}+\gamma r_{t+2}+\gamma^{2}r_{t+3}+\cdots \tag{1}\]
Owing to the huge trial-and-error time, the agent obtains the optimal action that can maximize the cumulative reward.
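As a small worked example of Eq. 1, the sketch below sums a finite reward sequence with a discount rate; the reward values are arbitrary and only serve to illustrate the computation.

```python
def discounted_return(rewards, gamma=0.99):
    """Cumulative discounted reward: R_t = r_{t+1} + gamma*r_{t+2} + gamma^2*r_{t+3} + ..."""
    total = 0.0
    for k, r in enumerate(rewards):
        total += (gamma ** k) * r
    return total

# Example: three small step rewards followed by a final reward.
print(discounted_return([0.1, 0.1, 0.1, 1.0]))  # 0.1 + 0.99*0.1 + 0.99^2*0.1 + 0.99^3*1.0
```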
### Deep Q-Network
In this study, the deep Q-network (DQN) [17] was used to develop autonomous control. DQN is one of the DRL methods based on the Q function, \(Q_{\pi}(s_{t},a|\theta)\). The Q function maps the state and action to the value of the action. The action value is called the Q value, and the optimal action is that has the maximum Q value. In DQN, the neural network approximates the Q function. The parameters of the neural network were updated to minimize the loss function in Eq. 2, which indicates the error between the predicted Q value and the obtained value. The \(Q_{\pi}^{*}(s,a|\theta^{\prime})\), called target network, is a neural network that has the same structure as \(Q_{\pi}(s,a|\theta)\). It is used when predicting future rewards and stabilizing training. The parameters of the target network, \(\theta^{\prime}\), are updated by Eq. 3, where \(\alpha\) is a learning rate.
\[Loss_{DQN}(\theta)=\mathbb{E}\left[\left(r_{t+1}+\gamma\max_{a^{ \prime}\in A}Q_{\pi}^{*}(s_{t+1},a^{\prime}|\theta^{\prime})-Q_{\pi}(s_{t},a_{ t}|\theta)\right)^{2}\right] \tag{2}\] \[\theta^{\prime}\leftarrow\theta^{\prime}+\alpha(\theta-\theta^{ \prime}) \tag{3}\]
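A minimal PyTorch sketch of the DQN update in Eqs. 2 and 3 is given below. The batch layout, network interfaces and hyperparameter values are assumptions made for illustration and do not reproduce the exact implementation used in this study.

```python
import torch
import torch.nn as nn

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """Eq. 2: squared error between the predicted Q value and the bootstrapped target."""
    states, actions, rewards, next_states, dones = batch
    # Q(s_t, a_t | theta) for the actions that were actually taken
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # max_a' Q*(s_{t+1}, a' | theta') from the target network
        next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * next_q
    return nn.functional.mse_loss(q_values, target)

def soft_update(target_net, q_net, alpha=0.01):
    """Eq. 3: theta' <- theta' + alpha * (theta - theta')."""
    for tp, p in zip(target_net.parameters(), q_net.parameters()):
        tp.data.add_(alpha * (p.data - tp.data))
```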
## 3 Learning Environment
In this section, the learning environment for the validation of the existing and proposed methods is described.
### Learning Task
The task of the agent was to achieve the goal from its initial position. The goal \((x_{goal},y_{goal})\), is set at (0,0) and the initial position of the agent \((x_{init},y_{init})\), is randomly set in an area described by Eq. 4. The initial velocities \((v_{xinit},v_{yinit})\), are (0,0). The area of the learning environment is given by Eq. 5.
\[(x_{init},y_{init})\in\left\{(x,y)\,\middle|\,200\leq\sqrt{\left(x_{goal}-x\right)^{2}+\left(y_{goal}-y\right)^{2}}\leq 300\right\} \tag{4}\] \[\left\{(x,y)\,\middle|\,-400\leq x\leq 400,\;-400\leq y\leq 400\right\} \tag{5}\]
The motion of the agent is defined as the motion of a mass point, considering the resistance corresponding to the velocity. The equations of motion are described by Eqs. 6, where \(m,f,\kappa\) are a mass, a force for each axis, and a gain of resistance, respectively. The values of mass and resistance gain were set to 10 and 2, respectively.
\[\begin{cases}\text{m}\,\frac{\text{d}^{2}\text{x}}{\text{dt}^{2}}=\text{f}_{ \text{x}}-\kappa\frac{\text{dx}}{\text{dt}}\\ \text{m}\,\frac{\text{d}^{2}\text{y}}{\text{dt}^{2}}=\text{f}_{\text{y}}-\kappa \frac{\text{dy}}{\text{dt}}\end{cases} \tag{6}\]
As the action space of the DQN is defined in a discrete space, the actions of the AI are set as discrete forces, F. The action options are the nine forces described in Eqs. 7.
\[\mathrm{F}=\begin{pmatrix}f_{x}\\ f_{y}\end{pmatrix}\coloneqq\left\{\begin{pmatrix}f\sin(\theta)\\ f\cos(\theta)\end{pmatrix}\,\middle|\,(f=0)\vee\left(f=10\wedge\theta\in\left\{0,\tfrac{\pi}{4},\tfrac{\pi}{2},\ldots,\tfrac{7\pi}{4}\right\}\right)\right\} \tag{7}\]
The schematic view of the learning environment is shown in Fig. 1. The number of steps in one episode was 240, and the time step was set to 1. The agent makes decisions at each time step. The conditions for ending the episode are as follows: 240 steps are performed, the distance between the agent and goal becomes less than 10, or the agent exits the learning environment.
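The point-mass dynamics of Eqs. 6 and the nine discrete actions of Eq. 7 can be summarized by the short sketch below; the explicit Euler integration and the helper names are assumptions made for illustration only.

```python
import math

M, KAPPA, DT, F_MAG = 10.0, 2.0, 1.0, 10.0

# Nine discrete actions: zero force, or a force of magnitude 10 in one of eight directions (Eq. 7).
ACTIONS = [(0.0, 0.0)] + [
    (F_MAG * math.sin(k * math.pi / 4), F_MAG * math.cos(k * math.pi / 4)) for k in range(8)
]

def step(x, y, vx, vy, action_index):
    """One explicit-Euler step of m * dv/dt = f - kappa * v (Eqs. 6)."""
    fx, fy = ACTIONS[action_index]
    ax = (fx - KAPPA * vx) / M
    ay = (fy - KAPPA * vy) / M
    vx, vy = vx + ax * DT, vy + ay * DT
    x, y = x + vx * DT, y + vy * DT
    return x, y, vx, vy
```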
### Design of Reward
Because the agent determines the goodness of an action using a reward, a reward appropriate to the task should be designed. To realize an action to achieve a goal, a reward is given according to the deviation from the goal, \(\text{r}_{\text{t}}^{\text{dev}}\). Furthermore, the reward, according to the changes in the distance between the agent and the goal, \(\text{r}_{\text{t}}^{\text{dec}}\), is given if the distance decreases. Finally, a large negative reward was provided if the agent exited the environment. The rewards and total reward were calculated using Eqs. 8, 9, 10, and 11, where \(\text{d}_{\text{t}}\) is the distance between the agent and goal.
\[\text{r}_{\text{t}}^{\text{dev}}=-\frac{\text{d}_{\text{t}}}{400\sqrt{2}}-0.1 \tag{8}\]
Figure 1: Learning environment
\[r_{t}^{dec}=0.03+0.07*\left|1-\arccos\left(\frac{d_{t+1}-d_{t}}{\sqrt{v_{x}^{2}+v_{ y}^{2}}}\right)\right|/(\pi/2) \tag{9}\]
\[r_{t}^{done}=\begin{cases}-10,&\quad\text{if the agent go out}\\ 0,&\quad\text{else}\end{cases} \tag{10}\]
\[r_{t}=\begin{cases}r_{t}^{dev}+r_{t}^{done},&\quad d_{t+1}>d_{t}\\ r_{t}^{dev}+r_{t}^{dec}+r_{t}^{done},&\quad d_{t+1}\leq d_{t}\end{cases} \tag{11}\]
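For reference, a direct transcription of Eqs. 8-11 into Python is sketched below; the clipping of the arccos argument is an added numerical safeguard, and Eq. 9 is otherwise implemented exactly as printed.

```python
import math

def reward(d_t, d_next, vx, vy, went_out):
    """Total reward of Eq. 11, built from Eqs. 8-10."""
    r_dev = -d_t / (400.0 * math.sqrt(2.0)) - 0.1                       # Eq. 8
    r_done = -10.0 if went_out else 0.0                                  # Eq. 10
    if d_next > d_t:                                                     # distance increased
        return r_dev + r_done
    speed = math.sqrt(vx ** 2 + vy ** 2)
    arg = max(-1.0, min(1.0, (d_next - d_t) / speed)) if speed > 0 else 0.0
    r_dec = 0.03 + 0.07 * abs(1.0 - math.acos(arg)) / (math.pi / 2.0)    # Eq. 9
    return r_dev + r_dec + r_done                                        # distance decreased
```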
### Design of Observation
The observation is the input data for the neural network and should include sufficient data to predict the reward. In this study, the observed state is defined by Eq. 12, where \((dx,dy)\) and \((v_{x},v_{y})\) are the relative positions of the goal and the velocities of the agent, respectively. These values are scaled from -1 to 1. The input data for the neural network were the current state and the states in the previous four steps.
\[s_{t}\coloneqq\left[v_{x}/5,v_{y}/5,dx/800,dy/800\right] \tag{12}\]
## 4 Reliability Quantification
### Applicability of Random Network Distillation
In random network distillation (RND), an intrinsic reward is provided when an agent experiences an unknown state, which encourages exploration of unknown states. Thus, the value of the intrinsic reward can be used as a measure of unfamiliarity with a state, and RND can be applied to reliability evaluation. A neural network consists of multiple layers, and features are extracted from each layer. Because the output of each layer may change significantly even for small changes in the input data, similar features are not necessarily extracted from similar states. Therefore, the uncertainties of both the input state and the extracted features are evaluated. The structure of the neural network is defined according to the RND, as shown in Fig. 2. The neural network comprises three parts: a DQN network, an RND network that evaluates the uncertainty of the state, and an RND network that evaluates the uncertainty of the extracted features. They consist of fully connected (FC) layers; the numbers of layers and nodes are shown in Fig. 2. The activation functions in the DQN are the ReLU function in the hidden layers and the linear function in the output layer. The activations in the hidden layers and output layers of the RND networks are the softsign function and the linear function, respectively. Any DRL method can be applied by setting up its neural network instead of the DQN.
During training, the parameters of the neural networks that evaluate uncertainty are updated to minimize the output values. The parameters of the neural network predicting the Q values were updated to minimize the loss function of the DQN. To ensure that reliability quantification does not affect the DRLs, these updated processes are independent. The training hyperparameters are listed in Table 1.
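A rough PyTorch sketch of how such an RND-style uncertainty head can be set up and trained is given below; the layer sizes, the 20-dimensional stacked observation (Eq. 12 over five time steps) and the optimizer settings are assumptions and only loosely follow Fig. 2.

```python
import torch
import torch.nn as nn

def make_rnd_head(in_dim, hidden=64, out_dim=32):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.Softsign(),
                         nn.Linear(hidden, hidden), nn.Softsign(),
                         nn.Linear(hidden, out_dim))

state_dim = 20                      # 4 features (Eq. 12) stacked over 5 time steps
target = make_rnd_head(state_dim)   # parameters stay fixed
predictor = make_rnd_head(state_dim)
for p in target.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)

def uncertainty(states):
    # Error between target and predictor networks: small for well-experienced states.
    return (target(states) - predictor(states)).abs().mean(dim=1)

def rnd_update(states):
    loss = uncertainty(states).mean()   # minimized only on states that were actually visited
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```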
After training until the loss values converged, the trained model was evaluated in simulation by varying the initial position within the range given by Eq. 13. To also evaluate the agent's actions and the uncertainty outside the learning environment, this range is wider than the learning environment, and the episode did not end when the agent exited the learning environment.
\[\left(x_{evalu},y_{evalu}\right)\in\left\{(x,y)\,\middle|\,x\in\{-700,-630,\ldots,630,700\},\ y\in\{-700,-630,\ldots,630,700\}\right\} \tag{13}\]
\begin{table}
\begin{tabular}{l l} \hline \hline Learning steps & 20,000,000 [steps] \\ Batch size & 1024 \\ Learning rate & 0.001 \\ Target model update & 0.01 \\ optimizer & Adam \\ L1 regularization & 0.0001 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Hyperparameters for training
Figure 2: Structure of the neural networks considering RND
The evaluation trajectories are shown in Fig. 3. The unfilled and filled red markers indicate the agent's initial and last positions, respectively, and the shade of color indicates the time history. As shown in Fig. 3, the agent can reach the goal regardless of where the initial position is set within the learning environment. If the initial position is set outside the environment, the agent cannot reach the goal.
Fig. 4 shows the distribution of the uncertainty. The uncertainties evaluated for the input state and for the extracted features are denoted by 'input' and 'extracted features', respectively, and their average is denoted by 'both'. The figure shows the average uncertainty values at the points the agent passed through. During the training process, the agent acted randomly in the early steps and experienced a wide range of states in the environment, while at the end stage of training the agent moves from its initial point to the given goal along the shortest course. Thus, the number of experienced states decreases in the following order: inside the ring of initial positions, inside the learning environment, and outside the learning environment. According to Fig. 4, the uncertainty evaluated on the extracted features decreases towards the goal, which is consistent with the number of experiences. However, the uncertainty evaluated on the input state does not show such a suitable pattern. Thus, uncertainty can be evaluated using RND on the extracted features. To compare the uncertainty distribution of the trained model with that of the untrained model, the latter is shown in Fig. 5. Even before training, the uncertainty values were low in some areas.
Figure 3: Trajectories of the trained model (RND)
The results in Figs. 4 and 5 show that RND can in principle be applied to uncertainty quantification; however, there are two issues. First, RND does not ensure that the uncertainty value is large in an unknown state, as shown in Fig. 5. Second, the range of the uncertainty depends on the initial parameters; if the maximum and minimum values differ from model to model, it is difficult to set a criterion for deciding whether the model is well trained for a state. Both issues arise because the difference between the outputs of the target and predictor networks depends on the initial parameters. From the perspective of evaluation, it is important that the uncertainty value is small in well-known states and large in unknown states, and that the range of the uncertainty is constant; if this is not ensured, the uncertainty may coincidentally be small for unknown states. Therefore, RND is not well suited as an evaluation method.
### Reliability Quantification Method
In the previous section, issues related to the application of RND to uncertainty evaluation were
Figure 4: Distributions of uncertainty of trained model
Figure 5: Distributions of uncertainty of model before training
identified. Thus, an improved method suitable for evaluating reliability is proposed. To evaluate reliability, two networks, a reference network and an evaluator network, are used. Reliability is obtained by calculating the absolute error between the outputs of the reference and evaluator networks, and a softsign function is applied to this absolute error to scale it to the range from 0 to 1. The reliability value is calculated using Eq. 14, and the calculation flow of the reliability is shown in Fig. 6.
\[\mathrm{Reliability}(s)=\mathrm{softsign}\left(\left|\mathrm{Ref}(s|\theta_{r})-\mathrm{Evalu}(s|\theta_{e})\right|\right) \tag{14}\]
The scheme is similar to that of RND, but there are some differences. In RND, the target and predictor networks can have different structures, and their initial parameters are determined randomly; however, the reference and evaluator networks must have the same structure and the same initial parameters. Because the reference and evaluator networks are identical before training, their outputs are the same whenever the same input data are provided. This ensures that the reliability value is zero before training. During the training process, the parameters of the reference network are fixed, and those of the evaluator network are updated to maximize the reliability value. Thus, the loss function can be described using Eq. 15, where \(\mathrm{Ref}(s|\theta_{r})\), \(\mathrm{Evalu}(s|\theta_{e})\), \(\theta_{r}\), and \(\theta_{e}\) are the reference network, the evaluator network, the parameters of the reference network, and the parameters of the evaluator network, respectively. The shape of the loss function is shown in Fig. 7.
\[\text{Loss}_{\text{improve}}(\theta_{\text{e}})=\mathbb{E}[-\text{softsign}(| \text{Ref(s}|\theta_{\text{r}})-\text{Evalu(s}|\theta_{\text{e}})|)] \tag{15}\]
Figure 6: Reliability calculation flow
Figure 7: Shape of the loss function
The gradient of the loss function decreases as the reliability increases. Therefore, the larger the reliability value, the more difficult it is to change. This means that well-trained experience fades only slowly, which is similar to human memory.
Furthermore, because the trained model is strongly affected by recent training data, the reliability of previously trained situations should decrease step by step. To this end, the parameters of the evaluator network are also updated to minimize the reliability of irrelevant data, \(s_{\mathrm{irr}}\), which are generated randomly. The loss function that accounts for this forgetting of experience is described by Eq. 16, where \(s_{\mathrm{irr}}\) is a randomly generated irrelevant state.
\[\mathrm{Loss_{forget}}(\theta_{\mathrm{e}})=\mathbb{E}[\mathrm{softsign}(|\mathrm{Ref}(s_{\mathrm{irr}}|\theta_{\mathrm{r}})-\mathrm{Evalu}(s_{\mathrm{irr}}|\theta_{\mathrm{e}})|)] \tag{16}\]
Finally, the loss function on reliability quantification is described as Eq. 17.
\[\mathrm{Loss_{reliability}}(\theta_{\mathrm{e}})=\mathrm{Loss_{improve}}(\theta_ {\mathrm{e}})+\mathrm{Loss_{forget}}(\theta_{\mathrm{e}}) \tag{17}\]
The regularization term used to prevent overtraining is defined in Eq. 18, where \(p\) and \(\lambda\) are the order of the norm and the regularization strength, respectively. This term restricts the distance between the parameters of the evaluator and reference networks.
\[\lambda\frac{1}{\mathrm{p}}|\theta_{\mathrm{e}}-\theta_{\mathrm{r}}|^{\mathrm{ p}} \tag{18}\]
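A minimal PyTorch sketch of the whole scheme in Eqs. 14-18 is given below. The network size, the way irrelevant states are sampled and the regularization constants are illustrative assumptions; the essential points are that the evaluator starts as an exact copy of the reference network and that only the evaluator is updated.

```python
import copy
import torch
import torch.nn as nn

state_dim = 20
reference = nn.Sequential(nn.Linear(state_dim, 64), nn.Softsign(), nn.Linear(64, 32))
evaluator = copy.deepcopy(reference)          # same structure AND same initial parameters
for p in reference.parameters():
    p.requires_grad_(False)                   # the reference network stays fixed

optimizer = torch.optim.Adam(evaluator.parameters(), lr=1e-3)

def reliability(states):
    # Eq. 14: softsign of the absolute difference between the two networks (zero before training).
    diff = (reference(states) - evaluator(states)).abs().mean(dim=1)
    return nn.functional.softsign(diff)

def reliability_update(trained_states, lam=1e-4, p=1):
    # Random states in the scaled observation range [-1, 1] serve as irrelevant data.
    irrelevant = torch.rand_like(trained_states) * 2.0 - 1.0
    loss_improve = -reliability(trained_states).mean()             # Eq. 15: raise reliability on trained data
    loss_forget = reliability(irrelevant).mean()                   # Eq. 16: lower it on irrelevant data
    reg = lam / p * sum((pe - pr).abs().pow(p).sum()               # Eq. 18: keep evaluator close to reference
                        for pe, pr in zip(evaluator.parameters(), reference.parameters()))
    loss = loss_improve + loss_forget + reg                        # Eq. 17 plus regularization
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```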
### Validation of Reliability Quantification
To validate the proposed method, it was evaluated in the same manner as that described in Section 4.1. The structure of the neural network is shown in Fig. 8. The hyperparameters for training are the same as those listed in Table 1.
Figure 8: Structure of neural network on reliability quantification
The simulation trajectories are shown in Fig. 9. As shown in Fig. 9, the agent learns the optimal action within the learning environment. The reliability distribution is shown in Fig. 10. According to Fig. 10, the reliability becomes high within the range of the initial positions. According to the results, the proposed method, particularly the reliability of the extracted features, can evaluate whether the agent is trained.
According to the comparison, the proposed reliability distribution separates high-reliability and low-reliability areas more clearly than the uncertainty distribution of RND. The range of the uncertainty distribution depends on the model parameters and structure in RND, whereas the range of the reliability distribution is fixed at 0-1 in the proposed method. Furthermore, as shown in Fig. 10, the reliability of the untrained model was 0 for all states. These two aspects are important for use as an evaluation method.
### Relationships between Reliability of AI and Feasibility on Task
If the initial position lies within a high-reliability area, the agent is expected to achieve the goal. The trajectories of the agent, grouped by the reliability at the initial position, are shown in Fig. 13, and the ratio of runs in which the goal is achieved is shown in Fig. 14.
Figure 11: Comparison between the uncertainty distribution by RND and the reliability distribution by the proposed method
According to the results, the ratio of runs in which the agent achieves its goal increases with the reliability value. If the reliability exceeds 0.5, the agent reaches the goal regardless of its initial position. Because the reliability value is linked to the ratio of tasks achieved in this way, the proposed method can be used to quantify the reliability of DRL-based control.
## 6 Switching of Trained Model
It was demonstrated that the proposed method could evaluate the reliability of the trained model described in the previous section. The reliability value becomes 0 in the untrained state and 1 in the well-trained state. In RND, because the range of uncertainty depends on the model, a comparison of the uncertainty between models is not possible. The proposed method ensured that the reliability ranges were identical, enabling the reliability of the models to be compared. Therefore, the effectiveness of switching trained models was demonstrated as an application method. The quality of
Figure 14: Ratio of the agent achieving its goal
Figure 13: Trajectories corresponding to the reliability at the initial position
the sampled training data is a major issue in DRL, and it is difficult to solve this problem completely. One solution is to switch between several trained models, so that the models cover each other's untrained states. However, it is difficult to set a criterion for switching models. The reliability value can serve as such a criterion. Therefore, the effectiveness of model switching is discussed in this section.
Before demonstrating model switching, four models with biased experience were developed, which differ in the range of their initial positions. The initial positions of the i-th model were determined randomly within the range expressed in Eq. 19, so the i-th model was trained on states in the i-th quadrant. The reward and observations were the same as those described in the previous sections.
\[\left(x_{\mathrm{init}}^{i},y_{\mathrm{init}}^{i}\right)\in\left\{\left(d\cos(\theta),\,d\sin(\theta)\right)\,\middle|\,200\leq d\leq 300,\ \tfrac{\pi}{2}(i-1)\leq\theta<\tfrac{\pi}{2}i\right\}\quad(i=1,2,3,4) \tag{19}\]
After training, all models were evaluated. The trajectories and distribution of the reliability are shown in Figs. 15 and 16, respectively. The gray area in Fig. 15 indicates the range of the initial states. According to the results, the trained models achieved their goals and exhibited high reliability within their trained quadrants.
Figure 16: Reliability distributions of models trained using biased experience
Figure 15: Trajectories of models trained using biased experience
The trained models were then switched. In the simulation, the optimal actions and reliability values of all models were calculated at every timestep, and the action of the model with the maximum reliability was used as the action in the simulation. The resulting trajectories and reliability are shown in Fig. 17. According to the evaluation results, the agent achieved the goal from all initial positions and exhibited high reliability over a wider area of the learning environment. Thus, the performance of DRL-based control is improved by switching the trained models according to their reliability.
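The switching rule itself reduces to an argmax over per-model reliabilities, as in the short sketch below; `models` is assumed to be a list of (Q-network, reliability function) pairs obtained from the four biased trainings, and the names are placeholders.

```python
import torch

def switched_action(state, models):
    """Return the greedy action of whichever trained model is most reliable for this state."""
    best_action, best_reliability = None, -1.0
    for q_network, reliability_fn in models:
        with torch.no_grad():
            r = reliability_fn(state.unsqueeze(0)).item()
            if r > best_reliability:
                best_reliability = r
                best_action = q_network(state.unsqueeze(0)).argmax(dim=1).item()
    return best_action, best_reliability
```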
## 7 Conclusion
A major challenge of DRL is reliability quantification. It is necessary to evaluate the reliability in real time for the actual application of DRL-based control in safety-critical systems. In this study, the following was conducted to propose a novel method for reliability quantification of DRL-based control. First, the applicability of an existing method, RND, for uncertainty evaluation is investigated. Uncertainty is defined as the opposite evaluation index of reliability. It was confirmed that the range of the uncertainty value depends on the initial parameters. Hence, it cannot be ensured that the uncertainty value increases in an unknown state. Because the range of uncertainty is not fixed and the uncertainty value often becomes small for untrained states, RND is difficult to use for uncertainty quantification. Second, a reliability quantification method is proposed to solve those problems. Reliability is evaluated by the difference between the reference and evaluator networks, which have the same structure and initial parameters. The parameters of the reference network were fixed, whereas those of the evaluator network were updated to maximize the difference in output between the two networks for the trained data and minimize the difference in output between them for the irrelevant data. The proposed method was validated for DQN-based control of a simple task. Consequently, it was confirmed that the proposed method can evaluate reliability and identify a well-trained domain in
Figure 17: Trajectories and reliability of results of model switching
the learning environment. Finally, an example application is presented. To address the lack of experience in a trained model, switching the trained models according to their reliability was investigated. Using four trained models with biased experience, it was demonstrated that a given task could be completed in any situation by appropriately switching between them based on their reliability. The advantages of the proposed method are that the range of values is fixed and that the value of reliability becomes zero in untrained situations and one in well-trained situations. These advantages are beneficial for evaluating reliability and creating a criterion easily. The proposed method therefore can be used in various applications, such as the switching of trained models. The issue of bias in experienced states can be resolved by switching several trained models. This can improve the performance and robustness of DRL-based control. Another application example involves identifying the ODD of the control. Additionally, the proposed method can calculate the intrinsic reward instead of the RND.
In future studies, its applicability to an actual environment should be validated. In actual environments, the input state is affected by data noise. Their effects are not clear at this moment but are important for practical use. The proposed method assumes that the loss value converges, although it does not ensure that the loss value in all states converges to zero. Therefore, the convergence of the loss of the training process in reliability quantification should be considered.
|
2309.08591 | Are Multilingual LLMs Culturally-Diverse Reasoners? An Investigation
into Multicultural Proverbs and Sayings | Large language models (LLMs) are highly adept at question answering and
reasoning tasks, but when reasoning in a situational context, human
expectations vary depending on the relevant cultural common ground. As
languages are associated with diverse cultures, LLMs should also be
culturally-diverse reasoners. In this paper, we study the ability of a wide
range of state-of-the-art multilingual LLMs (mLLMs) to reason with proverbs and
sayings in a conversational context. Our experiments reveal that: (1) mLLMs
"know" limited proverbs and memorizing proverbs does not mean understanding
them within a conversational context; (2) mLLMs struggle to reason with
figurative proverbs and sayings, and when asked to select the wrong answer
(instead of asking it to select the correct answer); and (3) there is a
"culture gap" in mLLMs when reasoning about proverbs and sayings translated
from other languages. We construct and release our evaluation dataset MAPS
(MulticultrAl Proverbs and Sayings) for proverb understanding with
conversational context for six different languages. | Chen Cecilia Liu, Fajri Koto, Timothy Baldwin, Iryna Gurevych | 2023-09-15T17:45:28Z | http://arxiv.org/abs/2309.08591v2 | Are Multilingual LLMs Culturally-Diverse Reasoners? An Investigation into Multicultural Proverbs and Sayings
###### Abstract
Large language models (LLMs) are highly adept at question answering and reasoning tasks, but when reasoning in situational context, human expectations vary depending on the relevant cultural common ground. As human languages are associated with diverse cultures, LLMs should also be culturally-diverse reasoners. In this paper, we study the ability of a wide range of state-of-the-art multilingual LLMs (mLLMs) to reason with proverbs and sayings in a conversational context. Our experiments reveal that: (1) mLLMs 'know' limited proverbs, and memorizing proverbs does not mean understanding them within a conversational context; (2) mLLMs struggle to reason with figurative proverbs and sayings, and when asked to select the wrong answer (instead of asking it to select the correct answer); and (3) there is a "culture gap" in mLLMs when reasoning about proverbs and sayings translated from other languages. We construct and release our evaluation dataset **MAPS** (MulticultrAl Proverbs and Sayings) for proverb understanding with conversational context for six different languages.1
Footnote 1: Available at [https://github.com/UKPLab/maps](https://github.com/UKPLab/maps).
## 1 Introduction
Large language models (LLMs) have achieved impressive results for question answering and reasoning tasks (Radford et al., 2019; Brown et al., 2020; Ouyang et al., 2022, inter alia). However, when reasoning in situational context, human expectations may vary cross-culturally (Thomas, 1983) and depend on the knowledge of the relevant cultural common ground (where cultural common ground is the shared knowledge based on which people within a culture reason and communicate, including concepts and common sense; Hershcovich et al., 2022). The ability of pre-trained LLMs to understand such common ground in a cross-lingual setting is specifically neglected in NLP (Hershcovich et al., 2022). As languages and cultures are intertwined (Kramsch, 2014; Hovy and Yang, 2021), it is crucial for models to serve all communities and be able to reason and communicate in a relevant way.
Focusing on the multilingual LLMs, several questions arise: (1) Do mLLMs embed knowledge of cultural common ground, and does this knowledge affect their reasoning performance? (2) Can mLLMs reason in contexts that require an understanding of cultural common ground? and (3) Can mLLMs reason cross-culturally (i.e., about
Figure 1: Proverbs are fixed expressions used by different cultures. We collect proverbs from six languages (top) and their usage within conversational contexts. We evaluate mLLMs with a binary-choice inference task in the conversational context that contains proverbs (bottom).
another culture's cultural common ground, after translating into the same language) and are there gaps in the cultural knowledge (a "cultural gap")?2
Footnote 2: Reasoning with the cultural common ground could be independent of language. For example, communications among different cultural groups within a multi-cultural country or for international trade.
In order to answer the above questions, we need to assess LLMs using fixed, culturally-diverse expressions in multiple languages that are used flexibly in situational contexts requiring reasoning. Fixed expressions are particularly important for evaluating LLMs' memorization of cultural common ground knowledge. However, prior work such as MaRVL (Liu et al., 2021, which is multimodal) or MABL (Kabra et al., 2023) does not contain fixed expressions.
Proverbs and sayings (such as the ones illustrated in Figure 1) are fixed expressions that convey traditional wisdom, are sometimes viewed as a form of folk literature, and are grounded in living experience and social-cultural context (White, 1987; Mieder, 2004; Honeck, 2013). While different proverbs may emerge in different cultures, the underlying meaning of proverbs usually expresses universal human experiences. Yet, their literal expression and interpretation can vary from culture to culture (Honeck, 2013).
For example, the English proverb _The apple doesn't fall far from the tree_ means a child grows up to resemble his/her parents. While a plain version _like father like son_ exists in many cultures, this proverb has a similar variant _Rebung tidak jauh dari rumpunnya_ "Bamboo shoots are not far from the clump" in Indonesian, and "the dragon the son of a rat can make a hole
is the largest multilingual dataset that focuses on proverbs and sayings, with conversational contexts and an inference task.
MABL (Kabra et al., 2023) is a task close to ours, but it focuses on understanding novel metaphors in different cultures and on cross-lingual transfer. It is less suitable for studying memorization versus reasoning, nor does it study reasoning within a context. Ruis et al. (2022) and Hu et al. (2023) use conversational context to study pragmatic reasoning in English LLMs and to identify parallels between humans and models, which provides limited insight beyond English. While we also use conversational context, our work focuses on the cultural common ground and multilingual reasoning aspects of LLMs (and we also have a larger dataset).
Recent work by Haviv et al. (2023) aims to understand the memory retrieval mechanism in LLMs with English idioms. Here, our goal is not to study memory retrievals in LLMs, but rather focus on the multi-cultural and cross-cultural reasoning of mLLMs.
## 3 MAPS - MulticultrAl Proverbs and Sayings
To help investigate our proposed research questions, we first present **MAPS** -- a dataset of proverbs across six geographically and typologically diverse languages. **MAPS** consists of: (1) proverbs, (2) conversational usages as context, (3) interpretations of proverbs (one correct, one wrong), and (4) a label indicating whether the usage of the proverb is figurative or not (data examples are in Table 2; Figure 6 in Appendix A.5 illustrates the annotation process).
### Dataset Creation
Language Choices.We chose six languages for this dataset: English, German, Russian, Bengali, Chinese, and Indonesian. Several factors were considered when choosing the languages, including geographical diversity such as Eastern vs. Western (to increase the potential concept diversity), typological diversity, and resource availability (high-resource vs. lower-resource).
Proverbs and Sayings.We collect all proverbs and sayings (along with explanations) from Wikiquote3 and Wiktionary.4. Bengali has a significantly higher quantity of proverbs compared to other languages, thus, we perform a random sub-sampling of the proverbs for annotation to keep the final data roughly balanced.
Footnote 3: [https://en.wikiquote.org/](https://en.wikiquote.org/)
Footnote 4: [https://www.wiktionary.org/](https://www.wiktionary.org/)
Footnote 5: The conversational contexts are in each respective language, except for Russian and Bengali where the contexts are in English due to quality issues. For Russian and Bengali, the contexts are written in English first, then machine translated and fixed by native speakers for two rounds.
Conversational Context.While proverbs and sayings are self-contained, they are typically used in conversations and writing to offer advice or console others. In order to investigate the ability of mLLMs to reason with proverbs, next we created short conversations that use proverbs (i.e., conversational context for the inference task).
To aid the data creation process, we use a human-model hybrid approach (i.e. model-in-the-loop), inspired by the recent work (Chakrabarty et al., 2022; Liu et al., 2023). We first use GPT3.5 (gpt-3.5-turbo-0301; a sibling model of Ouyang et al., 2022) by prompting it with fixed templates to generate the seed conversational context (see Appendix B for the model templates).5 Next, we ask two or more native speakers (experts or crowd, with least one expert per language) to either accept the model-created conversation, or write a new conversation if the human thinks the usage of the proverb is flawed.
Footnote 6: The model has significant trouble in creating relevant context when the proverb is figurative. Anecdotally, human annotators found that machine-generated context is helpful as a ’prompt’ to human and helped to speed up the re-writes.
In the final dataset, the conversational contexts for English, Chinese, Russian, and Bengali were completely re-written,7 whereas for Indonesian and German, 22% and 20.5% of the original model-generated contexts were retained (the difference is probably due to variations in individual annotator preferences).
Footnote 7: “An apple a day keeps the doctor away” is a literal proverb that advocates apple consumption. “The apple doesn’t fall far from the tree” is a figurative proverb, where the intended meaning differs from the literal meaning of the phrase.
Interpretation of Proverbs in Context.We formulate this part as an inference task (following Liu et al., 2022). We ask annotators to create one correct answer and one wrong answer to the following question based on the conversational context:
_What does the person mean by [proverb]?_
Additionally, we also label the proverb if the interpretation is figurative (i.e. the interpreted meaning of the proverb is different than the expressed literal meaning).
Quality Control.Finally, we sampled 100 conversational contexts with their answers from each language. Then, we asked a separate set of native speakers to ensure the data quality for (1) correct usage of proverb (i.e. the context is correct), and (2) correct answers for interpreting the meaning. Sometimes, it is possible to have more than one interpretation of a proverb given the context. We asked the native speakers to score the answers as correct as long as the answers aligned with one possible interpretation.
Footnote 1: [https://www.face.com/face](https://www.face.com/face)
In addition, LLaMA-2 (Touvron et al., 2023), although an English model, has some multilingual capabilities, and hence we included two LLaMA-2 models (7B, 13B) in our studies.9
Footnote 9: While larger models exist, we chose these models due to computational constraints. We can already see differences in performance at these model sizes.
Memorization Evaluation.Following previous work in assessing data memorization (Magar and Schwartz, 2022; Haviv et al., 2023; Carlini et al., 2023, 2021), we mask out the last word of each proverb and prompt the mLLMs to complete the proverb with templates.
For the memorization task, let \(t_{i}\in T\) be a prompt template, and let \(q_{j}\) be a proverb with \(n\) words, where \(q_{j}\triangleq\{w_{1},w_{2}\cdots w_{n}\}\). We remove the last word \(w_{n}\); for non-MLM models, if the LM generates (greedily) a string that starts with the missing token, or if the entire proverb is a sub-string of the generated string, then we count the model as having memorized the proverb. For the MLM model, we mask out the last word with '<mask>' and predict it (i.e. \(w=\arg\max_{w_{n}\in V}P(w_{n}|t_{i};\hat{q}_{j})\), where \(\hat{q}_{j}\) is the proverb with the mask token, and \(V\) is the vocabulary).
As the zero-shot prompting results are highly sensitive to the input patterns, we create 5 different prompt patterns (Table 8, Appendix B), and take the union of memorized examples among 5 patterns as the memorization accuracy.
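A minimal sketch of this memorization check for a causal (non-MLM) model is shown below, using the Hugging Face transformers API. The checkpoint name, the single prompt pattern and the whitespace-based word split are simplifying assumptions (the latter would need adaptation for languages such as Chinese), and the sketch does not reproduce the paper's exact evaluation code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloomz-560m"   # placeholder checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def is_memorized(proverb, prompt_template="Complete the proverb: {}"):
    words = proverb.split()
    prefix, missing = " ".join(words[:-1]), words[-1]
    inputs = tokenizer(prompt_template.format(prefix), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=10, do_sample=False)  # greedy decoding
    continuation = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                                    skip_special_tokens=True)
    # Memorized if the generation starts with the missing word or contains the full proverb.
    return continuation.strip().startswith(missing) or proverb in (prefix + continuation)

print(is_memorized("The apple doesn't fall far from the tree"))
```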
Reasoning Evaluation.For the reasoning task, we compute the correct answer by comparing the logits of the two answer candidates ('A' or 'B') as in Lin et al. (2022). In particular, we use the prompt template \(t^{r}\) for this task (as in Table 3), compute \(P(t^{r};q_{i};\text{`A'})\) and \(P(t^{r};q_{i};\text{`B'})\), and pick the larger one as the correct answer. For the MLM model, we compare the prediction logits of the answer candidates.
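For the binary-choice inference task, the comparison of answer logits can be sketched as follows; taking the last sub-token of " A"/" B" and the exact prompt string are simplifications of the template in Table 3, not the paper's exact implementation.

```python
import torch

def choose_answer(model, tokenizer, proverb, context, answer1, answer2):
    prompt = ("Question: What does the person mean by the proverb?\n"
              f"Proverb: {proverb}\nContext: {context}\n"
              f"Choices: A: {answer1} B: {answer2}\nAnswer:")
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # logits of the token following "Answer:"
    id_a = tokenizer(" A", add_special_tokens=False)["input_ids"][-1]
    id_b = tokenizer(" B", add_special_tokens=False)["input_ids"][-1]
    return "A" if logits[id_a] > logits[id_b] else "B"
```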
Both memorization and reasoning experiments are based on the test set.
## 5 Results and Discussion
### Knowledge of Proverbs
_-- A little knowledge is a dangerous thing._
Since proverbs are _fixed_ expressions, successfully completing a proverb with greedy decoding means that the model has seen the proverb during pre-training. While it is possible that a proverb appears in the training data alone, without any contextual usage or explanation, we consider such an occurrence to be unlikely.10 Hence, we make the as
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Lang** & Proverb & Context \\ \hline \multirow{5}{*}{Zh} & \multirow{5}{*}{\begin{tabular}{l} Zh \\ (figurative) \\ \end{tabular} } & A: & \begin{tabular}{l} A: \\ (B wants to help A with the project instead of teaching A to do the project.) \\ \end{tabular} \\ & & \begin{tabular}{l} A: \\ (A: Can you help me with this project? B: Of course, but I think “it is better to teach a \\ man fishing than to give him fish”.) \\ \end{tabular} \\ \hline \multirow{5}{*}{Id} & \multirow{5}{*}{\begin{tabular}{l} Nasi sudah \\ menjadi \\ bubur \\ (figurative) \\ \end{tabular} } & Orang 1: Bagaimana reaksi bos-mu setelah kamu menjadi kasalan susya berbute, taupi saya tetap diheri sansi. Nasi sudah menjadi bubur. & A: Orang 2 tidak dapat melakukkan apapun untuk mengubah reaksi bos. \\ & & \begin{tabular}{l} (Person 2 can do nothing to change the \\ boss’s reaction.) \\ \end{tabular} \\ & & \begin{tabular}{l} B: Orang 2 mashi bisa mengubah reaksi atasan. \\ (Person 2 can still change the boss’s reaction.) \\ \end{tabular} \\ & &
\begin{tabular}{l} I’ve tried to explain why I did this, but I’m \\ still being penalized. The rice has become \\ porridge.) \\ \end{tabular} \\ \hline \hline \end{tabular}
\end{table}
Table 2: Examples from selected languages (examples for all languages in Table 7, Appendix A.5).
\begin{table}
\begin{tabular}{l} \hline \hline
**Template** \\ \hline
**Question:** What does the person mean by the proverb? \\
**Proverb:** \textless{}proverb\textgreater{} \\
**Context:** \textless{}context\textgreater{} \\
**Choices:** A: \textless{}answer 1\textgreater{} B: \textless{}answer 2\textgreater{} \\
Answer: \\ \hline \hline \end{tabular}
\end{table}
Table 3: Zero-shot testing template, where the coloured part is the template.
sumption that memorization of the fixed expression also correlates with LLMs having embedded knowledge of the usage or meaning.
Figure 3 shows the results of proverb memorization: unsurprisingly, as model size increases, memorization increases. Overall, LLaMA-2 completes the most proverbs in English and BLOOMZ completes the most proverbs in Chinese. While XLM-R, XGLM and mT0 cover all the languages in our dataset, they do not score particularly well on memorization in any single language. All models exhibit disparities in memorization across languages, and are particularly bad for Indonesian, Bengali, and Russian (i.e. lower-resource languages). These disparities are probably due to data exposure, as analyzing what is memorized reveals no significant pattern such as well-known versus less-known, long versus short, or figurative versus non-figurative proverbs.
### Reasoning of Proverbs with Conversational Context
_-- All that glitters is not gold._
While many models embed knowledge about proverbs, it is unclear if memorization translates to better reasoning with proverbs given context. Next, we assess model performance on the inference task.
Memorization doesn't indicate the ability to reason with proverbs.We prompt models with the pattern in Table 3, and plot the accuracy (described in §4) across languages in Figure 2(b). In general, the bigger the model is, the better it performs on the inference task (i.e. the ability emerges with scale).
Overall, the mT0 13B model scores the highest on this task across all languages, with only a small performance gap across languages (except Bengali), despite the fact that it memorized very few proverbs for languages other than English and Chinese. Interestingly, while XGLM and XLM-R scored well across many languages in the previous experiment, these models only have chance-level performance on the reasoning task. Similarly, while having memorized a lot of proverbs in Chinese, BLOOMZ's performance in reasoning with proverbs lags behind English and Indonesian.
Since we know which proverbs are memorized from the previous experiments, we further break down the results into memorized vs. not memorized proverbs for the 3 best-performing models for English and Chinese (in Table 10, Appendix D.1). The benefit of memorization shows through for English, but is inconsistent for Chinese (aligned with observations for other languages from Figure 2(b)).
A possible explanation of the task not heavily depending on memorization is that contextual information helps with inference, and the model may learn other relevant cultural information implicitly from training data. Consequently, this
Figure 3: Performance of mLLMs on the proposed MAPS dataset. The number of parameters is in millions.
hints that LLMs may use contextual information rather than memory retrieval when both are available. However, such a hypothesis requires rigorous study, which we leave as future work.
**Figurative proverbs are difficult to understand in general.** Many proverbs are figurative; hence, we further divide the results based on this property (described in §3). From Table 4, we observe that all models perform worse on the inference task when the interpretation is figurative for English, German and Russian, whereas we consistently observe the opposite for Chinese. Bigger models seem to understand Indonesian and Bengali figurative proverbs better. One conjecture is that while abstract reasoning (of the kind required for figurative proverb understanding) can rely on memorization, less memorization may lead to better abstract reasoning in LLMs.
**Bias towards the correct answer amplifies performance gaps across languages.** Ideally, if the model truly understands what a proverb means in a situational context, it should be able to pick out the correct answer as well as the wrong answer, especially for a task with only two choices. Although not the focus of our work, since this is a fundamental aspect of reasoning (Blanco and Moldovan, 2011), we perform the experiments. Several prior works show that negation in the natural language inference task weakens model performance (Hartmann et al., 2021; Truong et al., 2023; She et al., 2023); here we want to ask a 'negative' question rather than provide negative answers. Hence, we change the question in the prompt template to _What does the person not mean by the proverb?_, and keep everything else the same.
The results are in Figure 4. By simply asking the model to pick the wrong answer, all previously well-performing models flip their predictions, except mT0. The 'negative' question enlarges the performance gaps across languages as the model size increases. Additional results on asking the model to pick the wrong answer _without_ using the word _not_ are in Appendix D.2, where we observe consistent trends of model failures and inverse scaling in many cases. While we focus on the culture aspect of mLLMs, these results show that fundamental work is needed to improve the ability of current mLLMs to handle 'negative' questions.
### Culture Gaps in mLLMs - A Case Study

_-- When in Rome, do as the Romans do._
An ideal mLLM should perform equally well on texts from all languages and on translations in any direction. However, in our experiments, the performance on English data is still stronger than on other languages for most of the models we studied. Recently, several works have shown that good performance can be achieved by translating non-English text into English (Conneau and Lample (2019); Yang et al. (2019), inter alia). Here we demonstrate that when a task relies on cultural context, there are two distinct performance gaps that must be closed in order to achieve true multilingual ability: one is the language gap (caused by mistakes of the translation system and, in principle, fixable by a perfect translation system), the other is the culture gap. To demonstrate this, we use English and Chinese as the focus of a case study.
**Machine Translation (MT).** We translate the Chinese data to English using Google Translate (Zh-En). Closely examining the translated data shows that current translation systems do not handle cultural context (i.e. proverbs) well, producing incomplete or wrong translations. For example, a polysemous Chinese phrase was translated to "junior" (third-year university student), although in the specific proverbial context it means that someone is "three years older".
**Human-Adapted Translation (HT).** Next, we perform several adaptations to the machine-translated context: 1) we manually fix the literal translation of the proverb if there are any mistakes, and fix the grammatical mistakes in the contexts and answers; 2) we lightly adapt the translated data, inspired by Majewska et al. (2023), by replacing the names and locations in the dataset to align with the target culture (e.g. XiaoMing to Michael) in case models are confused about whether an entity is a person or a place. This is our best-effort adaptation to remove the language gap.
Next, we perform zero-shot evaluation with the best-performing multilingual model (mT0-XXL, 13B) and English model (Llama-2 13B) for Zh-En (Figure 5). Both models show a performance gap on the translated data in comparison to the target language. Interestingly, mT0 also shows a performance degradation compared to the inference results in the original language (Llama-2 is near chance level for Zh, so an improvement is not surprising). In all cases, HT improves over MT; this gain can be considered the language gap. More interestingly, we define the gap between HT and the \(\max\) of the source and target language as the _culture gap_ in mLLMs, i.e. _culture gap_ = \(|\text{Acc}^{HT}-\max(\text{Acc}^{Src},\text{Acc}^{Tgt})|\). The culture gap for Zh-En is 5.73 for mT0 and 19.40 for Llama-2.11 In an ideal situation, these gaps should be 0, indicating that the model is culturally aware and able to understand a language even if its speakers come from diverse cultural backgrounds. These results suggest that additional research is needed to improve cultural awareness and the inclusion of cultural priors in MT models and mLLMs (Yao et al. (2023); Shaikh et al. (2023)).
Footnote 11: We also perform the same experiment in reverse direction En-Zh with mT0 (Appendix D.3), similar results were observed. Other evaluation results on machine translated data for other languages are in Appendix D.3.
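Since the culture gap is just a scalar derived from three accuracies, a tiny worked example (with made-up accuracy values, not the measured ones) reads:

```python
def culture_gap(acc_ht: float, acc_src: float, acc_tgt: float) -> float:
    """|Acc^HT - max(Acc^Src, Acc^Tgt)|: what remains after the language gap is
    removed by human-adapted translation (HT)."""
    return abs(acc_ht - max(acc_src, acc_tgt))

# Illustrative values only: source-language accuracy 0.82, target 0.84, HT 0.78.
print(culture_gap(acc_ht=0.78, acc_src=0.82, acc_tgt=0.84))  # roughly 0.06
```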
## 6 Conclusion
In this work, we present an investigation of mLLMs' ability to reason with cultural common ground. We use proverbs and sayings from different languages and cultures as an investigative tool. Specifically, we study a variety of mLLMs on their ability to memorize proverbs, reason with proverbs and sayings in a situational setting, and understand cross-cultural communications with proverbs.
To aid the investigation, we created a multicultural proverbs and sayings dataset, MAPS. Our analysis shows that many models possess knowledge of proverbs and sayings; however, knowing proverbs does not mean the model is able to reason with proverbs in contextual settings. Indeed, while some models did show culturally-diverse reasoning ability, it is only to a very limited degree. We also found that the ability to reason in zero-shot emerges with model scale, but the ability to understand a 'negative' question scales inversely with model size. The disparities in culturally-diverse reasoning ability between languages grow with the model size, which raises concerns in terms of multilingual availability and points to the need for more robust mLLMs. Finally, we defined and observed several culture gaps in cross-lingual communications. We hope to explore different aspects of cultural common ground in the future.
| **Model** | En | Zh | Id | De | Ru | Bn |
| --- | --- | --- | --- | --- | --- | --- |
| BLOOMZ 3B | 58.76/57.60 | 53.12/61.97 | 53.33/60.52 | 51.66/47.54 | 52.43/45.13 | 55.88/49.26 |
| BLOOMZ 7.1B | 79.66/68.20 | 66.66/68.30 | 72.00/75.18 | 54.30/53.55 | 52.43/49.55 | 67.64/53.30 |
| mT0 XL (3.7B) | 75.14/62.21 | 62.50/64.08 | 74.67/69.54 | 74.17/61.74 | 73.78/61.94 | 69.12/52.94 |
| mT0 XXL (13B) | 87.01/82.95 | 81.77/83.09 | 84.00/84.96 | 88.74/83.61 | 87.80/76.99 | 63.23/69.85 |
| Llama-2 13B | 81.36/76.50 | 53.12/54.23 | 54.66/58.27 | 72.19/65.03 | 67.07/59.73 | 47.05/49.63 |

Table 4: Zero-shot accuracy of non-figurative and figurative proverbs (Non-Fig./Fig.). The gray colour results indicate that the language is not officially supported by the model.
Figure 4: Performance of mLLMs on the proposed MAPS - Inference task when asking the ‘negative’ question.
Figure 5: Performance gap between machine translated, human translated data and results in the original source language (Zh), and target language (En).
## 7 Limitations
Our work uses proverbs and sayings as a proxy for cultural common ground, and we explore mLLMs' ability to understand cultural common ground in a limited setting. One potential limitation of our experiments is that the evaluation data is relatively small compared to many automatically generated benchmarks and may introduce lexical biases. However, these are not major concerns as 1) we want to focus on cultural common ground, which automatically limits us to a subset of lexical items; 2) to the best of our knowledge, this is the largest proverbs dataset for reasoning, and there is enough signal to distinguish between the tested models and uncover insights on current mLLMs' abilities and limitations in understanding proverbs and sayings. We hope to explore aspects of culture beyond proverbs and sayings, and with a more diverse set of languages, in the future.
In this work, we evaluate models of size up to 13B parameters (the biggest available size of mT0) due to computational constraints. However, full evaluation of larger models or task-specific models is necessary in the future, especially for asking 'negative' questions and assessing the culture gaps. Moreover, we focus on studying open-source LLMs in this paper for scientific reproducibility; closed-source LLM evaluations are beyond our scope. As our dataset is open source, it can be used to evaluate closed-source LLMs in the future.
## 8 Acknowledgement
This work was funded by the German Federal Ministry of Education and Research (BMBF) under the promotional reference 13N15897 (MISRIK), and the LOEWE initiative (Hesse, Germany) within the emergencITY center. We would like to thank Sukannya Purkayastha, Aniket Pramanick, Ilia Kuznetsov, Kexin Wang, Luke Bates for their constructive feedback and discussions of the work. We thank Sukannya Purkayastha, Jonathan Tonglet, and Yongxin Huang for their suggestions on a draft of the paper.
|
2305.19985 | On the Existence of Reactive Strategies Resilient to Delay | We compare games under delayed control and delay games, two types of infinite
games modelling asynchronicity in reactive synthesis. In games under delayed
control both players suffer from partial informedness due to symmetrically
delayed communication, while in delay games, the protagonist has to grant
lookahead to the alter player. Our first main result, the interreducibility of
the existence of sure winning strategies for the protagonist, allows to
transfer known complexity results and bounds on the delay from delay games to
games under delayed control, for which no such results had been known. We
furthermore analyse existence of randomized strategies that win almost surely,
where this correspondence between the two types of games breaks down. In this
setting, some games surely won by the alter player in delay games can now be
won almost surely by the protagonist in the corresponding game under delayed
control, showing that it indeed makes a difference whether the protagonist has
to grant lookahead or both players suffer from partial informedness. These
results get even more pronounced when we finally address the quantitative goal
of winning with a probability in $[0,1]$. We show that for any rational
threshold $\theta \in [0,1]$ there is a game that can be won by the protagonist
with exactly probability $\theta$ under delayed control, while being surely won
by alter in the delay game setting. All these findings refine our original
result that games under delayed control are not determined. | Martin Fränzle, Paul Kröger, Sarah Winter, Martin Zimmermann | 2023-05-31T16:08:10Z | http://arxiv.org/abs/2305.19985v4 | # Strategies Resilient to Delay:
###### Abstract
We compare games under delayed control and delay games, two types of infinite games modelling asynchronicity in reactive synthesis. Our main result, the interreducibility of the existence of sure winning strategies for the protagonist, allows to transfer known complexity results and bounds on the delay from delay games to games under delayed control, for which no such results had been known. We furthermore analyze existence of strategies that win almost surely, where this correspondence between the two types of games breaks down.
## 1 Introduction
Two-player zero-sum games of infinite duration are a standard model for the synthesis of reactive controllers, i.e., correct-by-construction controllers that satisfy their specification even in the presence of a malicious environment. In such games, the interaction between the controller and the environment is captured by the rules of the game and the specification on the controller induces the winning condition of the game. Then, computing a correct controller boils down to computing winning strategies.
Often, it is convenient to express the rules in terms of a graph capturing the state-space such that moves correspond to transitions between these states. The interaction between the controller and the environment then corresponds to a path through the graph and the winning condition is a language of such paths, containing those that correspond to interactions that satisfy the specification on the controller.
In other settings, it is more convenient to consider a slightly more abstract setting without game graphs, so-called Gale-Stewart games [4]. In such games, the players alternatingly pick a sequence of letters, thereby constructing an infinite word. The winning condition is a language over infinite words, containing the winning words for one player. To capture the synthesis problem, the winning condition has to encode both the specification on the controller as well as the rules of interaction. It is straightforward to transform a graph-based game into a Gale-Stewart game and a Gale-Stewart game into a graph-based game such that the existence of winning strategies for both players is preserved.
In the most basic setting of synthesis, both the controller and the environment are fully informed about the current state of the game (complete information). However, this scenario is not always realistic. Thus, much effort has been poured into studying games under incomplete information where the players are only partially informed about the current state of the game. Here, we are concerned with a special type of partial information designed to capture delays in perception and action. Such delays either render the most recent moves of the opponent invisible to a player or induce a time lag between the selection and the implementation of an own move, respectively.
As a motivating example, consider the domain of cooperative driving: Here, the exchange of information between cars is limited (and therefore delayed) by communication protocols that have to manage the available bandwidth to transfer information between cars. Other delaying factors include, e.g., complex
signal processing chains based on computer vision to detect the locations of obstacles. Thus, decisions have to be made based on incomplete information, which only arrives after some delay.
**Games under Delayed Control.** Chen et al. [2] introduced (graph) games under delayed control to capture this type of incomplete information. Intuitively, assume the players so far have constructed a finite path \(v_{0}\cdots v_{k}\) through the graph. Then, the controller has to base her decision on a visible proper prefix \(v_{0}\cdots v_{k-\delta}\), where \(\delta\) is the amount of delay. Hence, the suffix \(v_{k-\delta+1}\cdots v_{k}\) is not yet available to base the decision on, although the decision to be made is to be applied at the last state \(v_{k}\) in the sequence.
They showed that solving games under delayed control with safety conditions and with respect to a given delay is decidable: They presented two algorithms, an exponential one based on a reduction to delay-free safety games using a queue of length \(\delta\), and a more practical incremental algorithm synthesizing a series of controllers handling increasing delays and reducing game-graph size in between. They showed that even a naive implementation of this algorithm outperforms the reduction-based one, even when the latter is used with state-of-the-art solvers for delay-free games. However, the exact complexity of the incremental algorithm and that of solving games under delayed control remained open.
Note that asking whether there is some delay \(\delta\) that allows controller to win reduces to solving standard, i.e., delay-free games, as they correspond to the case \(\delta=0\). The reason is monotonicity in the delay: if the controller can win for delay \(\delta\) then also for any \(\delta^{\prime}<\delta\). More interesting is the question whether controller wins with respect to every possible delay. Chen et al. conjectured that there is some exponential \(\delta\) such that if the controller wins under delay \(\delta\), then also under every \(\delta^{\prime}\).
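Since the case \(\delta=0\) is an ordinary delay-free safety game, it can be solved by the textbook attractor computation; the following is a minimal sketch of that delay-free baseline (the explicit-set encoding of the game is chosen purely for illustration and is not how the cited tools represent games).

```python
def environment_attractor(S0, S1, Sigma0, Sigma1, trans, unsafe):
    """States from which environment can force a visit to an unsafe state
    (delay-free setting). trans maps (state, action) to the successor state."""
    attr = set(unsafe)
    changed = True
    while changed:
        changed = False
        for s in S1:                      # environment chooses the action
            if s not in attr and any(trans[(s, b)] in attr for b in Sigma1):
                attr.add(s); changed = True
        for s in S0:                      # controller chooses the action
            if s not in attr and all(trans[(s, a)] in attr for a in Sigma0):
                attr.add(s); changed = True
    return attr

def controller_wins_delay_free(s0, S0, S1, Sigma0, Sigma1, trans, unsafe):
    """Controller wins the safety game under delay 0 iff the initial state lies
    outside environment's attractor of the unsafe states."""
    return s0 not in environment_attractor(S0, S1, Sigma0, Sigma1, trans, unsafe)
```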
**Delay Games.** There is also a variant of Gale-Stewart games modelling delayed interaction between the players [6]. Here, the player representing the environment (often called Player \(I\)) has to provide a lookahead on her moves, i.e., the player representing the controller (accordingly called Player \(O\)) has access to the first \(n+k\) letters picked by Player \(I\) when picking her \(n\)-th letter. So, \(k\) is the amount of lookahead that Player \(I\) has to grant Player \(O\). Note that the lookahead benefits Player \(O\) (representing the controller) while the delay in a game under delayed control disadvantages the controller.
Only three years after the seminal Büchi-Landweber theorem showing that delay-free games with \(\omega\)-regular winning conditions are decidable [1], Hosch and Landweber showed that it is decidable whether there is a \(k\) such that Player \(O\) wins a given Gale-Stewart game with lookahead \(k\) [6]. Forty years later, Holtmann, Kaiser, and Thomas [5] revisited these games (and dubbed them delay games). They proved that if Player \(O\) wins a delay game then she wins it already with at most doubly-exponential lookahead (in the size of a given deterministic parity automaton recognizing the winning condition). Thus, unbounded lookahead does not offer any advantage over doubly-exponential lookahead in games with \(\omega\)-regular winning conditions. Furthermore, they presented an algorithm with doubly-exponential running time solving delay games with \(\omega\)-regular winning conditions, i.e., determining whether there is a \(k\) such that Player \(O\) wins a given delay game (with its winning condition again given by a deterministic parity automaton) with lookahead \(k\).
Both upper bounds were improved and matching lower bounds were proven by Klein and Zimmermann [8]: Solving delay games is ExpTime-complete and exponential lookahead is both necessary to win some games and sufficient to win all games that can be won. Both lower bounds already hold for winning conditions specified by deterministic safety automata while the upper bounds hold for deterministic parity automata. The special case of solving games with conditions given as reachability automata is PSpace-complete, but exponential lookahead is still necessary and sufficient. Thus, there are tight complexity results for delay games, unlike for games under delayed control.
**Our Contributions.** In this work, we exhibit a tight relation between controller in a game under delayed control and Player \(I\) in a delay game (recall that these are the players that are disadvantaged by delay and lookahead, respectively). Note that winning conditions in games under delayed control are
always given from the perspective of controller (i.e., she has to avoid unsafe states in a safety game) while winning conditions in delay games are always given from the perspective of Player \(O\). Hence, as we relate controller and Player \(I\), we always have to complement winning conditions.
More precisely, we show that one can transform a safety game under delayed control in polynomial time into a delay game with a reachability condition for Player \(O\) (i.e., with a safety condition for Player \(I\)) such that controller wins the game under delayed control with delay \(\delta\) if and only if Player \(I\) wins the resulting delay game with lookahead of size \(\frac{\delta}{2}\). Dually, we show that one can transform a delay game with safety condition for Player \(I\) in polynomial time into a safety game under delayed control such that Player \(I\) wins the delay game with lookahead of size \(\delta\) if and only if controller wins the resulting game under delayed control with delay \(2\delta\). Thus, we can transfer both upper and lower bound results on complexity and on (necessary and sufficient) lookahead from delay games to delayed control. In particular, determining whether controller wins a given safety game under delayed control for every possible delay is PSpace-complete. Our reductions also prove the conjecture by Chen et al. on the delays that allow controller to win such games. Furthermore, we generalize our translation from games with safety conditions to games with parity conditions and games with LTL winning conditions, again allowing us to transfer known results for delay games to games under delayed control.
Note that we have only claimed that the existence of winning strategies for the controller in the game under delayed control and Player \(I\) in the delay game coincides. This is no accident! In fact, the analogous result for relating environment and Player \(O\) fails. This follows immediately from the fact that delay games are determined while games under delayed control are undetermined, even with safety conditions. The reason is that the latter games are truly incomplete information games (which are typically undetermined) while delay games are perfect information games.
We conclude with a detailed comparison between environment and Player \(O\), both in the setting with deterministic strategies and in the setting with randomized strategies. The latter setting increases power for both the controller and the environment, making them win (almost surely) games under delayed control that remain undetermined in the deterministic setting, but it also breaks the correspondence between controller and Player \(I\) observed in the deterministic setting: there are games that controller wins almost surely while Player \(I\) surely loses them.
All proofs which are omitted due to space restrictions can be found in the appendix.
## 2 Preliminaries
We denote the non-negative integers by \(\mathbb{N}\). An alphabet \(\Sigma\) is a non-empty finite set of letters. A word over \(\Sigma\) is a finite or infinite sequence of letters of \(\Sigma\): The set of finite words (non-empty finite words, infinite words) over \(\Sigma\) is denoted by \(\Sigma^{*}\) (\(\Sigma^{+}\), \(\Sigma^{\omega}\)). The empty word is denoted by \(\varepsilon\), the length of a finite word \(w\) is denoted by \(|w|\). Given two infinite words \(\alpha\in(\Sigma_{0})^{\omega}\) and \(\beta\in(\Sigma_{1})^{\omega}\), we define \(\binom{\alpha}{\beta}=\binom{\alpha(0)}{\beta(0)}\binom{\alpha(1)}{\beta(1)} \binom{\alpha(2)}{\beta(2)}\cdots\in(\Sigma_{0}\times\Sigma_{1})^{\omega}\).
### Games under Delayed Control
Games under delayed control are played between two players, controller and environment. For pronominal convenience [10], we refer to controller as she and environment as he.
A game \(\mathcal{G}=(S,s_{0},S_{0},S_{1},\Sigma_{0},\Sigma_{1},\rightarrow,\text{ Win})\) consists of a finite set \(S\) of states partitioned into the states \(S_{0}\subseteq S\) of the controller and the states \(S_{1}\subseteq S\) of the environment, an initial state \(s_{0}\in S_{0}\), the sets of actions \(\Sigma_{0}\) for the controller and \(\Sigma_{1}\) for the environment, a transition function
\(\to\colon S\times(\Sigma_{0}\cup\Sigma_{1})\to S\) such that \(s\in S_{i}\) and \(\sigma\in\Sigma_{i}\) implies \(\to(s,\sigma)\in S_{1-i}\), and a winning condition \(\mathrm{Win}\subseteq S^{\omega}\). We write \(s\xrightarrow{\sigma}s^{\prime}\) as shorthand for \(s^{\prime}=\to(s,\sigma)\).
A play in \(\mathcal{G}\) is an infinite sequence \(\pi=\pi_{0}\sigma_{0}\pi_{1}\sigma_{1}\pi_{2}\sigma_{2}\cdots\) satisfying \(\pi_{0}=s_{0}\) and \(\pi_{n}\xrightarrow{\sigma_{n}}\pi_{n+1}\) for all \(n\geq 0\). We say that controller wins \(\pi\) if \(\pi_{0}\pi_{1}\pi_{2}\cdots\in\mathrm{Win}\); otherwise, we say that environment wins \(\pi\). The play prefix of \(\pi\) of length \(n\) is defined as \(\pi(n)=\pi_{0}\sigma_{0}\cdots\sigma_{n-1}\pi_{n}\), i.e., \(n\) is the number of actions (equivalently, the number of transitions). We denote by \(\mathrm{Pref}(\mathcal{G})\) the set of play prefixes of all plays in \(\mathcal{G}\), which is partitioned into the sets \(\mathrm{Pref}_{c}(\mathcal{G})\) and \(\mathrm{Pref}_{e}(\mathcal{G})\) of play prefixes ending in \(S_{0}\) and \(S_{1}\), respectively. Due to our alternation assumption, play prefixes of even (odd) length are in \(\mathrm{Pref}_{c}(\mathcal{G})\) (\(\mathrm{Pref}_{e}(\mathcal{G})\)).
Fix some even \(\delta\geq 0\). A strategy for the controller in \(\mathcal{G}\) under delay \(\delta\) is a pair \((\alpha,\tau_{C})\) where \(\alpha\in(\Sigma_{0})^{\frac{\delta}{2}}\) and \(\tau_{C}\colon\mathrm{Pref}_{c}(\mathcal{G})\to\Sigma_{0}\) maps play prefixes ending in \(S_{0}\) to actions of the controller. A play \(\pi_{0}\sigma_{0}\pi_{1}\sigma_{1}\pi_{2}\sigma_{2}\cdots\) is consistent with \((\alpha,\tau_{C})\) if \(\sigma_{0}\sigma_{2}\cdots\sigma_{\delta-4}\sigma_{\delta-2}=\alpha\) and \(\sigma_{2n}=\tau_{C}(\pi(2n-\delta))\) for all \(2n>\delta-2\), i.e., controller has access to environment's actions with a delay of \(\delta\). In particular, her first \(\frac{\delta}{2}+1\) actions are independent of environment's actions and, in general, her \(n\)-th action \(\sigma_{2n}\) only depends on the actions \(\sigma_{1},\ldots,\sigma_{(2n-\delta)-1}\) picked by environment, but not on the actions \(\sigma_{(2n-\delta)+1},\ldots,\sigma_{2n-1}\). The strategy \((\alpha,\tau_{C})\) is winning under delay \(\delta\) if every play that is consistent with it is winning for controller. Controller wins \(\mathcal{G}\) under delay \(\delta\) if she has a winning strategy under delay \(\delta\) for \(\mathcal{G}\).
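To make the index bookkeeping of delayed strategies concrete, the following sketch builds the play consistent with \((\alpha,\tau_{C})\) and an environment strategy; passing strategies as Python callables over explicit state and action lists is an illustrative encoding, not something prescribed by the definition.

```python
def simulate_delayed_play(s0, trans, alpha, tau_C, tau_E, delta, rounds):
    """Play consistent with (alpha, tau_C) under delay delta and with tau_E.
    Action sigma_i is applied at state pi_i; controller moves at even i."""
    states, actions = [s0], []
    for i in range(rounds):
        if i % 2 == 0:                       # controller's turn
            if i <= delta - 2:               # sigma_0 sigma_2 ... sigma_{delta-2} = alpha
                sigma = alpha[i // 2]
            else:                            # only the prefix pi(i - delta) is visible
                m = i - delta
                sigma = tau_C(states[: m + 1], actions[:m])
        else:                                # environment sees the full prefix pi(i)
            sigma = tau_E(states[: i + 1], actions[:i])
        actions.append(sigma)
        states.append(trans[(states[-1], sigma)])
    return states, actions
```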
**Remark 1**.: _The notion of winning strategy for controller under delay \(0\) is the classical one for delay-free games (cf. [4]). Furthermore, if controller wins \(\mathcal{G}\) under delay \(\delta\), then also under every delay \(\delta^{\prime}<\delta\)[2]._
A strategy for environment is a mapping \(\tau_{E}\colon\mathrm{Pref}_{e}(\mathcal{G})\to\Sigma_{1}\). A play \(\pi_{0}\sigma_{0}\pi_{1}\sigma_{1}\pi_{2}\sigma_{2}\cdots\) is consistent with \(\tau_{E}\) if \(\sigma_{2n+1}=\tau_{E}(\pi_{0}\sigma_{0}\cdots\sigma_{2n}\pi_{2n+1})\) for all \(n\geq 0\), i.e., environment has access to the full play prefix when picking his next action. The strategy \(\tau_{E}\) is winning, if every play that is consistent with it is winning for the environment (i.e., the sequence of states is not in Win). Further, we say that environment wins \(\mathcal{G}\), if he has a winning strategy for \(\mathcal{G}\). Note that the two definitions of strategies are in general not dual, e.g., the one for environment is not defined with respect to a delay \(\delta\). In fact, the notion of winning strategy for environment is the classical one for delay-free games (cf. [4]).
We say that a game under delayed control \(\mathcal{G}\) is determined under delay \(\delta\), if either controller wins \(\mathcal{G}\) under delay \(\delta\) or environment wins \(\mathcal{G}\). Let us stress that determinacy is defined with respect to some fixed \(\delta\) and that \(\mathcal{G}\) may be determined for some \(\delta\), but undetermined for some other \(\delta^{\prime}\) (due to the non-dual definition of strategies). Remark 7 shows an undetermined safety (!) game under delayed control.
**Example 1**.: _Consider the game \(\mathcal{G}=(S,s_{I},S_{0},S_{1},\Sigma_{0},\Sigma_{1},\to,\mathrm{Win})\) depicted in Fig. 1 where \(\mathrm{Win}\) contains all plays that do not visit the black vertex. Note that this is a safety condition. In particular, if controller does not pick action \(b\) at \(c_{2}\) and does not pick action \(a\) at \(c_{3}\), then the vertex \(e_{3}\) is never reached. This is straightforward without delay, but we claim that controller can also win \(\mathcal{G}\) under delay \(2\)._
_To gain some intuition, consider a play prefix \(\pi_{0}\sigma_{0}\pi_{1}\cdots\pi_{n-1}\sigma_{n-1}\pi_{n}\) with \(n\geq 4\) and \(\pi_{n}\in S_{0}\). Then, controller has to pick an action \(\sigma_{n}\) to continue the prefix. However, due to the delayed control, she has to do so based on the prefix \(\pi_{0}\sigma_{0}\pi_{1}\cdots\pi_{n-3}\sigma_{n-3}\pi_{n-2}\). If \(\pi_{n-2}\) is \(c_{2}\), then \(\pi_{n}\) is either \(c_{3}\) or \(c_{1}\). Hence, picking \(\sigma_{n}=b\) is the only safe choice. Dually, if \(\pi_{n-2}\) is \(c_{3}\), then \(\pi_{n}\) is either \(c_{2}\) or \(c_{1}\). Hence, picking \(\sigma_{n}=a\) is the only safe choice. Finally, assume \(\pi_{n-2}\) is \(c_{1}\). Then, \(\pi_{n}\) is either \(c_{2}\) or \(c_{3}\). In the former case, picking \(\sigma_{n}=a\) is the only safe choice, in the latter case, picking \(\sigma_{n}=b\) is the only safe choice. So, controller needs to distinguish these two cases, although she has no access to \(\pi_{n}\)._
_But she can do so by inspecting \(\pi_{n-3}\) (which she has access to): As a predecessor of \(\pi_{n-2}=c_{1}\), it can either be \(e_{4}\), \(e_{5}\), or \(e_{3}\). In the latter case, the play is already losing. Thus, we disregard this case, as we construct a winning strategy. So, assume we have \(\pi_{n-3}=e_{4}\) (the case \(\pi_{n-3}=e_{5}\) is dual). Then, we must have \(\pi_{n-4}=c_{2}\) (the only predecessor of \(e_{4}\)) and, by our analysis of the safe moves above, controller
must have picked \(\sigma_{n-2}=b\) (based, due to delay, on the prefix ending in \(\pi_{n-4}=c_{2}\)). From this we can conclude \(\pi_{n-1}=e_{2}\) and thus \(\pi_{n}=c_{3}\) (the only successor of \(e_{2}\)). Thus, she can safely pick \(\sigma_{n}=b\)._
_This intuition, and the necessary initialization, is implemented by the strategy \((\alpha,\tau_{C})\) with \(\alpha=a\) and_
\[\tau_{C}(\pi_{0}\sigma_{0}\pi_{1}\cdots\pi_{n-3}\sigma_{n-3}\pi_{n-2})=\begin{cases} a&n=2\text{ and }\pi_{0}=c_{1}\text{,}\\ b&n>2\text{, }\pi_{n-2}=c_{1}\text{, and }\pi_{n-3}=e_{4}\text{,}\\ a&n>2\text{, }\pi_{n-2}=c_{1}\text{, and }\pi_{n-3}=e_{5}\text{,}\\ b&\pi_{n-2}=c_{2}\text{,}\\ a&\pi_{n-2}=c_{3}\text{.}\end{cases}\]
_An induction over the play length shows that \((\alpha,\tau_{C})\) is winning for controller under delay \(2\)._
Our definition of games under delayed control differs in three aspects from the original definition of Chen et al. [2].
* We allow arbitrary winning conditions while Chen et al. focused on safety conditions.
* The original definition allows nondeterministic strategies (a strategy that returns a nonempty set of actions, each one of which can be taken), while we restrict ourselves here to deterministic strategies (a strategy that returns a single action to be taken). The motivation for their use of nondeterministic strategies is the fact that they can be refined if additional constraints are imposed, which Chen et al.'s algorithm computing a winning strategy relies on. Here, on the other hand, we are just interested in the existence of winning strategies. In this context, it is sufficient to consider deterministic strategies, as controller has a nondeterministic winning strategy if and only if she has a deterministic winning strategy. Also, strategies in delay games are deterministic, so the transformation between games under delayed control and delay games can be formulated more naturally for deterministic strategies.
* The original definition also allowed odd delays \(\delta\) while we only allow even delays. As we will see in Section 3, the transformation of games under delayed control to delay games is naturally formulated for even delays. This choice also simplifies definitions, as accounting for odd delays imposes an additional notational burden.
Figure 1: The game for Example 1. Controller wins all plays that never visit the black vertex. Note that we have \(\Sigma_{0}=\{a,b\}\) and \(\Sigma_{1}=\{u,u^{\prime}\}\).
### Delay Games
Delay games are played between two players, Player \(I\) (she) and Player \(O\) (he). A delay game \(\Gamma_{k}(L)\) (with constant lookahead) consists of a lookahead \(k\in\mathbb{N}\) and a winning condition \(L\subseteq(\Sigma_{I}\times\Sigma_{O})^{\omega}\) for some alphabets \(\Sigma_{I}\) and \(\Sigma_{O}\). Such a game is played in rounds \(n=0,1,2,\ldots\) as follows: in round \(0\), first Player \(I\) picks a word \(x_{0}\in\Sigma_{I}^{k+1}\), then Player \(O\) picks a letter \(y_{0}\in\Sigma_{O}\). In round \(n>0\), Player \(I\) picks a letter \(x_{n}\in\Sigma_{I}\), then Player \(O\) picks a letter \(y_{n}\in\Sigma_{O}\). Player \(O\) wins a play \((x_{0},y_{0})(x_{1},y_{1})(x_{2},y_{2})\cdots\) if the outcome \(\binom{x_{0}x_{1}x_{2}\cdots}{y_{0}y_{1}y_{2}\cdots}\) is in \(L\); otherwise, Player \(I\) wins.
A strategy for Player \(I\) in \(\Gamma_{k}(L)\) is a mapping \(\tau_{I}\colon\Sigma_{O}^{*}\to\Sigma_{I}^{*}\) satisfying \(|\tau_{I}(\varepsilon)|=k+1\) and \(|\tau_{I}(w)|=1\) for all \(w\in\Sigma_{O}^{+}\). A strategy for Player \(O\) is a mapping \(\tau_{O}\colon\Sigma_{I}^{+}\to\Sigma_{O}\). A play \((x_{0},y_{0})(x_{1},y_{1})(x_{2},y_{2})\cdots\) is consistent with \(\tau_{I}\) if \(x_{n}=\tau_{I}(y_{0}\cdots y_{n-1})\) for all \(n\geq 0\), and it is consistent with \(\tau_{O}\) if \(y_{n}=\tau_{O}(x_{0}\cdots x_{n})\) for all \(n\geq 0\). So, strategies are dual in delay games, i.e., Player \(I\) has to grant some lookahead on her moves that Player \(O\) has access to. A strategy for Player \(P\in\{I,O\}\) is winning, if every play that is consistent with the strategy is won by Player \(P\). We say that Player \(P\in\{I,O\}\) wins a game \(\Gamma_{k}(L)\) if Player \(P\) has a winning strategy in \(\Gamma_{k}(L)\).
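The analogous bookkeeping for delay games, where Player \(I\) grants the lookahead, can be sketched in the same illustrative style (the encoding of strategies as Python callables is again just a choice made for this sketch).

```python
def simulate_delay_game(tau_I, tau_O, k, rounds):
    """Play prefix of Gamma_k(L) consistent with tau_I and tau_O.
    tau_I maps Player O's letters so far to Player I's next block (length k+1
    in round 0, length 1 afterwards); tau_O maps Player I's letters so far to a letter."""
    xs, ys = [], []
    for n in range(rounds):
        xs.extend(tau_I(tuple(ys)))          # x_n = tau_I(y_0 ... y_{n-1})
        ys.append(tau_O(tuple(xs)))          # Player O already sees k + 1 + n letters
    return xs, ys
```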
**Remark 2**.: _If Player \(O\) wins \(\Gamma_{k}(L)\), then he also wins \(\Gamma_{k^{\prime}}(L)\) for every \(k^{\prime}>k\). If Player \(I\) wins \(\Gamma_{k}(L)\), then she also wins \(\Gamma_{k^{\prime}}(L)\) for every \(k^{\prime}<k\)._
Unlike games under delayed control, delay games with Borel winning conditions are determined [8], i.e., each delay game \(\Gamma_{k}(L)\) with Borel \(L\) and fixed \(k\) is won by one of the players.
**Example 2**.: _Consider \(L=\left\{\binom{a_{0}}{b_{0}}\binom{a_{1}}{b_{1}}\binom{a_{2}}{b_{2}}\cdots| \ b_{0}\notin\{a_{0},a_{1},a_{2}\}\right\}\) over the alphabets \(\Sigma_{I}=\Sigma_{O}=\{1,2,3,4\}\)._
_Player \(I\) wins \(\Gamma_{k}(L)\) for \(k=1\) with the following strategy \(\tau_{I}\): \(\tau_{I}(\varepsilon)=12\) and \(\tau_{I}(b_{0})=b_{0}\), and \(\tau_{I}(w)\) arbitrary for all \(w\in\Sigma_{O}^{+}\) with \(|w|>1\): In round \(0\), after Player \(I\) has picked \(a_{0}a_{1}=12\), Player \(O\) has to pick some \(b_{0}\). In order not to lose immediately, he has to pick \(b_{0}\notin\{1,2\}\). Then, in round \(1\), Player \(I\) picks \(a_{2}=b_{0}\) and thereby ensures \(b_{0}\in\{a_{0},a_{1},a_{2}\}\). Hence, the play is not won by Player \(O\) (its outcome is not in \(L\)), therefore it is winning for Player \(I\)._
_However, Player \(O\) wins \(\Gamma_{k}(L)\) for \(k=2\) with the following strategy \(\tau_{O}\): \(\tau_{O}(a_{0}a_{1}a_{2})\) is a letter in the nonempty set \(\Sigma_{O}\setminus\{a_{0},a_{1},a_{2}\}\) and \(\tau_{O}(w)\) arbitrary for all \(w\in\Sigma_{I}^{*}\) with \(|w|\neq 3\). In round \(0\), after Player \(I\) has picked \(a_{0}a_{1}a_{2}\), Player \(O\) picks \(b_{0}\notin\{a_{0},a_{1},a_{2}\}\) and thus ensures that the outcome is in \(L\)._
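Both strategies of Example 2 are finite enough to be checked exhaustively; a small sketch of such a check (with an illustrative Python encoding of the strategies) is:

```python
from itertools import product

SIGMA = {1, 2, 3, 4}

# Player I's strategy for lookahead k = 1: open with a0 a1 = 1 2, then echo b0.
def tau_I(ys):
    return (1, 2) if not ys else (ys[0],)

for b0 in SIGMA:                       # every possible first letter of Player O
    a0, a1 = tau_I(())
    (a2,) = tau_I((b0,))
    assert b0 in {a0, a1, a2}          # outcome violates L, so Player I wins

# Player O's strategy for lookahead k = 2: answer outside {a0, a1, a2}.
for a in product(SIGMA, repeat=3):
    b0 = min(SIGMA - set(a))           # nonempty since |SIGMA| = 4 > 3
    assert b0 not in a                 # outcome is in L, so Player O wins
```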
**Remark 3**.: _We restrict ourselves here to the setting of constant lookahead, i.e., in a delay game \(\Gamma_{k}(L)\) in round \(n\) when Player \(O\) picks her \(n\)-th letter, Player \(I\) has already picked \(k+n+1\) letters (note that we start in round \(0\) with the zeroth letter). Delay games have also been studied with respect to growing lookahead, i.e., the lookahead increases during a play [5]. However, it is known that constant lookahead is sufficient for all \(\omega\)-regular winning conditions: if Player \(O\) wins for any lookahead (no matter how fast it is growing), then she also wins with respect to constant lookahead, which can even be bounded exponentially in the size of a deterministic parity automaton recognizing the winning condition [8]. Stated differently, growing lookahead does not allow to win any more games than constant lookahead. Finally, the setting of constant lookahead in delay games considered here is the natural counterpart to games under delayed control, where the delay is fixed during a play._
### \(\omega\)-Automata
A deterministic reachability automaton \(\mathcal{A}=(Q,\Sigma,q_{I},\delta_{\mathcal{A}},F)\) consists of a finite set \(Q\) of states containing the initial state \(q_{I}\in Q\) and the set of accepting states \(F\subseteq Q\), an alphabet \(\Sigma\), and a transition function \(\delta_{\mathcal{A}}\colon Q\times\Sigma\to Q\). The size of \(\mathcal{A}\) is defined as \(|\mathcal{A}|=|Q|\). Let \(w=w_{0}w_{1}w_{2}\cdots\in\Sigma^{\omega}\). The run of \(\mathcal{A}\) on \(w\) is the sequence \(q_{0}q_{1}q_{2}\cdots\) such that \(q_{0}=q_{I}\) and \(q_{n+1}=\delta_{\mathcal{A}}(q_{n},w_{n})\) for all \(n\geq 0\). A run \(q_{0}q_{1}q_{2}\cdots\) is
(reachability) accepting if \(q_{n}\in F\) for some \(n\geq 0\). The language (reachability) recognized by \(\mathcal{A}\), denoted by \(L(\mathcal{A})\), is the set of infinite words over \(\Sigma\) such that the run of \(\mathcal{A}\) on \(w\) is (reachability) accepting.
A deterministic safety automaton has the form \(\mathcal{A}=(Q,\Sigma,q_{I},\delta_{\mathcal{A}},U)\) where \(Q,\Sigma,q_{I},\delta_{\mathcal{A}}\) are as in a deterministic reachability automaton and where \(U\subseteq Q\) is a set of unsafe states. The notions of size and runs are defined as for reachability automata, too. A run \(q_{0}q_{1}q_{2}\cdots\) is (safety) accepting if \(q_{n}\notin U\) for all \(n\geq 0\). The language (safety) recognized by \(\mathcal{A}\), again denoted by \(L(\mathcal{A})\), is the set of infinite words over \(\Sigma\) such that the run of \(\mathcal{A}\) on \(w\) is (safety) accepting.
A deterministic parity automaton has the form \(\mathcal{A}=(Q,\Sigma,q_{I},\delta_{\mathcal{A}},\Omega)\) where \(Q,\Sigma,q_{I},\delta_{\mathcal{A}}\) are as in a deterministic reachability automaton and where \(\Omega\colon Q\to\mathbb{N}\) is a coloring of the states. The notions of size and runs are defined as for reachability automata, too. A run \(q_{0}q_{1}q_{2}\cdots\) is (parity) accepting if the maximal color appearing infinitely often in the sequence \(\Omega(q_{0})\Omega(q_{1})\Omega(q_{2})\cdots\) is even. The language (parity) recognized by \(\mathcal{A}\), again denoted by \(L(\mathcal{A})\), is the set of infinite words over \(\Sigma\) such that the run of \(\mathcal{A}\) on \(w\) is (parity) accepting.
Reachability and safety automata are dual while parity automata are self-dual.
**Remark 4**.: _Let \(\mathcal{A}=(Q,\Sigma,q_{I},\delta_{\mathcal{A}},F)\) be a deterministic reachability automaton and let \(\overline{\mathcal{A}}\) be the deterministic safety automaton \((Q,\Sigma,q_{I},\delta_{\mathcal{A}},F)\), i.e., the unsafe states of \(\overline{\mathcal{A}}\) are exactly the accepting states of \(\mathcal{A}\). Then, \(L(\overline{\mathcal{A}})=\overline{L(\mathcal{A})}\)._

_Let \(\mathcal{A}=(Q,\Sigma,q_{I},\delta_{\mathcal{A}},\Omega)\) be a deterministic parity automaton and let \(\overline{\mathcal{A}}\) be the deterministic parity automaton \((Q,\Sigma,q_{I},\delta_{\mathcal{A}},q\mapsto\Omega(q)+1)\). Then, \(L(\overline{\mathcal{A}})=\overline{L(\mathcal{A})}\)._
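In code, the dualizations of Remark 4 amount to reinterpreting or shifting the acceptance component; a minimal sketch, with automata represented by plain Python collections chosen only for illustration:

```python
def complement_reachability(Q, delta, F):
    """Reachability automaton with accepting set F -> safety automaton whose
    unsafe set is F: a run avoids F forever iff it never reaches F."""
    return Q, delta, set(F)                     # unsafe states U = F

def complement_parity(Q, delta, coloring):
    """Parity automaton -> parity automaton with every color shifted by one,
    flipping the parity of the maximal color seen infinitely often."""
    return Q, delta, {q: c + 1 for q, c in coloring.items()}
```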
## 3 From Games under Delayed Control to Delay Games and Back
In this section, we exhibit a tight correspondence between controller in games under delayed control and Player \(I\) in delay games. Recall that in a game under delayed control, it is the controller whose control is delayed, i.e., she is at a disadvantage as she only gets delayed access to the action picked by environment. In a delay game, it is Player \(I\) who is at a disadvantage as she has to grant a lookahead on her moves to Player \(O\). Thus, when simulating a game under delayed control by a delay game, it is natural to let Player \(I\) take the role of controller and let Player \(O\) take the role of environment. Also recall that the winning condition Win in a game under delayed control is formulated from controller's point-of-view: the winning condition requires her to enforce a play in Win. On the other hand, the winning condition \(L\) of a delay game is formulated from the point-of-view of Player \(O\): Player \(O\) has to enforce a play whose outcome is in \(L\). Thus, as Player \(I\) takes the role of controller, we need to complement the winning condition to reflect this change in perspective: The set of winning outcomes for Player \(I\) in the simulating delay game is the complement of Win.
In the remainder of this section, we show how to simulate a game under delayed control by a delay game and then the converse, i.e., how to simulate a delay game by a game under delayed control.
**Transformation 1**.: _First, we transform a game under delayed control into a delay game. In the resulting delay game, the players simulate a play in the game under delayed control by picking actions, which uniquely induce such a play. To formalize this, we need to introduce some notation. Fix a game \(\mathcal{G}=(S,s_{0},S_{0},S_{1},\Sigma_{0},\Sigma_{1},\rightarrow,\mathrm{ Win})\). Note that a sequence \(\sigma_{0}\sigma_{1}\sigma_{2}\cdots\in(\Sigma_{0}\Sigma_{1})^{\omega}\) induces a unique play \(\mathrm{play}(\sigma_{0}\sigma_{1}\sigma_{2}\cdots)=\pi_{0}\sigma_{0}\pi_{1} \sigma_{1}\pi_{2}\sigma_{2}\cdots\) in \(\mathcal{G}\) which is defined as follows: \(\pi_{0}=s_{0}\) and \(\pi_{n+1}=\rightarrow(\pi_{n},\sigma_{n})\) for all \(n\geq 0\). Likewise, a finite sequence \(\sigma_{0}\sigma_{1}\cdots\sigma_{n}\in(\Sigma_{0}\Sigma_{1})^{*}(\Sigma_{0}+\epsilon)\) induces a unique play prefix \(\mathrm{play}(\sigma_{0}\sigma_{1}\cdots\sigma_{n})\) which is defined analogously._
_Now, we define the language \(L(\mathcal{G})\subseteq(\Sigma_{0}\times\Sigma_{1})^{\omega}\) such that \(\binom{\sigma_{0}}{\sigma_{1}}\binom{\sigma_{2}}{\sigma_{3}}\binom{\sigma_{4}}{\sigma_{5}}\cdots\in L(\mathcal{G})\) if and only if \(\mathrm{play}(\sigma_{0}\sigma_{1}\sigma_{2}\cdots)\) is winning for controller._
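The function \(\mathrm{play}(\cdot)\) is a simple fold over the transition function; spelled out, with the transition function encoded as a dictionary purely for illustration:

```python
def induced_play(s0, trans, actions):
    """play(sigma_0 sigma_1 ...): pi_0 = s0 and pi_{n+1} = trans[(pi_n, sigma_n)].
    Membership of an action sequence in L(G) is then read off the induced play."""
    states = [s0]
    for sigma in actions:
        states.append(trans[(states[-1], sigma)])
    return states
```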
Now, we prove the correspondence between \(\mathcal{G}\) and \(\Gamma_{k}(\overline{L(\mathcal{G})})\). The winning condition of the delay game is the complement of \(L(\mathcal{G})\), which implements the switch of perspective described above.
**Lemma 1**.: _Let \(\mathcal{G}\) be a game and \(\delta\geq 0\) even. Controller wins \(\mathcal{G}\) under delay \(\delta\) if and only if Player I wins \(\Gamma_{k}(\overline{L(\mathcal{G})})\) for \(k=\frac{\delta}{2}\)._
Now, we consider the converse and transform a delay game into a game under delayed control.
**Transformation 2**.: _Fix a delay game \(\Gamma_{k}(L)\). We construct a game under delayed control to simulate \(\Gamma_{k}(L)\) as follows: The actions of controller are the letters in \(\Sigma_{I}\), and the actions of environment are the letters in \(\Sigma_{O}\). Thus, by picking actions, controller and environment construct the outcome of a play of \(\Gamma_{k}(L)\). As winning conditions of games under delayed control only refer to states visited by a play, but not the actions picked by the players, we reflect the action picked by a player in the state reached by picking that action. Here, we have to require without loss of generality that \(\Sigma_{I}\) and \(\Sigma_{O}\) are disjoint._
_Formally, we define \(\mathcal{G}(L)=(S,s_{0},S_{0},S_{1},\Sigma_{0},\Sigma_{1},\rightarrow,\text{Win})\) with \(S=S_{0}\cup S_{1}\), \(S_{0}=\{s_{0}\}\cup\Sigma_{O}\), \(S_{1}=\Sigma_{I}\), \(\Sigma_{0}=\Sigma_{I}\), \(\Sigma_{1}=\Sigma_{O}\), \(\rightarrow(s,a)=a\) for all \(s\in S_{0}\) and \(a\in\Sigma_{I}\), and \(\rightarrow(s,b)=b\) for all \(s\in S_{1}\) and \(b\in\Sigma_{O}\). Finally, we define \(\mathrm{Win}=\{s_{0}s_{1}s_{2}\cdots\mid\binom{s_{1}}{s_{2}}\binom{s_{3}}{s_{4}}\binom{s_{5}}{s_{6}}\cdots\in L\}\), i.e., the fresh initial state is ignored and the remaining states are paired up and checked against \(L\)._
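Spelled out as code, Transformation 2 makes it plain that the graph of \(\mathcal{G}(L)\) only remembers the last letter played; the state names below are illustrative choices.

```python
def game_of(Sigma_I, Sigma_O):
    """States and transitions of G(L); the winning condition is inherited from L
    by pairing up the state sequence after the initial state."""
    assert not (set(Sigma_I) & set(Sigma_O))     # w.l.o.g. disjoint alphabets
    s0 = "s0"
    S0 = {s0} | set(Sigma_O)                     # controller states
    S1 = set(Sigma_I)                            # environment states
    trans = {(s, a): a for s in S0 for a in Sigma_I}
    trans.update({(s, b): b for s in S1 for b in Sigma_O})
    return s0, S0, S1, trans
```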
The following remark states that the two transformations are inverses of each other, which simplifies the proof of correctness of the second transformation. It follows by a careful inspection of the definitions.
**Remark 5**.: _Let \(L\subseteq(\Sigma_{I}\times\Sigma_{O})^{\omega}\). Then, \(L=L(\mathcal{G}(L))\)._
Now, we show that the second transformation is correct, again using complementation to implement the perspective switch.
**Lemma 2**.: _Let \(L\subseteq(\Sigma_{I}\times\Sigma_{O})^{\omega}\) and \(k\geq 0\). Player I wins \(\Gamma_{k}(L)\) if and only if controller wins \(\mathcal{G}(\overline{L})\) under delay \(2k\)._
## 4 Results
Lemma 1 and Lemma 2 allow us to transfer results from delay games to games under delayed control. Due to the definitions of strategies in games under delayed control not being dual, we consider both players independently, controller in Section 4.1 and environment in Section 4.2.
Recall that the delays that allow controller to win satisfy a monotonicity property (see Remark 1): if controller wins a game under delay \(\delta\), then also under every delay \(\delta^{\prime}<\delta\). Thus, the set of delays for which controller wins is downward-closed, i.e., it is either a finite set \(\{0,2,4,\ldots,\delta_{\max}\}\) or it is equal to the set \(2\mathbb{N}\) of even numbers. In the following, we study the complexity of determining whether controller wins under all possible delays, whether she wins under a given delay, and determine bounds on \(\delta_{\max}\).
Note that winning for environment is independent of delay and boils down to the classical notion of winning delay-free games [4], which is a well-studied problem. Hence, we disregard this problem. However, we do discuss the relation between environment in a game under delayed control and Player \(O\) in the simulating delay game constructed in the previous section.
### Controller's View
Before we present our results, we need to specify how to measure the size of games and delay games, especially how winning conditions are represented (recall that, so far, they are just \(\omega\)-languages). In the following, we only consider \(\omega\)-regular winning conditions specified by \(\omega\)-automata (see Section 2.3) or formulas of Linear Temporal Logic (LTL) [11], which subsume the typical specification languages
for winning conditions. Hence, the size of a game \((S,s_{0},S_{0},S_{1},\Sigma_{0},\Sigma_{1},\rightarrow,\text{Win})\) under delayed control is given by the sum \(|S|+|\Sigma_{0}|+|\Sigma_{1}|+|\text{Win}|\), where \(|\text{Win}|\) is the size of an automaton or LTL formula (measured in the number of distinct subformulas) representing Win. Analogously, for a delay game \(\Gamma_{k}(L)\), we define the size of \(L\) as the size of an automaton or LTL formula (measured in the number of distinct subformulas) representing \(L\). The bound \(k\) is encoded in binary, if necessary.
**Safety.** A game \(\mathcal{G}=(S,s_{0},S_{0},S_{1},\Sigma_{0},\Sigma_{1},\rightarrow,\text{Win})\) with winning condition Win is a safety game if Win is accepted by a deterministic safety automaton.
**Remark 6**.: _When Chen et al. introduced safety games under delayed control, they did not use automata to specify their winning plays, but instead equipped the game with a set of unsafe states and declared all those plays winning for controller that never visit an unsafe state. It is straightforward to see that our definition is equivalent, as their definition is captured by a deterministic safety automaton with two states. Conversely, taking the product of a game and a deterministic safety automaton yields an equivalent game with a state-based safety condition._
Our results rely on the following two bounds on the transformations presented in Section 3, which are obtained by applying Remark 4:
1. If the winning condition Win for a game \(\mathcal{G}\) under delayed control is given by a deterministic safety automaton with \(n\) states, then the winning condition \(\overline{L(\mathcal{G})}\) is recognized by a deterministic reachability automaton with \(n\) states.
2. Dually, if the winning condition \(L\subseteq(\Sigma_{I}\times\Sigma_{O})^{\omega}\) of a delay game is given by a deterministic reachability automaton with \(n\) states, then the winning condition of the game \(\mathcal{G}(\overline{L})\) under delayed control is recognized by a deterministic safety automaton with \(n\cdot|\Sigma_{I}|+1\) states.
We begin by settling the complexity of determining whether controller wins a given safety game under every delay.
**Theorem 1**.: _The following problem is \(\mathrm{PSpace}\)-complete: Given a safety game \(\mathcal{G}\), does controller win \(\mathcal{G}\) under every delay \(\delta\)?_
Next, we give a lower bound on the complexity of determining whether controller wins a given safety game under a given delay.
**Theorem 2**.: _The following problem is \(\mathrm{PSpace}\)-hard: Given a safety game \(\mathcal{G}\) and \(\delta\) (encoded in binary), does controller win \(\mathcal{G}\) under delay \(\delta\)._
Note that we do not claim any upper bound on the problem considered in Theorem 2. There is a trivial \(2\textsc{ExpTime}\) upper bound obtained by hardcoding the delay into the graph of the safety game, thereby obtaining a classical delay-free safety game. It is open whether the complexity can be improved. Let us remark though that, via the correspondence to delay games presented in Section 3, improvements here would also yield improvements on the analogous problem for delay games, which is open too [15].
Next, we turn our attention to bounds on the delay for which controller wins. Recall that due to monotonicity, the set of delays for which controller wins is downward-closed, i.e., it is either a finite set \(\{0,2,4\ldots,\delta_{\max}\}\) or it is equal to \(2\mathbb{N}\). In the following, we present tight bounds on the value \(\delta_{\max}\).
As a consequence, we settle a conjecture by Chen et al.: They conjectured that there is some delay \(\delta_{t}\) (exponential in \(|\mathcal{G}|\)), such that if controller wins \(\mathcal{G}\) under delay \(\delta_{t}\), then she wins under every delay. Note that this conjecture implies that \(\delta_{\max}\) is at most exponential.
The following theorem proves Chen et al.'s conjecture, while Theorem 4 shows that \(\delta_{t}\) must necessarily be exponential.
**Theorem 3**.: _Let \(\mathcal{G}\) be a safety game. There is a \(\delta_{t}\in\mathcal{O}(2^{|\mathcal{G}|})\) such that if controller wins \(\mathcal{G}\) under delay \(\delta_{t}\), then she wins \(\mathcal{G}\) under every \(\delta\)._
Finally, we show that the exponential upper bound on \(\delta_{\max}\) is tight.
**Theorem 4**.: _For every \(n>1\), there is a safety game \(\mathcal{G}_{n}\) of size \(\mathcal{O}(n)\) such that controller wins \(\mathcal{G}\) under delay \(2^{n}\), but not under delay \(2^{n}+2\)._
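Monotonicity (Remark 1) together with the threshold \(\delta_{t}\) of Theorem 3 also suggests a simple way to determine \(\delta_{\max}\); the sketch below assumes a black-box solver wins_under_delay for fixed even delays, which is a hypothetical placeholder rather than an algorithm from this paper.

```python
def maximal_delay(game, wins_under_delay, delta_t):
    """delta_max for a safety game, given an oracle for fixed even delays.
    Returns None if controller wins under every delay."""
    if wins_under_delay(game, delta_t):      # Theorem 3: winning at delta_t means winning always
        return None
    if not wins_under_delay(game, 0):        # controller never wins (hypothetical sentinel value)
        return -1
    lo, hi = 0, delta_t                      # invariant: wins at lo, loses at hi
    while hi - lo > 2:
        mid = (lo + hi) // 2
        mid -= mid % 2                       # only even delays are meaningful
        if wins_under_delay(game, mid):
            lo = mid
        else:
            hi = mid
    return lo                                # largest even delay controller wins under
```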
**Parity.** Next, we consider the case of \(\omega\)-regular winning conditions, given by deterministic parity automata. Applying Remark 4 yields the following two bounds on the transformations from Section 3:
1. If the winning condition Win for a game \(\mathcal{G}\) under delayed control is given by a deterministic parity automaton with \(n\) states, then the winning condition \(\overline{L(\mathcal{G})}\) is recognized by a deterministic parity automaton with \(n\) states.
2. Dually, if the winning condition \(L\subseteq(\Sigma_{I}\times\Sigma_{O})^{\omega}\) of a delay game is given by a deterministic parity automaton with \(n\) states, then the winning condition of the game \(\mathcal{G}(\overline{L})\) under delayed control is recognized by a deterministic parity automaton with \(n\cdot|\Sigma_{I}|+1\) states.
Exponential lookahead is both sufficient to win all \(\omega\)-regular delay games that can be won and required to win some of these games [9]. Furthermore, determining whether there is some lookahead that allows Player \(O\) to win a given \(\omega\)-regular delay game is ExpTime-complete [9]. As in the case of safety games, we can transfer these results to games under delayed control with \(\omega\)-regular winning conditions.
**Theorem 5**.:
1. _The following problem is_ ExpTime_-complete: Given a game_ \(\mathcal{G}\) _with_ \(\omega\)_-regular winning condition specified by a deterministic parity automaton, does controller win_ \(\mathcal{G}\) _under every delay_ \(\delta\)_?_
2. _Let_ \(\mathcal{G}\) _be a game with_ \(\omega\)_-regular winning condition specified by a deterministic parity automaton with_ \(n\) _states. There is a_ \(\delta_{t}\in\mathcal{O}(2^{n^{2}})\) _such that if controller wins_ \(\mathcal{G}\) _under delay_ \(\delta_{t}\)_, then she wins_ \(\mathcal{G}\) _under every_ \(\delta\)_._
3. _For every_ \(n>1\)_, there is a game_ \(\mathcal{G}_{n}\) _of size_ \(\mathcal{O}(n^{2})\) _with_ \(\omega\)_-regular winning condition specified by a two-state deterministic parity automaton_ \(\mathcal{A}_{n}\) _such that controller wins_ \(\mathcal{G}\) _under delay_ \(2^{n}\)_, but not under delay_ \(2^{n}+2\)_._
**Linear Temporal Logic.** Finally, one can also transfer the triply-exponential upper and lower bounds on the necessary lookahead in delay games with LTL winning conditions as well as the 3ExpTime-completeness of determining whether Player \(O\) wins such a delay game with respect to some lookahead [10] to games under delayed control with LTL winning conditions. Here, we exploit the following facts:
1. If the winning condition Win for a game \(\mathcal{G}\) under delayed control is given by an LTL formula \(\varphi\), then the winning condition \(\overline{L(\mathcal{G})}\) is given by an LTL formula of size \(\mathcal{O}(|\varphi|)\).
2. Dually, if the winning condition \(L\subseteq(\Sigma_{I}\times\Sigma_{O})^{\omega}\) of a delay game is given by an LTL formula \(\varphi\), then the winning condition of the game \(\mathcal{G}(\overline{L})\) under delayed control is given by an LTL formula of size \(\mathcal{O}(|\varphi|)\).
**Theorem 6**.:
1. _The following problem is_ 3ExpTime_-complete: Given a game_ \(\mathcal{G}\) _with winning condition specified by an LTL formula_ \(\varphi\)_, does controller win_ \(\mathcal{G}\) _under every delay_ \(\delta\)_?_
2. _Let_ \(\mathcal{G}\) _be a game with_ \(\omega\)_-regular winning condition specified by an LTL formula_ \(\varphi\)_. There is a_ \(\delta_{t}\in\mathcal{O}(2^{2^{2^{|\varphi|+|\beta|}}})\) _such that if controller wins_ \(\mathcal{G}\) _under delay_ \(\delta_{t}\)_, then she wins_ \(\mathcal{G}\) _under every_ \(\delta\)_._
3. _For every_ \(n>1\)_, there is a game_ \(\mathcal{G}_{n}\) _of size_ \(\mathcal{O}(n^{2})\) _with winning condition specified by an LTL formula_ \(\varphi_{n}\) _of size_ \(\mathcal{O}(n^{2})\) _such that controller wins_ \(\mathcal{G}\) _under delay_ \(2^{2^{2^{n}}}\)_, but not under delay_ \(2^{2^{2^{n}}}+2\)_._
To conclude, let us just remark that the results presented here also allow us to transfer results obtained for delay games with quantitative winning conditions [10, 13, 14] to games under delayed control with quantitative winning conditions. In fact, our result works for any winning condition, as long as the two transformations described in Section 3 are effective.
### Environment's View
In Section 3, we proved a tight correspondence between controller in a game under delayed control and Player \(I\) in a delay game. Thus, it is natural to ask whether environment and Player \(O\) also share such a tight correspondence. A first indication that this is not the case can be obtained by considering the determinacy of these games: While delay games with Borel winning conditions are determined [8], even safety games under delayed control are not necessarily determined [3].
Upon closer inspection, this is not surprising, as the strategies in games under delayed control are not dual between the players: controller is at a disadvantage as she only gets delayed access to the actions picked by environment while environment does not benefit from this disadvantage. He does not get access to the actions picked by controller in advance. In a delay game however, the strategy definitions are completely dual: Player \(I\) has to grant lookahead on her moves which Player \(O\) gets access to. Thus, environment is in a weaker position than Player \(O\).1
Footnote 1: The difference can be formalized in terms of the information the players have access to: safety games under delay are incomplete-information games while delay games are complete-information games. Although interesting, we do not pursue this angle any further.
In this section, we study the correspondence between environment and Player \(O\) in detail by formally proving that environment is weaker than Player \(O\).
**Lemma 3**.: _Let \(\mathcal{G}\) be a safety game. If environment wins \(\mathcal{G}\) then Player \(O\) wins \(\Gamma_{k}(\overline{L(\mathcal{G})})\) for every \(k\)._
Now, we show that the converse direction fails.
**Lemma 4**.: _There is a safety game \(\mathcal{G}\) such that Player \(O\) wins \(\Gamma_{k}(\overline{L(\mathcal{G})})\) for some \(k\), but environment does not win \(\mathcal{G}\)._
Proof.: Let \(\mathcal{G}\) be the safety game depicted in Fig. 2. With each move, the players place a coin (by either picking heads or tails) and environment wins a play by correctly predicting the second action of controller with his first action. Clearly, environment has no winning strategy in \(\mathcal{G}\) because he has no access to future moves of controller. Stated differently, if environment picks \(h\) (\(t\)) in his first move, then the play in which the second action of controller is \(t\) (\(h\)) is winning for controller.2
Footnote 2: Note that under any delay \(\delta>0\), controller cannot do this strategically, as she has to fix her first two actions in advance. But, as environment has no access to these fixed actions, he cannot react to them strategically.
Now, we consider the delay game \(\Gamma_{k}(\overline{L(\mathcal{G})})\) for \(k=1\). Recall that the winning condition \(\overline{L(\mathcal{G})}\) contains the winning plays for Player \(O\), i.e., we have \(\left(\begin{subarray}{c}\sigma_{0}\sigma_{2}\sigma_{4}\cdots\\ \sigma_{1}\sigma_{3}\sigma_{5}\cdots\end{subarray}\right)\in\overline{L( \mathcal{G})}\) if and only if \(\sigma_{1}\neq\sigma_{2}\). It is easy to see that Player \(O\) has a winning strategy in \(\Gamma_{k}(\overline{L(\mathcal{G})})\) by simply flipping the second letter picked by Player \(I\). This is possible since Player \(I\) has to provide two letters during the first round.
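For readers who prefer to see the case analysis spelled out, the following Python sketch (ours, purely illustrative and not part of the argument; the encoding of the game of Fig. 2 and the names `ACTIONS` and `player_o_strategy` are ad hoc) enumerates the finitely many relevant prefixes: it confirms that environment has no winning first move in \(\mathcal{G}\), while the flipping strategy wins every round of \(\Gamma_{1}(\overline{L(\mathcal{G})})\) for Player \(O\).

```python
# Illustrative sketch (ours, not part of the paper): brute-force check of the two claims above.
# Actions are "h"/"t"; in the game of Fig. 2, environment wins a play iff his first
# action equals controller's second action.

ACTIONS = ["h", "t"]

# (i) Environment has no winning strategy in G: whatever he fixes as his first action,
#     controller can choose a second action that differs from it.
for env_first in ACTIONS:
    assert any(ctrl_second != env_first for ctrl_second in ACTIONS)

# (ii) Player O wins Gamma_1(complement of L(G)): with lookahead 1, Player I reveals
#      sigma_0 and sigma_2 before Player O picks sigma_1, so Player O can flip sigma_2.
def player_o_strategy(sigma_0, sigma_2):
    return "t" if sigma_2 == "h" else "h"

for sigma_0 in ACTIONS:
    for sigma_2 in ACTIONS:
        assert player_o_strategy(sigma_0, sigma_2) != sigma_2   # winning condition sigma_1 != sigma_2

print("environment cannot win G; Player O wins Gamma_1 by flipping the second letter")
```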
**Remark 7**.: _The safety game \(\mathcal{G}\) depicted in Fig. 2 is in fact undetermined under every delay \(\delta>0\). In the proof of Lemma 4, we have already established that environment does not win \(\mathcal{G}\). Now, under every delay \(\delta>0\), controller has to fix at least two actions before getting access to the first action picked by environment. This implies that there is, for every strategy for controller under delay \(\delta\), at least one consistent play that is losing for her, i.e., a play in which environment picks \(h\) (\(t\)) if the second move fixed by controller is \(t\) (\(h\)). Thus, no strategy is winning for controller under delay \(\delta\)._
_Let us remark that, according to our definition of environment strategies, he is not able to enforce a losing play for controller (the game is undetermined after all), as he does not get access to the second action fixed by controller. This again illustrates the difference to delay games: Player \(O\) has access to these first two actions when making his first move, and is thereby able to win._
The full relation between games under delayed control and delay games is depicted in Fig. 3, restricted to Borel winning conditions (note that both transformations described in Section 3 preserve Borelness). The equivalence between controller winning the game under delayed control and Player \(I\) winning the corresponding delay game has been shown in Lemma 1 and Lemma 2. Also, Lemma 2 and Remark 5 imply that undetermined safety games under delayed control and those won by environment get transformed into delay games that are won by Player \(O\). Finally, Lemma 1 and Remark 5 imply that delay games won by Player \(O\) get transformed into undetermined safety games under delayed control or to ones that are won by environment.
Figure 3: The relation between games under delayed control and delay games with Borel winning conditions. The upper ellipsis contains pairs \((\mathcal{G},\delta)\) consisting of a game \(\mathcal{G}\) under delayed control and a fixed delay \(\delta\); the lower one contains delay games \(\Gamma_{k}(L)\) for some fixed \(k\). The arrows represent the two transformations described in Section 3.
Figure 2: A safety game that environment does not win, but Player \(O\) wins the associated delay game. The initial state is marked by an arrow and the unsafe vertices are black. Note that both players have the actions \(h\) and \(t\) available.
## 5 Refining the Correspondence: Sure Winning and Almost Sure Winning
It should be noted that the above transformations of games under delayed control into delay games and vice versa hinge on the fact that environment in the game under delayed control could, though lacking recent state information to do so strategically, by mere chance play the very same actions that the informed Player \(O\) in the delay game plays in his optimal adversarial strategy. That this constitutes a fundamental difference becomes apparent if we consider almost sure winning, i.e., the existence of a mixed strategy winning with probability 1, instead of sure winning in the sense of the definition of winning strategies for games under delayed control in Section 2.1.
**Remark 8**.: _We introduce mixed strategies for games under delayed control only, as delay games (with Borel winning conditions) are determined, which means that mixed strategies do not offer any advantage over pure strategies as introduced in Section 2.2._
Given an even \(\delta\geq 0\), a mixed strategy for controller in \(\mathcal{G}\) under delay \(\delta\) is a pair \((\alpha,\tau_{C})\) where \(\alpha\in\mathcal{P}\left((\Sigma_{0})^{\frac{\delta}{2}}\right)\) is a probability distribution over \((\Sigma_{0})^{\frac{\delta}{2}}\) and \(\tau_{C}\colon\operatorname{Pref}_{c}(\mathcal{G})\to\mathcal{P}\left(\Sigma_ {0}\right)\) maps play prefixes ending in \(S_{0}\) to probability distributions over actions of controller. A mixed strategy for environment is a mapping \(\tau_{E}\colon\operatorname{Pref}_{c}(\mathcal{G})\to\mathcal{P}\left(\Sigma_ {1}\right)\).
The notion of consistency of a play with a strategy simply carries over, now inducing a Markov chain due to the probabilistic nature of the strategies. We say that a mixed strategy for Player \(I\) (for Player \(O\), controller, or environment) _wins almost surely_ if and only if it wins against any strategy of its opponent Player \(O\) (Player \(I\), environment, or controller, resp.) with probability 1, i.e., if and only if the winning condition is satisfied with probability 1 over the Markov chain induced by the game and the particular strategy combination. In this section, we write sure winning for winning as defined in Section 2, as is usual for games with randomized strategies.
The notion of almost sure winning alters chances for the players substantially by excluding the possibility of reliably playing an optimal strategy though lacking the information for doing so due to delayed observation. This can be seen from the following lemma, stating a fundamental difference between controller's power in games under delayed control and Player \(I\)'s power in the corresponding delay games.
**Lemma 5**.: _There is a game \(\mathcal{G}\) under delayed control such that controller wins \(\mathcal{G}\) almost surely under some delay \(\delta\) while Player \(O\) (not Player \(I\), which is the player corresponding to controller) wins the corresponding delay game \(\Gamma_{k}(\overline{L(\mathcal{G})})\) for \(k=\frac{\delta}{2}\), and surely so._
Proof.: Consider the reachability game in Fig. 4 under delay 1 (or any larger delay). Intuitively, the players place a coin in each round (by picking either heads or tails with each move) and controller wins a play if the black state is visited, which happens if she selects a different coin placement than chosen by environment in the previous move.
Under any positive delay, controller wins this game with probability 1, i.e., almost surely, by a simple randomized strategy of coin tossing: by in each step randomly selecting action \(h\) or \(t\) with positive probability each, an eventual visit of the black state is guaranteed with probability 1, irrespective of being uninformed about environment's preceding move due to the delay.
The corresponding delay game \(\Gamma_{k}(\overline{L(\mathcal{G})})\) for \(k=\frac{\delta}{2}\), however, is easily won by Player \(O\), because in delay games, the delayed Player \(I\) grants a lookahead to Player \(O\). Hence, Player \(O\) can, due to the delay, already see the next move of Player \(I\) such that he can simply copy the next coin placement by Player \(I\), safely staying in the non-black states and thereby winning.
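The two halves of this proof can also be illustrated numerically. The sketch below (ours and purely illustrative; the functions `play` and `env_strategy` are ad-hoc names, and the environment is fixed to an arbitrary deterministic strategy, which suffices because controller's coin tosses are independent of it) estimates controller's winning probability within a bounded number of rounds and compares it with the exact value \(1-2^{-n}\).

```python
# Illustrative Monte Carlo sketch (ours): the coin-tossing strategy from the proof of Lemma 5.
# Controller reaches the black state as soon as her action differs from environment's action
# of the previous round; tossing a fair coin each round succeeds with probability 1/2 per
# round, independently of the (here deterministic) environment strategy.
import random

def play(rounds, env_strategy, rng):
    env_prev = env_strategy(0)
    for step in range(rounds):
        ctrl = rng.choice("ht")                  # controller's coin toss, oblivious to env_prev
        if ctrl != env_prev:
            return True                          # black (target) state reached
        env_prev = env_strategy(step + 1)
    return False

rng = random.Random(0)
env_strategy = lambda step: "h"                  # any environment strategy gives the same bound
trials = 100_000
for rounds in (1, 5, 20):
    p_hat = sum(play(rounds, env_strategy, rng) for _ in range(trials)) / trials
    print(f"rounds={rounds:2d}: empirical {p_hat:.4f}  vs exact {1 - 0.5**rounds:.4f}")
```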
Note that Lemma 5 implies that the previously observed correspondence between Player \(I\) and controller breaks down when considering almost sure winning strategies instead of just sure winning strategies: Games under delayed control for which Player \(O\) wins the corresponding delay game, are no longer either undetermined or won by environment, but may well be won by controller almost surely.
This consequently refines the correspondence between games under delayed control and delay games shown in Fig. 3 as follows.
**Theorem 7**.: _Given a game \(\mathcal{G}\) and an even \(\delta\geq 1\), the following correspondences between \(\mathcal{G}\) and the corresponding delay game \(\Gamma_{k}(\overline{L(\mathcal{G})})\) for \(k=\frac{\delta}{2}\) hold:_
1. _Controller surely wins_ \(\mathcal{G}\) _under delay_ \(\delta\) _if and only if Player_ \(I\) _surely wins_ \(\Gamma_{k}(\overline{L(\mathcal{G})})\)_._
2. _If controller almost surely wins_ \(\mathcal{G}\) _under delay_ \(\delta\) _but cannot surely win_ \(\mathcal{G}\) _under delay_ \(\delta\) _then Player_ \(O\) _surely wins_ \(\Gamma_{k}(\overline{L(\mathcal{G})})\)_._
3. _If environment surely or almost surely wins_ \(\mathcal{G}\) _under delay_ \(\delta\) _then Player_ \(O\) _wins_ \(\Gamma_{k}(\overline{L(\mathcal{G})})\)_._
4. _If_ \(\mathcal{G}\) _is undetermined under delay_ \(\delta\) _with respect to almost sure winning strategies then Player_ \(O\) _wins_ \(\Gamma_{k}(\overline{L(\mathcal{G})})\)_._
5. _All the aforementioned classes are non-empty, i.e., there exist games under delayed control where controller wins, where controller wins almost surely (but not surely), where environment wins surely, where environment wins almost surely (but not surely), and games which are undetermined with respect to almost-sure winning strategies._
_The above correspondences are depicted in Fig. 5._
Item 2 of the above theorem is of particular interest, as it expresses a delay-related strengthening of controller relative to Player \(I\), letting controller win almost surely where Player \(I\) loses for sure. The correspondence between controller and Player \(I\) observed in the deterministic setting thus breaks down when almost sure winning is considered and mixed strategies are permitted.
**Remark 9**.: _In contrast to games under delayed control, where mixed strategies provide additional power to both the controller and the environment, the notions of sure winning and almost sure winning coincide for delay games (with Borel winning conditions) due to their determinacy [7]. Admitting mixed strategies (and almost sure winning) does not provide additional power to either of the two players in a delay game, as the determinacy result always implies existence of an infallible pure strategy for one of the players._
Figure 4: A reachability game that, under any positive delay, is won by controller almost surely via the simple randomized strategy of coin tossing (thus randomly generating head and tail events \(h\) and \(t\)), but won by player \(O\) surely if interpreted as a delay game due to the lookahead on Player \(I\)’s actions granted to Player \(O\). The initial state is marked by an arrow and controller wins if and only if the black vertex is visited at least once.
## 6 Conclusion
We have compared delay games [9] and games under delayed control [3], two types of infinite games aiming to model asynchronicity in reactive synthesis, and have exhibited the differences in definitions and charted the relation between them with respect to both deterministic and randomized strategies: One can efficiently transform a game under delayed control into a delay game such that controller wins the game under delayed control with delay \(\delta\) by a deterministic strategy if and only if Player \(I\) wins the resulting delay game with lookahead of size \(\frac{\delta}{2}\). Dually, one can efficiently transform a delay game into a game under delayed control such that Player \(I\) wins the delay game with lookahead of size \(\delta\) if and only if controller wins the resulting game under delayed control with delay \(2\delta\) by a deterministic strategy. These results allow us to transfer known complexity results and bounds on the amount of delay from delay games to games under delayed control, for which no such results were known, when considering deterministic strategies. We also proved that the analogous results fail in the setting of randomized strategies and almost sure winning conditions, as well as for the relation between environment and Player \(O\), both under deterministic and randomized strategies.
**Acknowledgements:** Martin Fränzle has been supported by Deutsche Forschungsgemeinschaft under grant no. DFG FR 2715/5-1 "Konfliktresolution und kausale Inferenz mittels integrierter sozio-technischer Modellbildung". Sarah Winter is a postdoctoral researcher at F.R.S.-FNRS. Martin Zimmermann has been supported by DIREC - Digital Research Centre Denmark.
|
2304.00125 | On topological obstructions to the existence of non-periodic Wannier
bases | Recently, M. Ludewig and G. C. Thiang introduced a notion of a uniformly
localized Wannier basis with localization centers in an arbitrary uniformly
discrete subset $D$ in a complete Riemannian manifold $X$. They show that,
under certain geometric conditions on $X$, the class of the orthogonal
projection onto the span of such a Wannier basis in the $K$-theory of the Roe
algebra $C^*(X)$ is trivial. In this short note, we clarify the geometric
conditions on $X$, which guarantee triviality of the $K$-theory class of any
Wannier projection. We show that this property is equivalent to triviality of
the unit of the uniform Roe algebra of $D$ in the $K$-theory of its Roe
algebra, and provide a geometric criterion for that. As a consequence, we prove
triviality of the $K$-theory class of any Wannier projection on a connected
proper measure space $X$ of bounded geometry with a uniformly discrete set of
localization centers, coarsely equivalent to $X$. | Yu. Kordyukov, V. Manuilov | 2023-03-31T20:53:33Z | http://arxiv.org/abs/2304.00125v1 | # On topological obstructions to the existence of non-periodic Wannier bases
###### Abstract.
Recently, M. Ludewig and G. C. Thiang introduced a notion of a uniformly localized Wannier basis with localization centers in an arbitrary uniformly discrete subset \(D\) in a complete Riemannian manifold \(X\). They show that, under certain geometric conditions on \(X\), the class of the orthogonal projection onto the span of such a Wannier basis in the \(K\)-theory of the Roe algebra \(C^{*}(X)\) is trivial. In this short note, we clarify the geometric conditions on \(X\), which guarantee triviality of the \(K\)-theory class of any Wannier projection. We show that this property is equivalent to triviality of the unit of the uniform Roe algebra of \(D\) in the \(K\)-theory of its Roe algebra, and provide a geometric criterion for that. As a consequence, we prove triviality of the \(K\)-theory class of any Wannier projection on a connected proper measure space \(X\) of bounded geometry with a uniformly discrete set of localization centers, coarsely equivalent to \(X\).
The results of Section 2 were obtained jointly by both authors, the results of Section 3 were obtained by V. Manuilov and supported by the RSF grant 23-21-00068.
algebra in the \(K_{0}\) group of the Roe algebra of \(D\) (Theorem 3). We also provide a geometric criterion for the latter property (Theorems 5 and 6). As a consequence, we prove triviality of the \(K\)-theory class of any Wannier projection on a connected proper measure space \(X\) of bounded geometry with a uniformly discrete set of localization centers, coarsely equivalent to \(X\) (Corollary 11).
Let \(X\) be a proper metric measure space, that is, \(X\) is a set, which is equipped with a metric \(d\) and a measure \(m\) defined on the Borel \(\sigma\)-algebra defined by the topology on \(X\) induced by the metric, and all balls are compact. Let \(D\subset X\) be a discrete subspace such that the inclusion \(D\subset X\) is a coarse equivalence. The latter means that there exists \(C>0\) such that for any \(x\in X\) there exists \(y\in D\) with \(d(x,y)<C\). We assume that \(D\) is uniformly discrete (i.e. \(\inf_{g,h\in D,g\neq h}d(g,h)>0\)) and has bounded geometry (i.e. for any \(R>0\) the number of points in each ball of radius \(R\) is uniformly bounded). We may consider \(D\) as a measure space with the measure of any point equal to one. It is natural to think of \(D\) as a discretization of \(X\).
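Although the objects above are genuinely infinite, the three finiteness conditions (uniform discreteness, bounded geometry, and coarse density of \(D\) in \(X\)) can be probed on finite samples. The following Python sketch is ours and purely illustrative: the sample sets and all function names are ad hoc, and for infinite spaces it can only illustrate, not verify, the definitions.

```python
# Illustrative sketch (ours): probing uniform discreteness, bounded geometry and coarse
# density on finite samples of X and D.
import numpy as np

def min_separation(D):
    """inf over x != y in D of d(x, y); uniform discreteness asks this to be positive."""
    diff = np.linalg.norm(D[:, None, :] - D[None, :, :], axis=-1)
    np.fill_diagonal(diff, np.inf)
    return diff.min()

def max_ball_count(D, R):
    """Largest number of points of D in a ball of radius R centred at a point of D."""
    diff = np.linalg.norm(D[:, None, :] - D[None, :, :], axis=-1)
    return int((diff <= R).sum(axis=1).max())

def density_constant(X, D):
    """sup over x in X of d(x, D); the inclusion D -> X is coarsely dense iff this is finite."""
    diff = np.linalg.norm(X[:, None, :] - D[None, :, :], axis=-1)
    return diff.min(axis=1).max()

rng = np.random.default_rng(0)
X_sample = rng.uniform(0.0, 10.0, size=(2000, 2))                     # sample of the ambient space
D = np.array([(i, j) for i in range(11) for j in range(11)], float)   # the grid as a discretization

print("separation        :", min_separation(D))          # 1.0, so D is uniformly discrete
print("points in R = 2.5 :", max_ball_count(D, 2.5))      # uniformly bounded for the full grid as well
print("density constant C:", density_constant(X_sample, D))
```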
For a Hilbert space \(H\) we write \(\mathbb{B}(H)\) (resp., \(\mathbb{K}(H)\)) for the algebra of all bounded (resp., all compact) operators on \(H\). Recall the definition of the Roe algebra of \(X\)[11]. Let \(H_{X}\) be a Hilbert space with an action of the algebra \(C_{0}(X)\) of continuous functions on \(X\) vanishing at infinity (i.e. a \(*\)-homomorphism \(\psi:C_{0}(X)\to\mathbb{B}(H_{X})\)). We will assume that \(\{\psi(f)\xi:f\in C_{0}(X),\xi\in H_{X}\}\) is dense in \(H_{X}\) and \(\psi(f)\in\mathbb{K}(H_{X})\) implies that \(f=0\). An operator \(T\in\mathbb{B}(H_{X})\) is _locally compact_ if the operators \(T\psi(f)\) and \(\psi(f)T\) are compact for any \(f\in C_{0}(X)\). It has _finite propagation_ if there exists some \(R>0\) such that \(\psi(f)T\psi(g)=0\) whenever the distance between the supports of \(f,g\in C_{0}(X)\) is greater than \(R\). The _Roe algebra_\(C^{*}(X,H_{X})\) is the norm completion of the \(*\)-algebra of locally compact, finite propagation operators on \(H_{X}\).
Let \(H_{X}=L^{2}(X)\otimes H\) for some Hilbert space \(H\) (possibly finite-dimensional). In this case there is a standard action of \(C_{0}(X)\) on \(H_{X}\) by multiplication, and we shall skip \(\psi\) from the notation. The Roe algebra \(C^{*}(X,H_{X})\) in this case will be denoted by \(C^{*}_{H}(X)\).
Often one may forget about \(H\), namely one may take \(H\) one-dimensional. This happens when the operator of multiplication by any non-zero \(f\in C_{0}(X)\) in \(L^{2}(X)\) is not compact, i.e. when the measure on \(X\) has no atoms. In this case the algebras \(C^{*}_{H}(X)\) and \(C^{*}_{\mathbb{C}}(X)\) are isomorphic.
But for discrete space \(D\) this is not true: \(C^{*}_{H}(D)\) is not isomorphic to \(C^{*}_{\mathbb{C}}(D)\), so for discrete spaces we have two algebras: the _Roe algebra_\(C^{*}_{H}(D)\) with an infinite-dimensional Hilbert space \(H\), usually denoted by \(C^{*}(D)\), and the _uniform Roe algebra_\(C^{*}_{\mathbb{C}}(D)\), usually denoted by \(C^{*}_{u}(D)\).
## 2. Wannier projections as the image of the unit of \(C^{*}_{\mathbb{C}}(D)\)
Let \(\{\phi_{x}:x\in D\}\) be a set of functions in \(L^{2}(X)\) such that \(\operatorname{supp}\phi_{x}\cap\operatorname{supp}\phi_{y}=\emptyset\) when \(x\neq y\), \(x\in\operatorname{supp}\phi_{x}\), the diameters of \(\operatorname{supp}\phi_{x}\), \(x\in D\), are uniformly bounded, and \(\|\phi_{x}\|_{2}=1\), \(x\in D\). Let \(H_{\phi}\subset L^{2}(X)\) be the closure of the linear span of the set \(\{\phi_{x}:x\in D\}\), and let \(p_{\phi}\) denote the orthogonal projection onto \(H_{\phi}\). The set of functions \(\{\phi_{x}:x\in D\}\) is called a \(D\)-compactly supported _Wannier basis_ for \(H_{\phi}\), and the projection \(p_{\phi}\) a _Wannier projection_.
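As a toy illustration of this definition (ours, not part of the argument), one can discretize \(L^{2}([0,N])\), take bump functions supported in disjoint unit cells as a \(D\)-compactly supported Wannier basis with \(D=\{0,\ldots,N-1\}\), and verify numerically that \(p_{\phi}\) is an orthogonal projection of rank \(|D|\). All names in the sketch below are ad hoc.

```python
# Illustrative sketch (ours): a Wannier projection on a crude discretization of L^2([0, N]),
# with localization centres D = {0, ..., N-1} and disjointly supported unit vectors phi_x.
import numpy as np

N, pts_per_cell = 5, 20
dim, dx = N * pts_per_cell, 1.0 / pts_per_cell

phi = np.zeros((N, dim))
for x in range(N):
    bump = np.hanning(pts_per_cell) + 1e-3                   # supported only in cell number x
    bump /= np.sqrt(np.sum(bump**2) * dx)                    # L^2-normalization: ||phi_x|| = 1
    phi[x, x * pts_per_cell:(x + 1) * pts_per_cell] = bump

# Disjoint supports make the phi_x orthonormal, so the projection onto their span is
# p_phi = sum_x |phi_x><phi_x| (with the discrete L^2 inner product <u, v> = dx * u.v).
p_phi = dx * phi.T @ phi
assert np.allclose(p_phi @ p_phi, p_phi)                     # idempotent
assert np.allclose(p_phi, p_phi.T)                           # self-adjoint
print("rank of p_phi =", int(round(np.trace(p_phi))))        # equals |D| = N
```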
Further on we shall need a construction from [6, Section 4], which allows us to induce maps between Roe algebras from maps between spaces. Given two metric measure spaces, \(X\) and \(Y\), and two Hilbert spaces, \(H_{X}\) and \(H_{Y}\), with respective actions \(\psi_{X}\) and \(\psi_{Y}\) of \(C_{0}(X)\) and \(C_{0}(Y)\), respectively, a coarse map \(F:X\to Y\) induces a \(*\)-homomorphism \(C^{*}(X,H_{X})\to C^{*}(Y,H_{Y})\), \(T\mapsto VTV^{*}\), where \(V:H_{X}\to H_{Y}\) is an isometry that covers \(F\)
which means that there exists \(C>0\) such that \(\psi_{Y}(g)V\psi_{X}(f)=0\) when \(d(\operatorname{supp}f,\operatorname{supp}(g\circ F))>C\), \(f\in C_{0}(X)\), \(g\in C_{0}(Y)\).
Let \(V:\mathbb{C}\to H\) be an isometry, i.e. an inclusion of \(\mathbb{C}\) onto a one-dimensional subspace of \(H\), and let \(V_{X}=\operatorname{id}\otimes V:L^{2}(X)=L^{2}(X)\otimes\mathbb{C}\to L^{2}( X)\otimes H\). Clearly, \(V_{X}\) covers the identity map of \(X\), and we get the maps
\[i_{X}:C_{\mathbb{C}}^{*}(X)\to C_{H}^{*}(X)\quad\text{and}\quad i_{D}:C_{ \mathbb{C}}^{*}(D)\to C_{H}^{*}(D)\]
given by \(i_{X}(T)=V_{X}TV_{X}^{*}\) and \(i_{D}(T)=V_{D}TV_{D}^{*}\) respectively. These two maps induce maps in \(K\)-theory
\[(i_{X})_{*}:K_{0}(C_{\mathbb{C}}^{*}(X))\to K_{0}(C_{H}^{*}(X))\quad\text{and} \quad(i_{D})_{*}:K_{0}(C_{\mathbb{C}}^{*}(D))\to K_{0}(C_{H}^{*}(D)),\]
where the first map is an isomorphism when the measure on \(X\) has no atoms. These maps are independent of the choice of \(V\).
Given a Wannier basis \(\{\phi_{x}\}_{x\in D}\), let \(U:l^{2}(D)\to L^{2}(X)\) be the isometry defined by \(U(\delta_{x})=\phi_{x}\), so that the range of \(U\) is the subspace \(H_{\phi}\subset L^{2}(X)\). We shall use also the isometry \(U_{H}=U\otimes 1_{H}:l^{2}(D)\otimes H\to L^{2}(X)\otimes H\). Clearly, \(U\) covers the inclusion map \(D\subset X\), so it gives the map
\[j_{\mathbb{C}}:C_{\mathbb{C}}^{*}(D)\to C_{\mathbb{C}}^{*}(X),\quad j_{ \mathbb{C}}(T)=UTU^{*},\ T\in C_{\mathbb{C}}^{*}(D).\]
Similarly, using \(U_{H}\) instead of \(U\), we get the map
\[j_{H}:C_{H}^{*}(D)\to C_{H}^{*}(X),\quad j_{H}(T)=U_{H}TU_{H}^{*},\ T\in C_{H}^ {*}(D).\]
Note that \(C_{\mathbb{C}}^{*}(D)\) is unital, and \(j_{\mathbb{C}}(1)=p_{\phi}\) is the Wannier projection. Set \(q=i_{D}(1)\in C_{H}^{*}(D)\).
The above maps can be organized into the commutative diagram of \(C^{*}\)-algebras
\[\begin{CD}C^{*}_{\mathbb{C}}(D)@>{j_{\mathbb{C}}}>{}>C^{*}_{\mathbb{C}}(X)\\ @V{i_{D}}V{}V@V{i_{X}}V{}V\\ C^{*}_{H}(D)@>{j_{H}}>{}>C^{*}_{H}(X)\end{CD}\]
and of their \(K_{0}\) groups
\[\begin{CD}K_{0}(C^{*}_{\mathbb{C}}(D))@>{(j_{\mathbb{C}})_{*}}>{}>K_{0}(C^{*}_{\mathbb{C}}(X))\\ @V{(i_{D})_{*}}V{}V@V{(i_{X})_{*}}V{}V\\ K_{0}(C^{*}_{H}(D))@>{(j_{H})_{*}}>{}>K_{0}(C^{*}_{H}(X))\end{CD}\tag{1}\]
**Lemma 2**.: _One has \(j_{\mathbb{C}}(1)\otimes 1_{H}=p_{\phi}\otimes 1_{H}=j_{H}(q)\)._
Proof.: The first equality holds since \(j_{\mathbb{C}}(1)=p_{\phi}\). The second one follows from commutativity of the diagram of \(C^{*}\)-algebras: \(j_{H}(q)=j_{H}(i_{D}(1))=i_{X}(j_{\mathbb{C}}(1))=p_{\phi}\otimes 1_{H}\).
**Theorem 3**.: _Let \(p_{\phi}\) be a Wannier projection with a uniformly discrete set \(D\) of localization centers, coarse equivalent to \(X\). The following are equivalent:_
* \([p_{\phi}]=0\) _in_ \(K_{0}(C_{\mathbb{C}}^{*}(X))\)_;_
* \([i_{D}(1)]=0\) _in_ \(K_{0}(C_{H}^{*}(D))\)_._
Proof.: The map \((i_{X})_{*}\) is an isomorphism, and \(i_{X}(p_{\phi})=p_{\phi}\otimes 1_{H}\). Coarse equivalence of \(D\) and \(X\) implies that \((j_{H})_{*}\) is an isomorphism as well. Thus, all entries in (1), except the upper left one, are isomorphic.
Thus triviality of the \(K\)-theory class \([p_{\phi}]\) of \(p_{\phi}\) does not depend on \(X\), but only on \(D\).
**Remark 1**.: Recall that a _partial translation_ is a bijection \(f:A\to B\) between subsets \(A,B\subset D\) such that \(\sup_{x\in A}d(x,f(x))<\infty\). The space \(D\) is _paradoxical_ if there exist a decomposition \(D=D_{+}\sqcup D_{-}\) and partial translations \(f_{\pm}:D\to D_{\pm}\). Theorem 4.9 in [1] shows that if \(D\) is paradoxical then the class \([1]\) of the unit is zero already in \(K_{0}(C_{\mathbb{C}}^{*}(D))\).
**Remark 2**.: Theorem 3 is still true if we weaken the condition \(\phi_{x}\phi_{y}=0\) for \(x,y\in D\), \(x\neq y\). Let \(V:l^{2}(D)\to L^{2}(X)\) be defined by \(V(\delta_{x})=\phi_{x}\), \(x\in D\), as before. Instead of requiring it to be an isometry we may require only that \(V^{*}V\) is invertible. In this case \(\{\phi_{x}\}_{x\in D}\) is not a basis for \(H_{\phi}\), but only a frame. Let \(V=WH\) be the polar decomposition. Then the range of \(W\) is still \(H_{\phi}\). The isometry \(W\) does not cover the inclusion \(D\subset X\), nevertheless the map \(T\mapsto WTW^{*}\) still maps \(C_{\mathbb{C}}^{*}(D)\) to \(C_{\mathbb{C}}^{*}(X)\), and Lemma 2 and Theorem 3 hold.
## 3. Geometric criterion for triviality of \([p_{\phi}]\)
**Definition 4**.: We say that a discrete metric space \(D\) has a _ray structure_ if there exists a family \(\{D_{i}\}_{i\in I}\), of subsets of \(D\) such that \(D=\sqcup_{i\in I}D_{i}\), and a family of bijective uniformly Lipschitz maps \(\beta_{i}:\mathbb{N}\to D_{i}\), \(i\in I\).
In this case we call the subsets \(D_{i}\), \(i\in I\), _rays_.
Let \(C_{b}(D)\) denote the commutative \(C^{*}\)-algebra of bounded functions on \(D\). The inclusion \(C_{b}(D)\subset C_{\mathbb{C}}^{*}(D)\) induces a map \(\gamma:K_{0}(C_{b}(D))\to K_{0}(C_{\mathbb{C}}^{*}(D))\).
**Theorem 5**.: _Let \(D\) be a uniformly discrete space of bounded geometry. If \(D\) has a ray structure then \((i_{D})_{*}(p)=0\) for any \(p\in\gamma(K_{0}(C_{b}(D)))\). In particular, \((i_{D})_{*}([1])=0\), hence \([p_{\phi}]=0\) for any Wannier basis on \(X\) with localization centers in \(D\)._
Proof.: Fix a basis in \(H\), and let \(p_{k}\in\mathbb{K}(H)\) be the projection onto the first \(k\) vectors of the fixed basis of \(H\). Note that \(K_{0}(C_{b}(D))\) is the group of bounded \(\mathbb{Z}\)-valued functions on \(D\), and it suffices to prove the claim for the case when \(p\) is a bounded \(\mathbb{N}\)-valued function \(x\mapsto k_{x}\), \(x\in D\), i.e. \(p=[f]\), where \(f(x)=p_{k_{x}}\).
Endow \(Y=\mathbb{N}\times I\) with the wedge metric, i.e. \(d_{Y}((k,i),(l,i))=|k-l|\) and \(d_{Y}((k,i),(l,j))=k+l+1\) for \(i\neq j\), where \(k,l\in\mathbb{N}\), \(i,j\in I\). Then the family \((\beta_{i})_{i\in I}\) determines a bijective map \(\beta:Y\to D\), \(\beta((k,i))=\beta_{i}(k)\). As
\[d(\beta(k,i),\beta(l,i))<|k-l|C=Cd_{Y}((k,i),(l,i)),\]
we have \(d_{Y}(y_{1},y_{2})\geq\frac{1}{C}d(\beta(y_{1}),\beta(y_{2}))\) for any \(y_{1},y_{2}\in Y\). Therefore, the bijection \(\beta\) defines inclusions \(C_{\mathbb{C}}^{*}(Y)\subset C_{\mathbb{C}}^{*}(D)\) and \(C_{H}^{*}(Y)\subset C_{H}^{*}(D)\), the first of which is unital. If we show that the class of the projection \((i_{D})_{*}(p)\) is zero in \(K_{0}(C_{H}^{*}(Y))\) then it should be zero in \(K_{0}(C_{H}^{*}(D))\) too. Making the \(C^{*}\)-algebra smaller once again, we can pass from \(C_{H}^{*}(Y)\) to \(\prod_{i\in I}C_{H}^{*}(\mathbb{N})\subset C_{H}^{*}(Y)\), and then we have to show that the class of the projection \((i_{D})_{*}(p)\) is zero in \(K_{0}(\prod_{i\in I}C_{H}^{*}(\mathbb{N}))\).
Restricting \(f\) to one copy of \(D_{i}\), \(i\in I\), and identifying \(D_{i}\) with \(\mathbb{N}\), we get the function \(f_{i}:D_{i}\to\mathbb{K}(H)\), \(f_{i}(n)=p_{k_{n}}\) for any \(n\in\mathbb{N}\cong D_{i}\). Set \(l_{n}=\sum_{j=1}^{n}k_{j}\), \(n\in\mathbb{N}\), and let \(h_{i}:D_{i}\to\mathbb{K}(H)\) be the function defined by \(h_{i}(n)=p_{l_{n}}\). Summing up operators of multiplication by \(h_{i}\) over \(I\), we get an operator of multiplication by \(h\) on \(D\). Set \(q=\gamma(h)\). We claim that \(p+q=q\) in \(K_{0}(\prod_{i\in I}C_{H}^{*}(\mathbb{N}))\). Clearly, \(p+q\) is the class of the operator of multiplication by \(h^{\prime}\) such that \(h^{\prime}_{i}(n)=p_{l_{n+1}}\), \(n\in\mathbb{N}\). Define \(T^{(i)}\in C_{H}^{*}(D_{i})\) by \(T^{(i)}_{nm}=\left\{\begin{array}{ll}p_{l_{n}}&\mbox{if $m=n+1$;}\\ 0&\mbox{otherwise.}\end{array}\right.\) Then \(T^{(i)}(T^{(i)})^{*}=h_{i}\), \((T^{(i)})^{*}T^{(i)}=h_{i}^{\prime}\). Setting \(T=\oplus_{i\in I}T^{(i)}\), we obtain a Murray-von Neumann equivalence between the projections \(p\oplus q\) and \(q\). Thus \((i_{D})_{*}(p)=0\).
Finally, note that \([1]\in\gamma(K_{0}(C_{b}(D)))\).
For \(\alpha>0\) let \(D(\alpha)\) be the graph whose vertices are points of \(D\), and two vertices, \(x,y\in D\) are connected by an edge whenever \(d(x,y)\leq\alpha\).
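Condition (1) of the theorem below refers to finite connected components of \(D(\alpha)\); for a finite sample of \(D\) these can be listed directly. The Python sketch below is ours and purely illustrative (for an infinite \(D\) it can only probe, not verify, the condition); the sample set, with a chain and an isolated far-away pair, is our ad-hoc choice.

```python
# Illustrative sketch (ours): the graph D(alpha) and its connected components for a finite
# sample of D.
import numpy as np

def components(D, alpha):
    n = len(D)
    dist = np.linalg.norm(D[:, None, :] - D[None, :, :], axis=-1)
    seen, comps = set(), []
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(w for w in range(n) if w != v and dist[v, w] <= alpha and w not in seen)
        comps.append(comp)
    return comps

# A chain 0, 1, ..., 19 on a line, plus an isolated pair far away: the pair stays a finite
# connected component of D(alpha) for every alpha below ~81, mimicking a failure of (1).
D = np.array([[k, 0.0] for k in range(20)] + [[100.0, 0.0], [100.5, 0.0]])
for alpha in (0.4, 1.0, 2.0):
    print(f"alpha = {alpha}: component sizes {sorted(len(c) for c in components(D, alpha))}")
```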
**Theorem 6**.: _Let \(D\) be a uniformly discrete metric space of bounded geometry. The following are equivalent:_
1. _there exists_ \(\alpha>0\) _such that the graph_ \(D(\alpha)\) _has no finite connected components;_
2. _there exists a discrete metric space_ \(D^{\prime}\) _with a ray structure and an isometric inclusion_ \(D\subset D^{\prime}\) _which is a coarse equivalence._
Proof.: Bounded geometry of \(D\) implies existence of some \(N\in\mathbb{N}\) such that each vertex in \(D(\alpha)\) has no more than \(N\) neighbors. By the Nash-Williams Theorem [10], \(D(\alpha)\) is the union of not more than \(N\) subforests. Moreover, in each connected component one can connect all trees of a subforest into a single tree.
Consider first the case when \(D(\alpha)=T\) is a tree. The simplest case is when \(T\) is an infinite tree without dead ends, i.e. when each finite path from the root vertex to any vertex can be extended infinitely, i.e. when each vertex (with the possible exception of the root) is the end-point of at least two edges. Choose an infinite ray starting at the root and denote it by \(T_{1}\). Then the graph \(T\setminus T_{1}\) is a forest. It contains finitely many trees with roots at the minimal distance from the root of \(T\). Choose infinite rays starting at these new roots. They give rays \(T_{2},\ldots,T_{m_{1}}\). Going further from the root of \(T\), we obtain, by induction, a decomposition \(T=\sqcup_{j\in J}T_{j}\), where each \(T_{j}\) is coarsely equivalent to \(\mathbb{N}\).
Now consider an infinite tree with dead ends. Suppose that the tree \(T\) has the form \(T=R\cup T_{f}\), where \(R\) is an infinite ray with vertices \(x_{0},x_{1},\ldots,x_{n},x_{n+1},\ldots\), and \(T_{f}\) is a finite tree with the root \(y_{0}=x_{n}=R\cap T_{f}\). As the number of neighbors of any vertex does not exceed \(N\), there is a path \(\lambda\) in \(T_{f}\) that starts and ends at \(y_{0}\) and passes through each vertex of \(T_{f}\) not more than \(2N\) times. Let \(n_{y}\) be the number of times that the path \(\lambda\) passes through the vertex \(y\). Clone all vertices of \(T_{f}\), i.e. replace each vertex \(y\in T_{f}\) by \(n_{y}\) vertices. Similarly, clone the edges so that the path \(\lambda\) passes each edge only once. Denote the resulting finite graph by \(T^{\prime}_{f}\). Then the path \(\lambda\) runs through all vertices of \(T^{\prime}_{f}\), \(\lambda=(z_{0},z_{1},\ldots,z_{m})\). In particular, the vertex \(y_{0}\) becomes two vertices, \(z_{0}\) and \(z_{m}\). Set \(T^{\prime}=R\cup T^{\prime}_{f}\), and let the map \(\beta:\mathbb{N}\to T^{\prime}\) be given by the sequence \((x_{0},x_{1},\ldots,x_{n-1},z_{0},z_{1},\ldots,z_{m},x_{n+1},x_{n+2},\ldots)\). The map \(\beta\) clearly satisfies Definition 4. It is also clear that \(T^{\prime}\) is coarsely equivalent to \(T\).
If \(T=R\cup T^{1}_{f}\cup T^{2}_{f}\cup\cdots\) with the roots of finite trees \(x_{n_{1}},x_{n_{2}},\ldots\) then we apply the above procedure to each finite subtree one after another to obtain a ray \(x_{0},x_{1},\ldots,x_{n_{1}-1},z^{1}_{0},\ldots,z^{1}_{m_{1}},x_{n_{1}+1},\ldots,x_{n_{2}-1},z^{2}_{0},\ldots,z^{2}_{m_{2}},x_{n_{2}+1},\ldots\), where \(z^{i}_{j}\), \(j=0,\ldots,m_{i}\), is the list of vertices of the modified tree \((T^{i}_{f})^{\prime}\).
If a tree is finite then we can connect it to a ray from another (infinite) tree and proceed as above. Thus we can present each of (not more than) \(N\) trees in each connected component of \(D(\alpha)\) as a disjoint union of rays, and we can pass to the general case. There is still a problem that the trees may share some common edges (Nash-Williams Theorem does not assert that the forests are disjoint). So, consider the case when several rays share some edges. Each such edge can belong to not more than \(N\) rays. Once again, we can clone these edges (and the corresponding vertices as well) so that each ray would pass through its own copy of these edges, and the resulting graph gives \(D^{\prime}\) coarsely equivalent to \(D\).
In the opposite direction, suppose that there exists \(D^{\prime}\) as in (2), and (1) does not hold, i.e. for any \(\alpha>0\) there exists a finite component of \(D(\alpha)\). Then there exists \(C>0\) such
that (a) for any \(x\in D^{\prime}\) there exists \(y\in D\) with \(d(x,y)<C\), and (b) \(D^{\prime}=\sqcup_{i\in I}D^{\prime}_{i}\) and for each \(i\in I\) there exists a bijective map \(\beta:\mathbb{N}\to D^{\prime}_{i}\) with \(d(\beta(k+1),\beta(k))<C\) for any \(k\in\mathbb{N}\). Take \(\alpha>3C\), and let \(F\subset D\) be a finite subset such that \(d(F,D\setminus F)>3C\). Then \(d(F,D^{\prime}\setminus N_{C}(F))>2C\), where \(N_{C}(F)\) is the \(C\)-neighborhood of \(F\). Let \(x\in F\), and let \(x\in D^{\prime}_{i}\) for some \(i\in I\). The ray \(D^{\prime}_{i}\) through \(x\) is infinite, while the bounded set \(N_{C}(F)\) contains only finitely many points of \(D^{\prime}\), so the ray must eventually leave \(N_{C}(F)\): there is some \(k\) with \(\beta(k)\in N_{C}(F)\) and \(\beta(k+1)\notin N_{C}(F)\). But then \(d(\beta(k+1),\beta(k))\geq d(\beta(k+1),F)-d(\beta(k),F)>2C-C=C\), contradicting \(d(\beta(k+1),\beta(k))<C\).
**Corollary 7**.: _Let \(D\) be a uniformly discrete space of bounded geometry satisfying either of the conditions of Theorem 6. Then \((i_{D})_{*}([1])=0\)._
Proof.: Let \(D^{\prime}\) be as in Theorem 6, and let \(\iota:D\to D^{\prime}\) be the corresponding inclusion. Then \(\iota\) induces the maps \(\iota_{\mathbb{C}}:C^{*}_{\mathbb{C}}(D)\to C^{*}_{\mathbb{C}}(D^{\prime})\) and \(\iota_{H}:C^{*}_{H}(D)\to C^{*}_{H}(D^{\prime})\). Set \(s=\iota_{\mathbb{C}}(1)\in C^{*}_{\mathbb{C}}(D^{\prime})\). As \(\iota\) is a coarse equivalence, the lower horizontal map in the commuting diagram
\[\begin{CD}K_{0}(C^{*}_{\mathbb{C}}(D))@>{(\iota_{\mathbb{C}})_{*}}>{}>K_{0}(C^{*}_{\mathbb{C}}(D^{\prime}))\\ @V{(i_{D})_{*}}V{}V@V{(i_{D^{\prime}})_{*}}V{}V\\ K_{0}(C^{*}_{H}(D))@>{(\iota_{H})_{*}}>{}>K_{0}(C^{*}_{H}(D^{\prime}))\end{CD}\]
is an isomorphism. By Theorem 5, \((i_{D^{\prime}})_{*}(s)=0\), hence \((i_{D})_{*}([1])=0\).
**Lemma 8**.: _Suppose that \(D(\alpha)\) has finite components for any \(\alpha>0\). Then \((i_{D})_{*}([1])\neq 0\)._
Proof.: By assumption, there exists a sequence \(\{F_{n}\}_{n\in\mathbb{N}}\) of finite subsets of \(D\) such that \(d(F_{n},D\setminus F_{n})>n\). Then \(D=E\sqcup F\), where \(F=\sqcup_{n\in\mathbb{N}}F_{n}\). There is a short exact sequence of \(C^{*}\)-algebras
\[0\to\mathbb{K}(l^{2}(F)\otimes H)\to C^{*}_{H}(F)\to\frac{C^{*}_{H}(F)}{\mathbb{K}(l^{2}(F)\otimes H)}\to 0.\]
As \(K_{1}\) for compact operators is zero, it suffices to show that the image of \((i_{F})_{*}([1])\) in \(K_{0}(\frac{C^{*}_{H}(F)}{\mathbb{K}(l^{2}(F)\otimes H)})\) is non-zero. Note that \(C^{*}_{H}(F)\) sits between \(\prod_{n\in\mathbb{N}}C_{b}(F_{n})\otimes\mathbb{K}(H)\) and \(\prod_{n\in\mathbb{N}}\mathbb{K}(l^{2}(F_{n})\otimes H)\). As \((i_{F})_{*}([1])\) is non-zero both in \(K_{0}\left(\frac{\prod_{n\in\mathbb{N}}C_{b}(F_{n})\otimes\mathbb{K}(H)}{ \oplus_{n\in\mathbb{N}}C_{b}(F_{n})\otimes\mathbb{K}(H)}\right)\) and in \(K_{0}\left(\frac{\prod_{n\in\mathbb{N}}\mathbb{K}(l^{2}(F_{n})\otimes H)}{ \oplus_{n\in\mathbb{N}}\mathbb{K}(l^{2}(F_{n})\otimes H)}\right)\), it is non-zero in \(K_{0}\left(\frac{C^{*}_{H}(F)}{\mathbb{K}(l^{2}(F)\otimes H)}\right)\) as well.
Let \(\Gamma\) be a graph with the set of vertices \(\Gamma_{0}\) and the set of edges \(\Gamma_{1}\). The group \(C^{BM}_{k}(\Gamma)\) of \(k\)-dimensional Borel-Moore chains is the abelian group of all formal sums \(\sum_{x\in\Gamma_{k}}\lambda_{x}\cdot x\), \(k=0,1\), \(\lambda_{x}\in\mathbb{Z}\), and the quotient \(H^{BM}_{0}(\Gamma)=C^{BM}_{0}(\Gamma)/\partial C^{BM}_{1}(\Gamma)\) with respect to the standard boundary map is the \(0\)-th Borel-Moore homology group of \(\Gamma\). Details on Borel-Moore homology can be found in [2]. Let \(c=\sum_{x\in\Gamma_{0}}x\) be the chain with all coefficients equal to \(1\). If \(\Gamma\) is infinite uniformly locally finite then \([c]=0\) in \(H^{BM}_{0}(\Gamma)\), while if \(\Gamma\) is finite then \([c]\neq 0\). As Borel-Moore homology is functorial, we have maps \(H^{BM}_{0}(D(\alpha))\to H^{BM}_{0}(D(\beta))\) when \(\alpha<\beta\), and can pass to the direct limit. Thus we obtain the following result.
**Corollary 9**.: _The following are equivalent:_
1. \((i_{D})_{*}([1])=0\) _in_ \(K_{0}(C^{*}_{H}(D))\)_;_
2. \([c]=0\) _in_ \(\operatorname{dir}\lim_{\alpha}H^{BM}_{0}(D(\alpha))\)_._
It would be interesting to find a more direct proof of Corollary 9, avoiding graph theory.
Note that the equivalent properties from Theorem 6 are coarsely invariant.
**Lemma 10**.: _Let \(D_{1},D_{2}\subset X\) be two uniformly discrete subsets of bounded geometry coarsely equivalent to \(X\). If \(D_{1}\) satisfies the property (1) of Theorem 6 then \(D_{2}\) satisfies it too._
Proof.: Suppose that \(D_{1}\) does not satisfy the property (1) of Theorem 6, and, for any \(\beta>0\), let \(\Gamma_{1}(\beta)\) be a finite connected component of \(D_{1}(\beta)\). Take an arbitrary \(\alpha>0\). Coarse equivalence between \(D_{1}\) and \(D_{2}\) means that there exists \(C>0\) such that for every \(x\in D_{1}\) there exists \(y\in D_{2}\) with \(d(x,y)<C\) and vice versa. Let \(Z=\{z\in D_{2}:d(z,x)<C\) for some \(x\in\Gamma_{1}(\alpha+2C)\}\). If \(z\in Z\), \(y\in D_{2}\setminus Z\) then \(d(z,y)\geq\alpha\). As \(\Gamma_{1}(\alpha+2C)\) is finite and \(D_{2}\) is of bounded geometry, the set \(Z\) is finite. Consider \(Z\) as a subgraph of \(D_{2}(\alpha)\). No vertex of \(Z\) is connected by an edge with any point from \(D_{2}(\alpha)\setminus Z\). Then any connected component of \(Z\) is a finite connected component of \(D_{2}(\alpha)\). Thus, \(D_{2}\) does not satisfy the property (1) of Theorem 6.
Recall that a metric space \(X\) has _bounded geometry_ if there exists \(r>0\) such that for any \(R>0\) there exists \(N\in\mathbb{N}\) such that any ball of radius \(R\) can be covered by not more than \(N\) balls of radius \(r\) (cf. [5], where it is discussed that this definition for manifolds can be derived from the traditional local definition via curvature).
It is shown in [8], Prop. 2.5, that if \(X\) is a complete Riemannian manifold admitting a decomposition \(X=X_{1}\cup X_{2}\) with closed \(X_{1}\) and \(X_{2}\) such that \(K_{0}(C^{*}(X_{1}))=K_{0}(C^{*}(X_{2}))=0\) then \([p_{\phi}]=0\) for any Wannier projection \(p_{\phi}\) with a uniformly discrete set of localization centers. The next corollary shows that, under the bounded geometry condition, vanishing of \([p_{\phi}]\) is much more common. The importance of this condition is explained by Greene's theorem: any smooth manifold admits a Riemannian metric of bounded geometry [4].
**Corollary 11**.: _Let \(X\) be a connected proper measure space of bounded geometry. Then, for any Wannier projection \(p_{\phi}\) with a uniformly discrete set \(D\) of localization centers, coarsely equivalent to \(X\), we have \([p_{\phi}]=0\)._
Proof.: By Lemma 10, Corollary 7 and Theorem 3, it suffices to show that there exists a uniformly discrete set \(D\) of bounded geometry, coarsely equivalent to \(X\), which satisfies the property (1) of Theorem 6. Since \(X\) has bounded geometry, there exists \(r>0\) such that, for any \(R>0\), any ball of radius \(R\) can be covered by at most \(N=N(R)\) balls of radius \(r/2\). Given \(c>0\), we say that a subset \(A\subset X\) is \(c\)-disjoint if \(d(x,y)>c\) for any \(x,y\in A\), \(y\neq x\). By Zorn's Lemma, there exists a maximal discrete \(r\)-disjoint subset \(D\subset X\). It is clear that \(D\) is uniformly discrete. Maximality of \(D\) implies that for any \(x\in X\) there exists \(y\in D\) with \(d(x,y)\leq r\), so \(D\) is coarsely equivalent to \(X\). Since any ball of radius \(r/2\) contains not more than one point of \(D\), \(D\) has bounded geometry. We claim that the graph \(D(3r)\) is connected and, therefore, satisfies the property (1) of Theorem 6. Indeed, suppose the contrary. If \(D(3r)\) is not connected then we can write \(D=A_{1}\sqcup A_{2}\), where one has \(d(x,y)\geq 3r\) if \(x\in A_{1}\), \(y\in A_{2}\). Set \(X_{i}=\{x\in X:d(x,A_{i})\leq r\}\), \(i=1,2\). Then \(X=X_{1}\cup X_{2}\) and \(X_{1}\cap X_{2}=\emptyset\), moreover, \(d(X_{1},X_{2})\geq r\), which means that \(X\) is not connected.
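The maximal \(r\)-disjoint subset used in this proof can be produced by a simple greedy pass over a sample; the Python sketch below is ours and purely illustrative (finite sample, ad-hoc names), and it also checks that the resulting net is \(r\)-separated, \(r\)-dense and that \(D(3r)\) comes out connected.

```python
# Illustrative sketch (ours): greedy construction of a maximal r-disjoint subset D of a dense
# finite sample of X, as in the proof above, plus a check that D(3r) is connected.
import numpy as np

def greedy_r_net(X, r):
    kept = []
    for x in X:
        if all(np.linalg.norm(x - y) > r for y in kept):    # keep x only if it is r-far from D
            kept.append(x)
    return np.array(kept)

def is_connected(D, alpha):
    n = len(D)
    dist = np.linalg.norm(D[:, None, :] - D[None, :, :], axis=-1)
    seen, stack = {0}, [0]
    while stack:
        v = stack.pop()
        for w in range(n):
            if w not in seen and dist[v, w] <= alpha:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 10.0, size=(3000, 2))        # dense sample of a connected square
r = 0.7
D = greedy_r_net(X, r)
separation = min(np.linalg.norm(a - b) for i, a in enumerate(D) for b in D[i + 1:])
density = max(np.linalg.norm(x - D, axis=1).min() for x in X)
print(f"|D| = {len(D)}, r-disjoint: {separation > r}, r-dense: {density <= r}, "
      f"D(3r) connected: {is_connected(D, 3 * r)}")
```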
|
2309.16963 | Effect of irradiation on the spin of millisecond pulsars | A millisecond pulsar (MSP) is an old neutron star (NS) that has accreted
material from its companion star, causing it to spin up, which is known as the
recycling scenario. During the mass transfer phase, the system manifests itself
as an X-ray binary. PSR J1402+13 is an MSP with a spin period of $5.89~{\rm
ms}$ and a spin period derivative of $\log\dot{P}_{\rm spin}=-16.32$. These
properties make it a notable object within the pulsar population, as MSPs
typically exhibit low spin period derivatives. In this paper, we aim to explain
how an MSP can posses high spin period derivative by binary evolution. By
utilizing the stellar evolution code \textsc{MESA}, we examine the effects of
irradiation on the companion star and the propeller effect on the NS during
binary evolution. We demonstrate that irradiation can modify the spin period
and mass of an MSP, resulting in a higher spin period derivative. These results
suggest that the irradiation effect may serve as a key factor in explaining
MSPs with high spin period derivatives. | Shunyi Lan, Xiangcun Meng | 2023-09-29T04:14:05Z | http://arxiv.org/abs/2309.16963v1 | # Effect of irradiation on the spin of millisecond pulsars
###### Abstract
A millisecond pulsar (MSP) is an old neutron star (NS) that has accreted material from its companion star, causing it to spin up, which is known as the recycling scenario. During the mass transfer phase, the system manifests itself as an X-ray binary. PSR J1402+13 is an MSP with a spin period of \(5.89\rm~{}ms\) and a spin period derivative of \(\log\dot{P}_{\rm spin}=-16.32\). These properties make it a notable object within the pulsar population, as MSPs typically exhibit low spin period derivatives. In this paper, we aim to explain how an MSP can posses high spin period derivative by binary evolution. By utilizing the stellar evolution code MESA, we examine the effects of irradiation on the companion star and the propeller effect on the NS during binary evolution. We demonstrate that irradiation can modify the spin period and mass of an MSP, resulting in a higher spin period derivative. These results suggest that the irradiation effect may serve as a key factor in explaining MSPs with high spin period derivatives.
binaries: close -- binaries: general -- X-rays: binaries -- pulsars: general +
Footnote †: journal: ApJL
## 1 Introduction
A neutron star (NS) in a binary system can accrete material from its companion star and spin itself up to form a millisecond pulsar (MSP). This process is known as the recycling scenario. During the mass transfer phase, depending on the mass of the companion star, the system can manifest itself as a high-mass X-ray binary (\(M_{2}\gtrsim 10M_{\odot}\)) or a low-mass X-ray binary (LMXB, \(M_{2}\lesssim 1.5M_{\odot}\)). For detailed reviews on compact object binary evolution, references such as Bhattacharya & van den Heuvel (1991), Podsiadlowski et al. (2002), and Tauris & van den Heuvel (2006) can be consulted. Presently, more than 3300 pulsars have been detected, including approximately 530 MSPs (Manchester et al., 2005)1. Some of these MSPs are referred to as transitional MSPs (tMSP) (Archibald et al., 2009; Papitto et al., 2013; Bassa et al., 2014), and there are also accreting millisecond X-ray pulsars (AMXPs) (Patruno & Watts, 2021). TMSPs exhibit changes between rotation-powered and accretion-powered states, demonstrating a direct connection between radio MSPs and AMXPs. These observations strongly support the recycling scenario. For a recent review about tMSPs, one can refer to Papitto & de Martino (2022). In comparison to normal pulsars, MSPs typically have spin periods lower than \(30\rm~{}ms\), lower spin period derivatives \(\log\dot{P}_{\rm spin}\sim-20\), and lower magnetic fields \(B\sim 10^{8}\rm~{}G\). Most NSs are born with strong magnetic fields \(B\sim 10^{13.25}\rm~{}G\) (Popov et al., 2010); nevertheless, during the recycling process, the NS accretes material, leading to a reduction in its magnetic field (Shibazaki et al., 1989). Most observational results also support canonical evolutionary theory (e.g., Tauris et al., 2012). However, there are exceptions, e.g., the MSP PSR J1402+13 (Abdollahi et al., 2022) with a spin period of \(5.89\rm~{}ms\) and a spin period derivative of \(\log\dot{P}_{\rm spin}=-16.32\).
Footnote 1: The current population of pulsars can be viewed at [https://www.atnf.csiro.au/research/pulsar/psrcat/](https://www.atnf.csiro.au/research/pulsar/psrcat/)
The spin evolution of MSPs has always been an important aspect of study. It can help determine the equation of state of the NS and constrain X-ray binary evolution theory. Depending on the mass transfer rate value, as well as the spin period and magnetic field of the NS, the NS can experience either spin-up or spin-down evolution. There are essentially
three regimes depending on the mass transfer rate value: radio spin-down, propeller spin-down, and accretion spin-up (e.g., Tauris et al., 2012; Bhattacharyya, 2021; Li et al., 2021). During the accretion spin-up regime, the NS accretes material transferred from its companion star and spins itself up. When the mass transfer rate decreases to a specific value, the propeller effect is activated. During the propeller regime, the transferred mass is expelled from the binary, preventing it from being accreted by the NS. The NS loses angular momentum, which is carried away by the ejected matter and the magnetic field (Romanova et al., 2018). As the mass transfer rate value further decreases, and even stops, the binary system will eventually host a radio pulsar.
Despite the progress made in understanding the overall perspective of the recycling scenario, uncertainties still remain. For instance, a birthrate problem emerged when Kulkarni and Narayan (1988) pointed out that the birthrate of MSPs and the number of LMXBs, from which they descend in this framework, do not match. The irradiation effect in LMXBs could be a solution to the birthrate problem. Irradiation primarily affects stars with a convective envelope, which explains why it can have a significant impact on the evolution of LMXBs (Podsiadlowski, 1991). The irradiation-induced mass transfer cycles lead to a reduced time spent in the LMXB phase (Ritter et al., 2000; Büning and Ritter, 2004; Ritter, 2008), which demonstrates the potential of the irradiation effect in addressing the birthrate problem. Additionally, the irradiation effect can explain the existence of certain MSPs with a positive orbital period derivative (Patruno and Watts, 2021). A series of works by Benvenuto et al. (2012, 2014, 2015, 2017) have found that the irradiation effect may play a crucial role in the formation of black widow/redback pulsars, with black widows possibly evolving from redbacks. The irradiation effect may play a key role in the formation of AMXPs such as SAX J1808.4-3658 (Tailo et al., 2018; Goodwin and Woods, 2020). Irradiated systems may also serve as good progenitors for massive MSPs (Echeveste et al., 2020). In this paper, we will show that the irradiation effect is also a key factor in forming MSPs with high spin period derivatives.
This paper is organised as follows: Section 2 describes our input physics and methods. Section 3 shows our results. In Section 4, we discuss the possible implications of our results and give some conclusions.
## 2 Methods
We use the stellar evolution code Modules for Experiments in Stellar Astrophysics (MESA, version 10398; Paxton et al., 2011, 2013, 2015, 2018, 2019) to simulate the possible evolution of an LMXB and the spin of the MSP. We assume that the NS has a canonical initial mass of \(1.4~{}M_{\odot}\). Also, the binary evolution begins with an NS and a zero-age main-sequence companion star in a circular and synchronized orbit. The metallicity of the companion star is \(Z=0.02\). The maximum age of the system is set to be \(14~{}\rm Gyr\).
The mass transfer rate is computed with the Ritter scheme (Ritter, 1988). We adopt the isotropic re-emission model (Tauris and van den Heuvel, 2006) to compute the mass loss from the binary system with the following parameters: \(\alpha=0,~{}\beta=0.7,~{}\delta=0\), which correspond to the fractions of mass lost from the vicinity of the companion star, the vicinity of the NS and the circumbinary coplanar toroid, respectively. The mass accretion rate onto the NS is \(\dot{M}_{\rm NS}=(1-\alpha-\beta-\delta)|\dot{M}_{2}|\), where \(\dot{M}_{2}\) is the mass-loss rate of the companion star. \(\dot{M}_{\rm NS}\) is limited by the Eddington accretion rate (Tauris et al., 2012)
\[\dot{M}_{\rm Edd}\simeq 3.0\times 10^{-8}M_{\odot}{\rm yr}^{-1}~{}\left( \frac{R_{\rm NS}}{13~{}{\rm km}}\right)\left(\frac{1.3}{1+X}\right), \tag{1}\]
where \(R_{\rm NS}=15~{}(M_{\rm NS}/M_{\odot})^{1/3}~{}{\rm km}\) is the radius of the NS, and \(X\) is the hydrogen mass fraction of the transferred material.
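For orientation, the adopted NS radius and the Eddington limit of equation (1) can be evaluated directly; the short Python sketch below is ours and purely illustrative (it is not MESA input), with ad-hoc function names.

```python
# Illustrative sketch (ours, not MESA input): the adopted NS radius and the Eddington
# accretion limit of Eq. (1), in solar masses per year.
def ns_radius_km(m_ns_msun):
    """R_NS = 15 (M_NS / M_sun)^(1/3) km, as adopted in the text."""
    return 15.0 * m_ns_msun ** (1.0 / 3.0)

def mdot_edd_msun_per_yr(m_ns_msun, x_hydrogen=0.7):
    """Eq. (1): Mdot_Edd ~ 3.0e-8 (R_NS / 13 km) (1.3 / (1 + X)) Msun/yr."""
    return 3.0e-8 * (ns_radius_km(m_ns_msun) / 13.0) * (1.3 / (1.0 + x_hydrogen))

print(f"R_NS(1.4 Msun)            = {ns_radius_km(1.4):.2f} km")
print(f"Mdot_Edd(1.4 Msun, X=0.7) = {mdot_edd_msun_per_yr(1.4):.2e} Msun/yr")
```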
### Orbital evolution
In our simulation, orbital angular momentum loss is dominated by three mechanisms: gravitational wave radiation, mass loss from the system and magnetic braking.
The orbital angular momentum loss by gravitational wave radiation is calculated by (Landau and Lifshitz, 1971)
\[\dot{J}_{\rm GW}=-\frac{32G^{7/2}M_{\rm NS}^{2}M_{2}^{2}(M_{\rm NS}+M_{2})^{1 /2}}{5c^{5}a^{7/2}}, \tag{2}\]
where \(G\) is the gravitational constant, \(c\) is the speed of light in vacuum and \(a\) is the semi-major axis of the orbit.
Material lost from the system carries away the specific orbital angular momentum of the NS. We calculate the angular momentum loss by this mechanism as
\[\dot{J}_{\rm ML}=(\alpha+\beta+\delta)|\dot{M}_{2}|\bigg{(}\frac{M_{\rm NS}} {M_{\rm NS}+M_{2}}\bigg{)}^{2}\frac{2\pi a^{2}}{P_{\rm orb}}, \tag{3}\]
where \(P_{\rm orb}\) is the orbital period.
The magnetic braking will also cause the loss of orbital angular momentum. We use the standard magnetic braking prescription (Rappaport et al., 1983):
\[\dot{J}_{\rm MB}=-3.8\times 10^{-30}M_{2}R_{2}^{\gamma}\Omega^{3}{\rm dyn~{} cm}, \tag{4}\]
where \(R_{2}\) is the radius of the companion star, \(\gamma\) is the magnetic braking index, for which we adopt the value \(\gamma=4\), and \(\Omega\) is the spin angular velocity of the companion star.
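The three loss terms of equations (2)-(4) can be combined into a total orbital angular momentum loss rate; the sketch below is ours and purely illustrative, written in plain cgs units rather than as the actual MESA implementation, and the example binary parameters are our own choice.

```python
# Illustrative sketch (ours, plain cgs rather than the MESA implementation): the three
# orbital angular momentum loss rates of Eqs. (2)-(4) for a representative binary.
import numpy as np

G, c, MSUN, RSUN, YR = 6.674e-8, 2.998e10, 1.989e33, 6.957e10, 3.156e7

def jdot_gw(m_ns, m2, a):
    """Eq. (2): gravitational-wave losses."""
    return -32.0 * G**3.5 * m_ns**2 * m2**2 * np.sqrt(m_ns + m2) / (5.0 * c**5 * a**3.5)

def jdot_ml(mdot2_abs, m_ns, m2, a, p_orb, alpha=0.0, beta=0.7, delta=0.0):
    """Eq. (3): mass leaving the system with the NS's specific orbital angular momentum."""
    return (alpha + beta + delta) * mdot2_abs * (m_ns / (m_ns + m2))**2 * 2.0 * np.pi * a**2 / p_orb

def jdot_mb(m2, r2, omega, gamma=4):
    """Eq. (4): magnetic braking (Rappaport et al. 1983) with gamma = 4."""
    return -3.8e-30 * m2 * r2**gamma * omega**3

m_ns, m2, r2, p_orb = 1.4 * MSUN, 1.0 * MSUN, 1.0 * RSUN, 86400.0
a = (G * (m_ns + m2) * p_orb**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)   # Kepler's third law
omega = 2.0 * np.pi / p_orb                                           # synchronized companion spin
print(f"Jdot_GW = {jdot_gw(m_ns, m2, a):.2e}  Jdot_MB = {jdot_mb(m2, r2, omega):.2e}  "
      f"Jdot_ML(1e-9 Msun/yr) = {jdot_ml(1e-9 * MSUN / YR, m_ns, m2, a, p_orb):.2e}  (dyn cm)")
```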
### Irradiation on companion star
During the mass transfer phase, the system will manifest itself as an X-ray source. In this context, the companion star can be irradiated by X-rays. Such irradiation may significantly alter the evolution of LMXBs. For how irradiation affects the secular evolution of LMXBs, one can refer to outstanding previous works, e.g., Podsiadlowski (1991), Harpaz & Rappaport (1994), Vilhu et al. (1994), Büning & Ritter (2004).
The X-ray irradiation effect on the companion star is included in this work. The accretion luminosity is
\[L_{\rm X}=\frac{GM_{\rm NS}\dot{M}_{\rm NS}}{R_{\rm NS}}, \tag{5}\]
Following Harpaz & Rappaport (1994), we calculate the irradiation luminosity as
\[L_{\rm irr}=\left\{\begin{array}{ll}\eta L_{\rm X}\left(\frac{R_{2}}{2a} \right)^{2}&\dot{M}_{2}<\dot{M}_{\rm Edd}\\ \eta L_{\rm X}\left(\frac{R_{2}}{2a}\right)^{2}\exp\left(1-\frac{|\dot{M}_{2 }|}{\dot{M}_{\rm Edd}}\right)&\dot{M}_{2}\geq\dot{M}_{\rm Edd}\end{array} \right., \tag{6}\]
where \(\eta\) is the irradiation efficiency. The energy is deposited in the outer layers of the companion star, and the deposited energy decreases as \(e^{-\tau}\), where \(\tau\) is the optical depth at a given radius.
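Equations (5)-(6) translate into a short function; the sketch below is ours and purely illustrative (cgs units, ad-hoc example numbers of our choosing), not the MESA implementation used in the paper.

```python
# Illustrative sketch (ours, cgs, not the MESA implementation): the accretion and irradiation
# luminosities of Eqs. (5)-(6).
import numpy as np

G, MSUN, RSUN, YR = 6.674e-8, 1.989e33, 6.957e10, 3.156e7

def l_irr(m_ns, r_ns, mdot_ns, mdot2_abs, mdot_edd, r2, a, eta=0.1):
    l_x = G * m_ns * mdot_ns / r_ns                 # Eq. (5), accretion luminosity
    l = eta * l_x * (r2 / (2.0 * a))**2             # geometric dilution on the companion
    if mdot2_abs >= mdot_edd:                       # super-Eddington branch of Eq. (6)
        l *= np.exp(1.0 - mdot2_abs / mdot_edd)
    return l

mdot = 1.0e-9 * MSUN / YR                           # 1e-9 Msun/yr in g/s
mdot_edd = 3.0e-8 * MSUN / YR
print(f"L_irr = {l_irr(1.4 * MSUN, 1.68e6, mdot, mdot, mdot_edd, RSUN, 5.0 * RSUN):.2e} erg/s")
```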
### Spin evolution of neutron star
The spin evolution of an NS in a binary system can be affected by multiple factors: gravitational wave radiation and magnetic radiation (dipole and multi-pole) of the NS, spin-up and spin-down torques caused by the interactions between the transferred material and the magnetic field, etc. The gravitational wave and multi-pole radiation will not be discussed in this paper, since they may not dominate the energy loss (e.g., Kiziltan & Thorsett 2010; Patruno 2010).
We consider the accretion-induced magnetic field decay of the NS. The magnetic moment is calculated by the following formula (Shibazaki et al. 1989):
\[\mu=\mu_{0}\left(1+\frac{\Delta M_{\rm NS}}{10^{-4}M_{\odot}}\right)^{-1}, \tag{7}\]
where \(\mu_{0}\) is the initial magnetic moment and \(\Delta M_{\rm NS}\) is the mass accreted by the NS.
Interactions between the magnetic field and the transferred mass are characterized by three radii: the magnetospheric radius \(r_{\rm mag}\), the co-rotation radius \(r_{\rm co}\) and the light-cylinder radius \(r_{\rm lc}\). We calculate these radii by the following forms (Ghosh & Lamb 1979; Romanova et al. 2018; Yang & Li 2023):
\[r_{\rm mag}\simeq 1.5\times 10^{8}\left(\frac{\mu}{\mu_{0}}\right)^{4/7}\left( \frac{M_{\rm NS}}{M_{\odot}}\right)^{-1/7}\left(\frac{\dot{M}_{\rm NS}}{\dot {M}_{\rm Edd}}\right)^{-2/7}{\rm cm}, \tag{8}\]
\[r_{\rm co}\simeq 1.5\times 10^{8}\left(\frac{M_{\rm NS}}{M_{\odot}}\right)^{1 /3}\left(\frac{P_{\rm spin}}{1\;{\rm s}}\right)^{2/3}{\rm cm}, \tag{9}\]
\[r_{\rm lc}\simeq 4.8\times 10^{9}\frac{P_{\rm spin}}{1\;{\rm s}}\;{\rm cm}. \tag{10}\]
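Equations (7)-(10) can be evaluated together to see how accretion-induced field decay drives the magnetospheric radius toward the co-rotation radius; the sketch below is ours and purely illustrative, and the example (a recycled NS that has accreted \(0.1~M_{\odot}\), spinning at \(5~{\rm ms}\) and accreting at 10% of Eddington) is our own choice.

```python
# Illustrative sketch (ours): Eqs. (7)-(10), evaluated for an ad-hoc recycled-pulsar example.
def mu_decayed(mu0, delta_m_msun):
    """Eq. (7): accretion-induced decay of the magnetic moment."""
    return mu0 / (1.0 + delta_m_msun / 1.0e-4)

def r_mag(mu, mu0, m_ns_msun, mdot_over_edd):
    """Eq. (8): magnetospheric radius in cm."""
    return 1.5e8 * (mu / mu0)**(4.0 / 7.0) * m_ns_msun**(-1.0 / 7.0) * mdot_over_edd**(-2.0 / 7.0)

def r_co(m_ns_msun, p_spin_s):
    """Eq. (9): co-rotation radius in cm."""
    return 1.5e8 * m_ns_msun**(1.0 / 3.0) * p_spin_s**(2.0 / 3.0)

def r_lc(p_spin_s):
    """Eq. (10): light-cylinder radius in cm."""
    return 4.8e9 * p_spin_s

mu0 = 4.0e31
mu = mu_decayed(mu0, 0.1)
print(f"mu    = {mu:.2e} G cm^3")
print(f"r_mag = {r_mag(mu, mu0, 1.5, 0.1):.2e} cm")
print(f"r_co  = {r_co(1.5, 5.0e-3):.2e} cm   (close to r_mag: near spin equilibrium)")
print(f"r_lc  = {r_lc(5.0e-3):.2e} cm")
```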
Following Tauris (2012), the total torque acting on the NS is
\[N_{\rm total}=n\left(\dot{M}_{\rm NS}\sqrt{GM_{\rm NS}r_{\rm mag}}+\frac{\mu^ {2}}{9r_{\rm mag}^{3}}\right)+N_{\rm radio}, \tag{11}\]
where \(n\) is a dimensionless parameter,
\[n=\tanh\left(\frac{1-(r_{\rm mag}/r_{\rm co})^{3/2}}{0.002}\right). \tag{12}\]
\(N_{\rm radio}\) is the spin-down torque caused by magnetic dipole radiation. We calculate \(N_{\rm radio}\) as (Yang & Li 2023)
\[N_{\rm radio}=-\frac{\mu^{2}}{r_{\rm lc}^{3}}. \tag{13}\]
We assume that the NSs in our simulations have an initial spin period \(P_{\rm spin}=10\;{\rm s}\) and a constant moment of inertia \(I=10^{45}{\rm g\;cm^{2}}\). We also assume that all the transferred material is ejected from the magnetosphere during the propeller regime.
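To see how equations (11)-(13) act on the spin, one can evaluate the torque for representative numbers and convert it into a period derivative using the standard rigid-rotator relation \(\dot{P}_{\rm spin}=-P_{\rm spin}^{2}N_{\rm total}/(2\pi I)\); this relation is implicit in the constant moment of inertia quoted above but is not written out in the text, and all example numbers in the sketch below (an MSP-like field, \(r_{\rm mag}<r_{\rm co}\)) are our choice. The sketch is purely illustrative.

```python
# Illustrative sketch (ours): the torque of Eqs. (11)-(13) and the implied spin period
# derivative via Pdot = -P^2 N_total / (2 pi I), with I = 1e45 g cm^2 as assumed above.
import numpy as np

G, MSUN, YR = 6.674e-8, 1.989e33, 3.156e7
I_NS = 1.0e45                                        # moment of inertia, g cm^2

def n_switch(r_mag, r_co):
    """Eq. (12): smooth switch between accretion spin-up and propeller spin-down."""
    return np.tanh((1.0 - (r_mag / r_co)**1.5) / 0.002)

def n_total(mdot_ns, m_ns, mu, r_mag, r_co, r_lc):
    """Eq. (11), with the dipole spin-down torque of Eq. (13)."""
    n_acc = n_switch(r_mag, r_co) * (mdot_ns * np.sqrt(G * m_ns * r_mag) + mu**2 / (9.0 * r_mag**3))
    return n_acc - mu**2 / r_lc**3

m_ns, mu = 1.5 * MSUN, 3.0e26                        # ~1e8 G surface field (our choice)
r_m, r_c, r_l = 3.0e6, 5.0e6, 2.4e7                  # cm; r_mag < r_co: accretion regime
mdot_ns = 0.1 * 3.0e-8 * MSUN / YR                   # 10% of the Eddington rate, in g/s
p_spin = 5.0e-3
N = n_total(mdot_ns, m_ns, mu, r_m, r_c, r_l)
print(f"N_total = {N:.2e} erg,  Pdot_spin = {-p_spin**2 * N / (2.0 * np.pi * I_NS):.2e} s/s")
```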
## 3 Results
We calculated a series of models of binary systems with an initial companion star mass \(M_{2}=1.0\;M_{\odot}\) and initial orbital
Figure 1: Evolution of mass transfer rate value. The initial mass of companion star is \(M_{2}=1.0\;M_{\odot}\), initial orbital period is \(P_{\rm orb}=5.0\;{\rm days}\). Irradiation efficiencies are \(\eta=0\) (top) and \(\eta=0.1\) (bottom).
periods \(P_{\rm orb}=3.0,~{}5.0,~{}10.0,~{}20.0\) days for different values of the irradiation efficiencies \(\eta=0,~{}0.001,~{}0.01,~{}0.1\) and initial magnetic fields \(\mu_{0}\) ranging from \(10^{29}~{}{\rm G~{}cm^{3}}\) to \(10^{32}~{}{\rm G~{}cm^{3}}\). Here, for simplicity, we only show the results from two examples with \(\eta=0,~{}0.1\), both with \(P_{\rm orb}=5.0\) days and \(\mu_{0}=4.0\times 10^{31}~{}{\rm G~{}cm^{3}}\), and will give a brief discussion of the effect of other parameters in Section 4. Figure 1 shows the evolution of mass transfer rate value for different \(\eta\). For \(\eta=0.1\), the mass transfer occurs in a cyclic way instead of in a continuous form (as in the case of \(\eta=0\)). During the initial phase of Roche-lobe overflow (RLO), the mass transfer rate value \(\dot{M}_{2}\) increases, resulting in a corresponding increase in X-ray luminosity (\(L_{\rm X}\)). Irradiation causes the companion star to expand, leading to a higher mass transfer rate value \(\dot{M}_{2}\) compared to the system without irradiation. Detachment of the companion star from its Roche lobe occurs when the expansion due to irradiation is outweighed by the thermal relaxation of the companion star. Orbital angular momentum loss subsequently induces the companion star to reattach, initiating a new cycle of mass transfer, and so forth. Further insights about the mechanism of the mass transfer cycle can be found in Ritter et al. (2000). This phenomenon causes the system to constantly switch between the LMXB phase and the radio pulsar phase, which is quite different from that with \(\eta=0\).
Figure 2 shows the spin period evolution. The spin period evolution of the NS in an irradiated system is significantly different from that of a non-irradiated system. For a non-
Figure 4: Evolution tracks of radio phases in the \(P_{\rm spin}-\dot{P}_{\rm spin}\) diagram, corresponding to Figure 1. The blue solid and red dashed lines indicate \(\eta=0\) and \(0.1\), respectively. All line segments indicate \(r_{\rm mag}>r_{\rm lc}\). The initial values of magnetic fields for both evolutions are \(\mu_{0}=4.0\times 10^{31}~{}{\rm G~{}cm^{3}}\). Solid circles and triangles are the evolution start and end points, respectively. The green solid star presents PSR J1402+13. Black points are the current pulsar population, and data are from [https://www.atnf.csiro.au/research/pulsar/psrcat/](https://www.atnf.csiro.au/research/pulsar/psrcat/).
Figure 3: Evolution of the mass of NS corresponding to Figure 1. Blue solid and green dashed lines indicate irradiation efficiencies \(\eta=0\) and \(0.1\), respectively.
Figure 2: Evolution of spin period corresponding to Figure 1. Blue solid and red dashed lines indicate irradiation efficiencies \(\eta=0\) and \(0.1\), respectively. The initial values of magnetic fields for both evolutions are \(\mu_{0}=4.0\times 10^{31}~{}{\rm G~{}cm^{3}}\).
irradiated system, i.e., \(\eta=0\), the NS is actually in a spin-equilibrium state (i.e., \(r_{\rm mag}\simeq r_{\rm co}\)) during most of the first and the second RLO (see also Tauris et al., 2012), and repeatedly experiences minor propeller episodes during \(\sim 12.12-12.21~{}{\rm Gyr}\) and \(\sim 12.25-12.32~{}{\rm Gyr}\). On the other hand, for \(\eta=0.1\), irradiation-induced mass transfer cycles cause the NS to experience multiple propeller regimes and radio spin-down phases before the mass transfer fully stops. Due to the higher mass transfer rate, the NS can reach a shorter spin period.
Figure 3 shows the evolution of the mass of the NS. For \(\eta=0.1\), although the mass transfer rate value during RLO is larger than that in the case of \(\eta=0\), the total duration of RLO for the irradiated system is shorter than that for the non-irradiated system. Moreover, the accretion rate remains constrained by the Eddington limit. As a result, the final mass of the MSP in the irradiated system is smaller than that in the non-irradiated system.
In Figure 4, we show the evolutionary tracks of binary systems hosting a radio pulsar in the \(P_{\rm spin}-\dot{P}_{\rm spin}\) diagram. As in the previous figures, different line styles and colors indicate different values of the irradiation efficiency. The line segments in Figure 4 show the evolutionary path segments corresponding to the case \(r_{\rm mag}>r_{\rm lc}\), i.e., when the system hosts a radio MSP. Each line segment evolves from the upper left to the lower right. For \(\eta=0\), due to the interaction between the magnetic field and the transferred material, it is difficult for the NS to possess both a short spin period and a high spin period derivative at the same time during the radio phases. For \(\eta=0.1\), the tracks can cover a larger part of the \(P_{\rm spin}-\dot{P}_{\rm spin}\) diagram. Due to the irradiation-induced mass transfer cycles, the NS can reach a shorter spin period and undergo several radio phases before the mass transfer completely stops. This enables the NS to possess both a short spin period and a high spin period derivative during the radio phases. As we can see in Figure 4, our calculations show that the irradiated models can explain the high value of the spin period derivative of PSR J1402+13.
## 4 Discussion and Conclusions
In this work, we showed binary evolution calculations of a system composed of a canonical NS whose companion is a normal star with initial mass of \(M_{2}=1.0~{}M_{\odot}\) in an orbit with initial period of \(P_{\rm orb}=5~{}{\rm days}\). We considered both the irradiation effect and the propeller effect, tracking the spin period evolution of MSPs. Our goal was to provide an explanation for the formation of MSPs having high spin period derivatives.
In our simulations, the two most crucial free parameters affecting the spin evolution are the irradiation efficiency \(\eta\) and the initial magnetic field \(\mu_{0}\). Typically, \(\eta\) is believed to range from 0.01 to 0.1 in LMXBs (Buning & Ritter, 2004), while population synthesis suggests that most NSs may have an initial magnetic field of \(\mu_{0}\sim 10^{31}~{}{\rm G~{}cm^{3}}\)(Popov et al., 2010). The value of \(\eta\) affects the peak mass transfer rate during RLO and the number of mass transfer cycles (e.g., Benvenuto et al., 2014), which in turn determines the minimum spin period that an MSP can reach. Our additional tests (not shown here) suggest that NSs in binary systems with initial magnetic fields of \(\mu_{0}\sim 10^{31}-10^{32}~{}{\rm G~{}cm^{3}}\) may produce radio MSPs with high spin period derivatives. The exact relationship between the irradiation efficiency, the initial magnetic field of the NS, the initial orbital period, the initial mass of the companion star, and the spin period and spin period derivative of MSPs is complex and still needs to be investigated in detail in the future.
Due to the absence of other estimated parameters, we are currently unable to rigorously constrain the evolutionary history of PSR J1402+132. In the future, we intend to conduct more comprehensive investigations utilizing binary population synthesis, encompassing topics such as but not limited to the progenitor system of PSR J1402+13 and the birth rate of MSPs having high spin period derivatives.
Footnote 2: Although the catalog GalacticMSPs.txt lists an orbital period of 52.49 days, this parameter is absent from the ATNF catalog. We simply assume that the system is a binary and have not incorporated this value into our study, although our models can successfully reproduce this orbital period.
As shown in Figure 2 and Figure 3, the final spin period and mass of the MSP can be altered by the irradiation effect. Furthermore, the irradiation effect can modify the evolution track on the \(P_{\rm spin}-\dot{P}_{\rm spin}\) diagram, leading to a wider range of possible tracks. In conclusion, the canonical evolutionary theory faces difficulties in producing radio MSPs with high spin period derivatives, but the irradiation effect provides a potential way to overcome this problem.
## Acknowledgments
We are grateful to the anonymous referee for the constructive comments that helped us to improve the manuscript. S.L. would like to express sincere gratitude to Hai-Liang Chen, Zheng-Wei Liu, Zhen-Wei Li, Ph.Podsiadlowski, O.G.Benvenuto and A.J.Goodwin for their valuable helps and suggestions. This work is supported by the National Science Foundation of China and National Key R&D Program of China (No. 2021YFA1600403), the National Natural Science Foundation of China (Nos. 11973080, 12333008 and 12288102). X.M. acknowledges support from the Yunnan Ten Thousand Talents Plan - Young & Elite Talents Project, and the CAS 'Light of West China' Program. X.M. acknowledges support from International Centre of Supernovae, Yunnan Key Laboratory (No. 202302AN360001), the Yunnan Revitalization Talent Support Program-Science & Technology Champion Project (NO. 202305AB350003), Yunnan
Fundamental Research Projects (NO. 202201BC070003) and the science research grants from the China Manned Space Project.
|
2309.09843 | Instruction-Following Speech Recognition | Conventional end-to-end Automatic Speech Recognition (ASR) models primarily
focus on exact transcription tasks, lacking flexibility for nuanced user
interactions. With the advent of Large Language Models (LLMs) in speech
processing, more organic, text-prompt-based interactions have become possible.
However, the mechanisms behind these models' speech understanding and
"reasoning" capabilities remain underexplored. To study this question from the
data perspective, we introduce instruction-following speech recognition,
training a Listen-Attend-Spell model to understand and execute a diverse set of
free-form text instructions. This enables a multitude of speech recognition
tasks -- ranging from transcript manipulation to summarization -- without
relying on predefined command sets. Remarkably, our model, trained from scratch
on Librispeech, interprets and executes simple instructions without requiring
LLMs or pre-trained speech modules. It also offers selective transcription
options based on instructions like "transcribe first half and then turn off
listening," providing an additional layer of privacy and safety compared to
existing LLMs. Our findings highlight the significant potential of
instruction-following training to advance speech foundation models. | Cheng-I Jeff Lai, Zhiyun Lu, Liangliang Cao, Ruoming Pang | 2023-09-18T14:59:10Z | http://arxiv.org/abs/2309.09843v1 | # Instruction-Following Speech Recognition
###### Abstract
Conventional end-to-end Automatic Speech Recognition (ASR) models primarily focus on exact transcription tasks, lacking flexibility for nuanced user interactions. With the advent of Large Language Models (LLMs) in speech processing, more organic, text-prompt-based interactions have become possible. However, the mechanisms behind these models' speech understanding and "reasoning" capabilities remain underexplored. To study this question from the data perspective, we introduce instruction-following speech recognition, training a Listen-Attend-Spell model to understand and execute a diverse set of free-form text instructions. This enables a multitude of speech recognition tasks - ranging from transcript manipulation to summarization - without relying on predefined command sets. Remarkably, our model, trained from scratch on Librispeech, interprets and executes simple instructions without requiring LLMs or pre-trained speech modules. It also offers selective transcription options based on instructions like "transcribe first half and then turn off listening," providing an additional layer of privacy and safety compared to existing LLMs. Our findings highlight the significant potential of instruction-following training to advance speech foundation models.
Cheng-I Jeff Lai\({}^{1,2}\), Zhiyun Lu\({}^{l}\), Liangliang Cao\({}^{1}\), Ruoming Pang\({}^{1}\)\({}^{1}\)Apple \({}^{2}\)MIT CSAIL
Large Language Model, Speech Recognition, Speech Foundation Model, Instruction-Following
## 1 Introduction
The successes of Large Language Models (LLMs) in natural language tasks have prompted the speech community to develop speech foundation models that are able to process, reason over, and generate interleaving speech, audio, and text. They could be immensely useful for digital assistants, because speech foundation models provide a _versatile_ user interface that is natural, flexible, and powerful. For instance, to perform the task of _Please transcribe the audio_, speech foundation models can decode conditioned on the text query, instead of first processing the Natural Language Understanding query and then invoking an ASR system as the target action [1].
**Speech LLMs.** We use the term "Speech LLM" to denote models that integrate LLMs for speech and audio tasks [2, 3, 4, 5, 6]. We think of the development of these models as equipping _existing_ LLMs with additional perception capabilities. The underlying assumption of this new modeling paradigm is that pre-trained LLMs can enable new capabilities to speech and audio processing tasks that were previously unattainable: reasoning over speech perception, open-ended multi-modal response generation, and zero-shot task generalization via in-context learning. These models generally consist of three main components: (i) an encoder or discrete units tokenizer for speech perception, (ii) a pre-trained autoregressive language model as a decoder, and (iii) a fine-tuning stage focused on speech instructions, formulated as {speech, text instruction, model outputs}. Notably, they have demonstrated the ability for understanding, or "reasoning", over the speech and audio recording via text instructions [2]. This raises the question of how each component contributes to this remarkable capability.
**A Motivating Example.** Consider the simple text query: "Ignore speech." This is a straightforward command that should be easy for a speech foundation model to process, considering it merely requires the model to output an end-of-sentence ([EOS]) token. However, our experiments with opensourced models like Whisper [7] and LTU v2 [2] revealed that they fail to execute such simple commands, despite their impressive recognition and translation capabilities. This suggests the importance of (iii) instruction-following task constructions. In other words, however advanced "reasoning" capabilities these speech foundation models possess, it is unlikely they can execute unseen actions or tasks that were not present in training distributions.
This observation led us to develop a new kind of speech recognition model, one that is instruction-following by design. Conditioned on the speech recording, our model aims to understand and execute a wide range of ASR-related tasks based on free-form text instructions, all without degrading the default ASR capabilities. See Figure 1 for an illustration. Surprisingly, we find that a 224M parameter model _without_ pre-trained speech or text foundation models, the aforementioned (i) and (ii), can achieve these capabilities.
**Related Work.** Beyond Speech LLMs, there is a growing body of research integrating visual perception into text LLMs [8, 9, 10, 11, 12]. For speech prompting, WaVPrompt [13], SpeechPrompt [14, 15], and WhisperPrompt [16] leveraged pre-trained autoregressive models--namely GPT-2, GSLM [17], and Whisper--for task-specific prompting. Instruction-based training has gained traction in NLP [18, 19, 20, 21]. Different from them, we present an instruction-trained speech recognizer that does not rely on any pre-trained components.
**Paper Organization.** We begin by formulating the tasks illustrated in Figure 1 as a set of "skills," which we define in Section 2.1. Next, we outline the method for constructing instruction prompts for these skills in Section 2.2. In Section 3, we describe an implementation of an instruction-following ASR model that acquires these skills from scratch. Section 4 presents the performance of our model in instruction-following tasks. We provide concluding remarks in Section 5.
Figure 1: Instruction-trained speech recognizer reasons over free-from text instructions and performs the desired ASR-related actions.
## 2 Skills and instructions
### Defining Skills
We identify a set of ASR-related "skills" that our instruction-following models should collectively master: (1) speech transcription, (2) ignoring speech, (3) word replacement, (4) transcript manipulation, and (5) summarization or keyword spotting. The term "speech transcription" refers to standard ASR functionality, while "ignoring speech" implies that the model should output an [EOS] token without considering the audio. For "word replacement," the model is tasked to pinpoint targeted words and replace them as specified. We categorize this into sub-tasks: common word replacement (e.g., replace 'the' with 'a'), out-of-distribution (OOD) word replacement (e.g., replace 'the' with 'quokka'), and word deletion (e.g., remove 'the'). For "transcript manipulation," the model should perform actions like deletion or repetition in the transcript while preserving its accuracy. This is broken down into sub-tasks such as repetition, transcribing the first half, and transcribing the second half. Finally, "summarization and keyword spotting" require the model to convey the essence of the speech concisely, possibly reordering sentence structures. A comprehensive summary of these skills is provided in Table 1.
Although these skills may appear simple in comparison to open-ended dialogue generation, they encompass a broad range of functionalities in ASR: deletion (skills 2 and 4), substitution (skill 3), insertion (skill 4), and a combination (skill 5). Mastering these skills and dynamically switching between them while maintaining precise ASR is non-trivial. Please refer to Table 5 for examples of expected model outputs of each skill.
### Skill Constructions
**Dataset** We build our instruction-following templates on the Librispeech 960h training set [22] and evaluate the model's instruction-following performance on its dev and test sets.
**Constructing Instructions** For each skill or sub-task, we generate a set of diverse instructions, ranging from 100 to 600 prompts, via prompting GPT-4. We constructed the initial GPT-4 prompt based on task description generation prompts specified in SpeechGPT [3]. After careful inspection, we iteratively refined the GPT-4 prompts to improve instruction diversity. An example of this iterative process is listed in Table 2. This approach not only ensures diverse instructions but also narrows the scope of out-of-distribution instructions during inference, compelling the model to reason over the text query rather than memorizing it directly. Some examples of text instructions for each skill are highlighted in blue in Table 5.
**Constructing Targets** For skill (1), speech transcription, the original ASR transcript serves as the target. For skills (2) ignoring speech, (3) word replacement, and (4) transcript manipulation, we generate the target outputs through rule-based processing. For skill (5), summarization and keyword spotting, we use GPT-4 and GPT-3.5 to generate target summaries1. We found that selecting the shortest response from multiple GPT runs produces more robust target responses.
Footnote 1: We found the following prompt works well for prompting GPTs for summarization / keyword spotting: “Find the keyword in the following sentence: [original text]. Find the most important 3-5 keywords. Make sure to rephrase it “very succinctly*. Ideally less than 5.”
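To make the rule-based target construction for Skills (2)-(4) concrete, one possible implementation is sketched below; the function name, the word-level matching, and the half-splitting convention are our assumptions, not the paper's code.

```python
# Hypothetical rule-based target construction for Skills (2)-(4); the exact
# conventions (word-level matching, half split at len//2) are our assumptions.
def make_target(transcript: str, skill: str, old: str = "the", new: str = "a") -> str:
    words = transcript.split()
    if skill == "ignore":            # Skill (2): the model should emit only [EOS]
        return ""
    if skill == "replace":           # Skills (3a)/(3b): common or OOD word replacement
        return " ".join(new if w == old else w for w in words)
    if skill == "delete_word":       # Skill (3c): word deletion
        return " ".join(w for w in words if w != old)
    if skill == "repeat":            # Skill (4a): repetition
        return transcript + " " + transcript
    if skill == "first_half":        # Skill (4b): transcribe first half
        return " ".join(words[: len(words) // 2])
    if skill == "second_half":       # Skill (4c): transcribe second half
        return " ".join(words[len(words) // 2 :])
    return transcript                # Skill (1): plain transcription
```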
## 3 Instruction-Following ASR
**Model** Our model is based on the Listen, Attend, Spell (LAS) architecture [23]2. The LAS encoder employs a Conformer-L architecture and takes an 80-dimensional log-Mel filterbank as input, with SpecAugment applied [24]. The LAS decoder is a 12-layer Transformer LM
| Skills | Description / Sub-tasks | Num. of Instructions | Error Type |
| --- | --- | --- | --- |
| Speech Transcribe | Standard ASR | 500 | N/A |
| Ignore Speech | Outputs [EOS] token directly | 500 | Deletion |
| Word Replacement | (3a) Common word ("the" → "a"); (3b) OOD word ("the" → "quokka"); (3c) Word deletion ("the" → N/A) | 600 | Substitution |
| Manipulation | (4a) Repetition; (4b) Transcribe first half; (4c) Transcribe second half | 300 | Insertion; Deletion; Deletion |
| Summarization / Keyword Spotting | Extract key ideas and phrases | 100 | Mix |

Table 1: Summary of ASR-related Skills
| GPT-4 prompt | Num. of Instructions Generated in each Iteration |
| --- | --- |
| You are asked to come up with a set of 100 diverse task instructions about automatic speech recognition. Here are the requirements... [3] | 100 |
| Based on the above instructions, generate similar ASR instructions with the following two rules: 1. Succinct (1-4 words) 2. Contains similar vocab. | 50 |
| Generate another set of ASR instructions but with 5-8 words instead. | 50 |
| Generate commands that "enable the speech recognition capability". Be creative and be succinct, 1-5 words. | 50 |

Table 2: Iterative GPT-4 prompting for generating diverse skill instructions. Here we show the GPT-4 prompts adopted for retrieving the first 250 speech transcription skill instructions.
Figure 2: Our model follows “Listen-Attend-Spell” (LAS) architecture: an acoustic encoder and an autoregressive decoder with cross-attention to the encoder latents. The training objective is next token prediction over sampled prefix instructions and targeted text transcriptions. An end-of-turn ([EOT]) token is introduced to separate the two. In test-time decoding, prefix instructions are specified by users.
decoder with cross-attention to the encoder context vectors.
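For orientation only, a heavily reduced stand-in for this encoder-decoder layout could look as follows; the single `Conv1d` "encoder" is a placeholder for the Conformer-L, and all sizes and names are our illustrative assumptions rather than the authors' implementation.

```python
# A minimal, hypothetical stand-in for the LAS layout described above (PyTorch);
# the Conformer-L encoder is replaced by a single Conv1d purely for illustration.
import torch
import torch.nn as nn

class TinyLAS(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 256):
        super().__init__()
        self.encoder = nn.Conv1d(80, d_model, kernel_size=3, padding=1)  # placeholder encoder
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=12)       # 12-layer LM decoder
        self.proj = nn.Linear(d_model, vocab_size)

    def forward(self, fbank, tokens):
        # fbank: (batch, frames, 80) log-Mel features; tokens: instruction ⊕ [EOT] ⊕ target
        memory = self.encoder(fbank.transpose(1, 2)).transpose(1, 2)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1)).to(tokens.device)
        hidden = self.decoder(self.embed(tokens), memory, tgt_mask=mask)
        return self.proj(hidden)   # next-token logits for teacher-forced training
```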
**Training** During training, for each utterance, we randomly sample a text instruction and apply the corresponding operation to its transcript, such as deletion, repetition, substitution, or paraphrasing, as outlined in Section 2.2. Specifically, skill sampling is performed according to Equation 1, using weight ratios \(\alpha\) for speech transcription and \(\beta\) for summarization/keyword spotting. After selecting the sampled skill \(S\), we choose an instruction \(I\) from \(S\)'s corresponding list of instructions. The original transcript is then modified to produce \(T\), which is concatenated (denoted as \(\oplus\)) with \(I\) and an End of Turn [EOT] token to form the training sample \(Y\). The model is subsequently trained on \(Y\) using next-token prediction with teacher-forcing.
\[\text{Sample Skill:}\quad\quad S\sim\begin{cases}\text{Transcribe,}&P=\frac{\alpha}{\alpha+3+\beta}\\ \text{Ignore Speech,}&P=\frac{1}{\alpha+3+\beta}\\ \text{Word Replace,}&P=\frac{1}{\alpha+3+\beta}\\ \text{Manipulate,}&P=\frac{1}{\alpha+3+\beta}\\ \text{Summarize,}&P=\frac{\beta}{\alpha+3+\beta}\end{cases} \tag{1}\]
\[\text{Sample Instruction:}\quad\quad\quad\quad I\!\sim\!\text{ Skill Instruction List}(S) \tag{2}\]
\[\text{Target Transcript:}\quad
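To make the sampling of Eqs. (1)-(3) concrete, a minimal sketch of assembling one training string is given below; the skill labels, the default weight values, and the `apply_skill` callable (standing in for the rule-based/GPT target construction of Section 2.2) are our illustrative assumptions, not the paper's code.

```python
# Hypothetical assembly of one training sample Y = I ⊕ [EOT] ⊕ T per Eqs. (1)-(3);
# default alpha/beta values and skill labels are placeholders, not the paper's.
import random

def build_sample(transcript, instruction_lists, apply_skill, alpha=8.0, beta=1.0):
    skills = ["Transcribe", "Ignore Speech", "Word Replace", "Manipulate", "Summarize"]
    weights = [alpha, 1.0, 1.0, 1.0, beta]               # Eq. (1): P = weight / (alpha + 3 + beta)
    skill = random.choices(skills, weights=weights)[0]
    instruction = random.choice(instruction_lists[skill])  # Eq. (2)
    target = apply_skill(transcript, skill)                # Eq. (3): modified transcript T
    return f"{instruction} [EOT] {target}"                 # trained with next-token prediction
```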
which should have decoded the word as "merry." Lastly, we found a model trained only on Skill (4) diverged, whereas one trained on both Skill (1) and (4) did not, suggesting that speech transcription serves as a foundational skill from which others evolve. This indicates that skills are not likely learned in isolation; rather, the model seems to first acquire a proficiency in speech transcription, which then acts as a basis for the development of other, more specialized text manipulation skills.
### Discussions
The training of Speech LLMs typically involves three key components: (i) a pre-trained speech encoder or discrete units, (ii) a pre-trained text LLM like LLaMA [27] or PaLM [28], and (iii) speech instruction fine-tuning. The core contribution of this paper is to demonstrate that a relatively small model can effectively follow instructions without requiring components (i) or (ii). The skill set selection detailed in Section 2.1 serves to establish a clean and workable framework that emphasizes the importance of (iii). We now delineate some limitations of our current implementation of instruction-following LAS:
* **Instruction Generalization:** The model's ability to follow instructions is tightly linked to the training data, struggling with OOD instructions, such as those with unfamiliar vocabulary or sentence structures. For instance, in Table 4 row 3, the word "annihilate" was not seen in training and thus the wrong skill interpretation. Conversely, our model capably handles contrastive imperatives, a feature that stems from our instruction design in Section 2.2.
* **Task Generalization:** The model is limited to predefined tasks, Skills (1) to (5), and does not generalize well. For example, it cannot perform word replacement of "the" with "car".
* **Dialogue Engagement:** The model's responses are close-ended, lacking support for open-ended and multi-turn conversations, or clarification follow-up if instructions are ambiguous.
One way to address task generalization is to train the model on a more diverse set of instruction data. For example, exposure to numerous word replacement tasks could improve its ability to generalize to unseen target words. Both task and instruction generalization could benefit from the integration of text LLMs; for instance, initializing the LAS decoder with LLaMA could mitigate the challenges posed by OOD instructions and enhance instruction generalization. Training our instruction-following models on open-ended speech instructions, such as those in LTU [2], should also prompt the model to pick up dialogue engagement ability.
Another significant advantage of instruction-based speech recognition is its potential for enhancing safety and privacy. Our model is capable of selectively replacing target words or executing partial transcriptions. A natural extension of this capability would be to filter out sensitive information, such as profanities or personal names, a task not easily achievable via traditional ASR and LLM cascading pipelines.
## 5 Conclusion
This paper illustrates the viability and importance of instruction-based training [18, 19, 20, 21] for speech models, offering a straightforward framework for skill execution based on natural language prompts. Utilizing a small encoder-decoder model trained from scratch on Librispeech, we prove that understanding and executing free-form text instructions is feasible. Our carefully designed instructions elicited five key skills: speech transcription, ignoring speech, word replacement, manipulation, and summarization/keyword spotting. Evaluations indicate robust performance on both familiar and novel instructions without compromising ASR capabilities. Our study demonstrates the effectiveness of instruction-based speech recognition via well-crafted instruction templates.
\begin{table}
\begin{tabular}{l|l} \multicolumn{1}{c|}{**Skill 1: Speech Transcribe**} \\ \hline Decode the content of this audio. & \\ Start recognizing audio: & \\ Transcribe the following spoken words: & the influence with the timeaus has exercised upon posterity is due partly to a misunderstanding. \\ Analyze dialogue recording. & \\ List and all you do how the speech content. & \\ \hline Ignore the audio in this clip. & \\ Avoid interaction with this conversation. & \\ Omt the dialogue from this audio & N/A \\ Oute Recognition & \\ Overlook any notation of this conversation. & \\ \hline \multicolumn{1}{c|}{_Skill 3: Word Replacement_} \\ \hline Replace ‘the with ‘a’ as you listen. & a influence with a timeaus has exercised upon posterity is due partly to a misunderstanding. \\ Let us switch all’the ‘to a’,’ shall we? & influence with a timeaus has exercised upon posterity is due partly to a misunderstanding. \\ Transcribe, making ‘the ‘to ‘quoka’ in speech. & quokokink influence with quokokina |
2307.16571 | Thermodynamics of Dissipative Solitons | We establish a close analogy between the thermodynamics of the nonlinear
systems far from equilibrium and the dissipative solitons. Unlike the solitons
in the Hamiltonian systems, their dissipative counterpart looks like an
aggregation of bounded quasi-particles interacting on the short range, obeying
the Rayleigh-Jeans distribution, and possessing a temperature, entropy, and
other thermodynamic characteristics. This ensemble is confined by a collective
potential, which defines its negative chemical potential. Such a dissipative
soliton represents a strongly chirped pulse generated by a mode-locked laser
with the advantage of being energy scalable by the analogy with the
Bose-Einstein condensation from an incoherent ``basin.'' We demonstrate the
main limits of the dissipative soliton energy scaling which result from the
loss of internal soliton coherency and the thermalization due to nontriviality
of a ``free energy landscape.'' | Vladimir L. Kalashnikov, Alexander Rudenkov, Irina T. Sorokina | 2023-07-31T11:04:23Z | http://arxiv.org/abs/2307.16571v2 | # Thermodynamics of Dissipative Solitons
###### Abstract
We establish a close analogy between the thermodynamics of the nonlinear systems far from equilibrium and the dissipative solitons. Unlike the solitons in the Hamiltonian systems, their dissipative counterpart looks like an aggregation of bounded quasi-particles interacting on the short range, obeying the Rayleigh-Jeans distribution, and possessing a temperature, entropy, and other thermodynamic characteristics. This ensemble is confined by a collective potential, which defines its negative chemical potential. Such a dissipative soliton represents a strongly chirped pulse generated by a mode-locked laser with the advantage of being energy scalable by the analogy with the Bose-Einstein condensation from an incoherent "basin." We demonstrate the main limits of the dissipative soliton energy scaling which results from the loss of internal soliton coherency and the thermalization due to nontriviality of a "free energy landscape.
**Keywords: dissipative soliton, adiabatic theory, dissipative soliton resonance, thermodynamics of coherent structures, turbulence, thermalization**
*Corresponding author(s). E-mail(s): [email protected];
Contributing authors: [email protected];
[email protected];
\({}^{\dagger}\)These authors contributed equally to this work.
## 1 Introduction
Dissipative solitons (DS), particularly chirped dissipative solitons in lasers [1], have attracted significant research interest due to their intriguing properties and potential
applications in the physics of nonlinear phenomena far from equilibrium [2; 3; 4; 5]. The concept of DS encompasses various physical phenomena, including dissipative soliton resonance (DSR) in lasers [6; 7], Bose-Einstein condensates (BEC) [8; 9], astrophysics and cosmology [10; 11], dynamical processes in complex socio-technical systems [12], and the transition to turbulence [13], interweaving the concepts of partially coherent solitons and weak turbulence [14] with statistical mechanics and thermodynamics [15].
The statistical mechanics and thermodynamics of multi-mode dynamics in fiber optics have been the subject of extensive research. In particular, the interplay between nonlinear effects and multiple interacting spatial modes gives rise to rich phenomena that can be explored using statistical and thermodynamic approaches [16; 17]. Recently, a thermodynamic theory of highly multi-mode nonlinear optical systems has been conjectured [18]. This study explores the thermodynamics of complex systems with numerous interacting modes and is closely connected with the phenomena of soliton incoherence and turbulence [14; 19; 20].
The promising insight into the kinetic theory and thermodynamics of solitons appeared when the soliton spectra were connected to a thermalized Rayleigh-Jeans distribution characterizing turbulence (e.g., see [14; 17; 21; 22; 23; 24]), that reveals an internal connection between ordered (solitonic) and chaotic (collapsing) states in nonlinear systems [21; 22]. The exploration of partially coherent solitons as an ensemble of quasi-particles has offered valuable insights into their statistical behavior and the emergent thermodynamic properties inherent in these systems [23; 25; 26]. This approach allowed connecting the process of the optical soliton formation and its properties with the physics of BEC [27; 28] both in the weak [14] and strong [29] nonlinear regimes.
The fruitfulness of the statistical and thermodynamic viewpoints was demonstrated in the exploration of the DS self-emergence (or mode-locking self-start), which was interpreted as a first-order phase transition or a noise-induced escape from a metastable state [30; 31; 32]. An entanglement of the quantum noise, as a "basin" within which a soliton self-emerges [33], and its internal structure (e.g., [34]) could have a fundamental impact on developing the quantum theory of DS (for a short overview, see [13]).
The understanding and application of statistical and thermodynamic concepts to DS are still quickly evolving areas of research, and further investigations may shed light on the potential connections and implications of these concepts for DS. In particular, the energy scaling of DS could be limited by a transition from stable DS to turbulent states [35]. In this article, we briefly expound the adiabatic theory of DS, which demonstrates its close connection with the thermodynamics of incoherent solitons and turbulence. The principles of DS formation and DSR are reviewed, and the problem of DS self-emergence from quantum noise is analyzed. The behavior of the DS thermodynamic characteristics under energy scaling and the soliton thermalization are identified as the factors limiting the DS energy scalability.
## 2 Adiabatic theory of dissipative solitons
In contrast to the classical soliton of the nonlinear Schrodinger equation, which has a vast field of application ranging from biology and meteorology to photonics and field theory [36], DS has a nontrivial internal structure [3]. Such a structure, caused by internal energy flows, leads to a phase inhomogeneity characterized by the "chirp" \(\Psi=\frac{\partial^{2}\phi(t)}{\partial t^{2}}\). Here, \(\phi(t)\) is a phase, and \(t\) is a "local time", i.e., a time in the system coordinates comoving with DS1. The \(\Psi\) value provides the DS energy scalability (or a mass condensation for BEC) [1]. Focusing on strongly chirped DS allows the development of their adiabatic theory, the essence of which is as follows.
Footnote 1: This definition corresponds to photonics, for instance. For BEC, this corresponds to one of the spatial coordinates [28].
### DS parametric space and dissipative soliton resonance
We will base our analysis on the complex nonlinear cubic-quintic Ginzburg-Landau equation (CQGLE), which is a standard model for the study of a broad class of nonlinear phenomena far from equilibrium [3, 5, 37, 38]:
\[\frac{\partial}{\partial z}a(z,t)=-\sigma a(z,t)+\left(\alpha+i \beta\right)\frac{\partial^{2}}{\partial t^{2}}a(z,t)-\] \[-i\gamma P(z,t)\,a(z,t)+\kappa\left(1-\zeta P(z,t)\right)P(z,t) \,a(z,t)\,. \tag{1}\]
Here, \(z\) and \(t\) are the propagation coordinates and local time, respectively (see [28] for a comparison with the Gross-Pitaevskii equation). \(a(z,t)\) is a slowly-varying field envelope, and \(P(z,t)=|a(z,t)|^{2}\) is a power. The parameters are: \(\sigma\) is a saturable net gain, which consists of saturable gain minus loss coefficients; \(\alpha\) is an inverse squared bandwidth of a spectral filter; \(\beta\) is a group-delay dispersion (GDD) coefficient; \(\gamma\) is a self-phase modulation (SPM) coefficient; \(\kappa\) and \(\zeta\) describe a saturable nonlinear gain (self-amplitude modulation, SAM).
Below, we will consider the strongly chirped DS with the dimensionless chirp parameter \(\psi\propto\gamma^{2}/\beta\kappa\zeta\gg 1\). This requires satisfying the following **Proposition I**: \(\alpha/\beta,\kappa/\gamma\ll 1\), that is, the nondissipative factors dominate the dissipative ones. Since the large chirp means a fast phase change with \(t\), we can use **Proposition II (adiabatic approximation)**: \(\frac{\partial^{2}\sqrt{P}}{\partial t^{2}}\ll 1\), which means a slow change of the DS envelope in comparison with the phase change. Finally, **Proposition III**: \(C=\alpha\gamma/\beta\kappa\simeq 1\) will be used. It corresponds to proximity to the soliton or potential condition required for the Gibbs-like statistics [39] and DSR [40].
The stationary solution ansatz \(a(z,t)=\sqrt{P(t)}\,\mathrm{e}^{i\phi(t)-iqz}\) (\(q\) is a propagation constant or wave-number) reduces Eq. (2.1) to \(\left(\Omega(t)=\frac{d\phi(t)}{dt}\right)\):
\[2\beta P(t)\,\frac{d^{2}}{dt^{2}}P(t)-\beta\left(\frac{d}{dt}P(t)\right)^{2}+4 \alpha P(t)\,\Omega(t)\,\frac{d}{dt}P(t)+\]
\[+4P(t)^{2}\left(-\beta\Omega{(t)}^{2}+\alpha\frac{d}{dt}\Omega(t)- \gamma P(t)+q\right)=0, \tag{2}\] \[2\alpha P(t)\,\frac{d^{2}}{dt^{2}}P(t)-\alpha\left(\frac{d}{dt}P(t )\right)^{2}-4\beta P(t)\,\Omega(t)\,\frac{d}{dt}P(t)-\] \[-4P(t)^{2}\left(\kappa\zeta P(t)^{2}+\alpha\Omega{(t)}^{2}+ \beta\frac{d}{dt}\Omega(t)-\kappa P(t)+\sigma\right)=0, \tag{3}\]
that, after using the first and second propositions and some algebra, leads to [13, 38, 41]:
\[P(t)=-\frac{\beta\Omega{(t)}^{2}-q}{\gamma}, \tag{4}\] \[\frac{d}{dt}\Omega(t)=\frac{1}{3}\frac{\beta\kappa\zeta\left( \Delta_{\pm}^{2}-\Omega{(t)}^{2}\right)\left(\Xi_{\pm}^{2}+\Omega{(t)}^{2} \right)}{\gamma^{2}}, \tag{5}\]
where,
\[\Delta_{\pm}^{2}=\frac{\gamma P_{\pm}}{\beta}, \tag{6}\] \[\beta\Xi_{\pm}^{2}=\frac{\gamma}{\zeta}\left(1+C-\frac{5}{3} \zeta P_{\pm}\right),\] (7) \[P_{\pm}=\frac{3}{4}\frac{1-\frac{C}{2}\pm\sqrt{\left(1-\frac{C} {2}\right)^{2}-\frac{4\zeta\sigma}{\kappa}}}{\zeta}. \tag{8}\]
Here, \(\Delta_{\pm}\) is a cut-off frequency: \(\Omega(t)^{2}\leqslant\Delta_{\pm}^{2}\); \(\Xi_{\pm}\) is a "chemical potential" whose meaning will be made clear below, and \(P_{\pm}\) is a DS peak power. The \(\pm\)-signs correspond to the _two branches_ of DS solutions.
The appearance of the cut-off frequency \(\Delta_{\pm}\) follows from Eq. (4): \(P(t)\geq 0\) by definition, therefore \(\Omega(t)^{2}\leq\Delta_{\pm}^{2}=q/\beta\). Since \(\Omega(0)=0\) by definition, the DS wavenumber is \(q=\beta\Delta_{\pm}^{2}=\gamma P_{\pm}\), which has the same sign as that of the Schrodinger soliton (SS) but differs from it by a factor of two [42]. Since the sign of the GDD term in Eq. (2.1) is negative for the nonlinear Schrodinger equation, SS does not interact with the linear waves, which have the wavenumber defined by GDD2: \(k_{l}=\beta\omega_{l}^{2}\leq 0\neq q_{SS}=\gamma P(0)/2\). However, there is a _resonance with the linear waves_ for DS: \(q=k_{l}\), which defines the cut-off frequency \(\Delta_{\pm}=\sqrt{q/\beta}\)[44]. Thus, the existence of this dispersive resonance could be considered a key factor for DS formation (see Subsection 2.2).
Footnote 2: The linear waves obey the Bogoliubov (or Langmuir) dispersion relation [43, 24].
Eq. (5) results from a factorization procedure aimed to avoid a singularity \(\frac{d\Omega(t)}{dt}\rightarrow\infty\) (see [45] for details). This equation brings us to a spectral domain that is an advance of the theory considered. The assumption \(\psi\gg 1\) allows using the stationary phase approximation for the Fourier image of the DS amplitude:
\[e(\omega)=\sqrt{\frac{\beta}{\gamma}}\int_{-\infty}^{\infty}\mathrm{e}^{-i\omega t}\sqrt{\Delta_{\pm}^{2}-\Omega(t)^{2}}\,\mathrm{e}^{i\phi(t)}dt\approx \tag{9}\]
\[\approx\sqrt{\frac{6\pi\gamma}{\zeta\kappa}}\,\frac{\exp\left[\frac{3\gamma^{2}}{2\beta\kappa\zeta}\,\frac{i\omega^{2}}{\left(\Xi_{\pm}^{2}+\omega^{2}\right)\left(\Delta_{\pm}^{2}-\omega^{2}\right)}\right]}{\sqrt{i\left(\Xi_{\pm}^{2}+\omega^{2}\right)}}\,\mathcal{H}\left(\Delta_{\pm}^{2}-\omega^{2}\right).\]
Here, \(\mathcal{H}(x)\) is a Heaviside function, and the proportionality of the chirp \(\Psi\propto\gamma^{2}/\beta\kappa\zeta\) is visible. However, this chirp is inhomogeneous and cannot be compensated perfectly by some second-order GDD of the opposite sign for the DS compression without unavoidable energy loss. Maximum fidelity (or chirp homogeneity) of such compression corresponds to the condition \(\Delta_{\pm}^{2}=\Xi_{\pm}^{2}\)[38, 46].
Eq. (9) leads to two important conclusions: 1) **DS energy** can be expressed as
\[E=\frac{1}{2\pi}\int_{-\infty}^{\infty}p(\omega)d\omega=\frac{6\gamma\arctan \left(\frac{\Delta_{+}}{\Xi_{\pm}}\right)}{\zeta\kappa\Xi_{\pm}}, \tag{10}\]
where 2) **DS power**\(p(\omega)\) is
\[p(\omega)=|e(\omega)|^{2}=\frac{6\pi\gamma\mathcal{H}\big{(}\Delta_{\pm}^{2}- \omega^{2}\big{)}}{\zeta\kappa\left(\Xi_{\pm}^{2}+\omega^{2}\right)}. \tag{11}\]
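As a quick self-consistency check (ours, not part of the original paper), integrating the truncated Lorentzian of Eq. (11) numerically reproduces the closed-form energy of Eq. (10); the parameter values below are arbitrary placeholders in consistent normalized units.

```python
# Numerical check that Eq. (10) follows from integrating the spectrum of Eq. (11);
# all parameter values are arbitrary placeholders in consistent (normalized) units.
import numpy as np

gamma, zeta, kappa = 5.1, 1.0, 1.0     # SPM and SAM coefficients (only ratios matter here)
Delta, Xi = 1.0, 0.3                    # cut-off frequency and Lorentzian half-width

omega = np.linspace(-Delta, Delta, 200001)
p = 6 * np.pi * gamma / (zeta * kappa * (Xi**2 + omega**2))     # Eq. (11)
E_numeric = np.trapz(p, omega) / (2 * np.pi)                    # left-hand side of Eq. (10)
E_closed = 6 * gamma * np.arctan(Delta / Xi) / (zeta * kappa * Xi)
print(E_numeric, E_closed)   # the two values agree up to the discretization error
```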
Eq. (10) allows constructing the _master diagram_ representing the two-dimensional DS parametric space \(C\) vs. \(E\)[38, 45, 47]. Such a diagram shows the stability threshold (maximal) \(C(E)|_{\sigma=0}\), which is connected with the region of the \(+\)-solution. \(\sigma\) is positive above this threshold, i.e., the vacuum is unstable for a larger \(C\); the threshold value varies from 2 to 2/3 with energy. The next important component is the \(\pm\) border: the \(-\)-solution lies below the \(+\) one in the \(C\)-parameter. The maximal fidelity curve on the master diagram corresponds to the condition \(\Delta_{+}=\Xi_{+}\). Finally, the master diagram demonstrates a network of the isogains \(\Sigma=\frac{4\zeta\sigma}{\kappa}=const\).
One can see that the DS spectrum has a shape of the Lorentzian function of the \(\Xi_{\pm}\)-width truncated at \(\pm\Delta_{\pm}\). In fact, it reproduces a Rayleigh-Jeans distribution specific to turbulence [14, 22, 24]. As would be shown below, it is not only a formal analogy.
The next important consequence of the adiabatic theory of DS is a formulation of the **dissipative soliton resonance** (DSR) conditions [40]. DSR corresponds to a perfect DS energy scalability. Formally, DSR can be defined as \(\exists\,C^{*}:\lim_{C\to C^{*}}E=\infty\), i.e., there exists a set of \(C\)-parameters providing infinite energy asymptotics. The region of DSR belongs to the \(+\)-branch of DS, whose bottom border is defined by
\[\begin{array}{c}E=\frac{6\sqrt{2\gamma\beta}}{\kappa\sqrt{\zeta}}\frac{\arctan\left(\frac{\sqrt{3}\,\sqrt[4]{\Sigma}}{\sqrt{6-13\sqrt{\Sigma}}}\right)}{\sqrt{6-13\sqrt{\Sigma}}},\\ C=2-4\sqrt{\Sigma},\\ P_{0}=\frac{3\sqrt{\Sigma}}{2\zeta},\\ \Delta^{2}=\frac{3\gamma\sqrt{\Sigma}}{2\beta\zeta},\\ \Xi^{2}=\frac{\gamma}{2\zeta\beta}\left(6-13\sqrt{\Sigma}\right),\end{array} \tag{12}\]
so that DSR exists within \(\Sigma_{+}\in[0,36/169]\) and \(C_{+}\in[2/3,\,2/13]\). The main _visible signatures of the transition to DSR_ are: 1) \(\Xi_{+}<\Delta_{+}\), 2) \(\Delta_{+}\to const\), 3) \(P_{+}\to const\). The energy scaling proceeds through the DS width and chirp scaling.
The limiting case of DSR corresponding to \(\Sigma\to 0\) gives \(\Delta_{+}^{2}=\gamma/\beta\zeta\), \(P_{+}=\zeta^{-1}\), and \(\Xi_{+}=0\). However, the stored energy in a laser cavity with a period \(T_{cav}\) is confined by \((\zeta T_{cav})^{-1}\). This defines a minimum value of \(\Xi_{+}\):
\[1=\frac{6\gamma}{\kappa T_{cav}\Xi_{+}}\arctan\left(\frac{\sqrt{\frac{2\kappa} {3\zeta}}}{\Xi_{+}\sqrt{\alpha}}\right). \tag{13}\]
Eq. (13) gives a minimum of \(\Xi_{+}\) and demonstrates a balance of two dimensionless scales: \(\Xi_{+}T_{cav}\) and \(\Xi_{+}\sqrt{\alpha}\). Also, one should take into account that \(T_{cav}\) is confined above by a value of the gain relaxation time \(T_{r}\)[48], not to mention the fact that a laser average power \(\sim\zeta^{-1}\) is unreachable3. These facts put physical limitations on the DS energy scaling.
Footnote 3: For instance, these parameters equal approximately 4.9 \(\mu\)s and 30 kW for a Cr:ZnS laser, respectively [47].
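Since Eq. (13) is transcendental in \(\Xi_{+}\), the minimal value has to be found numerically; a possible root-finding sketch (ours, with placeholder parameter values in consistent normalized units) is given below.

```python
# Root-finding sketch for Eq. (13); gamma/kappa and kappa/zeta enter only as ratios,
# alpha_f and T_cav are placeholder values in consistent (normalized) time units.
import numpy as np
from scipy.optimize import brentq

gamma, kappa, zeta = 5.1, 1.0, 1.0
alpha_f, T_cav = 1.0e4, 8.1e7           # inverse squared filter bandwidth and cavity period

def balance(Xi):
    """Eq. (13) rewritten as balance(Xi) = 0; it is monotonic, so the root is unique."""
    return 1.0 - (6.0 * gamma / (kappa * T_cav * Xi)) * np.arctan(
        np.sqrt(2.0 * kappa / (3.0 * zeta)) / (Xi * np.sqrt(alpha_f)))

Xi_min = brentq(balance, 1e-12, 1.0)    # minimal chemical potential Xi_+
```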
### DS without a spectral dissipation
As was mentioned above, the dispersive resonance \(\beta\omega_{l}^{2}=q\), which defines the cut-off frequency \(\Delta_{\pm}\), is a forming factor for DS. As another forming factor, one may point out the energy balance between the spectral loss and the nonlinear gain: \(\alpha\Delta_{\pm}^{2}\simeq\kappa P_{\pm}\). Thus, in combination with the dispersive resonance condition, one has \(C=\alpha\gamma/\beta\kappa\simeq 1\), which is close to the DSR condition and corresponds to Proposition III [35]. The importance of spectral filtering as an additional SAM mechanism was pointed out in [49]: the chirp scatters the frequencies to the DS front and tail, where they undergo the spectral loss, which confines the DS in the temporal domain. However, we do not consider this argument to be comprehensive. The point is that Eq. (1) has DS solutions for \(\alpha=0\): a spike on a constant background and a tabletop or truncated spike [50]. Repeating the approach based on the adiabatic theory gives a truncated convex spectrum similar to that in [51]:
\[p(\omega)=\frac{6\pi\beta}{\kappa}\left|\frac{\Delta^{2}-\omega^{2}}{(\Xi^{2 }+\omega^{2})\left(\frac{3\Delta^{2}\zeta\beta}{\gamma}-1\right)}\right|, \quad\omega^{2}<\Delta^{2}, \tag{14}\]
and the DS parameters are:
\[P_{\pm} =\frac{2\left(1+\sqrt{-\Sigma+1}\right)}{3\zeta}, \tag{15}\] \[\beta\Delta_{\pm}^{2}=\gamma P_{\pm}^{2},\] \[\Xi_{\pm} =\frac{6\sqrt{1-\Sigma}\pm 7\Sigma\mp 6}{\pm 9+18\sqrt{1-\Sigma}}\]
One should note that the spectrum (14) is not Rayleigh-Jeans-like and, thus, is not thermalized [20]. One may conjecture that this exotic regime could shed light on the mechanisms of DS formation that need numerical exploration. For this goal, we solved
Eq. (1) numerically using the FFT split-step algorithm and taking into account the net-gain dynamics \(\sigma=\delta(E/E_{cw}-1)\) in (1) (\(\delta\) is a "stiffness" parameter and \(E_{cw}\) is the continuous-wave energy; see [52]).
We considered a time window covered by \(N=2^{16}\)-cells with the \(\Delta t=10\) fs time-cell size. Eq. (1) was supplemented by an additive complex Gaussian white noise term \(\Gamma(z,m\Delta t)\) (\(m\in[1...N]\) is a cell number)4 with the covariance [53]:
Footnote 4: Here and below, \(\Delta t\) and \(\Delta\omega\) mean the cell sizes in the temporal and spectral domains, respectively.
\[\begin{split}\langle\Gamma_{m}(z)\Gamma_{n}^{*}(z^{\prime}) \rangle=\mathcal{W}\delta_{mn}\delta(z-z^{\prime})\\ \langle\Gamma_{m}(z)\Gamma_{n}(z^{\prime})\rangle=0,\\ \mathcal{W}=2\gamma h\nu|\sigma|/T_{cav}.\end{split} \tag{16}\]
Here \(\mathcal{W}\) is a normalized noise power and \(\nu\) is a carrier frequency. \(\mathcal{W}\) can be associated with a "bath temperature"5[33]. The Euler-Maruyama method was used for the solution of (Eq. (1) + \(\Gamma(z,t)\)). The initial condition is the noise (16) supplemented by a _sech_-like spike with an amplitude of twice the significant noise height and a width of \(10\Delta t\).
Footnote 5: This concept of an added noise has no to be missed with the noise temperature of optical amplifier: \(T_{n}=\frac{\hbar\omega}{k_{B}}\left[\log(1+\frac{1-e^{-\hbar\nu}/k_{B}T}{1-| \sigma|^{-2}})\right]^{-1}\)[54].
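A schematic single propagation step of such a split-step/Euler-Maruyama scheme could look as follows; this is our minimal illustration (first-order splitting, simplified noise normalization, and no net-gain update), not the production solver used for the figures.

```python
# Minimal split-step + Euler-Maruyama step for Eq. (1) with the additive noise of
# Eq. (16); first-order splitting and a simplified noise normalization (our choice).
import numpy as np

def cqgle_step(a, dz, dt, sigma, alpha, beta, gamma, kappa, zeta, W):
    n = a.size
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    # linear sub-step: net gain/loss, spectral filtering and GDD in the Fourier domain
    a = np.fft.ifft(np.fft.fft(a) * np.exp(dz * (-sigma - (alpha + 1j * beta) * omega**2)))
    # nonlinear sub-step: SPM and saturable self-amplitude modulation in the time domain
    P = np.abs(a) ** 2
    a = a * np.exp(dz * (-1j * gamma * P + kappa * (1.0 - zeta * P) * P))
    # additive complex Gaussian white noise term (Euler-Maruyama increment)
    noise = np.sqrt(W * dz / 2.0) * (np.random.randn(n) + 1j * np.random.randn(n))
    return a + noise
```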
The numerical temporal and spectral profiles of DS in the absence of spectral dissipation (i.e., \(\alpha=0\) in Eq. (1)) are shown in Fig. 1. The numerical spectra and temporal profiles demonstrate the characteristic signatures of DSR: the flat-top shape of DS and the finger-like spectrum \(\Xi^{2}\ll\Delta^{2}\). The spectral peculiarities are i) a pronounced spectral condensation around \(\omega=0\) and ii) an energy transport to the higher frequencies which can result in an appearance of the modulated spectral pedestal (Fig. 1,b). The last corresponds to an appearance of perturbation spikes on the DS edges (Fig. 1, a).
The presence of noise changes the DS temporal profile and the central part of its spectrum only slightly, but the energy transport to the spectral wings is enhanced. Thus, we can conclude that such an exotic chirped DS is (quasi-)stable, and the dispersion relation \(\beta\Delta^{2}=\gamma P(0)^{2}=\beta\omega_{l}^{2}\) plays a major role in the DS formation. Such a relation describes a "height of the collective confining potential" [20, 55], providing a DS temporal and spectral localization and characterized by a minimal correlation scale of DS \(\ell\propto|\Delta|^{-1}\). The DS spectrum (Fig. 1, b), which is analogous to a turbulence spectrum [20, 22]6, could be explained by the existence of two cascades in the spectral domain7, like those in turbulence [24, 43]. Nevertheless, the spectral cut-off due to \(\alpha\neq 0\) could stop the spectral outflow8 and, thereby, make the DS robust [49] through "kinetic cooling," as in a low-dissipative BEC [28]. In this sense, the spectral dissipation can be treated as an additional mechanism of SAM, which "strengthens" the effective potential border described above by the interrelation \(\beta\Delta^{2}=\gamma P_{0}\iff\alpha\Delta^{2}=\kappa P_{0}\).
Footnote 6: The internal connection between chaotic and ordered states in the nonlinear systems like those considered here was pointed in a lot of works. As an example, see [56].
## 3 DS self-emergence
The DS self-emergence is crucial both practically and theoretically. In laser physics, such an emergence from a noisy "bath" is named a mode-locking self-start. Different mechanisms for the spontaneous appearance of DS have been proposed; in particular, the spontaneous growth of a field fluctuation from the beat note of the free-running spectrum, enhanced by the self-induced refractive index grating [57; 58]. As was conjectured, the multimode Risken-Nummedal-Graham-Haken instability can result in spontaneous (but unstable) self-mode-locking [59], which can be stabilized by the resonant coupling with the phonons [60]. Also, the dynamic gain saturation can play an important role in the fluctuation enhancement required for the DS self-emergence [61]. The mode-locking self-start could also be treated as a dynamic loss of the continuous-wave stability [62; 63]. Nevertheless, we think that the most promising and general approach to the problem of the DS self-emergence is the thermodynamic approach developed in [32; 33; 64], which considers this phenomenon as a first-order phase transition or an escape from a metastable state [65].
We base our analysis on the numerical simulations of Eq. (2.1) with the additive complex noise described above. Statistics were gathered over \(\sim\)100 independent samples for each parameter set. The initial condition was chosen in the form of the quantum noise (16) supplemented by a seed \(\mathcal{W}^{\prime}\,\mathrm{sech}\,(t/10\Delta t)\), where the seed amplitude \(\mathcal{W}^{\prime}\) corresponds to the standard rogue wave definition, namely twice the significant wave height [66].
As the basic characteristics, we used a mean self-start time \(<\tau>\) and its standard deviation \(S\). The DS characteristics were characterized by the order parameter \(\wp\)[31; 33; 67]:
Figure 1: Dimensionless power \(P(t)\) (a) and spectral power \(p(\omega)\) (b) for DS without (black) and with (red) an additive Gaussian white noise. The parameters correspond to Ref. [22]: \(\beta=220\) fs\({}^{2}\), \(\gamma=5.1\) MW\({}^{-1}\), \(\kappa=4\) MW\({}^{-1}\), \(\zeta=30\) MW\({}^{-1}\), the central wavelength equals 2.27 \(\mu\)m, and \(E_{cw}=150\) nJ. \(z=15000\), \(\delta=0.04\), and the DS energy \(E\approx 38\) nJ for the 15% output coupler. No spectral dissipation (\(\alpha=0\)).
\[\wp=\frac{1}{2}\left\langle{}_{4}\right|\frac{\sum_{n=1}^{N}\left|a_{n}\right|^{4} }{\left(\sum_{n=1}^{N}\left|a_{n}\right|^{2}\right)^{2}}\right\rangle, \tag{17}\]
where we use the decomposition for a field on the mesh \(n\in[1,N]\): \(a(t)\to a_{n}=|a(t_{n})|e^{i\phi_{n}}\), and the averaging over an ensemble of different samples is assumed. This parameter shows a fraction of the synchronized modes related to the overall mode amount \(N\).
Like the definition (17), one may define the system order parameter as
\[M=\sigma/\delta=E/E_{cw}-1, \tag{18}\]
which shows a level of the DS energy domination over the continuous wave. Such domination results from wave condensation caused by nonlinearity (cubic-quintic terms in Eq. (1)).
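For a single stochastic sample, the core quantities entering Eqs. (17)-(18) can be evaluated as sketched below (ours); since the overall prefactor of Eq. (17) is ambiguous in the printed formula, the sketch computes only the mode-participation ratio at its core and omits the ensemble averaging.

```python
# Sketch of the order parameters: the mode-participation ratio at the core of Eq. (17)
# (prefactor and ensemble average omitted) and the system order parameter M of Eq. (18).
import numpy as np

def participation_ratio(a):
    """sum |a_n|^4 / (sum |a_n|^2)^2 for one field snapshot a(t_n)."""
    p2 = np.abs(a) ** 2
    return np.sum(p2**2) / np.sum(p2) ** 2

def system_order_parameter(E, E_cw):
    return E / E_cw - 1.0          # Eq. (18): M = sigma/delta
```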
The examples of self-start dynamics are shown in Fig. 2. We emphasize the following peculiarities of such dynamics. i) There are two thresholds: the first one corresponds to an appearance of CW-radiation at an early stage of the dynamics, and the second corresponds to a mode-locking self-start at a later stage. ii) The self-start is "smooth" for low energies, i.e., the transition from CW to mode-locking resembles a second-order phase transition (black curve). iii) The energy growth leads to a "tough" mode-locking start (first-order phase transition) with an appearance of a Q-switch spike (spikes) (red curve). iv) Further energy scaling leads to double-pulsing because the chosen \(C\) does not fit the DSR condition, which requires \(C<2/3\) (Eq. (12)). For some statistical samples, a single DS with a noisy background can develop (Fig. 3). The evolution of the \(\wp\)-parameter for a given \(E_{cw}\) is shown by open red circles in Fig. 2.
Under DSR conditions (i.e., \(2/13<C<2/3\)), further energy scaling is possible. Figure 4 demonstrates the energy dependence of \(<\tau>\), \(S\), \(M\), and \(\wp\), as well as the DS temporal (\(T\)) and spectral (\(\Omega\)) FWHM widths for a mode-locking regime. Two main regimes corresponding to the "\(-\)" and "\(+\)"-branches of DS (see Subsection 2.1) are clearly visible: the first (low-energy) one corresponds to a decreasing \(T\) and an increasing \(\Omega\) with the energy growth, while the second demonstrates the pulse width stretching and the \(\Omega\)-decrease (i.e., the parameter \(\Xi<\Delta\) and decreases) with energy. The division between these regimes is manifested approximately by the maximum order parameters. It is the so-called maximum fidelity point corresponding to the equality \(\Delta=\Xi\), which means the coincidence of the short- and long-range correlation times characterizing DS [47].
The self-starting time \(<\tau>\) and the corresponding standard deviation \(S\) decrease with energy but then begin to increase (Fig. 4, a). This phenomenon limiting the DS energy scalability will be considered in the next section.
## 4 DS thermalization and free energy landscape
As was pointed out in Subsection 2.1, the DS spectrum reproduces a Rayleigh-Jeans distribution characterizing an incoherent soliton [14]. Such a parallel is based on the phase inhomogeneity of strongly chirped DS, which allows treating it as an ensemble
of "quasiparticles" confined by collective potential [23]. This treatment leads to the following definitions of thermodynamic characteristics of an _isolated_ DS9[15].
Footnote 9: “isolated” in the sense that DS is connected with an environment only through the dissipative “channels” defined by Eq. (2.1) without taking into account a possible dynamics of “basin” (“vacuum”), appearance of multiple DSs, etc.
1) **DS "temperature"**:
\[\Theta=6\pi\gamma/\zeta\kappa, \tag{19}\]
Figure 3: DS power \(P(t)\) (a) and spectrum \(p(\omega)\) (b) for one of the stochastic samples at \(E_{cw}=\)350 nJ and other parameters of Fig. 2. The noisy pedestal (a) and corresponding modulated spectrum (b) are clearly visible.
Figure 2: Evolution of the intra-cavity energy \(E_{in}\) with \(z\). The laser setup corresponds to that described in [47]: Cr:ZnS active medium with an effective gain bandwidth 200 nm, \(\gamma=\)5.1 MW\({}^{-1}\), \(\kappa=\zeta=\)1 MW\({}^{-1}\), \(\beta=\)880 fs\({}^{2}\), \(C=\)1.08, and \(\delta=0.04\). The curves correspond to \(E_{cw}=30\) (black curve), 70 (red), and 400 (blue) nJ, respectively, for one of the stochastic samples. Inset shows the dimensional power profile of an intra-cavity field at \(z=\)5000 \(T_{cav}\) for the blue curve. Red open circles show an evolution of the \(\wp\)-parameter along the red solid curve for \(E_{cw}=\) 70 nJ.
2) negative **chemical potential**:
\[-\mu=\Xi^{2}, \tag{20}\]
3) **Boltzmann entropy**:
\[S_{B}=\int_{-\Delta}^{\Delta}\ln p(\omega)d\omega, \tag{21}\]
4) **Shannon entropy**:
\[S_{Sh}=-\int_{-\Delta}^{\Delta}p(\omega)\ln p(\omega)d\omega, \tag{22}\]
5) **internal energy**:
\[U=\int_{-\Delta}^{\Delta}\omega^{2}p(\omega)d\omega, \tag{23}\]
6) and **free energy**:
\[{\cal F}=U-\Theta S_{B}. \tag{24}\]
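The quantities (19)-(24) are straightforward to evaluate from the truncated Rayleigh-Jeans spectrum of Eq. (11); the sketch below (ours, with arbitrary placeholder parameters) simply makes the definitions explicit.

```python
# Illustrative evaluation of the DS thermodynamic quantities (19)-(24) from the
# spectrum of Eq. (11); parameter values are arbitrary placeholders (our choice).
import numpy as np

gamma, zeta, kappa = 5.1, 1.0, 1.0
Delta, Xi = 1.0, 0.3

omega = np.linspace(-Delta, Delta, 200001)
p = 6 * np.pi * gamma / (zeta * kappa * (Xi**2 + omega**2))   # Eq. (11)

Theta = 6 * np.pi * gamma / (zeta * kappa)                    # Eq. (19): "temperature"
mu = -Xi**2                                                    # Eq. (20): chemical potential
S_B = np.trapz(np.log(p), omega)                               # Eq. (21): Boltzmann entropy
S_Sh = -np.trapz(p * np.log(p), omega)                         # Eq. (22): Shannon entropy
U = np.trapz(omega**2 * p, omega)                              # Eq. (23): internal energy
F = U - Theta * S_B                                            # Eq. (24): free energy
```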
Fig. 5 (a) demonstrates the examples of the "internal" entropy \(S_{B}\) and free energy calculated from these formulas. One can see that the entropy is minimal in the vicinity of the maximum order parameter and grows with the energy scaling. Also, it increases with "temperature" \(\Theta\propto 1/\kappa\) (dotted black curve \(1^{\prime\prime}\)). Simultaneously, the chemical potential \(\Xi\) decreases (see the decreasing \(\Omega_{FWHM}\) in Fig. 4 (b)) so that negative free energy tends to \((2\Delta-S_{B})\Theta\). All these factors testify that DS diminishes its ability
Figure 4: Energy dependencies of (a): averaged DS self-starting time \(<\tau>\) (black curves 1), its standard deviations \(S\) (red curves 2) and order parameter \(\sigma/\delta\) (blue lines 3); (b): DS width \(T\) (black lines), FWHM spectral width \(\Omega_{FWHM}\) (red lines), and order parameter \(\wp\) (blue lines). Energy is normalized to \(\kappa\sqrt{\zeta/\beta\gamma}\), frequency is normalized to \(\sqrt{\beta\zeta/\gamma}\), and time is normalized to \(\kappa/\sqrt{\beta\zeta\gamma}\). The physical parameters are close to those in Ref. [47]: \(\gamma\)=5.1 MW\({}^{-1}\), \(\kappa=1\) MW\({}^{-1}\) (solid curves 1, 2, 3 and dashed curves \(1^{\prime}\), \(2^{\prime}\), \(3^{\prime}\)) and 0.5 MW\({}^{-1}\) (dotted curves \(1^{\prime\prime}\), \(2^{\prime\prime}\), \(3^{\prime\prime}\)). \(C\)=1.08 (solid curves 1, 2, 3) and 0.54 (dashed curves \(1^{\prime}\), \(2^{\prime}\), \(3^{\prime}\) and dotted curves \(1^{\prime\prime}\), \(2^{\prime\prime}\), \(3^{\prime\prime}\)). Spectral filter bandwidth equals 200 nm. The energy scaling is performed by the \(T_{cav}\) variation assuming \(E_{cw}\)=150 nJ for \(T_{cav}\)=81 ns. Points show \(<\tau>\) for the dual DSs and \(C\)=0.54, \(\kappa=1\) MW\({}^{-1}\).
to concentrate energy with the \(E_{cw}\)-growth. This could limit the energy scaling due to a transition to turbulence [15].
However, our analysis demonstrates that the energy scaling is limited by a stronger factor, namely thermalization. If we consider that DS emerges from an initial quantum noise ("vacuum fluctuations"), the excitation of the latter could play a decisive role. That is, if the vacuum is sufficiently "hot" (i.e., \(E_{cw}\) is large), multiple DSs can emerge in the different stochastic samples. Fig. 5 (b) illustrates this situation: above some maximum energy, multiple DSs begin to develop, and their number and contribution increase with energy, which leads to an energy thermalization in a stochastic DS ensemble. It should be noted that the DS in some concrete stochastic samples is dynamically stable.
Qualitatively, this phenomenon could be explained by treating the DS self-emergence as a noise-induced escape from a metastable state [68; 32]. From this point of view, the DS self-emergence corresponds to a transition from the metastable state corresponding to CW generation (see Fig. 2) to a “deeper” minimum of the free energy corresponding to the DS. In reality, a multitude of such local minima, corresponding to different numbers of DSs, can co-exist, so that the system evolves to one of them stochastically, and the escape rate depends on the potential barrier height and the noise temperature (which is \(\propto E_{cw}\) in our case). Black points in Fig. 4 (a) demonstrate the growth of the escape time (i.e., a decrease of the DS self-building time) so that the multi-soliton regimes begin to prevail over the one-soliton operation. In particular, a nontrivial “geography” of the free-energy landscape is illustrated by the spontaneous switching between DS and SS in a laser system with higher-order group-delay dispersion (i.e., an \(\omega\)-dependent \(\beta\)), shown in Fig. 6. A thorough examination of this conjecture awaits further study.
Figure 5: (a): Entropy dependence on the normalized energy (black curves) for the parameters of the black curves in Fig. 4; red curves show the same for the free energy. (b): Percentage of multiple DSs in an ensemble of 100 stochastic samples (the parameters are the same as in (a)).
## 5 Conclusion
The adiabatic theory of DS provides us with deep insight into the DS properties and the mechanisms of its formation. In particular, it demonstrates that these mechanisms resemble those of incoherent solitons and turbulence. This allows applying the methods of statistical mechanics and thermodynamics to the exploration of DS and, in particular, to defining the limits of its energy scalability or, in other words, of the DSR existence. We see two main such factors: i) the growth of the internal DS entropy in parallel with the vanishing of its chemical potential, and ii) the nontriviality of the free-energy landscape, which leads to the DS thermalization, i.e., multiple DS generation.
Acknowledgments.The work is supported by the Norwegian Research Council projects #303347 (UNLOCK), #326503 (MIR), and by ATLA Lasers AS. We acknowledge using the IDUN cluster [69].
|
2309.04156 | Cross-Utterance Conditioned VAE for Speech Generation | Speech synthesis systems powered by neural networks hold promise for
multimedia production, but frequently face issues with producing expressive
speech and seamless editing. In response, we present the Cross-Utterance
Conditioned Variational Autoencoder speech synthesis (CUC-VAE S2) framework to
enhance prosody and ensure natural speech generation. This framework leverages
the powerful representational capabilities of pre-trained language models and
the re-expression abilities of variational autoencoders (VAEs). The core
component of the CUC-VAE S2 framework is the cross-utterance CVAE, which
extracts acoustic, speaker, and textual features from surrounding sentences to
generate context-sensitive prosodic features, more accurately emulating human
prosody generation. We further propose two practical algorithms tailored for
distinct speech synthesis applications: CUC-VAE TTS for text-to-speech and
CUC-VAE SE for speech editing. The CUC-VAE TTS is a direct application of the
framework, designed to generate audio with contextual prosody derived from
surrounding texts. On the other hand, the CUC-VAE SE algorithm leverages real
mel spectrogram sampling conditioned on contextual information, producing audio
that closely mirrors real sound and thereby facilitating flexible speech
editing based on text such as deletion, insertion, and replacement.
Experimental results on the LibriTTS datasets demonstrate that our proposed
models significantly enhance speech synthesis and editing, producing more
natural and expressive speech. | Yang Li, Cheng Yu, Guangzhi Sun, Weiqin Zu, Zheng Tian, Ying Wen, Wei Pan, Chao Zhang, Jun Wang, Yang Yang, Fanglei Sun | 2023-09-08T06:48:41Z | http://arxiv.org/abs/2309.04156v2 | # Cross-Utterance Conditioned VAE
###### Abstract
Speech synthesis systems powered by neural networks hold promise for multimedia production, but frequently face issues with producing expressive speech and seamless editing. In response, we present the Cross-Utterance Conditioned Variational Autoencoder speech synthesis (CUC-VAE S2) framework to enhance prosody and ensure natural speech generation. This framework leverages the powerful representational capabilities of pre-trained language models and the re-expression abilities of variational autoencoders (VAEs). The core component of the CUC-VAE S2 framework is the cross-utterance CVAE, which extracts acoustic, speaker, and textual features from surrounding sentences to generate context-sensitive prosodic features, more accurately emulating human prosody generation. We further propose two practical algorithms tailored for distinct speech synthesis applications: CUC-VAE TTS for text-to-speech and CUC-VAE SE for speech editing. The CUC-VAE TTS is a direct application of the framework, designed to generate audio with contextual prosody derived from surrounding texts. On the other hand, the CUC-VAE SE algorithm leverages real mel spectrogram sampling conditioned on contextual information, producing audio that closely mirrors real sound and thereby facilitating flexible speech editing based on text such as deletion, insertion, and replacement. Experimental results on the LibriTTS datasets demonstrate that our proposed models significantly enhance speech synthesis and editing, producing more natural and expressive speech.
speech synthesis, TTS, speech editing, pre-trained language model, variational autoencoder.
## I Introduction
The advent of neural network speech systems has seen widespread adoption across diverse sectors, including social media, gaming, and film production, particularly in the realm of dubbing. These sectors often require frequent modifications to textual content to cater to user preferences, leading to an increased demand for speech generation systems that deliver high levels of naturalness and expressiveness.
Recent advancements in Text-to-Speech (TTS) systems have leveraged their powerful expressive potential to synthesize missing or inserted words based on text transcription and original audio. These systems explicitly utilize either style tokens or variational autoencoders (VAEs) [1, 2] to encapsulate prosody information into latent representations. Fine-grained prosody modeling and control have been achieved by extracting prosody features at the phoneme or word-level [3, 4, 5]. However, VAE-based TTS systems often lack control over the latent space, as the sampling during inference is performed from a standard Gaussian prior. To address this, recent research has employed a conditional VAE (CVAE) [6] to synthesize speech from a conditional prior.
Unlike TTS, which demands extensive high-quality training data, text-based speech editing (SE) capitalizes on raw audio during inference, promoting efficient speech generation at lowered costs. However, most existing speech editing methods, such as VoCo [7], EditSpeech [8], CampNet [9], and A\({}^{3}\)T [10], heavily rely on partially inferred TTS systems. These methods might cause discontinuities at editing junctions and challenges in capturing shifts in tone and prosody. In particular, modifying the transcript of a speech recording can impact the tone and prosody of the adjacent audio. The audio produced by such methods might not seamlessly integrate with the surrounding sounds, leading to an unnatural feel. This limitation restricts the prosody and style continuity of the synthesized speech. Prosody modeling, therefore, plays an indispensable role in speech editing. Numerous studies have leveraged pre-trained language models (LM) to predict prosodic attributes from utterances or segments [11, 12, 13, 14, 15], and have employed style tokens or Variational Autoencoders (VAEs) to encapsulate prosody as latent representations [3, 4, 5, 16, 17].
In this work, our primary focus is on enhancing the expressiveness and naturalness of speech synthesis. To this end, we propose the Cross-Utterance Conditional Variational Autoencoder Speech Synthesis Framework (CUC-VAE S2), which significantly combines and extends our previous conference
papers [18, 19]. This framework is designed to more accurately emulate human prosody generation by extracting acoustic, speaker, and textual features from surrounding sentences to produce context-sensitive prosodic features. Central to its architecture are the Cross-Utterance Embedding (CU-Embedding), the Cross-Utterance Enhanced CVAE, and a vocoder. The CU-Embedding generates phoneme-level embeddings by utilizing multi-head attention layers and taking BERT sentence embeddings from surrounding utterances as inputs. In turn, the CUC-VAE estimates the posterior of latent prosody features for each phoneme, enhances the VAE encoder with an utterance-specific prior, and samples latent prosody features from the derived utterance-specific prior during inference. To tackle real-world speech synthesis, we introduce two specialized algorithms for specific applications: CUC-VAE TTS for text-to-speech and CUC-VAE SE for speech editing. The CUC-VAE TTS algorithm is a direct implementation of our framework, aiming to produce audio that carries contextual prosody derived from surrounding text. The CUC-VAE SE algorithm samples real mel-spectrograms conditioned on contextual information, creating authentic-sounding audio that supports versatile text edits, and uses masked training of the variational autoencoder to ensure output fidelity.
Empirically, we conducted a comprehensive set of experiments on the LibriTTS English audiobook dataset to ascertain the efficacy of our proposed CUC-VAE TTS and CUC-VAE SE systems. These evaluations encompassed subjective naturalness ratings, objective reconstruction metrics, and prosody diversity experiments. Experimental results suggest that CUC-VAE TTS achieves superior performance in terms of prosody diversity, while simultaneously augmenting naturalness and intelligibility. Additionally, through subjective evaluations and a significance analysis of naturalness and similarity, we found that, relative to the baseline system, CUC-VAE SE sustains high fidelity and markedly enhances naturalness across a range of editing operations.
We make the following contributions:
* We introduce the innovative Cross-Utterance Conditional Variational Autoencoder Speech Synthesis (CUC-VAE S2) framework to boost the expressiveness and naturalness of synthesized speech by generating context-sensitive prosodic features.
* We propose two practical algorithms, CUC-VAE TTS for text-to-speech and CUC-VAE SE for speech editing, both integrated with FastSpeech 2, to address real-world speech synthesis challenges.
* Empirical validations of the proposed CUC-VAE TTS and CUC-VAE SE systems are performed on the LibriTTS English audiobook dataset, demonstrating their effectiveness in improving prosody diversity, naturalness, and intelligibility of synthesized speech.
The rest of this paper is organized as follows: Section II provides an overview of the relevant literature. Our proposed CUC-VAE S2 framework is detailed in Section III, followed by the introduction of two practical algorithms in Section IV. The experimental setup and results are discussed in Section V, and finally, Section VI concludes the paper.
## II Related Work
### _Non-Autoregressive TTS and FastSpeech_
Significant advancements have been made in non-autoregressive (Non-AR) text-to-speech (TTS) systems with regard to efficiency and fidelity, thanks to the progress made in deep learning. A Non-AR TTS system is designed to map input text sequences to acoustic features or waveforms without utilizing autoregressive decomposition of output probabilities. Certain Non-AR TTS systems require distillation from an autoregressive model, such as FastSpeech [20] and ParaNet [21]. However, recent efforts have been devoted to developing non-distillation-based Non-AR TTS systems, including FastPitch [22], AlignTTS [23], and FastSpeech 2 [24]. Additionally, Non-AR Tacotron-based TTS systems [25, 26] and flow-based TTS systems [27, 28] have made remarkable strides in enhancing the quality and speed of speech synthesis.
Our proposed CUC-VAE component has been integrated into a FastSpeech 2 TTS system that extends from FastSpeech [20]. FastSpeech is a Non-AR sequence-to-sequence encoder-decoder model that utilizes Transformer architecture. The length regulator proposed by FastSpeech predicts the duration of each phoneme and then up-samples the encoded phoneme sequence based on the predicted duration. The decoder then maps the up-sampled sequence to the output acoustic feature sequence of the same length, where the calculation of each output is carried out in parallel. The absence of dependence on the previous time step output accelerates model inference. FastSpeech's length regulator is trained by distilling knowledge from an autoregressive model, providing reasonable control over the speech's rhythm.
FastSpeech 2 [24] replaces the knowledge distillation for the length regulator with mean-squared error training based on duration labels. These labels are obtained from frame-to-phoneme alignment, simplifying the training process. Furthermore, FastSpeech 2 predicts pitch and energy information from the encoder output, supervised with pitch contours and L2-norm of amplitudes as labels, respectively. This additional pitch and energy prediction injects extra prosody information into the decoder, improving the naturalness and expressiveness of the synthesized speech.
### _Text-based Speech Editing_
Text-based speech editing systems have emerged as powerful tools for user interaction with speech anchored on the provided transcript. Unlike TTS which necessitates extensive high-quality training data, text-based SE leverages raw audio during the inference phase. This not only enhances efficiency but also fosters flexibility in speech generation, often at a diminished cost. One of the pioneering efforts in this domain, VoCo [7], synthesizes a word via a TTS voice resembling the target speaker and subsequently refines it using a voice conversion (VC) model to ensure a match with the desired speaker's tonality.
Furthermore, there have been innovative strides in this sphere. For example, CampNet [9] has introduced a context-aware Transformer optimized for this application, and A\({}^{3}\)T [10]
brought forward an alignment-aware Conformer. These systems excel in the intricate task of text-based speech editing through meticulous engineering. EditSpeech [8] presents a unique method incorporating partial inference combined with bidirectional fusion, ensuring seamless transitions at editing junctions. Notably, VALL-E [29] showcased impressive voice cloning outcomes on expansive datasets by favoring codec representation over traditional mel spectrograms, which synergizes perfectly with language models.
Nevertheless, certain prevalent challenges persist. Many preceding methods that rely on directly copying the mel-spectrogram or waveform of untouched regions sometimes falter, resulting in disrupted prosody around the editing boundaries. They also occasionally suffer from limited adaptability to evolving contexts. Addressing these nuances, our CUC-VAE SE system harnesses the capabilities of the CUC-VAE. It meticulously models prosody by considering both adjacent utterances and the neighboring waveform in the editing vicinity, enhancing both the quality and expressiveness of the edited speech.
### _VAEs in TTS_
Variational Autoencoders (VAEs) have emerged as a popular tool in the development of TTS systems, owing to their ability to provide an explicit model for prosody variation. VAEs operate by mapping input features onto vectors in a low-dimensional latent space, wherein they capture the variability inherent in the data, before reconstructing the input using these vectors.
Many studies have employed VAEs to capture and disentangle diverse forms of data variations in the latent space. For instance, Hsu et al. [30, 31] utilized VAEs to disentangle speaker and phoneme information. Similarly, Akuzawa et al. [32] leveraged VAEs to model speaking style for expressive speech synthesis. Moreover, Hsu et al. [2, 33] delved into disentangling prosody variation and speaker information using VAEs coupled with adversarial training. More recently, fine-grained VAEs have been employed to model prosody in the latent space for each phoneme or word [4, 5]. Furthermore, vector-quantized VAEs have been applied to discrete duration modeling [34].
Conditional Variational Autoencoder (CVAE) represents a variant of VAE that involves the conditioning of data generation on other pertinent information. During inference, sampling is performed from the predicted conditional prior distribution, as opposed to the standard Gaussian distribution in VAE.
### _Pre-trained Representation in TTS_
It is believed that language information in both current and surrounding utterances can be utilized to infer prosody [35, 36, 37, 38, 39]. Such information is often embedded in vector representations obtained from a pre-trained language model (LM), such as BERT [40]. Prior research has incorporated BERT embeddings at the word or sub-word level into auto-regressive TTS models [36, 37]. More recently, Xu et al. [35] employed chunked and paired sentence patterns from BERT. Some studies have combined BERT with other techniques to enhance the model's expressiveness. In [38], BERT was combined with a multi-task learning approach to disambiguate polyphonic characters in Mandarin's grapheme-to-phoneme (G2P) conversion. Moreover, Zhou et al. [39] used a relational gated graph network with pre-trained BERT embeddings as node inputs to extract word-level semantic representations, enhancing the expressiveness of the model.
## III Cross-Utterance Conditioned VAE Speech Synthesis Framework
### _Overview_
As illustrated in Fig. 1, our research presents a sophisticated prosody speech synthesis framework. This framework distinctively integrates a conditional VAE module, designed to efficiently extract prosody from neighboring utterances. We refer to this as the Cross-Utterance Conditioned VAE Speech Synthesis Framework (CUC-VAE S2).
As illustrated in the figure, the CUC-VAE S2 primarily utilizes the CUC-VAE to model prosody extracted from neighboring utterances. Initially, we propose a Cross-Utterance (CU) embedding module to glean prosodic information from the phonemes of the current sentence, speaker data, and cross-utterance information. The resulting embedding and the duration predicted by the CU-embedding, denoted as \(\mathbf{H}\) and \(\mathbf{D}\) respectively, are subsequently forwarded to the CUC-VAE, in conjunction with the pre-processed audio, to generate the target mel-spectrogram.
More specifically, the encoder \(\mathcal{E}\) approximates the posterior \(\mathbf{z}\) conditioned on an utterance-specific conditional prior, \(\mathbf{z}_{p}\). This variable, \(\mathbf{z}_{p}\), is sampled from the utterance-specific prior that has been learned from the outputs \(\mathbf{H}\) and \(\mathbf{D}\) of the CU-embedding module. The sampling is performed from the estimated prior by the utterance-specific prior module and is reparameterized as follows:
\[\mathbf{z}=\mathbf{\mu}\oplus\mathbf{\sigma}\otimes\mathbf{z}_{p}, \tag{1}\]
where the element-wise addition and multiplication operations are represented by \(\oplus\) and \(\otimes\), \(\mathbf{\mu}\) and \(\mathbf{\sigma}\) are predicted means and the covariance matrices from pre-processed audio information. The decoder \(\mathcal{D}\) employed within the CUC-VAE is derived from FastSpeech 2 [24]. Following the CUC-VAE synthesizer, an additional vocoder, HifiGAN [41], is implemented to transform the synthesized mel-spectrogram into an audible waveform. The vocoder can either directly utilize a pre-trained model or be fine-tuned according to the data predicted by the CUC-VAE synthesizer. Algorithm 1 provides the pseudocode for the training process of the CUC-VAE S2 framework at a high level.
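Since Algorithm 1 is only referenced here, we sketch one training step below. This is a hedged illustration assuming PyTorch-style modules; the names `cu_embedding`, `encoder`, `decoder`, `elbo_loss`, and the batch fields are hypothetical placeholders for the components described in this section, not the exact implementation.

```python
def train_step(batch, cu_embedding, encoder, decoder, elbo_loss, optimizer):
    """One hedged training step of the CUC-VAE S2 framework (cf. Algorithm 1)."""
    # 1) Cross-utterance embedding H and predicted durations D from the phonemes,
    #    speaker identity, and neighbouring utterances.
    H, D = cu_embedding(batch["phonemes"], batch["speaker"], batch["neighbours"])
    # 2) Latent z sampled around the utterance-specific prior z_p, conditioned on
    #    the reference mel-spectrogram x (Eq. (1)).
    z, prior_stats, posterior_stats = encoder(batch["mel"], H, D)
    # 3) Decode the mel-spectrogram and optimize the ELBO of Eq. (6).
    mel_pred = decoder(z, H, D)
    loss = elbo_loss(mel_pred, batch["mel"], posterior_stats, prior_stats)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```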
Building upon the CUC-VAE S2 framework, we further propose two practical algorithms: CUC-VAE TTS for text-to-speech tasks, and CUC-VAE SE for speech editing tasks. These will be elaborated upon in the subsequent sections. The pre-processing module and CUC-VAE encoder vary across different tasks. Therefore, we provide a detailed introduction to the CU-Embedding module in this section, while the other proposed modules are discussed in the following sections for the specific practical algorithms. In addition, we introduce the optimization objective in Section III-C.
### _CU-Embedding Module_
The CU-embedding component of our proposed system incorporates cross-utterance information, phoneme sequence, and speaker information to generate a sequence of mixture encodings. As depicted in Fig.1, the text input consists of speaker information, the current utterance \(\mathbf{u}_{i}\), and the \(L\) utterances before and after the current one. The extra G2P conversion step is first performed to convert the current utterance into a phoneme sequence denoted as \(\mathbf{P}_{i}=[p_{1},p_{2},\cdots,p_{T}]\), where \(T\) is the number of phonemes. Additionally, the start and end times of each phoneme can be extracted using Montreal forced alignment [42].
A Transformer encoder is then utilized for encoding the phoneme sequence into a sequence of phoneme encodings. Furthermore, the speaker information is encoded into a speaker embedding \(\mathbf{s}_{i}\), which is added to each phoneme encoding to produce the mixture encodings \(\mathbf{F}_{i}\) of the phoneme sequence.
\[\mathbf{F}_{i}=[\mathbf{f}_{i}(p_{1}),\mathbf{f}_{i}(p_{2}),\cdots,\mathbf{f}_{i}(p_{T})], \tag{2}\]
The vector \(\mathbf{f}\) denotes the resulting vector obtained by adding each phoneme encoding and the speaker embedding.
To enhance the naturalness and expressiveness of the generated audio, contextual information is captured by incorporating cross-utterance BERT embeddings and a multi-head attention layer. Specifically, \(2l\) cross-utterance pairs represented as \(\mathbf{C}_{i}\) are extracted from \(2l+1\) adjacent utterances \([\mathbf{u}_{i-l},\cdots,\mathbf{u}_{i},\cdots,\mathbf{u}_{i+l}]\) using Equation (3).
\[\mathbf{C}_{i}=[c(\mathbf{u}_{i-l},\mathbf{u}_{i-l+1}),\cdots,c(\mathbf{u}_{i-1},\mathbf{u}_{i}), \cdots,c(\mathbf{u}_{i+l-1},\mathbf{u}_{i+l})], \tag{3}\]
The cross-utterance pairs, denoted as \(c(u_{k},u_{k+1})=\{[\text{CLS}],\mathbf{u}_{k},[\text{SEP}],\mathbf{u}_{k+1}\}\), consist of adjacent utterances \(\mathbf{u}_{k}\) and \(\mathbf{u}_{k+1}\). The [CLS] token is added at the beginning of each pair, while the [SEP] token is inserted at the boundary of each sentence to keep track of BERT. Subsequently, the \(2l\) cross-utterance pairs are fed into BERT to capture cross-utterance information. The resulting output consists of \(2l\) BERT embedding vectors, each of which is obtained by taking the output vector at the position of the [CLS] token and projecting it to a 768-dimensional vector for each cross-utterance pair, as demonstrated below:
\[\mathbf{B}_{i}=[\mathbf{b}_{-l},\mathbf{b}_{-l+1},\cdots,\mathbf{b}_{l-1}],\]
where each vector \(\mathbf{b}_{k}\) in \(\mathbf{B}_{i}\) represents the BERT embedding of the cross-utterance pair \(c(\mathbf{u}_{k},\mathbf{u}_{k+1})\). To extract CU-embedding vectors for each phoneme specifically, a multi-head attention
Fig. 1: An overview of the Cross-Utterance Conditioned VAE Speech Synthesis (CUC-VAE S2) Framework architecture. The primary CUC-VAE synthesizer utilizes textual information derived from neighboring text via the Cross-Utterance (CU) embedding, as well as audio information processed by the pre-processing module. A supplementary vocoder is incorporated with the purpose of transforming the synthesized mel-spectrogram into waveform.
layer has been incorporated to merge the \(2l\) BERT embeddings into a single vector, as demonstrated in Equation (4).
\[\mathbf{G}_{i}=\text{MHA}(\mathbf{F}_{i}\mathbf{W}^{\text{Q}},\mathbf{B}_{i}\mathbf{W}^{\text{K}},\bm {B}_{i}\mathbf{W}^{\text{V}}), \tag{4}\]
The multi-head attention layer is denoted as \(\text{MHA}(\cdot)\), with \(\mathbf{W}^{\text{Q}}\), \(\mathbf{W}^{\text{K}}\), and \(\mathbf{W}^{\text{V}}\) serving as linear projection matrices, and \(\mathbf{F}_{i}\) representing the sequence of mixture encodings for the current utterance, which functions as the query in the attention mechanism. To simplify the notation, the expression in Equation (4) is denoted as \(\mathbf{G}_{i}=[\mathbf{g}_{1},\mathbf{g}_{2},\cdots,\mathbf{g}_{T}]\), where the output length of the multi-head attention mechanism is \(T\), and each element is concatenated with its corresponding mixture encoding. Subsequently, these concatenated vectors are projected by another linear layer to generate the final output \(\mathbf{H}_{i}\) of the CU-embedding, denoted as \(\mathbf{H}_{i}=[\mathbf{h}_{1},\mathbf{h}_{2},\cdots,\mathbf{h}_{T}]\), as illustrated in Equation (5).
\[\mathbf{h}_{t}=[\mathbf{g}_{t},\mathbf{f}(p_{t})]\mathbf{W}, \tag{5}\]
where \(\mathbf{W}\) is a linear projection matrix. Furthermore, an extra duration predictor takes \(\mathbf{H}_{i}\) as its input and predicts the duration \(\mathbf{D}_{i}\) for each phoneme.
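A hedged sketch of the cross-utterance fusion in Eqs. (3)-(5) is given below. It assumes a HuggingFace `bert-base-uncased` checkpoint and PyTorch multi-head attention with the 256/768 dimensions stated in this paper; the exact wiring here is illustrative rather than the exact implementation.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

def cross_utterance_bert(utterances):
    """One [CLS] vector per adjacent utterance pair, cf. Eq. (3): B_i of shape (1, 2l, 768)."""
    cls_vecs = []
    for u_k, u_k1 in zip(utterances[:-1], utterances[1:]):
        enc = tokenizer(u_k, u_k1, return_tensors="pt", truncation=True)
        with torch.no_grad():
            cls_vecs.append(bert(**enc).last_hidden_state[:, 0])   # (1, 768)
    return torch.stack(cls_vecs, dim=1)                            # (1, 2l, 768)

mha = nn.MultiheadAttention(embed_dim=256, num_heads=8, kdim=768, vdim=768, batch_first=True)
proj = nn.Linear(256 + 256, 256)

def cu_embedding(F_i, B_i):
    """F_i: (1, T, 256) mixture encodings (query); B_i: (1, 2l, 768) BERT embeddings."""
    G_i, _ = mha(F_i, B_i, B_i)                     # Eq. (4)
    return proj(torch.cat([G_i, F_i], dim=-1))      # Eq. (5) -> H_i: (1, T, 256)
```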
### _Learning Objective_
In accordance with the Evidence Lower Bound (ELBO) objective function [1], we can derive our ELBO objective, which is expressed as follows:
\[\mathcal{L}(\mathbf{x}\mid\mathbf{H},\mathbf{D}) =\mathbb{E}_{q_{\phi}(\mathbf{z}\mid\mathbf{x},\mathbf{D},\mathbf{H})}[\log p_{\theta}(\mathbf{x}\mid\mathbf{z},\mathbf{D},\mathbf{H})] \tag{6}\] \[-\beta_{1}\sum_{n=1}^{t}D_{\text{KL}}\left(q_{\phi_{1}}\left(\mathbf{z}^{n}\mid\mathbf{z}^{n}_{p},\mathbf{x}\right)\|q_{\phi_{2}}\left(\mathbf{z}^{n}_{p}\mid\mathbf{D},\mathbf{H}\right)\right)\] \[-\beta_{2}\sum_{n=1}^{t}D_{\text{KL}}\left(q_{\phi_{2}}\left(\mathbf{z}^{n}_{p}\mid\mathbf{D},\mathbf{H}\right)\|p(\mathbf{z}^{n}_{p})\right).\]
The index \(i\), which signifies the current instance, has been omitted for the sake of simplicity. Here, \(\theta\) denotes the parameters of the decoder module, whereas \(\phi_{1}\) and \(\phi_{2}\) represent two components of the mask CUC-VAE encoder \(\phi\) that derive \(\mathbf{z}\) from \(\mathbf{z}_{p},\mathbf{x}\) and \(\mathbf{z}_{p}\) from \(\mathbf{D},\mathbf{H}\), respectively. Additionally, \(\beta_{1}\) and \(\beta_{2}\) represent two balancing constants, and \(p(\mathbf{z}^{n}_{p})\) is selected to be a standard Gaussian distribution, i.e., \(\mathcal{N}(0,1)\). Moreover, \(\mathbf{z}^{n}\) and \(\mathbf{z}^{n}_{p}\) denote the latent representation for the \(n\)-th phoneme, and \(t=a+b^{\prime}+c\) corresponds to the length of the phoneme sequence.
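For concreteness, a minimal sketch of the corresponding training loss is given below, assuming diagonal Gaussians with the closed-form KL divergence and an L1 spectrogram term standing in for the exact log-likelihood; these are illustrative assumptions rather than a transcription of the implementation.

```python
import torch
import torch.nn.functional as F

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Sum of KL( N(mu_q, var_q) || N(mu_p, var_p) ) over phonemes and latent dims."""
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0
    )

def neg_elbo(mel_pred, mel_target, posterior, prior, beta1=1.0, beta2=1.0):
    """Negative ELBO of Eq. (6); `posterior` and `prior` are (mu, logvar) pairs."""
    recon = F.l1_loss(mel_pred, mel_target)                    # stand-in for -log p(x|z,D,H)
    kl_post_prior = gaussian_kl(*posterior, *prior)            # first KL term
    kl_prior_std = gaussian_kl(*prior,
                               torch.zeros_like(prior[0]),
                               torch.zeros_like(prior[1]))     # second KL term vs. N(0, 1)
    return recon + beta1 * kl_post_prior + beta2 * kl_prior_std
```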
## IV Practical Algorithms
In this section, we will introduce two practical algorithms based on CUC-VAE S2 framework, named CUC-VAE TTS for text-to-speech task and CUC-VAE SE for speech editing task. As described in Section III, CUC-VAE S2 framework mainly proposes a variational autoencoder conditioned on contextual embedding to achieve the ultimate goal of prosody speech synthesis for different tasks. For TTS tasks, we propose the CUC-VAE TTS algorithm to produce speech with natural and expressive prosody. Besides, the CUC-VAE SE algorithm is designed to efficiently edit speech without resorting to the splicing of the generated and original audio. The details of two algorithms are given in Section IV-A and Section IV-B.
### _CUC-VAE TTS Algorithm for Text-to-Speech_
This section presents a detailed exploration of the practical implementation of our proposed algorithm for Text-to-Speech (TTS) tasks, referred to as the CUC-VAE TTS. This algorithm is a specific application of our CUC-VAE S2 framework, designed for TTS tasks. It harnesses the capabilities of the CUC-VAE to model prosody from neighboring utterances, thereby enhancing the naturalness and expressiveness of the synthesized speech. Fig. 2 illustrates the architecture of the CUC-VAE TTS algorithm, with the preprocessing module intentionally omitted for clarity. The preprocessing module's primary function in the CUC-VAE TTS algorithm is to extract the mel-spectrogram from the waveform in the dataset.
The subsequent discussion will delve into the details of the CUC-VAE TTS algorithm. We propose the CU-enhanced CVAE to overcome the lack of prosody variation and the inconsistency between the standard Gaussian prior distribution sampled by the VAE-based TTS system and the true prior distribution of speech. Excluding the omitted preprocessing module, the CUC-VAE consists of a CU-embedding module, an encoder module, as illustrated in Fig. 2, and a decoder module derived from FastSpeech 2. The CU-embedding module's function is to extract potential prosody information from neighborhood utterances, speaker ID, and the current sentence. The CU-embedding module of the system employs a Transformer to learn the current representation of the utterance, with the dimension of phoneme embeddings and the size of the self-attention set to 256. The "BERT_BASE" configuration was utilized, consisting of 12 Transformer blocks with 12-head attention layers and a hidden size of 768. The BERT model and associated embeddings were kept fixed during training. Furthermore, information regarding different speakers was incorporated into the Transformer output using a 256-dim embedding layer.
Fig. 2 describes the details of the CUC-VAE encoder. The utterance-specific prior in the encoder aims to learn the prior distribution \(\mathbf{z}_{p}\) from the CU-embedding output \(\mathbf{H}\) and predicts duration \(\mathbf{D}\). For convenience, the subscript \(i\) is omitted in this
Fig. 2: A comprehensive overview of the practical CUC-VAE TTS algorithm.
subsection. The posterior module in the encoder takes as input reference mel-spectrogram \(\mathbf{x}\), then models the approximate posterior \(\mathbf{z}\) conditioned on utterance-specific conditional prior \(\mathbf{z}_{p}\). Sampling is done from the estimated prior by the utterance-specific prior module and is reparameterized as:
\[\mathbf{z}=\mathbf{\mu}\oplus\mathbf{\sigma}\otimes\mathbf{z}_{p}, \tag{7}\]
The element-wise addition and multiplication operations are denoted by \(\oplus\) and \(\otimes\), respectively. The variable \(\mathbf{z}_{p}\) is sampled from the utterance-specific prior that has been learned. The re-parameterization can be expressed as follows:
\[\mathbf{z}_{p}=\mathbf{\mu}_{p}\oplus\mathbf{\sigma}_{p}\otimes\mathbf{\epsilon} \tag{8}\]
The mean \(\mathbf{\mu}_{p}\) and variance \(\mathbf{\sigma}_{p}\) are learned from the utterance-specific prior module. The variable \(\mathbf{\epsilon}\) is sampled from a standard Gaussian distribution \(\mathcal{N}(0,1)\).
By substituting Eq. (8) into Eq. (7), the complete sampling process can be described by the following equation:
\[\mathbf{z}=\mathbf{\mu}\oplus\mathbf{\sigma}\otimes\mathbf{\mu}_{p}\oplus\mathbf{\sigma}\otimes \mathbf{\sigma}_{p}\otimes\mathbf{\epsilon}. \tag{9}\]
During the inference phase, sampling is performed from the utterance-specific conditional prior distribution \(\mathcal{N}(\mathbf{\mu}_{p},\mathbf{\sigma}_{p})\) that has been learned from the CU-embedding, instead of using a standard Gaussian distribution \(\mathcal{N}(0,1)\). For simplicity, we can formulate the data likelihood calculation as follows, where the intermediate variable utterance-specific prior \(\mathbf{z}_{p}\) from \(\mathbf{D},\mathbf{H}\) to obtain \(\mathbf{z}\) is omitted:
\[p_{\theta}(\mathbf{x}\mid\mathbf{H},\mathbf{D})=\int p_{\theta}(\mathbf{x}\mid\mathbf{z},\mathbf{H}, \mathbf{D})p_{\phi}(\mathbf{z}\mid\mathbf{H},\mathbf{D})d\mathbf{z}, \tag{10}\]
In Eq. 10, \(\phi\) and \(\theta\) represent the parameters of the encoder and decoder modules in the system, respectively.
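The sampling logic of Eqs. (7)-(9), together with the inference-time behaviour described above, can be summarized by the following minimal sketch; tensor shapes and the placement of this function in the network are assumptions made for illustration.

```python
import torch

def sample_latent(mu, sigma, mu_p, sigma_p, training=True):
    """Conditional re-parameterization of Eqs. (7)-(9).

    mu, sigma     : posterior statistics predicted from the reference mel-spectrogram
    mu_p, sigma_p : utterance-specific prior statistics predicted from (H, D)
    """
    z_p = mu_p + sigma_p * torch.randn_like(mu_p)   # Eq. (8)
    if training:
        return mu + sigma * z_p                     # Eq. (7), equivalently Eq. (9)
    # Inference: no reference audio is available, so sample directly from the
    # learned utterance-specific prior N(mu_p, sigma_p).
    return z_p
```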
The decoder employed in the CU-enhanced CVAE is derived from FastSpeech 2. Initially, a projection layer is incorporated to map the latent variable \(\mathbf{z}\) to a high-dimensional space to facilitate its addition to \(\mathbf{H}\). Subsequently, the decoder module is used to convert the hidden sequence into a mel-spectrogram sequence through parallelized computation.
As shown in Fig. 2, four 1D-convolutional (1D-Conv) layers with kernel sizes of 1 were employed to predict the mean and variance of 2-dim latent features. Then, the sampled latent feature was converted to a 256-dim vector by a linear layer. The length regulator in FastSpeech 2's duration predictor, which consisted of two 1D convolutional blocks with ReLU activation, followed by layer normalization and an additional linear layer to predict the length of each phoneme, was adapted to take in the outputs of the CU-embedding module. Each convolutional block was comprised of a 1D-Conv network with ReLU activation, followed by layer normalization and a dropout layer. Four feed-forward Transformer blocks were used by the decoder to transform hidden sequences into an 80-dim mel-spectrogram sequence.
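A hedged PyTorch sketch matching the dimensions above (kernel-1 1D convolutions, a 2-dim latent, and a linear lift to 256 dims) is shown below; the exact layer wiring is an assumption, and the combination with the utterance-specific prior is omitted here (see the sampling sketch above).

```python
import torch
import torch.nn as nn

class LatentStats(nn.Module):
    """Sketch: kernel-1 Conv1d stack predicting the mean/log-variance of a 2-dim
    per-phoneme latent, plus the linear layer lifting a sampled latent to 256 dims."""
    def __init__(self, in_dim=80, hidden=256, latent_dim=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=1), nn.ReLU(),
        )
        self.mu_head = nn.Conv1d(hidden, latent_dim, kernel_size=1)
        self.logvar_head = nn.Conv1d(hidden, latent_dim, kernel_size=1)
        self.lift = nn.Linear(latent_dim, 256)   # maps the sampled latent to the decoder width

    def forward(self, x):                         # x: (B, T, in_dim)
        h = self.backbone(x.transpose(1, 2))      # (B, hidden, T)
        mu = self.mu_head(h).transpose(1, 2)      # (B, T, latent_dim)
        logvar = self.logvar_head(h).transpose(1, 2)
        z = mu + logvar.mul(0.5).exp() * torch.randn_like(mu)
        return self.lift(z), mu, logvar           # (B, T, 256), plus the statistics
```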
Finally, the vocoder HifiGAN [41] was fine-tuned for 1200 steps on an open-sourced, pre-trained version of "UNIVERSAL_V1" to synthesize a waveform from the predicted mel-spectrogram.
### _CUC-VAE SE Algorithm for Speech Editing_
This section is dedicated to the exploration of the practical application of our proposed algorithm for speech editing tasks, known as the CUC-VAE SE. This algorithm is a specialized adaptation of our CUC-VAE S2 framework, specifically tailored to address speech editing tasks. In this study, our primary focus is on the partial editing setting, where the requirement is to edit only a portion of the waveform rather than the entire waveform. Consequently, the CUC-VAE SE is designed to leverage the strengths of the CUC-VAE to model prosody from both adjacent utterances and the neighboring waveform of the editing part. This approach significantly enhances the quality and expressiveness of the edited speech, ensuring a seamless integration of the edited part with the rest of the waveform.
In this study, we concentrate on three primary speech editing operations: deletion, insertion, and replacement. Assuming the original utterance transcript of the speech as \([\mathbf{u}_{a},\mathbf{u}_{b},\mathbf{u}_{c}]\), the modified utterance can be represented as \([\mathbf{u}_{a},\mathbf{u}_{b^{\prime}},\mathbf{u}_{c}]\), where \(\mathbf{u}_{b^{\prime}}\) denotes the modified segment while \(\mathbf{u}_{a}\) and \(\mathbf{u}_{c}\) remain unchanged. The corresponding phonemes translated by G2P can be denoted as \(\mathbf{p}_{i}=[\mathbf{p}_{a},\mathbf{p}_{b},\mathbf{p}_{c}]\), and the original speech's mel-spectrogram can be denoted as \(\mathbf{x}_{i}=[\mathbf{x}_{a},\mathbf{x}_{b},\mathbf{x}_{c}]\). Here, \(\mathbf{x}_{i}\) contains a sequence of frame-level mel-spectrogram for \(i\in a,b,c\).
Although there are three primary speech editing operations: deletion, insertion, and replacement, the replacement operation can be considered as a deletion followed by an insertion. Therefore, we can use two flags, namely \(Flag_{del}\) and \(Flag_{add}\), to indicate the location of deletion and addition, as shown in Fig. 3.
DeletionThe deletion operation allows the user to remove a segment of the speech waveform that corresponds to a particular set of words. After the deletion, the target utterance to be synthesized becomes \([\mathbf{u}_{a},\mathbf{u}_{c}]\), with \(\mathbf{u}_{b}\) representing the segment to be removed. The comparison between the original and edited utterances provides the deletion indicator, denoted by \(Flag_{del}\), which is used to guide the editing of the mel-spectrogram. Specifically, \(Flag_{del}\) is defined as follows
\[Flag_{del}=[\mathbf{0}_{a},\mathbf{1}_{b},\mathbf{0}_{c}],\]
where \(\mathbf{0}\) and \(\mathbf{1}\) denote zero and one vectors, respectively.
Insertion and ReplacementUnlike the deletion operation, the target synthesized speech after insertion or replacement is based on the edited utterance \([\mathbf{u}_{a},\mathbf{u}_{b^{\prime}},\mathbf{u}_{c}]\), where \(\mathbf{u}_{b^{\prime}}\) denotes the content to replace \(\mathbf{u}_{b}\). It is worth noting that the insertion process can be treated as a special case where \(\mathbf{u}_{b}=\mathbf{p}_{b}=\mathbf{x}_{b}=\varnothing\). Correspondingly, we can define the addition indicator as
\[Flag_{add}=[\mathbf{0}_{a},\mathbf{1}_{b^{\prime}},\mathbf{0}_{c}].\]
As shown in Fig. 3, the CUC-VAE synthesizer is utilized to generate the mel-spectrogram \(\mathbf{x}_{b^{\prime}}\), based on the reference mel-spectrogram \([\mathbf{x}_{a},\mathbf{x}_{c}]\) and the neighborhood utterances. In this process, two one-dimensional convolutions are employed to learn the mean \(\mathbf{\mu}\) and variance \(\mathbf{\sigma}\). Referring to \(Flag_{add}\), the
corresponding positions of \(\mathbf{\mu}\) and \(\mathbf{\sigma}\) are updated with zeros and ones, resulting in \(\hat{\mathbf{\mu}}=[\mathbf{\mu}_{a},\mathbf{0}_{b^{\prime}},\mathbf{\mu}_{c}]\) and \(\hat{\mathbf{\sigma}}=[\mathbf{\sigma}_{a},\mathbf{1}_{b^{\prime}},\mathbf{\sigma}_{c}]\), respectively. This allows the speech for the edited region to be generated from the utterance-specific prior distribution, while the unmodified regions are sampled from both the actual audio and the utterance-specific prior. During the training phase, real edited audio is not available. Therefore, certain segments of the audio are masked and the same content is restored to simulate the editing scenario; as a result, \(b^{\prime}\) is set to \(b\).
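The construction of the masked statistics \(\hat{\mathbf{\mu}}\) and \(\hat{\mathbf{\sigma}}\) from \(Flag_{add}\) can be sketched as follows; the segment lengths and tensor shapes are illustrative assumptions.

```python
import torch

def masked_stats(mu, sigma, flag_add):
    """mu, sigma: (T, d) statistics predicted from the reference audio;
    flag_add: (T,) boolean, True inside the edited region b'."""
    m = flag_add.unsqueeze(-1).to(mu.dtype)
    mu_hat = mu * (1 - m)             # zeros inside the edited region
    sigma_hat = sigma * (1 - m) + m   # ones inside the edited region
    return mu_hat, sigma_hat

len_a, len_b_prime, len_c = 40, 25, 60   # illustrative segment lengths
flag_add = torch.cat([torch.zeros(len_a), torch.ones(len_b_prime), torch.zeros(len_c)]).bool()
mu = torch.randn(len_a + len_b_prime + len_c, 2)
sigma = torch.rand(len_a + len_b_prime + len_c, 2)
mu_hat, sigma_hat = masked_stats(mu, sigma, flag_add)
```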
To enhance the coherence and contextual relevance of the output generated by the modified CUC-VAE module, we have embedded two specific additional modules into the CUC-VAE S2 framework, i.e., Upsampling in CUC-VAE Encoder and Duration Adjustor in preprocessing module. To achieve a smoother editing boundary, the values of \(\hat{\mathbf{\mu}}\) and \(\hat{\mathbf{\sigma}}\) are convolved using one-dimensional convolution to obtain \(\mathbf{\mu}^{\prime}\) and \(\mathbf{\sigma}^{\prime}\). By employing this approach, the module can perform sampling from the estimated prior distribution and can be further re-parameterized as follows:
\[\mathbf{z}=\mathbf{\mu}^{\prime}\oplus\mathbf{\sigma}^{\prime}\otimes\mathbf{z}_{p}, \tag{11}\]
The re-parameterization formula and ELBO objective in this module are similar to the original CUC-VAE module. Specifically, as shown in Fig. 3, the upsampling is achieved by an additional upsampling layer to ensure that the predicted sequence length matched the phoneme sequence length after editing and to enhance the naturalness of the synthesized audio.
In addition, to efficiently leverage the duration information obtained from the original audio, a similar approach to that used in [8, 10] is adopted. Consequently, the phoneme duration within the edited region is adjusted by multiplying it with the ratio of the duration of the original audio to the predicted duration of the unedited area in the audio to obtain \(\mathbf{D}^{\prime}_{i}\). Following the duration predictor and adjustor, the predicted duration is rounded to the nearest integer value.
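One possible reading of this duration adjustment, written as a short sketch, is given below; in particular, treating the unedited phonemes as keeping their predicted durations is our assumption.

```python
import torch

def adjust_durations(pred_dur, orig_dur, flag_add):
    """Rescale predicted durations inside the edited region by the ratio of the
    original to the predicted duration of the unedited area, then round
    (one possible reading of the adjustor described above)."""
    unedited = ~flag_add
    ratio = orig_dur[unedited].float().sum() / pred_dur[unedited].float().sum().clamp(min=1e-8)
    adjusted = torch.where(flag_add, pred_dur.float() * ratio, pred_dur.float())
    return adjusted.round().long()
```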
The main difference between these two modules is that, during the inference stage, the sampling process remains coherent with the training stage, sampling from the masked mel-spectrogram conditioned on the utterance-specific conditional prior. Besides, to accurately replicate the actual editing scenario, we selected the part to be masked by taking a word instead of a phoneme as the unit. Additionally, to achieve a balance between the system's ability to learn and predict audio information, we set the masking rate to 50%, a value that has been found to be effective in previous studies [10].
On the other hand, the reconstruction of a waveform from a given transcript for each masked phoneme requires the use of a loss function for the acoustic model. The most commonly used loss function is the mean absolute error (MAE) between the reconstructed and original mel-spectrogram, with the loss being computed only on the masked segments, similar to the BERT model.
During the training process, the input reference mel-spectrogram only includes the unmasked part. To place more emphasis on the masked part, it is reasonable to increase the loss weight of this area. However, during the inference process, the goal is to synthesize naturally coherent audio whose rhythm conforms to the modified text context. Therefore, setting the loss weight of the unmasked area to zero is not appropriate in the case of speech editing.
To balance the two objectives of approaching the original audio and the context of the newly modified transcript, we propose setting the loss ratio of the masked and unmasked parts to \(\lambda=1.5\) in the experiment. In this way, we expect to achieve a satisfactory outcome that is consistent with both
Fig. 3: A comprehensive overview of the practical CUC-VAE SE algorithm.
objectives.
\[\begin{split}\mathcal{L}_{mel}&=\frac{1}{\#\text{ of frames}}\Big(\sum_{i\in\text{unmask}}|x_{i}^{pred}-x_{i}^{target}|\\ &+\lambda\sum_{i\in\text{mask}}|x_{i}^{pred}-x_{i}^{target}|\Big)\end{split} \tag{12}\]
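A minimal sketch of Eq. (12) is given below; averaging the absolute error over the mel bins within each frame is our interpretation of the per-frame term \(|x_{i}^{pred}-x_{i}^{target}|\).

```python
import torch

def biased_mel_loss(mel_pred, mel_target, mask, lam=1.5):
    """Eq. (12): per-frame MAE, with the masked (edited) frames weighted by lambda.
    mel_pred, mel_target: (T, n_mels); mask: (T,) boolean, True on masked frames."""
    frame_err = (mel_pred - mel_target).abs().mean(dim=-1)     # per-frame MAE
    weights = torch.where(mask, torch.full_like(frame_err, lam),
                          torch.ones_like(frame_err))
    return (weights * frame_err).sum() / frame_err.numel()
```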
The findings of the subsequent experiments indicate that increasing the weight of the loss function associated with the reconstructed mel-spectrogram of the masked part, in comparison to other weight settings, can lead to the generation of a more natural and coherent synthesized sound.
The remaining modules, including the CU-embedding, CUC-VAE Decoder, and Vocoder, are implemented in the same manner as in the CUC-VAE TTS algorithm.
## V Experiments
In this section, we present a series of experiments designed to assess the efficacy of our proposed speech editing system. We begin by outlining the experimental setup, including the dataset and evaluation metrics used. Subsequently, we compare the naturalness and prosody diversity of speech synthesized using our CUC-VAE model with that of FastSpeech 2 and other VAE techniques. We then evaluate the naturalness and similarity of the audio generated by our system against that of EditSpeech [8], using both partial and entire inference. An ablation study is also conducted to examine the impact of limiting contextual information on our system's performance, as measured by both the mean opinion score (MOS) and reconstruction performance. Furthermore, we investigate the effect of biased training on reconstruction performance. Finally, we present two case studies that illustrate the variations in prosody with different cross-utterance information and the influence of entire inference and biased training, respectively. Audio samples from these experiments can be accessed on our demo page 1.
Footnote 1: [http://bitly.ws/uMKv](http://bitly.ws/uMKv)
### _Environmental Setting_
In this study, we performed experiments using the LibriTTS dataset, which is a multi-speaker dataset comprising the train-clean-100 and train-clean-360 subsets. These subsets contain 245 hours of English audiobooks recorded by 1151 speakers, consisting of 553 female and 598 male speakers. The dataset [43] contains adjacent sentences that can be used to extract contextual information. We randomly selected 90%, 5%, and 5% of the data from the dataset for the training, validation, and testing sets, respectively. All audio clips were resampled at a rate of 22.04 kHz.
To evaluate the performance of the proposed method, both subjective and objective tests were conducted. The subjective test involved 20 volunteers who were asked to assess the naturalness and similarity of 15 synthesized speech samples using a 5-point mean opinion score (MOS) evaluation. The MOS results were analyzed using 95% confidence intervals and p-values.
For the objective evaluation, two metrics were used: F0 frame error (FFE) [44] and Mel-cepstral distortion (MCD) [45]. FFE was utilized to evaluate the accuracy of the F0 track reconstruction, which combined the Gross Pitch Error (GPE) and the Voicing Decision Error (VDE). More specifically, GPE measures the difference between the predicted and reference F0 values, while VDE measures the accuracy of voicing decision (i.e., whether a frame is voiced or unvoiced). On the other hand, MCD measures the spectral distortion between the predicted and reference mel-spectrograms. These objective metrics were used to evaluate the performance of different VAEs and different settings of loss weights.
In detail,
\[\begin{split}\mathrm{FFE}&=\frac{\#\text{ of error frames}}{\#\text{ of total frames}}\times 100\%\\ &=\frac{N_{U\to V}+N_{V\to U}+N_{\text{F0E}}}{N}\times 100\%. \end{split}\]
where \(N_{U\to V}\) and \(N_{V\to U}\) are the numbers of unvoiced/voiced frames classified as voiced/unvoiced frames, \(N\) is the number of the frames in the utterance, and \(N_{\text{F0E}}\) is number of frames for which
\[\left|\frac{F0_{i,\text{ estimated}}}{F0_{i,\text{ reference}}}-1\right|>20\%\]
where \(i\) is the frame number.
In addition, MCD evaluated the timbral distortion, computed from the first 13 MFCCs in our trials.
\[\mathrm{MCD}(\mathbf{y},\hat{\mathbf{y}})=\frac{10\sqrt{2}}{\ln 10}\|\mathbf{y}-\hat{\mathbf{y} }\|_{2}\quad\text{(dB)}\]
where \(\mathbf{y}\) and \(\hat{\mathbf{y}}\) are the MFCCs of original and reconstructed waveform. A coefficient was utilized to convert the Mel-cepstral distortion (MCD) units into decibels. The MCD represents the difference between the synthesized and natural mel-spectrogram sequences, and a smaller MCD value indicates a closer resemblance to natural speech, thus reflecting naturalness.
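For reference, minimal NumPy sketches of both metrics are given below. They assume frame-aligned inputs with unvoiced frames encoded as zero F0 and average the per-frame distortion over frames; these conventions are our assumptions rather than details specified above.

```python
import numpy as np

def ffe(f0_ref, f0_est, tol=0.2):
    """F0 Frame Error: voicing decision errors plus gross pitch errors (>20%)."""
    voiced_ref, voiced_est = f0_ref > 0, f0_est > 0
    n_vde = np.sum(voiced_ref != voiced_est)                  # U->V and V->U errors
    both = voiced_ref & voiced_est
    n_f0e = np.sum(np.abs(f0_est[both] / f0_ref[both] - 1.0) > tol)
    return 100.0 * (n_vde + n_f0e) / len(f0_ref)

def mcd(mfcc_ref, mfcc_est):
    """Mel-cepstral distortion in dB from per-frame MFCC vectors (e.g., 13 coefficients)."""
    per_frame = np.linalg.norm(mfcc_ref - mfcc_est, axis=-1)
    return (10.0 * np.sqrt(2.0) / np.log(10.0)) * per_frame.mean()
```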
In addition to assessing naturalness, we also reported word error rates (WER) from an automatic speech recognition model, which provides a measure of the intelligibility and consistency between synthetic and real speech. To this end, an attention-based encoder-decoder model trained on Librispeech 960-hour data was utilized. Notably, the model is open-sourced and can be accessed at the following URL2.
Footnote 2: [http://bitly.ws/uMKv](http://bitly.ws/uMKv)
### _Sample Naturalness and Diversity_
We measure the sample naturalness and intelligibility using MOS and WER. Complementary to the naturalness, the diversity of generated speech from the conditional prior was evaluated by comparing the standard deviation of \(E\) and \(F_{0}\) similar to [5].
The results were shown in Table. I. Compared to the global VAE and fine-grained VAE, the proposed CUC-VAE received the highest MOS and achieved the lowest FFE, MCD, and WER. Although both \(F_{0}\) and \(E\) of the CUC-VAE TTS module were lower than the baseline + fine-grained VAE, the proposed system achieved a clearly higher prosody diversity than the baseline and baseline + global VAE systems. The fine-grained VAE achieved the highest prosody variation as its latent prosody
features were sampled from a standard Gaussian distribution, which lacks the constraint of language information from both the current and the neighbouring utterances. This caused extreme prosody variations that impaired both the naturalness and the intelligibility of the synthesized audio. In contrast, the CUC-VAE TTS module was able to achieve high prosody diversity without hurting the naturalness of the generated speech. In fact, the appropriate increase in prosody diversity improved the expressiveness of the synthesized audio, and hence increased the naturalness.
### _Partial vs. Entire Inference_
To compare the performance of partial inference versus entire inference in the context of speech editing, several experiments were conducted on different systems: 1) GT, the ground truth audio; 2) GT (Mel+HifiGAN), the ground truth audio converted to mel-spectrogram and then back to audio using HifiGAN vocoder; 3) Wave_cut, a modified version of the waveform obtained by manually cutting and reinserting a specific region; 4) EditSpeech [8], an approach based on partial inference and bidirectional fusion to improve prosody near boundaries; 5) CUC-VAE-SE (Mel_cut), an approach that involves cutting the modified region from the generated mel-spectrogram and inserting it back to the original mel-spectrogram using a forced aligner; 6) CUC-VAE-SE, an approach that regenerates a complete mel-spectrogram from the entire sentence to be edited and then uses HifiGAN vocoder to generate the complete waveform;
Note that the ground truth audio samples used in this study do not contain any edited audio. Therefore, the systems designated as GT and GT (Mel+HifiGAN) were used to evaluate the reconstruction performance of the audio signals. To assess the effectiveness of the editing operations, we manually spliced the audio waveform and used the MOS similarity score of the resulting system, designated as Wave_cut, as an indicator of the upper bound performance.
Table II presents the MOS scores for naturalness and similarity obtained from the subjective evaluations of the various editing operations. Note that the EditSpeech approach is solely capable of partial inference, whereby segments of the real mel-spectrogram are combined to generate the modified speech. Consequently, the results of the deletion operation are uniform across different editing systems when partial inference is utilized.
As shown, our model employing entire inference achieved the highest score for naturalness across all editing operations. Notably, a significant gap was observed in the replacement operation, where the speech editing models based on partial reasoning faced challenges in dealing with intonation conversion. On the other hand, the MOS score for naturalness of "Mel_cut" in the deletion operation was relatively low since its performance heavily relies on the accuracy of the forced aligner, particularly when short words are deleted. In such cases, its performance may be inferior to manually conducted waveform-based deletions.
Regarding the MOS scores for similarity, our system based on entire inference demonstrated comparable performance to "Mel_cut" based on partial inference, and outperformed EditSpeech in both insertion and replacement operations. Additionally, it exhibited a similarity score close to that of "Wave_cut", which served as the upper bound indicator for similarity, with the maximum difference being approximately 0.2.
Table III reports the p-values obtained from the statistical analysis of the MOS scores for naturalness and similarity. The results indicate that our model using entire inference outperforms both "Mel_cut" and "Wave_cut" in terms of naturalness, as the p-values are significant. Meanwhile, there was no significant difference in similarity between entire inference and the two partial inference methods. The only exception was in the case of the deletion operation, where there was no significant difference in naturalness between our model using entire inference and "Wave_cut".
Table IV provides evidence of the efficacy of our mask CU-enhanced CVAE module in reconstructing the mel-spectrogram. Note that lower objective results indicate higher similarity to the original audio. As expected, CUC-VAE-SE (Mel_cut), which involves copying the real mel-spectrogram of the roughly 50% unedited area, showed superior reconstruction performance in terms of MCD. Despite not directly copying the unedited area's real mel-spectrogram, our entire-inference-based system demonstrated similarity performance comparable to EditSpeech. Moreover, the CUC-VAE-SE system outperformed EditSpeech in naturalness, FFE, and WER by a clear margin.
As a result, our CUC-VAE-SE system has demonstrated significantly better subjective naturalness and similarity scores as well as multiple objective metrics in speech reconstruction and multiple speech editing tasks than EditSpeech. Furthermore, it has been demonstrated that utilizing entire inference yields significantly better naturalness compared to partial inference, while the similarity of the inferred results to the true mel-spectrogram or waveform is not significantly different. This showcases the reliable reconstruction capability and rich expressiveness of our proposed system.
### _Ablation Study on Different VAEs_
In this section, we investigate the impact of using different VAEs on the performance of our system. We conduct a comparative analysis of the reconstruction performance and MOS scores of the synthesized audio across various systems, including those used in the previous experimental settings. We further compare the performance of our system with that of several baseline models, namely 1) Baseline1, which uses a fine-grained VAE instead of the CUC-VAE; 2) Baseline2, which uses a CVAE without the context embeddings (i.e., \(l=0\)); 3) Baseline3, which uses the CUC-VAE with 2 neighbouring utterances (i.e., \(l=2\)); and 4) Our system, which uses the CUC-VAE with 5 neighbouring utterances (i.e., \(l=5\)).
As presented in Table V and VI, the introduction of semantic restriction of edited text and context embeddings in succession resulted in a steady increase in both the MOS for naturalness of edited and reconstructed waveforms and objective scores for reconstruction performance. The score metrics also indicate that the inclusion of more cross-utterances could enhance the system's reconstruction capability. However, the improvements observed by using more than five neighboring utterances were negligible. Consequently, the subsequent experiments were carried out using \(L=5\). These results suggest that the CUC-embedding and mask CUC-VAE module played a pivotal role in producing more coherent audio.
### _Case Studies_
#### V-F1 The influence of the utterance-specific prior
To better illustrate how the utterance-specific prior influenced the naturalness of the synthesized speech under a given context, a case study was performed by synthesizing an example utterance, "Mary asked the time", with two different neighbouring utterances: "Who asked the time? Mary asked the time." and "Mary asked the time, and was told it was only five." Based on linguistic knowledge, to answer the question in the first setting, an emphasis should be put on the word "Mary", while in the second setting, the focus of the sentence is "asked the time".
Fig. 5 showed the energy and pitch of the two utterances. The energy of the first word "Mary" in Fig. 5(a) changed significantly (energy of "Ma-" was much higher than "-ry"), which reflected an emphasis on the word "Mary", whereas in Fig. 5(b), the energy of "Mary" had no obvious change, i.e., the word was not emphasized.
On the other hand, the fundamental frequency of the words "asked" and "time" stayed at a high level for a longer time in the second audio than the first one, reflecting another type of emphasis on those words which was also coherent with the given context. Therefore, the difference in energy and pitch between the two utterances demonstrated that the speech synthesized by our model is sufficiently contextualized.
#### V-F2 The influence of entire inference and biased training
To demonstrate the effectiveness of entire inference and biased training in avoiding unnatural transitions and reconstructing edited and non-edited regions accurately, another case study was performed by synthesizing an example utterance, "Yes, yours, your own property". The reconstruction results of the different systems are shown in Fig. 4, where the area from 0.62s to 1.15s is the edited region that needs to be reconstructed. The text of the masked region is "yours".
Fig. 4(a) presents the mel-spectrogram and energy contour of the target audio, Fig. 4(b) is the result of EditSpeech which uses partial inference, and the results of our entire inference approach using unbiased and biased training in Fig. 4(c) and 4(d), respectively.
It can be observed that EditSpeech exhibits distinct segmentation lines at the boundary between the edited and non-edited regions in the mel-spectrogram, whereas the entire inference approach results in more natural transitions and produces reconstructed edited regions that are closer to the target audio. Additionally, the energy contour of the edited region in the biased training system is closer to the target audio compared to that of the unbiased training system.
## VI Conclusion
In this work, we introduce the Cross-Utterance Conditional Variational Autoencoder Speech Synthesis framework, designed to enhance the expressiveness and naturalness of synthesized speech. This framework leverages the powerful representational capabilities of pre-trained language models and the re-expression abilities of VAEs. The core component of the CUC-VAE S2 framework is the cross-utterance CVAE, which extracts acoustic, speaker, and textual features from surrounding sentences to generate context-sensitive prosodic features, more accurately emulating human prosody generation. We further propose two practical algorithms, CUC-VAE TTS for text-to-speech and CUC-VAE SE for speech editing, to address real-world speech synthesis challenges. The efficacy of these
Fig. 4: The mel-spectrograms of the target speech and speech edited by EditSpeech, our system with unbiased training (loss ratio = 1:1), and our system with biased training (loss ratio = 1:1.5). The region marked with time (0.62s \(\sim\) 1.15s) is the edited region.
Fig. 5: Comparisons between the energy and pitch contours of the same text "Mary asked the time" with different neighbouring utterances, generated by the CUC-VAE TTS module.
proposed systems was thoroughly evaluated through a series of comprehensive experiments conducted on the LibriTTS English audiobook dataset. The results of these experiments demonstrated a significant improvement in the prosody diversity, naturalness, and intelligibility of the synthesized speech, thereby validating the effectiveness of our proposed systems.
|
2309.05248 | Enhancing Speaker Diarization with Large Language Models: A Contextual
Beam Search Approach | Large language models (LLMs) have shown great promise for capturing
contextual information in natural language processing tasks. We propose a novel
approach to speaker diarization that incorporates the prowess of LLMs to
exploit contextual cues in human dialogues. Our method builds upon an
acoustic-based speaker diarization system by adding lexical information from an
LLM in the inference stage. We model the multi-modal decoding process
probabilistically and perform joint acoustic and lexical beam search to
incorporate cues from both modalities: audio and text. Our experiments
demonstrate that infusing lexical knowledge from the LLM into an acoustics-only
diarization system improves overall speaker-attributed word error rate
(SA-WER). The experimental results show that LLMs can provide complementary
information to acoustic models for the speaker diarization task via proposed
beam search decoding approach showing up to 39.8% relative delta-SA-WER
improvement from the baseline system. Thus, we substantiate that the proposed
technique is able to exploit contextual information that is inaccessible to
acoustics-only systems which is represented by speaker embeddings. In addition,
these findings point to the potential of using LLMs to improve speaker
diarization and other speech processing tasks by capturing semantic and
contextual cues. | Tae Jin Park, Kunal Dhawan, Nithin Koluguri, Jagadeesh Balam | 2023-09-11T05:47:56Z | http://arxiv.org/abs/2309.05248v3 | # Enhancing Speaker Diarization with Large Language Models:
###### Abstract
Large language models (LLMs) have shown great promise for capturing contextual information in natural language processing tasks. We propose a novel approach to speaker diarization that incorporates the prowess of LLMs to exploit contextual cues in human dialogues. Our method builds upon an acoustic-based speaker diarization system by adding lexical information from an LLM in the inference stage. We model the multi-modal decoding process probabilistically and perform joint acoustic and lexical beam search to incorporate cues from both modalities: audio and text. Our experiments demonstrate that infusing lexical knowledge from the LLM into an acoustics-only diarization system improves overall speaker-attributed word error rate (SA-WER). The experimental results show that LLMs can provide complementary information to acoustic models for the speaker diarization task via proposed beam search decoding approach showing up to 39.8% relative delta-SA-WER improvement from the baseline system. Thus, we substantiate that the proposed technique is able to exploit contextual information that is inaccessible to acoustics-only systems which is represented by speaker embeddings. In addition, these findings point to the potential of using LLMs to improve speaker diarization and other speech processing tasks by capturing semantic and contextual cues.
Tae Jin Park, Kunal Dhawan, Nithin Koluguri, Jagadeesh Balam (NVIDIA, Santa Clara, USA). Index Terms: Speaker Diarization, Multi-speaker Speech Recognition, Large Language Model, Beam Search Decoding
## 1 Introduction
Multi-speaker speech recognition is often approached either by applying a speaker diarization system to the input audio and feeding the speaker-homogeneous segments [1] to automatic speech recognition (ASR) systems [2, 3], or by performing speaker diarization and speech recognition simultaneously [4, 5]. As can be observed from the most popular previous studies [1], lexical information is only infused to improve single-speaker ASR through beam search decoding [6, 7], which reduces the word error rate (WER). In general, the probability of the next word or token is calculated by an n-gram or neural language model (LM) trained on an ample amount of data, and this probability is added to the probability of the token from the acoustic model, thereby integrating the lexical cue from the trained language model with the acoustic model. This type of beam search technique can be applied to ASR models trained using connectionist temporal classification (CTC) loss [8] or a recurrent neural network transducer (RNN-T) [9].
Despite the efficiency of the end-to-end ASR models which utilize an RNN-T architecture that integrates an internal LM, there are still significant benefits to be gained by these ASR models from incorporating an external language model. This improvement is mainly attributable to the disparities in the scale of training data available for acoustic and lexical modalities. Specifically, datasets for training end-to-end ASR models are limited to those containing both audio and its corresponding transcript while language models can leverage text-only datasets, which are considerably larger and more diverse. In a similar vein, when it comes to speaker diarization, the volume of text data available is orders of magnitude greater than the volume of speaker-annotated audio data, especially when measured in terms of word count. Consequently, there is a potential for improvement by integrating the language models to enhance the performance of the speaker diarization task.
The aim of this paper is to introduce an application of language models to the realm of speaker diarization, demonstrating the benefit of a language model trained on large amounts of text-only data. Fig. 1 provides a comparative overview of LM applications in the ASR realm and the speaker diarization realm. We refer to our proposed technique as _contextual beam search_, as it seeks the most probable word-speaker mapping \(S\) by considering context from both modalities.
The utilization of lexical cues to speaker diarization, speaker turn detection and segmentation has been investigated for a long time yet remains less popular than acoustic-only speaker diarization research. Some of the earliest studies on this topic include the systems presented in [10, 11], which leveraged linguistic patterns to identify speakers during the diarization process. Numerous studies have improved speaker segmentation or clustering accuracy by integrating ASR output to leverage lexical cues [12, 13, 14]. Additionally, by merging speaker turn probabilities based on audio and text during the clustering phase [15], lexical cues are further infused into speaker diarization results. Conversely, ASR and speaker diarization results have been jointly optimized to harness the lexical cues from ASR word outputs [4, 5]. More recently, the study presented in [16] introduced semantic information through neural embeddings generated by a spoken language processing (SLP) unit. Subsequently, a multi-modal (audio-text) speaker change detector was proposed [17], along with the introduction of a speaker error correction (SEC) system [18] based on a pre-trained language model.
Our proposed method has the following distinctions from previous studies. Firstly, our approach leverages a general-purpose LLM that is trainable on text-only datasets. This effectively addresses the data-sparsity challenge commonly associated with speaker diarization. In contrast, the systems proposed in the aforementioned studies [4, 13, 14, 15, 16, 17, 18] employ neural layers, such as RNN-T or transformer architectures, to produce the final speaker logits and thus necessitate training on paired audio-text datasets, which our method circumvents. Secondly, our approach is not constrained by
Figure 1: The concepts of beam search decoding in the context of ASR and Speaker Diarization (SD).
the number of speakers in a speaker diarization module, as the decoding process does not rely on logits from a neural network layer. This is another advantage that the systems in [4, 14] cannot offer, as they are limited to a fixed number of speakers. Lastly, our approach functions similarly to how general-purpose LMs work for end-to-end ASR models, in that an arbitrary LLM can be plugged in to improve the performance of the acoustic-only speaker diarization model. This offers significant advantages in various practical scenarios, especially when there is a need to modify only the ASR model or the LM. For instance, when deploying the model for a different language, we can simply substitute the ASR model and LLM while using the same acoustic-only diarization model.
## 2 Probabilistic Modeling for Beam Search
### Probabilistic Formulations of ASR and Speaker Diarization
In ASR frameworks, the task of converting speech to text (STT) revolves around building a model that translates a sequence of acoustic observations into a corresponding sequence of words. Formally, if we let \(S\) denote the speaker identity, \(W\) the word token, and \(A\) the acoustic observation, the STT task can be mathematically represented as estimating the most likely word sequence \(W\) given an acoustic observation \(A\). This probability can be denoted as \(P(W|A)\). Using Bayes' theorem, this can be represented as:
\[W^{\star} =\operatorname*{argmax}_{W}\big{\{}P(W|A)\big{\}} \tag{1}\] \[=\operatorname*{argmax}_{W}\Bigg{\{}\frac{P(A|W)P(W)}{P(A)}\Bigg{\}}\] (2) \[=\operatorname*{argmax}_{W}\big{\{}P(A|W)P(W)\big{\}}, \tag{3}\]
where \(W\) is a word or a token and \(A\) is an acoustic observation. Expanding this idea to the realm of speaker diarization, our goal is to estimate the speaker label \(S\) given both the acoustic observation \(E\) and word \(W\). Formally, we can express this as:
\[S^{\star} =\operatorname*{argmax}_{S}\{P(S|E,W)\} \tag{4}\] \[=\operatorname*{argmax}_{S}\Bigg{\{}\frac{P(E,W|S)P(S)}{P(E,W)}\Bigg{\}}\] (5) \[=\operatorname*{argmax}_{S}\{P(E,W|S)P(S)\}. \tag{6}\]
For the sake of simplifying our computations and model, we make an assumption of conditional independence between the acoustic observation \(E\) and word \(W\), given the speaker identity \(S\). This assumption is mathematically represented as:
\[P(E,W|S)\stackrel{{\text{C.I.}}}{{=}}P(E|S)P(W|S). \tag{7}\]
From this conditional independence assumption, we can restructure Eq. (4) to remove the unrelated term \(P(W)\), leading to the essential expressions that require computation. The derivation is presented as:
\[S^{\star} \stackrel{{\text{C.I.}}}{{=}}\operatorname*{argmax} _{S}\big{\{}P(E|S)P(W|S)P(S)\big{\}} \tag{8}\] \[=\operatorname*{argmax}_{S}\big{\{}P(E|S)P(S|W)P(W)\big{\}}. \tag{9}\]
In alignment with the ASR framework illustrated in Fig. 1, where \(P(A|W)\) is represented as acoustic model, we utilize the acoustic-only diarization model to represent \(P(E|S)\). Additionally, we derive \(P(S|W)\) using pre-trained general-purpose language models, such as n-gram language models and LLMs. It is crucial to differentiate between \(P(W)\) in Eq. (6) and \(P(W)\) in Eq. (3) because the value of \(P(W)\) in Eq. (6) is contingent upon the condition that speakers are assigned to each word in the preceding word sequence. Let \(w_{i}\) denote \(i\)-th word: each \(i\) index corresponds to a specific word and its assigned speaker identity \(k\), as illustrated in Fig. 2. The given relationship can be expressed as:
\[\mathbf{W}_{\mathbf{S}}^{C-1}=\{(w_{1},\mathbf{q}_{1}),(w_{2},\mathbf{q}_{2}),\cdots,(w_{C-1},\mathbf{q}_{C-1})\}, \tag{10}\]
where \(C\) denotes the word sequence length (_i.e._, context length) and \(q_{i}\) is the \(i\)-th speaker probability vector generated by the speaker diarization model. The matrix \(\mathbf{S}\) encompasses a sequence of speaker probability vectors \(\mathbf{q}\), represented as \(\mathbf{S}=[\mathbf{q}_{1},\mathbf{q}_{2},\ldots,\mathbf{q}_{C-1}]\). With the notation \(\mathbf{W}_{\mathbf{S}}^{C-1}\), the probability of a word given the past transcription \(P(W)\) is expressed as:
\[P(W)=P(w_{i}|\mathbf{W}_{\mathbf{S}}^{C-1}). \tag{11}\]
### Acoustic Inference: Speaker Diarization
We employ an improved version of the Multi-scale Diarization Decoder (MSDD) model [19], which is introduced in [20]. Given a maximum speaker limit \(N_{S}\), the diarization output is an \(N_{S}\)-dimensional vector of floating point values, each representing the probability associated with a frame of 0.05 seconds in length. We utilize time-stamps derived from an ASR model to sample these diarization logit values. The corresponding speaker probability for the \(k\)-th speaker can be described as
\[q_{k}=P(E|S)|_{S=k}=\frac{\sum_{t=1}^{T}p(S=k|E,t)}{\sum_{k=1}^{N_{S}}\sum_{t=1}^{T}p(S=k|E,t)}, \tag{12}\]
where \(P(S|E,t)\) denotes the sigmoid logit value at time \(t\) and \(T\) denotes the number of diarization logit frames within the word. Consequently, we obtain an \(N_{S}\)-dimensional floating point probability vector \(\mathbf{q}\) that sums up to one. As a result, we convert sigmoid values to probability values as we treat \(P(E|S)\) as a probability measure. Note that any type of diarization or speaker recognition system can be employed to determine the speaker probability \(P(E|S)\) for the sequence of \(\mathbf{q}_{i}\) values, where \(i\) denotes word index. After the diarization and ASR processes, or any other transcription and speaker recognition methods, we obtain a word sequence, \(\mathbf{w}\) and corresponding speaker probability values per word denoted as \(\mathbf{q}_{i}=[q_{1},q_{2},\ldots,q_{k}]_{i}\), as illustrated in Fig. 2.
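A minimal sketch of the word-level speaker probability of Eq. (12) is shown below: frame-level sigmoid outputs of the diarization model are pooled over each word's time span and renormalized over speakers. The array names, shapes, and frame-duration handling are illustrative assumptions.

```
import numpy as np

def word_speaker_probs(diar_probs, word_times, frame_dur=0.05):
    # diar_probs: (num_frames, N_S) frame-level sigmoid outputs p(S=k | E, t).
    # word_times: list of (start_sec, end_sec) from the ASR word time-stamps.
    # Returns q: (num_words, N_S) per-word speaker probabilities, as in Eq. (12).
    q = []
    for start, end in word_times:
        t0 = int(start / frame_dur)
        t1 = max(int(end / frame_dur), t0 + 1)      # at least one frame per word
        frame_sum = diar_probs[t0:t1].sum(axis=0)   # sum_t p(S=k | E, t)
        q.append(frame_sum / frame_sum.sum())       # normalize over speakers
    return np.asarray(q)

# Toy example: N_S = 2 speakers, 0.05 s frames.
probs = np.random.rand(200, 2)
words = [(0.10, 0.45), (0.50, 0.80)]
print(word_speaker_probs(probs, words))
```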
### Scoring for Beam Search Decoding
Our proposed beam search decoding approach is based on Eq. (9). Let \(\beta\) represent the mixing coefficient between acoustics-only speaker diarization and the language model, and let \(\alpha\) represent
Figure 2: Dataflow diagram of the proposed system.
the scaling parameter for \(P(W)\). The beam search score function can then be formulated as:
\[P_{BSD}(S)=\log\bigl{(}P(E|S)\bigr{)}+\beta\log\bigl{(}P(S|W)P(W)^{\alpha}\bigr{)}. \tag{13}\]
We employ a modified version of the beam search decoder from [21], which is capable of applying Eq. (13) as the score for beam search. Fig. 3 provides a visual representation of the beam search decoding process. The calculations of \(P(E|S)\) and \(P(S|W)\) will be covered in Section 3. The role of the term \(P(W)\) is crucial. Since the sum of \(P(S|W)\) over all speakers \(S\) equals 1, if all speakers are equally probable in a lexical sense, \(P(S|W)\) becomes \(1/N_{S}\). For instance, in cases where multiple speakers utter the same filler words, \(P(S|W)\) is likely to approach \(1/N_{S}\), as there is no significant lexical context to distinguish one speaker's identity from another. The term \(P(W)\) addresses these situations by assigning a relatively low probability, compensating for the uncertain lexical cue. Hence, the beam search predominantly relies on \(P(E|S)\) to select the most probable speaker for the corresponding word.
While \(P(W)\) can serve as a confidence parameter for the lexical context, the degree of compensation can be controlled by the parameter \(\alpha\). From a mathematical perspective, if \(\alpha\) is close to 0, we place less importance on the confidence of the language model, and as \(\alpha\) increases, we further suppress the lexical context, proportionally to the word probability.
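As a concrete illustration of Eq. (13) and of the decoding process sketched in Fig. 3, the following is a simplified Python sketch of the contextual beam search over word-speaker assignments. The paper's decoder builds on [21]; here, `lm_speaker_prob` and `lm_word_prob` are placeholder callables standing in for the n-gram/LLM scorers of Section 3, and the beam width and mixing parameters are illustrative.

```
import numpy as np

def bsd_score(log_p_e_given_s, p_s_given_w, p_w, alpha, beta):
    # Eq. (13): log P(E|S) + beta * log( P(S|W) * P(W)^alpha )
    return log_p_e_given_s + beta * (np.log(p_s_given_w) + alpha * np.log(p_w))

def beam_search(words, acoustic_q, lm_speaker_prob, lm_word_prob,
                num_speakers=2, alpha=0.5, beta=1.0, beam_width=4):
    # acoustic_q[i, k] = P(E|S=k) for word i, from the diarization model.
    # lm_speaker_prob(history, word, k) and lm_word_prob(history, word) are
    # placeholder scorers (n-gram or LLM); `history` is a list of (word, speaker).
    beams = [([], 0.0)]                       # (speaker assignment so far, score)
    for i, word in enumerate(words):
        candidates = []
        for assignment, score in beams:
            history = list(zip(words[:i], assignment))
            p_w = lm_word_prob(history, word)
            for k in range(num_speakers):
                s = score + bsd_score(np.log(acoustic_q[i, k]),
                                      lm_speaker_prob(history, word, k),
                                      p_w, alpha, beta)
                candidates.append((assignment + [k], s))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0]                            # best assignment and its score

# Toy usage with uniform lexical scorers, so the acoustic term dominates.
words = ["how", "are", "you"]
q = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7]])
print(beam_search(words, q, lambda h, w, k: 0.5, lambda h, w: 0.5))
```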
## 3 Lexical Inference
We employ two types of language model for comparison: the KenLM [22] based n-gram language model and the GPT-based LLM from the Megatron-LM framework in the NeMo toolkit [23, 24].
### Baseline n-gram language model
#### 3.1.1 Speaker probability from n-gram
We use a 4-gram language model [25] that is publicly available and has approximately 8M probability entries. The following example shows the speaker-wise transcripts and the _next word_ that should be appended to one speaker's last word.
**[Speaker0]** <s>how are you doing these days</s> <s>well tell me more</s>
**[Speaker1]** <s>things are going very well</s> <s>there is a project that i'm
**[Next Word]** working
For the n-gram language model, we use the start-of-sentence (SOS) token <s> and the end-of-sentence (EOS) token </s>. These tokens can significantly influence the n-gram score for a given word sequence. The following equation describes the probability of observing \(w_{\text{next}}\) given the \(C-1\) words previously assigned to the particular speaker \(S\):
\[P_{S}\left(w_{i}=w_{\text{next}}\right) =P(w_{S,i}|\mathbf{W}_{S}^{C-1}) \tag{14}\] \[=P(S,W), \tag{15}\]
where \(\mathbf{W}_{S}^{C-1}\) is the word sequence \(w_{1},\dots,w_{C-1}\) for the given speaker \(S\), and \(C-1\) represents the length of the context window for that speaker. Therefore, the probability for the \(k\)-th speaker among \(N_{S}\) speakers is given by the following equation:
\[P(S|W)|_{S=k}=\frac{P(S,W)}{P(W)}\bigg{|}_{S=k}=\frac{P(w_{k,i}|\mathbf{W}_{k}^{C-1})}{\sum_{k=1}^{N_{S}}P(w_{k,i}|\mathbf{W}_{k}^{C-1})}. \tag{16}\]
#### 3.1.2 Word Probability
As we discussed in the previous section, we calculate the \(P(W)\) term separately. Note that this probability differs from \(P(W)\) in Eq. (16) since we use the entire word sequence, which includes all speakers, to calculate the \(P(W)\) value using the following equation:
\[P(W) =\sum_{k=1}^{N_{S}}P(w_{k,i}|\mathbf{W}_{k}^{L-1})|_{w_{k,i}=w_{ \text{next}}} \tag{17}\] \[=P(w_{\text{next}}|\mathbf{W}_{S}^{L-1}), \tag{18}\]
where \(L\) is the length of the context that includes all \(N_{S}\) speakers.
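A minimal sketch of these n-gram scorers (Eqs. (16)-(18)) using the KenLM Python bindings [22] is shown below; the ARPA model path and the way the speaker-wise contexts are assembled with <s>/</s> markers are illustrative assumptions rather than the exact implementation.

```
import kenlm
import numpy as np

lm = kenlm.Model("4gram.arpa")   # placeholder path to the 4-gram LM [25]

def next_word_logprob(context, word):
    # log10 P(word | context) from the n-gram LM, without an implicit </s>.
    return (lm.score(f"{context} {word}".strip(), bos=True, eos=False)
            - lm.score(context, bos=True, eos=False))

def speaker_probs_ngram(speaker_contexts, word):
    # Eq. (16): normalize P(w_next | speaker-wise context) over speakers.
    p = np.array([10.0 ** next_word_logprob(ctx, word) for ctx in speaker_contexts])
    return p / p.sum()

def word_prob_ngram(speaker_contexts, word):
    # Eqs. (17)-(18): accumulate the per-speaker next-word probabilities.
    return sum(10.0 ** next_word_logprob(ctx, word) for ctx in speaker_contexts)

contexts = ["how are you doing these days </s> <s> well tell me more",
            "things are going very well </s> <s> there is a project that i'm"]
print(speaker_probs_ngram(contexts, "working"))
print(word_prob_ngram(contexts, "working"))
```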
### Speaker Diarization Prompt for LLM
While the baseline n-gram language model serves the purpose of estimating the most probable speaker, a neural language model trained on large amounts of data can parse more of the context in the given text-based conversation. This allows it to estimate a more accurate speaker probability based on lexical cues.
#### 3.2.1 Speaker probability from LLM
In the LLM-based calculation of \(P(S|W)\), we rely on a prompt that asks which speaker is expected to provide the subsequent word. An illustrative example of such a prompt for the subsequent word "working" is provided below.
[Speaker0]: how are you doing these days
[Speaker1]: things are going very well
[Speaker0]: well tell me more
[Speaker1]: there is a project that i'm
[end]
Question: The next word is (working). Who spoke (working)? Answer: [Speaker1]
Using the provided prompt template, we simulate the probability \(P(S,W)\) by sampling the probability values of the token indicating the speaker index, i.e., the token that follows the "[Speaker" token in the answer; the prompt includes all of the text leading up to that token. From this, we can derive \(P(S|W)\) using the subsequent equation:
\[P(S|W)|_{S=k}=\frac{p_{k}}{\sum_{k=1}^{N_{S}}p_{k}}, \tag{19}\]
where \(p_{k}\) denotes the simulated probability calculated from the logit values in the LLM output for speaker index \(k\). It is worth noting that the n-gram approach inherently cannot account for interactions between two speakers when determining the speaker probability. In contrast, the LLM approach allows for the consideration of interplay and combined information between speakers when estimating the speaker probability \(P(S|W)\).
#### 3.2.2 Word Probability
In the case of calculating \(P(W)\), we use a specific input to sample the probability of the last word. We employ the same dialogue prompt as used for the speaker probability. However, we insert the term _next word_ and remove any text following the [end] token. With this prompt, the word probability can be expressed as:
\[P(W)|_{W=w_{\text{next}}}=\frac{p_{d}}{\sum_{d=1}^{D}p_{d}}, \tag{20}\]
where \(d\) and \(D\) represent the token index and the total number of tokens, respectively.
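The two LLM-based quantities above can be read off the next-token probabilities of a causal language model. The paper uses a 2B-parameter Megatron-GPT model served through NeMo; the sketch below instead uses a generic Hugging Face causal LM ("gpt2" is only a runnable stand-in), and the prompt handling and tokenization details are assumptions for illustration.

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def next_token_probs(prompt):
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]        # logits for the next token
    return torch.softmax(logits, dim=-1)

def speaker_probs_llm(dialogue, word, num_speakers=2):
    # Eq. (19): read the probabilities of the speaker-index tokens that follow
    # "[Speaker" in the answer, then renormalize over speakers.
    prompt = (dialogue + "\n[end]\nQuestion: The next word is (" + word +
              "). Who spoke (" + word + ")? Answer: [Speaker")
    probs = next_token_probs(prompt)
    p = torch.stack([probs[tok.encode(str(k))[0]] for k in range(num_speakers)])
    return (p / p.sum()).tolist()

def word_prob_llm(dialogue, word):
    # Eq. (20): probability mass of the first token of the expected next word.
    probs = next_token_probs(dialogue)
    return probs[tok.encode(" " + word)[0]].item()

dialogue = ("[Speaker0]: how are you doing these days\n"
            "[Speaker1]: things are going very well\n"
            "[Speaker0]: well tell me more\n"
            "[Speaker1]: there is a project that i'm")
print(speaker_probs_llm(dialogue, "working"), word_prob_llm(dialogue, "working"))
```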
Figure 3: Illustration of beam search for a two-speaker dialogue.
## 4 Experimental Results
### Experiment Settings
#### 4.1.1 Pre-trained Models
**ASR:** We employ a Conformer-CTC model [26] that is implemented using the NeMo Toolkit [24]. The ASR model has approximately 122M parameters and a 1024-token vocabulary.
**Speaker Diarization:** We employ an improved version [20] of the MSDD model [19], which has 32M parameters. It builds upon Titanet [27], Transformer Encoder [28], and Clustering [29].
\(\bullet\)**LLM**\({}^{1}\)**:** We use an LLM based on [23], which is a scaled, 2B-parameter version of the GPT [31] model. It is trained on a 1.1T-token subset of the dataset curated by the NVIDIA Data Collector (NDC) [30], containing 70% English data, 15% of the Stack v1.1 dataset [32], and 15% multi-lingual data from Common Crawl [33].
Footnote 1: Details regarding model training and datasets will be provided in [30]
#### 4.1.2 Evaluation Metric
**WER**: The WER is determined using a hypothesis script generated from a single-channel mixed audio clip and a transcript that contains words in onset order. Note that this WER should be differentiated from the channel-specific or speaker-specific WER in other studies. We use text normalization tool from [34] for evaluation.
**SA-WER**: The Speaker-Attributed WER (SA-WER) [3] is a WER metric grounded in speaker mapping, as established by speaker diarization. SA-WER measures the WER by comparing hypothesis and reference transcripts for each specific speaker mapping.
**cpWER**: The concatenated minimum-permutation word error rate (cpWER) [35] is a metric designed to capture both ASR and diarization accuracy. cpWER is calculated by taking the minimum WER from concatenated transcripts of multiple speakers across all potential permutations.
\(\bullet\)**\(\Delta\)cp** and **\(\Delta\)SA**: delta-cpWER (\(\Delta\)cp) and delta-SA-WER (\(\Delta\)SA) follow the relationships below.
\[\Delta\text{cp} =\text{cpWER}-\text{WER}\] \[\Delta\text{SA} =\text{SA-WER}-\text{WER}\]
Our evaluation is based on the assumption that \(\Delta\)cp and \(\Delta\)SA reflect the diarization error in cpWER and SA-WER, respectively.
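As a small illustration of how these quantities can be computed from per-speaker transcripts, the sketch below uses the jiwer package; the toy transcripts are placeholders, and the mixed-audio WER term here is only a stand-in for the onset-ordered WER described above.

```
import itertools
import jiwer

def cp_wer(ref_by_speaker, hyp_by_speaker):
    # Concatenated minimum-permutation WER: concatenate the per-speaker
    # transcripts and keep the lowest WER over all speaker mappings.
    refs = list(ref_by_speaker.values())
    hyps = list(hyp_by_speaker.values())
    best = float("inf")
    for perm in itertools.permutations(range(len(hyps))):
        best = min(best, jiwer.wer(" ".join(refs),
                                   " ".join(hyps[i] for i in perm)))
    return best

# Toy transcripts (placeholders); here the hypothesis speaker labels are swapped.
ref = {"spk0": "how are you doing", "spk1": "things are going very well"}
hyp = {"A": "things are going very well", "B": "how are you doing"}
wer_mixed = jiwer.wer(" ".join(ref.values()), " ".join(hyp.values()))  # stand-in WER
print("cpWER:", cp_wer(ref, hyp), "delta-cp:", cp_wer(ref, hyp) - wer_mixed)
```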
### Datasets and Systems Overview
Table 1 shows the performance for each setup and dataset. For the AMI-MH (Mixed Headset) [36] dataset, we use the word-only version [37] of the annotations. The CallHome American English Speech corpus (CHAES, LDC97S42) is composed solely of English telephone speech data. CH-109 is a subset that includes two speakers per session, whereas CH-others denotes the remaining sessions within CHAES. We optimize \(\alpha\), \(\beta\) in Eq. (13), the context window length \(C\) in Eq. (10), and the beam width for beam search decoding on CH-others and AMI-MH-dev, and then use these parameters for CH-109 and AMI-MH-test, respectively. For this parameter optimization, we employ Optuna [38]. The Diarization Error Rate (DER) is computed using a collar of 0.25 seconds while including overlaps.
In terms of systems, _TS-match_ is a system that relies on time-stamps (TS) to match speaker diarization time-stamps with the decoded word time-stamps from the ASR system. _All n-gram_ and _All LLM_ are systems where the speaker probability and word probability values are both calculated from the n-gram LM and the LLM, respectively. Conversely, _(LLM, n-gram)_ and _(n-gram, LLM)_ employ one model for the speaker probability and the other for the word probability, in accordance with the provided notation. All experiments involving the LLM were conducted using an NVIDIA TESLA V100 GPU.
### Analysis of Experimental Results
In Table 1, the performance of TS-matching is compared with four combinatory setups that integrate the n-gram LM and the LLM. Regarding \(\Delta\)cp and \(\Delta\)SA, it is crucial to highlight that speaker confusion resulting from speaker diarization leads to a double error count, because it manifests as an insertion for one speaker and a deletion for another. The proposed method improves the baseline system's delta-SA-WER by up to 39.8%. We can deduce a couple of findings. Firstly, the trend shows that applying the same type of language model for both terms leads to a lower error rate; this is likely because \(P(W)\) and \(P(S|W)\) are then less prone to discrepancies. Secondly, the LLM appears more effective in estimating \(P(S|W)\), given that the average performance when applying the LLM for \(P(S|W)\) is superior to that of the n-gram LM. We speculate that this performance difference stems from the fact that the n-gram model considers only a single speaker, while the LLM processes the entire transcription, taking into account all speakers within the context window. This contextual understanding is a distinct advantage the LLM holds over the n-gram LM, providing a more nuanced estimation of the speaker. However, using the LLM demands roughly 15 times more computational time during inference compared to the n-gram LM. This discrepancy underscores a potential trade-off, suggesting a hybrid _(LLM, n-gram)_ configuration.
## 5 Conclusions
In this paper, we introduce a beam search decoding-based approach for applying a language model to speaker diarization. The proposed method offers a key advantage: it uses individually optimized models for ASR, diarization, and LLM. By training each model independently, we can leverage large-scale data sources specific to each domain. Moreover, even without fine-tuning the ASR, diarization, and LLM models, the proposed method achieves significant improvement over conventional approaches. This methodology can be seamlessly adapted to accommodate multilingual contexts by integrating multilingual ASR and LLM models--areas that have seen significant advancements recently. Looking forward, our future research will focus on several areas: Firstly, we intend to integrate beam search decoding for ASR and diarization by applying a single LLM which can achieve a more efficient system with enhanced accuracy. Secondly, we plan to integrate the ASR and diarization decoders to obtain more accurate timestamps alongside speaker logits, aiming for a streamlined multi-speaker ASR. Finally, we will explore ways to improve the model by introducing more sophisticated context, either by fine-tuning or prompt-tuning the LLM using domain-specific data.
| Dataset | TS-match \(\Delta\)SA | TS-match \(\Delta\)cp | All n-gram \(\Delta\)SA | All n-gram \(\Delta\)cp | (LLM, n-gram) \(\Delta\)SA | (LLM, n-gram) \(\Delta\)cp | (n-gram, LLM) \(\Delta\)SA | (n-gram, LLM) \(\Delta\)cp | All LLM \(\Delta\)SA | All LLM \(\Delta\)cp | WER | Miss | FA | CER | DER |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CH-others | 8.45 | 8.28 | 8.07 | 7.86 | 8.36 | 8.18 | 8.61 | 8.41 | **7.76** | **7.58** | 25.15 | 5.97 | 9.79 | 6.04 | 21.80 |
| CH-109 | 5.05 | 4.97 | 3.70 | 3.64 | 3.70 | 3.63 | 4.42 | 4.30 | **3.24** | **3.16** | 23.07 | 5.55 | 4.39 | 1.44 | 11.38 |
| AMI-MH-dev | 8.22 | 8.20 | 3.47 | 3.45 | 3.49 | 3.47 | 3.52 | 3.50 | **3.45** | **3.42** | 23.68 | 15.89 | 4.02 | 1.02 | 15.89 |
| AMI-MH-test | 6.38 | 6.41 | 3.92 | 3.88 | **3.74** | **3.72** | 3.98 | 3.94 | 3.84 | 3.80 | 23.25 | 12.96 | 4.98 | 1.36 | 19.03 |

The setups labeled (LM for \(P(S|W)\), LM for \(P(W)\)) use beam search decoding (BSD), while TS-match relies only on time-stamp matching; the last five columns (WER, Miss, FA, CER, DER) characterize the underlying ASR and acoustic-only speaker diarization systems.
Table 1: Performance evaluation of multi-speaker ASR based on time-stamp (TS) matching and beam search decoding (BSD). |
2309.06741 | A Multi-task Learning Framework for Drone State Identification and
Trajectory Prediction | The rise of unmanned aerial vehicle (UAV) operations, as well as the
vulnerability of the UAVs' sensors, has led to the need for proper monitoring
systems for detecting any abnormal behavior of the UAV. This work addresses
this problem by proposing an innovative multi-task learning framework (MLF-ST)
for UAV state identification and trajectory prediction, that aims to optimize
the performance of both tasks simultaneously. A deep neural network with shared
layers to extract features from the input data is employed, utilizing drone
sensor measurements and historical trajectory information. Moreover, a novel
loss function is proposed that combines the two objectives, encouraging the
network to jointly learn the features that are most useful for both tasks. The
proposed MLF-ST framework is evaluated on a large dataset of UAV flights,
illustrating that it is able to outperform various state-of-the-art baseline
techniques in terms of both state identification and trajectory prediction. The
evaluation of the proposed framework, using real-world data, demonstrates that
it can enable applications such as UAV-based surveillance and monitoring, while
also improving the safety and efficiency of UAV operations. | Antreas Palamas, Nicolas Souli, Tania Panayiotou, Panayiotis Kolios, Georgios Ellinas | 2023-09-13T06:21:23Z | http://arxiv.org/abs/2309.06741v1 | # A Multi-task Learning Framework for Drone State Identification and Trajectory Prediction
###### Abstract
The rise of unmanned aerial vehicle (UAV) operations, as well as the vulnerability of the UAVs' sensors, has led to the need for proper monitoring systems for detecting any abnormal behavior of the UAV. This work addresses this problem by proposing an innovative multi-task learning framework (MLF-ST) for UAV state identification and trajectory prediction, that aims to optimize the performance of both tasks simultaneously. A deep neural network with shared layers to extract features from the input data is employed, utilizing drone sensor measurements and historical trajectory information. Moreover, a novel loss function is proposed that combines the two objectives, encouraging the network to jointly learn the features that are most useful for both tasks. The proposed MLF-ST framework is evaluated on a large dataset of UAV flights, illustrating that it is able to outperform various state-of-the-art baseline techniques in terms of both state identification and trajectory prediction. The evaluation of the proposed framework, using real-world data, demonstrates that it can enable applications such as UAV-based surveillance and monitoring, while also improving the safety and efficiency of UAV operations.
Machine learning; multi-task learning; trajectory prediction; state identification; UAV applications.
## I Introduction
Unmanned aerial vehicles (UAVs) have gained immense popularity in recent years, attracting attention from both the research community and the industrial sector. This has resulted in the development of numerous hardware and software modules specifically designed to meet the diverse needs of the market. UAVs' customizable features make them suitable for a wide range of applications, including remote surveillance, search and rescue operations, and autonomous deliveries. However, the numerous applications of UAVs (drones) necessitate strict safety and security measures during missions. As with any other machines, drones' built-in hardware and software components are susceptible to faults and malicious cyber-attacks that may disrupt their normal operation. Thus, proper monitoring mechanisms must be in place to detect any abnormal behavior.
One of the key tasks involved in the safety assessment and flight monitoring of autonomous UAVs is to ensure that they follow a predetermined trajectory defined in the flight/mission plan within the expected time [1]. Real-time monitoring utilizes the historical trajectory as a-priori knowledge to estimate the drone's current state, as well as to predict the remaining trajectory. The historical trajectory is updated by GPS measurements once the UAV operation is activated (i.e., the drone is in flight). The monitoring system is then responsible for computing the safety index based on the distance from obstacles, which relies heavily on the accuracy of predicting the remaining trajectory. Hence, accurately and quickly obtaining the future flight trajectory of autonomous UAVs is of great significance in reducing accidents and improving UAV operation safety levels.
Clearly, both tasks are important for different types of drone applications. For example, accurate and reliable trajectory prediction is vital for applications such as search and rescue, surveillance, and delivery. Additionally, current state identification provides the drone operator with the ability to identify the drone's state at any moment, especially in an environment where there is no visibility. Moreover, this task is useful in the event that the drone's operator perceives that the flight is being performed normally, while the drone is actually performing different movements/actions (e.g., due to a malicious attack).
At present, there exists a plethora of systems that attempt to identify the current state of a drone and predict its trajectory. The majority of these systems employ classification algorithms to recognize the drone's current state, while deep neural networks are utilized for trajectory prediction. For instance, certain studies have concentrated on solving multilabel classification problems to accurately identify the drone's current state [2]. Alternatively, other research efforts have used deep learning models, that primarily rely on long short-term memory (LSTM) networks constructed by recurrent neural networks (RNNs), to predict the drone's trajectory for a specific number of future time-steps [3].
Accordingly, this work proposes a novel multi-task learning framework (MLF-ST), where the two aforementioned tasks can be performed by training and executing a single model. The proposed MLF-ST framework is trained with a common input and produces the two-fold output of state identification and trajectory prediction. Specifically, the contributions of this work are as follows:
* A multi-task learning model is proposed that utilizes LSTM to integrate the identification of a drone's current state and the prediction of its future trajectory for a given number of time-steps. The proposed model achieves this by leveraging annotated time series data, collected from various sensors over multiple drone flights in diverse outdoor environments and conditions. This approach offers a robust solution to the complex challenge of simultaneously performing multiple tasks during the operation of the drone.
* A sliding window technique is implemented to segment the input data into smaller subsets, providing the model with more contextual information about the past. The approach aims to improve the accuracy of the model's predictions for the future, resulting in a robust and effective solution for data processing.
In the rest of the paper, Section II outlines the state of the art, while the methodology and the algorithm implementation of this work are described in Sections III and IV, respectively. The description of the datasets and the various experiments performed and results obtained are presented in Section V, while the main conclusions along with future research directions are discussed in Section VI.
## II Related Work
In this work, a real-world dataset is utilized [4] to train and evaluate the proposed multi-task learning framework for drone state identification and trajectory prediction. The dataset includes readings from multiple sensors mounted on a DJI Matrice 100 quadcopter, which were collected during various missions with similar prescribed actions. This dataset was also employed in previous studies [5, 6, 7] to train models for predicting UAV battery consumption. For example, in [5], temporal convolutional networks were employed to establish an energy consumption model that predicted the energy consumption of a moving drone, in an effort to evaluate the probability of the drone depleting its energy during a pre-planned mission. Further, the work in [6] developed a machine-learning algorithm that evaluated energy usage during takeoff, cruise, and landing, utilizing distinct models for each of the three regimes in order to predict the drone's total energy consumption during a mission. Moreover, in [7], a comprehensive and precise model for energy consumption prediction was established with the use of ensemble learning, combining the random forest and extreme gradient boosting machine learning algorithms and demonstrating a mean absolute percentage error of approximately \(10\%\).
In another comparative study of two state-of-the-art supervised approaches [8], long short-term memory (LSTM) and convolutional LSTM (ConvLSTM) were used to identify potentially defective sensors while analyzing streamed data from a UAV. The first approach involved using multiple LSTM networks to detect anomalies in the continuous values of each attribute, while the second one depended on a multi-output ConvLSTM network to identify abnormalities in the values of the concerning attribute by considering the effect of all other attributes. In addition, the authors proposed a method to address the redundancy issue that arises from using multiple LSTM networks to analyze multivariate and correlated data. A real-life dataset of four flights employing a fixed-wing aircraft was used to conduct experiments, and both approaches were able to detect different types of faults. However, it was demonstrated that the multi-output ConvLSTM is faster and achieves better results in most of the cases examined.
In terms of trajectory prediction, which is an active research topic in various fields (i.e., transportation, security, and robotics), a brief overview of related works that utilize multi-task learning models is provided below. For example, a recent work [9] proposed a multi-task learning framework for trajectory prediction, which jointly learned to predict the future locations and directions of moving objects, utilizing both spatial and temporal features to improve prediction accuracy. Moreover, the work in [10] proposed an RNN-based model for trajectory prediction (i.e., for capturing the temporal dependencies of trajectories) and used an LSTM network to model the spatial features. Also, works in [11, 12] implemented two different models for trajectory prediction; the first model was based on multimodal and multi-task learning, utilizing various weather parameters to improve the prediction of typhoon trajectories, while the second model was based on deep feature representation, which extracted features from historical trajectories using a deep neural network. Furthermore, using historical flight data, [3] deployed an LSTM-based model to predict the future locations of UAVs. Finally, in [13], a deep multi-task learning framework for joint localization, perception, and prediction of moving objects was presented. Specifically, a deep neural network was utilized to learn these tasks jointly, while also improving the accuracy of trajectory prediction.
This work complements the aforementioned research efforts by implementing a novel multi-task learning framework that succeeds in _simultaneously_ performing state classification and trajectory prediction of a UAV agent utilizing various data modalities and under different environmental conditions, aiming to create an accurate system that can be successfully deployed in emergency response situations. Further, contrary to other research efforts, the end goal of MLF-ST is to extract an accurate result for both trajectory prediction and current state identification without utilizing two different models for the two tasks.
## III Methodology
In this section, the methodology of the proposed framework is described in detail. At first, the framework overview provides a comprehensive description of the proposed approach, highlighting its key components and their interactions. This is followed by a brief description of LSTM and the multi-task learning techniques. Moreover, the multi-task learning section discusses the rationale behind the use of this technique in the proposed model and how it can improve the overall system performance.
### _Framework Overview_
The proposed MLF-ST framework (in Fig. 1) consists of a shared LSTM layer followed by separate output layers for each task. The input to the model is a window of historical drone sensor data with a fixed size (\(WS\)), which includes measurements such as GPS coordinates, altitude, wind speed, wind angle, velocity components of the ground speed, angular velocity components, ground acceleration, and orientation. The
shared LSTM layer has \(256\) units with ReLU activation, which enables the model to capture the temporal dependencies within the input data. The output of the shared LSTM layer is then fed into a separate LSTM layer with \(128\) units and ReLU activation, which is used for both tasks.
For the multilabel classification task, a batch normalization layer is applied to the output of the LSTM layer, followed by a fully connected layer with \(64\) units and no activation function. The final output layer for the classification task is a sigmoid activation function that outputs a vector of probabilities for each of the possible drone states such as HOVER-IDLE, ASCENT, TURN, horizontal movement on a straight line (HMSL), and DESCENT.
For the trajectory prediction task, a time-distributed dense layer with \(64\) units is applied to the output of the biLSTM layer, which generates a sequence of predicted coordinates for each time-step in the forecast horizon (i.e., a fixed number of time-steps into the future - typically ranging from \(1\) to \(3\) seconds). The final output layer for the trajectory prediction task is a linear activation function, which outputs a vector of three values representing the drone's predicted \((x,y,z)\) coordinates at each time-step in the forecast horizon.
During the training step, a weighted combination of the mean squared error loss for the trajectory prediction task and the binary cross-entropy loss for the classification task is minimized. The Adam optimizer with a learning rate of \(0.0001\) and a batch size of \(64\) is employed, and the proposed model is trained for \(100\) epochs. Furthermore, to circumvent overfitting, an early stopping criterion is implemented, ceasing the training process when no discernible improvement is observed on the validation set. Finally, it is worth noting that exhaustive experimentation, encompassing various hidden layer sizes and batch sizes, has been conducted, with the selected hyperparameters exhibiting superior performance in terms of minimizing prediction errors.
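To make the above description concrete, a minimal Keras sketch of the shared-trunk, two-head model is given below. It is an illustrative reconstruction rather than the exact implementation (the paper describes the trajectory head both as an LSTM and a biLSTM, and the number of input features, forecast horizon, and loss weights shown here are placeholders).

```
import tensorflow as tf
from tensorflow.keras import layers, Model

WS, HS, NUM_FEATS, NUM_STATES = 20, 10, 18, 5     # placeholder sizes

inputs = layers.Input(shape=(WS, NUM_FEATS))
shared = layers.LSTM(256, activation="relu", return_sequences=True)(inputs)
shared = layers.LSTM(128, activation="relu")(shared)   # shared trunk for both tasks

# Current-state identification head (multilabel classification).
cls = layers.BatchNormalization()(shared)
cls = layers.Dense(64)(cls)                            # no activation, as described
state_out = layers.Dense(NUM_STATES, activation="sigmoid", name="state")(cls)

# Trajectory-prediction head: repeat the shared code over the forecast horizon,
# decode it with a (bi)LSTM, and emit (x, y, z) per future time-step.
traj = layers.RepeatVector(HS)(shared)
traj = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(traj)
traj = layers.TimeDistributed(layers.Dense(64))(traj)
traj_out = layers.TimeDistributed(layers.Dense(3, activation="linear"),
                                  name="trajectory")(traj)

model = Model(inputs, [state_out, traj_out])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss={"state": "binary_crossentropy", "trajectory": "mse"},
    loss_weights={"state": 1.0, "trajectory": 1.0},    # weighting is a placeholder
    metrics={"state": "binary_accuracy", "trajectory": "mae"})
model.summary()
```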
### _Background on LSTM_
LSTM is an example of an RNN architecture which is proficient in capturing long-term dependencies in sequential data by managing the flow of information through memory cells and gates [14]. The main building block of an LSTM network is the LSTM cell which consists of three gates (_input_, _output_, and _forget_) and a memory cell. In particular, the input gate controls the flow of new information into the memory cell, the output gate is responsible for the output of information from the memory cell, and the forget gate controls the retention or deletion of information from the memory cell. The gates are implemented using sigmoid functions which output values between \(0\) and \(1\), and the memory cell uses a hyperbolic tangent function which outputs values between \(-1\) and \(1\).
The equations for the LSTM cell can be expressed as:
\[f_{t} =\sigma(W_{f}x_{t}+U_{f}h_{t-1}+b_{f})\] \[i_{t} =\sigma(W_{i}x_{t}+U_{i}h_{t-1}+b_{i})\] \[\tilde{C}_{t} =\tanh(W_{c}x_{t}+U_{c}h_{t-1}+b_{c})\] \[C_{t} =f_{t}\odot C_{t-1}+i_{t}\odot\tilde{C}_{t}\] \[o_{t} =\sigma(W_{o}x_{t}+U_{o}h_{t-1}+b_{o})\] \[h_{t} =o_{t}\odot\tanh(C_{t})\]
where \(x_{t}\) is the input at time-step \(t\), \(h_{t-1}\) is the output from the previous time-step, \(\sigma\) denotes the sigmoid function, \(W_{f},W_{i},W_{c},W_{o}\) are weight matrices for the forget gate, input gate, memory cell, and output gate, respectively, \(U_{f},U_{i},U_{c},U_{o}\) are weight matrices for the corresponding gates and memory cell from the previous time-step, and \(b_{f},b_{i},b_{o},b_{c}\) are bias vectors for the forget gate, input gate, output gate, and memory cell, respectively. Further, \(f_{t},i_{t},\tilde{C}_{t}\) are defined as the values of the forget gate, input gate, and candidate cell state at time-step \(t\), \(C_{t}\) is the cell state at time-step \(t\), \(o_{t}\) is the value of the output gate at time-step \(t\), and \(h_{t}\) is the output at time-step \(t\). Finally, \(\odot\) represents element-wise multiplication.
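For illustration, a direct NumPy transcription of one step of the cell equations above is given below; the toy dimensions and random weights are placeholders.

```
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, W, U, b):
    # One LSTM step; W, U, b are dicts keyed by gate name ('f', 'i', 'c', 'o').
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])       # forget gate
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])       # input gate
    c_tilde = np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])   # candidate state
    c_t = f_t * c_prev + i_t * c_tilde                           # new cell state
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])       # output gate
    h_t = o_t * np.tanh(c_t)                                     # new hidden state
    return h_t, c_t

# Toy dimensions: 4 input features, 3 hidden units.
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((3, 4)) for k in "fico"}
U = {k: rng.standard_normal((3, 3)) for k in "fico"}
b = {k: np.zeros(3) for k in "fico"}
h, c = lstm_cell_step(rng.standard_normal(4), np.zeros(3), np.zeros(3), W, U, b)
print(h, c)
```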
LSTM networks have in general achieved considerable success in diverse fields such as natural language processing [15], speech recognition [16], and image captioning [17], demonstrating to be highly effective in modeling long-term dependencies in sequential data, and outperforming traditional RNNs in several tasks. However, LSTM networks tend to be computationally expensive and require meticulous tuning of hyperparameters such as the number of LSTM cells and the learning rate [18].
### _Background on Multi-task Learning_
Multi-task learning (MTL) is a technique that aims to improve the performance of a model on multiple related tasks
Fig. 1: An overview of the proposed multi-task learning framework (MLF-ST) methodology.
by jointly learning those tasks [19]. In general, MTL has become increasingly popular due to its ability to improve generalization and reduce overfitting [20].
MTL can be implemented using various techniques, including hard parameter sharing, soft parameter sharing, and task-specific attention [20]. Hard parameter sharing involves sharing the same set of parameters across all tasks [19], while soft parameter sharing allows for task-specific parameters that are encouraged to be similar to each other [20]. Task-specific attention on the other hand is a technique that allows the model to selectively attend to different parts of the input for each task [21].
MTL has been applied to different domains, including natural language processing [22], computer vision [23], speech recognition [24], and drug discovery [25]. In natural language processing, MTL has been employed to jointly learn multiple tasks, such as named entity recognition and part-of-speech tagging [22]. In computer vision, MTL is used for object detection, segmentation, and classification [23], while in speech recognition, it is deployed to jointly learn multiple tasks, such as acoustic modeling and language modeling [24]. In drug discovery, MTL has been utilized to predict multiple properties of a molecule (e.g., toxicity, solubility, etc.) [25].
## IV MLF-ST Algorithm
This section provides a detailed description of the MLF-ST algorithm (Alg. 1). The MLF-ST algorithm is comprised of several stages, in an effort to enhance its functionality: (i) Stage-1: data partitioning, where a sliding window technique is employed to divide the data into subsets; (ii) Stage-2: a pre-processing phase which includes standard normalization to prepare the data for subsequent processing; (iii) Stage-3: data splitting, which separates the data into distinct sets for training, testing, and validation purposes; (iv) Stage-4: the construction phase that builds the multi-task learning model; (v) Stage-5: a compilation phase, where the model is configured for optimal efficacy; (vi) Stage-6: the fitting phase, which involves training the model with the use of the previously generated data subsets; (vii) Stage-7: the evaluation phase (that follows once training is finished), for assessing the accuracy and performance of the model; and (viii) Stage-8: the results processing phase; this phase includes applying inverse transformations to the scaled data, obtaining the original data, redefining the testing set, and making predictions. It also reshapes trajectory predictions, creates a confusion matrix for current state identification, and constructs data frames containing the actual and predicted positions (\((x,y,z)\) coordinates). Further, to refine the results, the Haversine formula is incorporated to calculate the distance between the actual and predicted positions (\(x\) and \(y\) coordinates), providing distance measurements in meters. The Euclidean distance formula is then employed to determine the error between the actual and predicted positions by taking into account the differences in the \(z\) coordinates.
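A minimal sketch of the Stage-8 error computation is shown below: the Haversine formula converts the latitude/longitude difference into meters, and the altitude difference is then folded in through the Euclidean formula. The Earth-radius constant, array layout, and sample coordinates are assumptions for illustration.

```
import numpy as np

EARTH_RADIUS_M = 6371000.0   # mean Earth radius, assumed value

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two (lat, lon) points in degrees.
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi, dlmb = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * np.arcsin(np.sqrt(a))

def position_error_m(actual, predicted):
    # actual / predicted: arrays of rows (lat, lon, alt). Horizontal error from
    # the Haversine formula, combined with the altitude error (Euclidean).
    horiz = haversine_m(actual[:, 0], actual[:, 1], predicted[:, 0], predicted[:, 1])
    dz = actual[:, 2] - predicted[:, 2]
    return np.sqrt(horiz ** 2 + dz ** 2)

actual = np.array([[35.1456, 33.4110, 50.0]])    # placeholder ground-truth sample
pred = np.array([[35.1457, 33.4112, 51.0]])      # placeholder prediction
print(position_error_m(actual, pred))            # error in meters
```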
Finally, the proposed algorithm's output is visualized, show-casing the results of both trajectory prediction and current state identification. This in-depth analysis of the MLF-ST algorithm offers valuable insights into its functioning and performance, enabling further optimization and application in various multi-task learning scenarios.
The reader should note that Alg. 1 can also be seen as comprised of several key procedures (as shown in the pseudocode), namely the sliding window (Stage-1), model (Stages 2-7), and results (Stage-8) procedures.
In the following section, experimental results are presented, demonstrating the effectiveness of the proposed multi-task framework for drone state identification and trajectory prediction.
```
1:  Acquired flight data
2:  Define input size (IS), window size (WS), and horizon size (HS)
3:  procedure Sliding Window
4:      Define L1, L2, L3 as empty lists
5:      for i < IS - WS - HS + 1 do
6:          append to L1 the Input array from i to WS + i
7:          append to L2 the Input array from i + WS to WS + i + HS
8:      end for
9:  end procedure
10: Reshape arrays L1 and L2 from 3D to 2D
11: Apply standard scaling on the input data
12: Reshape arrays L1 and L2 from 2D to 3D
13: Split data into training, validation, and testing sets
14: procedure Model
15:     Build the multi-task learning model
16:     Define the input and output of the model
17:     Define target sets for training, testing, and validation
18:     Define batch size, number of epochs, and learning rate
19:     Compile the model: define the loss function for each task and the metrics
20:     Train the model
21:     Evaluate the model
22: end procedure
23: procedure Results
24:     Apply the inverse transform to the scaled data
25:     Get the original data
26:     Redefine the testing set
27:     Make predictions
28:     Reshape trajectory predictions from 3D to 2D
29:     Apply the inverse transform to the scaled predictions
30:     Get the original values of the predictions
31:     Reshape trajectory predictions back from 2D to 3D
32:     Build the confusion matrix for current state identification
33:     Build data frames containing the actual and predicted positions (x, y, z)
34:     Apply the Haversine formula between actual and predicted positions
35:     Get the distance between actual and predicted positions (x, y) in meters
36:     Apply the Euclidean distance formula to get the error between actual and predicted positions
37: end procedure
38: Output: Trajectory prediction and current state identification results
```
**Algorithm 1** MLF-ST Algorithm
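A minimal NumPy sketch of the Sliding Window procedure of Alg. 1 is given below; the variable names follow the pseudocode, while the flight-record shape and window/horizon sizes are placeholders.

```
import numpy as np

def sliding_window(data, ws, hs):
    # data: (IS, num_features) time series. Returns input windows of length ws
    # and target windows of length hs, as in the Sliding Window procedure.
    is_len = len(data)
    L1, L2 = [], []
    for i in range(is_len - ws - hs + 1):
        L1.append(data[i:i + ws])              # model input: past ws steps
        L2.append(data[i + ws:i + ws + hs])    # targets: next hs steps
    return np.asarray(L1), np.asarray(L2)

flight = np.random.rand(1000, 18)              # placeholder flight record
X, Y = sliding_window(flight, ws=20, hs=10)
print(X.shape, Y.shape)                        # (971, 20, 18) (971, 10, 18)
```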
## V Performance Evaluation
### _Hardware and Software Configuration_
For the performance evaluation of the proposed MLF-ST framework, a UAV is deployed in real-world experiments. Specifically, the autonomous drone is equipped with an onboard processing unit, advanced sensors, flight control systems, a solid-state weather station (TriSonica), and the NVIDIA Jetson Xavier NX Developer Kit, which is responsible
for saving high-precision flight data. Also, it is important to note that the data and commands are transmitted through a 4G communication architecture (i.e., a 4G dongle is applied on the moving agent). For the software implementation, the Tensorflow 2's Keras API [26] is used to build the multi-task learning model, while the scikit-learn library [27] is employed to preprocess the time series data. Also, the NumPy and Pandas Python libraries are used to analyze and visualize the results of this work.
### _Dataset Collection_
This section details the sources, characteristics, and the way the data used in the experiments was annotated, including the pre-processing steps taken into account in order to ensure data quality and consistency. Specifically, in this work two different datasets are utilized, i.e., the real-world datasets provided by [4] (Dataset-1) and [28] (Dataset-2).
#### Iv-B1 Dataset-1: Data Collected with Package Delivery Quadcopter Drone
In particular, the first dataset utilized (Dataset-1) is obtained from [4] and includes sensor data collected from a multirotor UAV during a variety of flights. Specifically, the DJI Matrice 100 (M100) UAV was used to perform autonomous takeoffs, transport payloads along a triangular flight pattern, and land. The flights were conducted at altitudes of \(25\)m, \(50\)m, \(75\)m, and \(100\)m, with speeds ranging from \(4\)m/s to \(12\)m/s, and payloads of \(0\)g, \(250\)g, and \(500\)g. Each flight configuration was repeated at least three times, for a total of \(195\) flights. Specifically, this work focuses on the \(182\) flights that followed the R1 route as specified in [4], as these flights demonstrated similar flight modes at each time-step. The sensor data is synchronized at a frequency of \(10\)Hz, and the specifics of each sensor are outlined in Table I.
#### Iv-B2 Dataset-2: Dataset Obtained from Multi-modal Sensor Onboard the Drone.
This dataset contains time-series data from numerous drone flights that took place at the University of Cyprus (UCY) campus. Each flight record has a unique identifier (ID) and a timestamp indicating when the flight occurred. In particular, the drone's position is represented by the coordinates \((x,y,z)\) and altitude, and the orientation of the drone is described by the quaternion \((o_{x},o_{y},o_{z},o_{w})\) (i.e., orientation x,y,z,w). The drone's linear and angular velocities are represented by \((v_{x},v_{y},v_{z})\) and \((av_{x},av_{y},av_{z})\), respectively. Also, the linear acceleration of the drone is defined as \((a_{x},a_{y},a_{z})\).
Furthermore, the dataset also contains information about the battery voltage (BV), battery current (BC), and the payload attached (i.e., the devices onboard the drone, such as the Nvidia Jetson, various sensors, weather station, etc.).
Also, the dataset includes annotations for the current state of the drone, including IDLE-HOVER, ASCEND, TURN, HMSL, and DESCEND. These states can be used for classification to identify the current state of the drone. Thus, the labeled dataset is used for predicting the trajectory of the drone using the proposed MLF-ST framework.
For the annotation procedure, the change in position \((x,y,z)\) and yaw is inspected. Specifically, if the position \((x,y)\) changes, it means that the drone moves in a horizontal straight line, and if position \(z\) changes, it means that the drone performs an ascending or descending operation (depending on whether it increases or decreases). Also, if the yaw changes, it means that the drone performs a turn, and finally, if none of the above features changes, then the drone is in idle or hover mode.
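A sketch of this annotation rule, applied to consecutive samples, is given below; the change thresholds are assumptions, since the paper does not state the exact values, and the ordering of the checks is illustrative.

```
import numpy as np

def label_state(prev, curr, eps_xy=0.05, eps_z=0.05, eps_yaw=0.5):
    # prev/curr: dicts with keys 'x', 'y', 'z', 'yaw'. Thresholds are placeholders.
    dxy = np.hypot(curr["x"] - prev["x"], curr["y"] - prev["y"])
    dz = curr["z"] - prev["z"]
    dyaw = abs(curr["yaw"] - prev["yaw"])
    if dyaw > eps_yaw:
        return "TURN"
    if abs(dz) > eps_z:
        return "ASCEND" if dz > 0 else "DESCEND"
    if dxy > eps_xy:
        return "HMSL"            # horizontal movement on a straight line
    return "IDLE-HOVER"

print(label_state({"x": 0, "y": 0, "z": 10, "yaw": 90},
                  {"x": 0.5, "y": 0.1, "z": 10, "yaw": 90}))   # HMSL
```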
In addition to the features already mentioned, the dataset also includes information from various sensors including a weather station and an inertial measurement unit (IMU). The weather station provides information about the weather conditions during the flight which includes wind speed and wind angle. These weather variables are important factors that influence the flight of the drone and battery consumption. Further, the IMU provides information on the drone's acceleration, angular velocity, and magnetic field. Specifically, the accelerometer provides information about the drone's linear acceleration, the gyroscope provides information about the drone's angular velocity, while the magnetometer measures the earth's magnetic field, which can be subsequently used to determine the drone's orientation.
Field experiments are performed in order to collect empirical data using a specific type of UAV (DJI Matrice 300 (M300)). The M300 is equipped with advanced sensors and flight control systems, providing high-precision flight data. The flights are designed to cover a range of flight patterns, which include triangular, square, polygonal, and random flight patterns.
### _Experimental Results_
This section showcases the experimental results for the proposed MLF-ST framework by comparing the localization performance for each estimated trajectory with the ground truth (i.e., measurements acquired via GPS+IMU), while also demonstrating the current state identification by performing multilabel classification for each aforementioned dataset.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline
**Features** & **Summary** \\ \hline wind speed & airspeed provided by the anemometer (in m/s) \\ \hline \multirow{2}{*}{wind angle} & angle of the air flowing through the anemometer with respect to the north (in deg) \\ \hline \multirow{2}{*}{battery voltage (BV)} & system voltage measured right after the battery (in V) \\ \hline battery current (BC) & system current measured right after the battery (in A) \\ \hline position \((x,y)\) & longitude \& latitude of the UAV (in deg) \\ \hline position \(z\) & altitude of the UAV with respect to the sea-level (in m) \\ \hline orientation \((o_{x},o_{y},o_{z},o_{w})\) & UAV orientation (in quaternions) \\ \hline velocity \((v_{x},v_{y},v_{z})\) & velocity components of ground speed (in m/s) \\ \hline angular velocity \((av_{x},av_{y},av_{z})\) & angular velocity components (in deg/s) \\ \hline linear acceleration \((a_{x},a_{y},a_{z})\) & ground acceleration (in m/s\(^{2}\)) \\ \hline payload & mass of the payload attached to the UAV (in g) \\ \hline \end{tabular}
\end{table} TABLE I: Specifics of sensors used in Dataset-1.
To investigate the average performance of the proposed MLF-ST framework concerning Dataset-2, \(10\) real outdoor experiments are conducted in the area of the UCY campus. For the purposes of the experiments, the M300 drone is utilized (specifications about the UAV agent utilized can be found at [https://www.dji.com/matrice-300](https://www.dji.com/matrice-300)).
#### V-B1 Trajectory Prediction Results (Dataset-1)
Table II tabulates the location estimation accuracy of the proposed MLF-ST framework for Dataset-1. Even though the location estimation performance decreases as the time-step (\(t\)) increases, the proposed system is able to provide accurate location estimates that are comparable to the ground truth values (with Euclidean distance error lower than \(3.5\)m).
Further, Fig. 2 illustrates the cumulative distribution function (CDF) of the prediction error for time horizons \(t+1\), \(t+2\), and \(t+3\). The analysis of the CDF highlights that both the mean prediction errors and the overall dispersion of errors are considerably small.
Specifically, \(90\%\) of the \(1\)-second look-ahead predictions manifest an error magnitude less than \(2.45\) meters, while the \(2\)-second look-ahead accounts for \(90\%\) of errors falling below \(2.5\) meters. Also, the \(3\)-second horizon encompasses \(90\%\) of errors not exceeding \(3.56\) meters.
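The error statistics above can be reproduced from the raw predictions with a short routine such as the following sketch, which assumes arrays of predicted and ground-truth positions and computes the per-sample Euclidean errors, their empirical CDF, and the 90th percentile (the synthetic numbers are placeholders, not the actual predictions).

```python
import numpy as np

def euclidean_errors(pred_xyz, true_xyz):
    """Per-sample Euclidean distance (in metres) between predicted and GPS+IMU positions."""
    return np.linalg.norm(pred_xyz - true_xyz, axis=1)

def empirical_cdf(errors):
    """Return sorted errors and the corresponding empirical CDF values for plotting."""
    e = np.sort(errors)
    return e, np.arange(1, len(e) + 1) / len(e)

rng = np.random.default_rng(0)
pred = rng.normal(size=(1000, 3))                       # placeholder predictions
true = pred + rng.normal(scale=1.0, size=(1000, 3))     # placeholder ground truth
err = euclidean_errors(pred, true)
x, F = empirical_cdf(err)
print("mean error:", err.mean(), "| 90th percentile:", np.percentile(err, 90))
```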
#### V-B2 Current State Identification Results (Dataset-1)
Regarding the second main task (current state identification), the performance of the multilabel classification of the drone's movement is analyzed with the use of a confusion matrix. Dataset-1 comprises five classes, namely IDLE-HOVER, ASCEND, TURN, HMSL, and DESCEND. The confusion matrix, shown in Fig. 3, summarizes the model's performance in classifying these movements.
To better understand the performance of the proposed framework's multilabel classification, the Precision, Recall, and F1-score for each class are computed and presented in Table III. These metrics are calculated using the following equations:
\[Precision=\frac{TP}{TP+FP} \tag{1}\]
\[Recall=\frac{TP}{TP+FN} \tag{2}\]
\[F1-score=\frac{2\cdot(Precision\cdot Recall)}{Precision+Recall} \tag{3}\]
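As a sketch of how Eqs. (1)-(3) and the micro/macro averages are obtained from a confusion matrix, the snippet below computes the per-class and aggregate metrics; the confusion matrix used here is a hypothetical example, not the one reported in Fig. 3.

```python
import numpy as np

def classification_report(cm, classes):
    """Per-class Precision/Recall/F1 and micro/macro averages from a confusion matrix.

    cm[i, j] counts the samples whose true class is i and predicted class is j.
    """
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp                      # predicted as the class, but wrong
    fn = cm.sum(axis=1) - tp                      # belonging to the class, but missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    for name, p, r, f in zip(classes, precision, recall, f1):
        print(f"{name:12s} P={p:.4f} R={r:.4f} F1={f:.4f}")
    micro_p = tp.sum() / (tp.sum() + fp.sum())    # micro average: pool all decisions
    micro_r = tp.sum() / (tp.sum() + fn.sum())
    micro_f1 = 2 * micro_p * micro_r / (micro_p + micro_r)
    print(f"micro        P={micro_p:.4f} R={micro_r:.4f} F1={micro_f1:.4f}")
    print(f"macro        P={precision.mean():.4f} R={recall.mean():.4f} F1={f1.mean():.4f}")

classes = ["IDLE-HOVER", "ASCEND", "TURN", "HMSL", "DESCEND"]
cm = np.array([[930, 10, 20, 30, 10],             # hypothetical counts for illustration only
               [15, 900, 30, 20, 15],
               [40, 25, 470, 400, 65],
               [10, 5, 50, 980, 5],
               [5, 5, 10, 5, 975]])
classification_report(cm, classes)
```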
The results indicate that the model performs considerably well for the IDLE-HOVER, ASCEND, and HMSL classes, with F1-scores greater than \(0.93\). However, the model struggles with the TURN and DESCEND classes, with F1-scores of \(0.5898\) and \(0.8836\), respectively.
Also, to complete the multilabel classification analysis, the overall micro-average and macro-average metrics are computed for Precision, Recall, and F1-score. The micro-average metrics are higher than the macro-average metrics, as shown in Table IV, indicating high overall model performance. However, the macro-average metrics show that there is still room for improvement in the classification performance, particularly for the TURN and DESCEND classes.
In summary, the multilabel classification task is investigated, exhibiting considerable efficacy across most drone
\begin{table}
\begin{tabular}{|c|c|c|} \hline & Prediction time [s] & Euclidean distance error [m] \\ \cline{2-3} All testing flights & t+1 & 2.5521 \\ \cline{2-3} & t+2 & 2.6637 \\ \cline{2-3} & t+3 & 3.4412 \\ \hline \end{tabular}
\end{table} TABLE II: Average Euclidean prediction error for Dataset-1.
Fig. 3: Multilabel classification confusion matrix.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline & Precision & Recall & F1-score \\ \hline IDLE-HOVER & 0.9671 & 0.9333 & 0.9499 \\ \hline ASCEND & 0.9561 & 0.9186 & 0.937 \\ \hline TURN & 0.795 & 0.4683 & 0.5898 \\ \hline HMSL & 0.9842 & 0.9826 & 0.9834 \\ \hline DESCEND & 0.8079 & 0.975 & 0.8836 \\ \hline \end{tabular}
\end{table} TABLE III: Class-wise metrics.
Fig. 2: Average error CDF for predicted trajectories at \(t+1\), \(t+2\), \(t+3\) (Dataset-1).
movement categories. However, it is imperative to address the sub-optimal performance observed in the TURN and DESCEND classifications. Potential strategies for refinement may encompass the incorporation of supplementary training data, the application of feature engineering techniques, or the exploration of alternative algorithmic approaches. Subsequent research endeavors will concentrate on these facets with the intent of augmenting the model's performance, thereby contributing to the evolution of robust and precise drone movement classification systems.
#### V-B3 Trajectory Prediction Results (Dataset-2)
Table V presents the spatial estimation precision of the proposed MLF-ST framework when Dataset-2 is utilized. Despite the decline in the estimated location performance as the temporal interval (\(t\)) expands, the proposed framework delivers precise spatial approximations that are analogous to the ground truth measurements (exhibiting Euclidean distance error less than \(8.1\) meters).
Figure 4 depicts the CDF of the trajectory predictions for the time intervals \(t+1\), \(t+2\), and \(t+3\). It must be noted that the empirical data employed for the extraction of the distinct CDFs encompassed an array of motion patterns, including triangular, square, circular, and stochastic trajectories. In particular, for the \(1\)-second look-ahead predictions, \(90\%\) exhibit an error magnitude below \(3.4\) meters. Also, the \(2\)-second look-ahead accounts for \(90\%\) of errors that are under \(3.59\) meters, while for the \(3\)-second time horizon, \(90\%\) of the errors do not surpass \(4.2\) meters.
It is worth mentioning that Dataset-1 comprises \(227,552\) entries, whereas Dataset-2 contains only \(95,840\) entries, leading to a lower trajectory prediction performance. The smaller number of entries for Dataset-2 is also the reason for not utilizing it for state identification purposes. Nevertheless, it should be noted that the expansion of Dataset-2 can potentially lead to system performance that is better than or analogous to the results obtained with Dataset-1. This is currently in progress, as Dataset-2 is continuously being augmented with new data points obtained through various drone flights.
In this study, the MLF-ST framework resulted in average Euclidean distance errors of 2.5521m, 2.6637m, and 3.4412m for the first three seconds, respectively, using a sliding window technique with WS = 30 and HS = 30. In comparison, another study [3] reported an overall mean error of 1.5m. It is essential to consider potential differences in methodologies and datasets when comparing these results. It is worth mentioning that, in study [3], the mean error was computed over four flights, whereas the results of the present work were obtained using a dataset comprising 195 flights.
Regarding the multilabel classification task, the MLF-ST framework in this study achieved an accuracy of \(98.51\%\), closely matching the compared paper's \(98.5\%\) accuracy [2]. This demonstrates competitive performance in classification accuracy.
## VI Conclusion
A novel multi-task learning framework (MLF-ST) is presented, leveraging deep neural networks to simultaneously optimize the desired two-fold objective of drone state identification and trajectory prediction. The proposed framework, which utilizes shared layers and various metrics to extract information from a set of input data, can efficiently learn the useful features for both tasks. Further, by incorporating LSTM networks and sliding window techniques along with the drone's historical data, MLF-ST is able to accurately identify the drone's current state and predict its future trajectory. An extensive evaluation of the presented framework showcases its ability to contribute to the overall safety and efficiency of drone operations. Also, the extensive testing conducted on two different datasets, collected under various flight conditions, further validates the robustness and adaptability of the MLF-ST framework.
Future research avenues include the extension of the proposed MLF-ST framework by incorporating additional tasks, different deep-learning techniques, and the employment of various sensor modalities to enhance the capabilities of drone monitoring systems.
## Acknowledgements
This work was supported by the Secure and Safe Multi-Robot Systems (SESAME) H2020 Project under Grant Agreement 101017258. It was also supported by the European Union's Horizon 2020 research and innovation program under grant agreement No 739551 (KIOS CoE - TEAMING) and from the Republic of Cyprus through the Deputy Ministry of Research, Innovation and Digital Policy.
|
2309.06999 | An adaptive functional regression framework for spatially heterogeneous
signals in spectroscopy | The attention towards food products characteristics, such as nutritional
properties and traceability, has risen substantially in the recent years.
Consequently, we are witnessing an increased demand for the development of
modern tools to monitor, analyse and assess food quality and authenticity.
Within this framework, an essential set of data collection techniques is
provided by vibrational spectroscopy. In fact, methods such as Fourier near
infrared and mid infrared spectroscopy have been often exploited to analyze
different foodstuffs. Nonetheless, existing statistical methods often struggle
to deal with the challenges presented by spectral data, such as their high
dimensionality, paired with strong relationships among the wavelengths.
Therefore, the definition of proper statistical procedures accounting for the
peculiarities of spectroscopy data is paramount. In this work, motivated by two
dairy science applications, we propose an adaptive functional regression
framework for spectroscopy data. The method stems from the trend filtering
literature, allowing the definition of a highly flexible and adaptive estimator
able to handle different degrees of smoothness. We provide a fast optimization
procedure that is suitable for both Gaussian and non Gaussian scalar responses,
and allows for the inclusion of scalar covariates. Moreover, we develop
inferential procedures for both the functional and the scalar component thus
enhancing not only the interpretability of the results, but also their
usability in real world scenarios. The method is applied to two sets of MIR
spectroscopy data, providing excellent results when predicting milk chemical
composition and cows' dietary treatments. Moreover, the developed inferential
routine provides relevant insights, potentially paving the way for a richer
interpretation and a better understanding of the impact of specific wavelengths
on milk features. | Federico Ferraccioli, Alessandro Casa, Marco Stefanucci | 2023-09-13T14:52:34Z | http://arxiv.org/abs/2309.06999v1 | # An adaptive functional regression framework for spatially heterogeneous signals in spectroscopy
###### Abstract
The attention towards food products characteristics, such as nutritional properties and traceability, as well as towards the adherence of production systems to environmental and ethical procedures, has risen substantially in the recent years. Consequently, we are witnessing an increased demand for the development of modern tools to monitor, analyse and assess food quality, security, and authenticity. Within this framework, an essential set of data collection techniques is provided by vibrational spectroscopy. In fact, methods such as Fourier near-infrared (NIR) and mid-infrared (MIR) spectroscopy have been often exploited to analyze different foodstuffs. Nonetheless, existing statistical methods often struggle to deal with the challenges presented by spectral data, such as their high-dimensionality, paired with strong relationships among the wavelengths. Therefore, the definition of proper statistical procedures accounting for the intrinsic peculiarities of spectroscopy data is paramount.
In this work, motivated by two dairy science applications, we propose an adaptive functional regression framework for spectroscopy data. The method stems from the trend filtering literature, allowing the definition of a highly flexible and adaptive estimator able to handle different degrees of smoothness. We provide a fast optimization procedure that is suitable for both Gaussian and non-Gaussian scalar responses, and allows for the inclusion of scalar covariates. Moreover, we develop inferential procedures for both the functional and the scalar component thus enhancing not only the interpretability of the results, but also their usability in real world scenarios. The method is applied to two sets of MIR spectroscopy data, providing excellent results when predicting both milk chemical composition and cows' dietary treatments. Moreover, the developed inferential routine provides relevant insights, potentially paving the way for a richer interpretation and a better understanding of the impact of specific wavelengths on milk features.
**Keywords:** Adaptive Regression, Trend Filtering, Functional Data, Bootstrap, Spectroscopy
## 1 Introduction
In the past decades, increased consumers' attention towards food quality and security has fostered the development of new technologies to analyze different foodstuffs. More specifically, adherence to environmental-friendly procedures, product traceability, and quantification of the nutritional properties have become central topics on the agenda both of the public opinion and of the scientific community. Moreover, methods to assess food authenticity are increasingly important, as expensive products are often subject to fraud and adulteration.
In this framework, commonly adopted methodologies often require lengthy and expensive laboratory extraction routines to collect data, thus jeopardizing their usefulness. As a consequence, other alternatives have recently been proposed to overcome these drawbacks, with vibrational spectroscopy techniques currently playing a pivotal role. Methods such as Fourier
transform near-infrared (NIR) and mid-infrared (MIR) spectroscopy are known to be fast, relatively cheap, and non-destructive ways to collect huge amounts of data on a plethora of materials. In fact, they have been used in different fields, ranging from medicine (Petrich, 2001; Talari et al., 2017) and astronomy (Keller et al., 2006; Tennyson, 2019), to food and animal science (Reid et al., 2006; Berzaghi and Riovanto, 2009; Porep et al., 2015).
In this work, we focus specifically on MIR spectroscopy, where the light is passed through a sample of a given material at a sequence of wavelengths in the mid-infrared region, activating the sample's chemical bonds. This leads to an absorption of the energy from the light, the amount of which, evaluated at different wavelengths, creates the spectrum of the analyzed sample. Each spectrum contains an invaluable amount of information about the sample since, according to _Beer-Lambert Law_(Beer, 1852), the absorption of energy is specific to atoms and molecules and is proportional to the concentration of the corresponding chemical substance. As a consequence, nowadays spectral data are being used to predict different characteristics of a given material. With specific reference to the dairy framework, the one considered in this work, MIR spectroscopy showed promising results in predicting traits such as milk coagulation properties (Visentin et al., 2016), fatty acids (Soyeurt et al., 2006), protein and lactose concentration (De Marchi et al., 2014), energy efficiency and intake (McParland and Berry, 2016), as well as in discriminating between different cows' dietary treatments (Frizzarin et al., 2021).
Despite being widely used, the peculiar characteristics of spectroscopy data introduce statistical challenges that need to be addressed. First, spectral data lie in high-dimensional spaces, as each single spectrum usually consists of more than 1000 absorbance values measured at different wavelengths. Moreover, the relationships among variables are rather complex, often preventing the use of standard models developed for time-dependent data. In fact, even if adjacent spectral regions tend to be highly correlated, strong correlations are also observed among distant wavelengths, since the same chemical components can have several absorption peaks in distinct spectral regions. Lastly, as pointed out by Politsch et al. (2020), the underlying signal is often spatially heterogeneous. Therefore, flat spectral regions are often followed by more irregular ones, characterized by multiple peaks, posing cumbersome issues in the modelling process.
Both with regression and classification aims in mind, these data have been often analyzed by means of latent variable models. Methods such as _Partial Least Squares_ (PLS) and _Principal Component Analysis_ (PCA) have been widely used to tackle some of the mentioned problems. With a similar underlying rationale, _Factor Analysis_ has also been considered (see e.g. Casa et al., 2022, for a recent work) since, allowing one to reduce the dimensionality of the data while focusing on a proper reconstruction of the correlation matrix, it seems particularly suitable for the framework. Recently, statistical and machine learning techniques have also been explored in order to relate spectral information to different milk traits (see e.g. Frizzarin et al., 2021).
None of these methods accounts for the peculiar structure of the spectral data and for the natural ordering among the variables, which can be considered by resorting to approaches pertaining to the functional data analysis setting (FDA; Ramsay and Silverman, 2005). In fact, even if Alsberg (1993) suggested that spectra should be represented as continuous functions of wavelengths, in this framework functional approaches have been to some extent overlooked until relatively recently (Saeys et al., 2008). Some works worth mentioning, even if not necessarily focused on MIR spectral data, are the ones by Reiss and Ogden (2007); Morris et al. (2008); Zhao et al. (2012); Yang et al. (2016); Codazzi et al. (2022).
As briefly mentioned before, the varying degrees of smoothness of MIR spectroscopy data over their domain pose some challenges that need to be tackled when evaluating FDA strategies. Saeys et al. (2008) suggest to adopt a basis approach with unequally spaced knots, with knot placement driven by subject-matter knowledge on the properties of the analyzed material. In this work, we take a different approach by considering _trend filtering_ as a building block of our approach (see Politsch et al., 2020, for a discussion in a similar framework).
### 1.1 Trend Filtering
Trend filtering is a signal reconstruction technique initially developed by Kim et al. (2009) and further studied, among others, by Tibshirani (2014). In the context of nonparametric regression, where data \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{p})^{\top}\in\mathbb{R}^{p}\) are supposed to be generated by the model \(y_{i}=f_{0}(\omega_{i})+\varepsilon_{i}\), \(i=1,\ldots,p\), trend filtering solves the following empirical problem
\[\widehat{\mathbf{f}}=\arg\min_{\mathbf{f}\in\mathbb{R}^{p}}\|\mathbf{y}-\mathbf{f}\|_{2}^{2}+ \lambda\|\mathbf{D}^{(k+1)}\mathbf{f}\|_{1} \tag{1}\]
where \(\mathbf{f}=(f(\omega_{1}),\ldots,f(\omega_{p}))^{\top}\), \(\mathbf{D}^{(k+1)}\in\mathbb{R}^{(p-k-1)\times p}\) is the discrete difference matrix of order \(k+1\), and \(\lambda>0\) is a tuning parameter. The resulting discretely estimated function \(\widehat{\mathbf{f}}\) has a number of interesting properties, the most important being its adaptivity to local features of the true underlying function \(f_{0}\). More precisely, the specification of the penalty yields a solution which, even if generally not sparse, exhibits a sparse \((k+1)\)-th derivative. This behaviour resembles a spline function of degree \(k\) which possesses continuous derivatives up to order \(k-1\), while the \(k\)-th derivative is zero except for the points where polynomials meet, also known as _knots_ of such spline. As shown in Tibshirani (2014), for \(k=1,2\) the resulting estimated function is indeed a spline, while for \(k>2\) it is _close_, but not exactly equal, to a spline function with unequally spaced knots. The method is quite general thanks to the choice of \(k\), e.g. with \(k=0\) one obtains a stepwise function with a first derivative that is different from zero only where jumps lie. In fact, given the form of \(\mathbf{D}^{(1)}\), in this specific case the penalty becomes \(\sum_{j=1}^{p-1}|f(x_{j})-f(x_{j+1})|\) and the problem is equivalent to the Fused Lasso (Tibshirani et al., 2005). With \(k=1\) the second derivative is penalized, thus yielding an estimate that is piecewise linear. These and higher-order examples can be found in the original paper of Tibshirani (2014). A prominent instance is cubic trend filtering (\(k=3\)) that allows to fit to the data something very similar to a cubic spline with unequally spaced knots. The further relevance of this approach can be appreciated also from another point of view; in the literature, several adaptive estimation procedures have been proposed, mainly focusing on finding good sets of spline knots (see e.g., Dimatteo et al., 2001; Zhou and Shen, 2001). The trend filtering approach implicitly overcomes this problem since, by solving the minimization in (1) for a given \(\lambda\), only a number \(p_{\lambda}<p\) of knots are selected; the entire path spans a range of nested solutions without the need of a forward/backward knot search algorithm.
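As an illustration, problem (1) can be solved with a general-purpose convex optimizer; the sketch below uses the cvxpy package (rather than the specialized algorithms discussed later in Section 4) and builds \(\mathbf{D}^{(k+1)}\) via repeated first differences of the identity matrix.

```python
import numpy as np
import cvxpy as cp

def trend_filter(y, k=3, lam=5.0):
    """Solve min_f ||y - f||_2^2 + lam * ||D^(k+1) f||_1 on an equispaced grid."""
    p = len(y)
    D = np.diff(np.eye(p), n=k + 1, axis=0)       # (p-k-1) x p discrete difference matrix
    f = cp.Variable(p)
    objective = cp.Minimize(cp.sum_squares(y - f) + lam * cp.norm1(D @ f))
    cp.Problem(objective).solve()
    return f.value

# Piecewise-smooth toy signal: a smooth bump followed by a flat region
omega = np.linspace(0, 1, 200)
f0 = np.where(omega < 0.5, np.sin(6 * np.pi * omega), 0.3)
y = f0 + 0.1 * np.random.default_rng(1).standard_normal(omega.size)
f_hat = trend_filter(y, k=3, lam=5.0)
```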
In this paper, after a brief description of the analyzed data in Section 2, in Section 3 we extend the main trend filtering concepts to functional regression with Gaussian scalar response, developing an estimator able to infer spatially inhomogeneous regression functions. We further investigate this modeling strategy in the case of a _partial_ functional linear model, i.e. by including a set of scalar covariates and propose an extension intended for non-Gaussian response, as for example presence/absence or count data. Efficient estimation algorithms are presented in Section 4, followed by a simulation study in Section 5. In Section 6 we present the result of the analysis on real data, while in Section 7 we draw some final conclusions and remarks.
## 2 Mid-infrared spectroscopy data
In this study, we consider two different sets of mid-infrared spectroscopy data.
The first data set consists of a collection of 730 milk samples produced from 622 cows from different research herds in Ireland between August 2013 and August 2014. All the animals involved in the study were following a predominantly grass-based diet, with considerable heterogeneity in terms of number of parities and stage of lactation. Samples were collected during morning and evening milking and analyzed using a MilkoScan FT6000 (Foss Electronic A/S, Hillerod, Denmark). Each resulting spectrum consists of 1060 absorbance observations in the mid-infrared light region (see Figure 1). Furthermore, some additional information is available such as the date and the time (am/pm) of the milkings, the breed, the number of parities and the days in
milk for all the cows involved in the study. Note that some milk-related traits have been collected by means of wet chemistry techniques. These traits include both technological ones, such as rennet coagulation time and heat stability, and protein-related ones, such as \(\kappa\)-casein and \(\alpha_{S1}\)-casein. In the analyses reported in Section 6, we focus on the prediction of \(\kappa\)-casein. Lastly, we retain only one observation per cow, thus working with 622 milk spectra. For a more complete description of the data collection process and of the data themselves, readers can refer to Visentin et al. (2015).
On the other hand, the second data set considered has been collected at the Teagasc Moorepark Dairy Research Farm (Fermoy, Co. Cork, Ireland), in an experiment designed by O’Callaghan et al. (2016b), which represents the first study of its kind in Ireland and, to the best of our knowledge, in the world. Further information on the experimental setting can be found in O’Callaghan et al. (2016a); O’Callaghan et al. (2017). The data consist of MIR spectra, comprising 1060 wavelengths in the region from 925cm\({}^{-1}\) to 5010cm\({}^{-1}\), obtained by analyzing 4320 milk samples using a Pro-Foss FT6000 series instrument. A total number of 120 Holstein-Friesian cows have been involved in the study, and milked twice daily in the morning and in the afternoon in three consecutive years (2015, 2016 and 2017). The data collection scheme has been carried out in a balanced way, both in terms of the year and of the number of cows’ parities. Moreover, we restrict our attention to the samples collected from May to August, since in the summer period there is the highest prevalence of grass growth. For each of the years considered, the cattle were randomly assigned to a specific dietary treatment for the entire lactation period. The treatment diets included grass (GRS), with cows maintained outdoors on a perennial ryegrass sward only, clover (CLV), consisting of perennial ryegrass with 20% white clover sward only, and total mixed ration (TMR), where cows were maintained indoors with nutrients combined in a single mix consisting of grass and maize silage and concentrates. In this work, given the strong compositional similarities between GRS and CLV diets, these two classes have been merged to create a general pasture-based diet group. As a consequence, the final data set consists of 2931 samples from pasture-fed cattle and 1389 from TMR-fed ones. Lastly, some
Figure 1: Mid-infrared spectra in the region from \(925cm^{-1}\) to \(5010cm^{-1}\), corresponding to the first case study.
additional information on fat, protein, and lactose content has been obtained by calibrating the FT6000 against wet chemistry results and is available for the milk samples considered.
## 3 Proposed methodology
Given the data \(\mathcal{D}=\{X_{i}(\omega),y_{i}\}_{i=1}^{n}\), we assume that \(y_{1},\ldots,y_{n}\) are scalar values drawn from a Gaussian random variable \(y_{i}|X_{i}(\omega)\sim N(\mu_{i},\sigma^{2})\) and \(X_{1}(\omega),\ldots,X_{n}(\omega)\) are realizations of a functional covariate. We model the conditional expected value of \(y_{i}\) as \(\mathbb{E}(y_{i}|X_{i}(\omega))=\mu_{i}=\int X_{i}(\omega)f(\omega)d\omega\) where \(f(\omega)\) is an unknown regression function. This leads to the functional linear model
\[y_{i}=\int X_{i}(\omega)f(\omega)d\omega+\varepsilon_{i}\,,\]
with \(\varepsilon_{i}\sim N(0,\sigma^{2})\) being an additive noise term. There are several works in the functional data analysis literature devoted to estimation of this model, ranging from basis approaches (Ramsay and Silverman, 2005), penalized strategies (Crambes et al., 2009) and functional principal component representations (Yao et al., 2005). A complete review of different estimation methodologies is outside the scope of this work, and readers may refer to Morris (2015) for a general overview of such methods; however, here we spend some words on the penalization approach, which shares some features with our proposal.
Smoothing splines are the most popular tool in the family of penalized estimators. In the context of functional regression, given the data \(\mathcal{D}\), they are obtained as the solution of the following optimization problem:
\[\widehat{\mathbf{f}}=\arg\min_{\mathbf{f}\in\mathbb{R}^{p}}\lVert\mathbf{y}-\mathbf{X}\mathbf{f} \rVert_{2}^{2}+\lambda\lVert\mathbf{D}^{(2)}\mathbf{f}\rVert_{2}^{2}\,, \tag{2}\]
where \(\mathbf{y}=(y_{1},\ldots,y_{n})^{\top}\) is the vector of scalar responses, \(\mathbf{X}=(X_{1}(\mathbf{\omega}),\ldots,X_{n}(\mathbf{\omega}))^{\top}\) is the matrix of functional data observed on a regular grid \(\mathbf{\omega}=(\omega_{1},\ldots,\omega_{p})\) and \(\mathbf{D}^{(2)}\) is the matrix of second order discrete differences. The approach is justified as a discrete counterpart of a certain variational problem and provides as a solution a natural cubic spline with knots at observation points, see Wahba (1990), Crambes et al. (2009) and Goldsmith et al. (2011) for details. The amount of penalization, managed by the tuning parameter \(\lambda\), represents a trade-off between two extreme solutions, one being a completely wiggly interpolating function (\(\lambda=0\)) and the other being a constant (\(\lambda=\infty\)). Despite being very intuitive and easy to implement, this estimator lacks the local adaptivity property, i.e. the resulting estimated curve is not able to capture the different levels of smoothness of the true function \(f\). Therefore, we propose to rely on a different penalization strategy and estimate the regression curve by
\[\widehat{\mathbf{f}}=\arg\min_{\mathbf{f}\in\mathbb{R}^{p}}\lVert\mathbf{y}-\mathbf{X}\mathbf{f} \rVert_{2}^{2}+\lambda\lVert\mathbf{D}^{(k+1)}\mathbf{f}\rVert_{1}. \tag{3}\]
This expression represents a generalization of the trend filtering loss function, with the design matrix no longer being the \(p\times p\) identity matrix (Tibshirani, 2014), but an \(n\times p\) matrix of discretely observed functional data. The key difference between optimization problems (2) and (3) is that in the latter, thanks to the \(\ell_{1}\)-penalty applied on certain discrete derivative of \(f\), we are able to estimate the regression function taking care of its local features. Indeed, similarly to the original trend filtering estimate, the function that minimizes (3) is equal (when \(k=1,2\)) or very similar (when \(k>2\)) to a spline function with unequally spaced knots along its domain. Clearly, the smoothing splines approach, not being able to adapt to local features of the curve, either misses its smooth or its wiggly parts, depending on the selected degrees of freedom. This is a consequence of the regularization term which does not produce spatial heterogeneity nor knot selection. This makes the approach in (3) particularly appealing for spectroscopy data, where the effect of a functional covariate (e.g. the MIR spectrum) on a scalar variable (e.g. a material trait) can be studied with particular attention to local effects.
### 3.1 Extensions
In the current framework, when scalar covariates are available alongside the functional one, their inclusion in the modelling strategy can bring additional information useful to predict the response variable. More formally, in this setting we denote the observed data by \(\mathcal{D}=\{X_{i}(\omega),y_{i},\mathbf{z}_{i}\}_{i=1}^{n}\), with \(\mathbf{z}_{i}=(z_{i1},\ldots,z_{ir})^{\top}\) being a set of \(r\) scalar covariates corresponding to the \(i\)-th observation. We then model the conditional expected value of \(y_{i}\) as \(\mathbb{E}(y_{i}|X_{i}(\omega),\mathbf{z}_{i})=\mu_{i}=\int X_{i}(\omega)f(\omega) d\omega+\sum_{j=1}^{r}z_{ij}\gamma_{j}\), where \(\{\gamma_{j}\}_{j=1}^{r}\) are unknown regression coefficients. For such data structure, it is worth considering the partial functional linear model (Shin, 2009)
\[y_{i}=\int X_{i}(\omega)f(\omega)d\omega+\sum_{j=1}^{r}z_{ij}\gamma_{j}+\varepsilon _{i}\,.\]
Note that, if needed, the intercept can be easily included in the model as a constant scalar covariate. Following the trend filtering paradigm, we propose to estimate the function \(f\) and the vector \(\mathbf{\gamma}=(\gamma_{1},\ldots,\gamma_{r})^{\top}\) by solving the optimization problem
\[\mathbf{\tilde{\theta}}=\arg\min_{\mathbf{\theta}\in\mathbb{R}^{p+r}}\lVert\mathbf{y}- \tilde{\mathbf{X}}\mathbf{\theta}\rVert_{2}^{2}+\lambda\lVert\tilde{\mathbf{D}}^{(k+1)} \mathbf{\theta}\rVert_{1}\,, \tag{4}\]
where \(\tilde{\mathbf{X}}=[\mathbf{X}|\mathbf{Z}]\in\mathbb{R}^{n\times(p+r)}\), \(\mathbf{Z}=(\mathbf{z}_{1},\ldots,\mathbf{z}_{n})^{\top}\in\mathbb{R}^{n\times r}\), \(\mathbf{\theta}=(\mathbf{f}^{\top},\mathbf{\gamma}^{\top})^{\top}\in\mathbb{R}^{p+r}\) and \(\tilde{\mathbf{D}}^{(k+1)}=[\mathbf{D}^{(k+1)}|\mathbf{0}_{(p-k-1)\times r}]\in\mathbb{R}^ {(p-k-1)\times(p+r)}\). Note that, with this formulation, the penalty does not affect the parametric part of the model. When \(r\) is large, one can include an \(\ell_{1}\)-penalty for the vector \(\mathbf{\gamma}\) in order to achieve sparsity in the estimated coefficients; see, for instance, Kong et al. (2016). Since the application presented in Section 6 involves a small set of covariates, this potential extension has not been pursued in this work.
An additional generalization of our proposal is required when the assumption \(y_{i}\sim N(\mu_{i},\sigma^{2})\) is not met because the scalar responses \(y_{1},\ldots,y_{n}\) are generated by some other distribution. For example, for count data, we can assume \(y_{i}\sim Poisson(\lambda_{i})\), and for presence/absence data \(y_{i}\sim Bernoulli(\pi_{i})\). In these settings, where a functional linear model is not adequate, a generalized functional linear model (James, 2002; Muller and Stadtmuller, 2005; Goldsmith et al., 2011) can be applied. In particular, we assume that \(g(\mathbb{E}(y_{i}|X_{i}(\omega)))=g(\mu_{i})=\int X_{i}(\omega)f(\omega)d\omega\), with \(g(\cdot)\) being a suitably chosen link function. Now the empirical minimization problem is recast as
\[\mathbf{\hat{f}}=\arg\min_{\mathbf{f}\in\mathbb{R}^{p}}L(\mathbf{y};\mathbf{X}\mathbf{f})+\lambda \lVert\mathbf{D}^{(k+1)}\mathbf{f}\rVert_{1}, \tag{5}\]
where the loss function \(L(\mathbf{y};\mathbf{X}\mathbf{f})\) depends upon the distribution of the response variable. The objective is now represented by a nonlinear function of the unknown parameter \(\mathbf{f}\) and its direct minimization is usually not straightforward. As a consequence, in Section 4 we present a clever modification of the proposed algorithm to deal with the modified loss appearing in (5). Lastly note that in the presence of explanatory scalar covariates and a non-Gaussian response, the last two specifications can be combined together by adjusting (5) as it has been done in equation (4) for problem (3).
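For the presence/absence case, a minimal sketch of estimator (5) is given below; instead of the iterative scheme developed in Section 4, it hands the problem directly to a generic convex solver, which is a feasible (if less scalable) shortcut because the Bernoulli negative log-likelihood with logit link is convex.

```python
import numpy as np
import cvxpy as cp

def logistic_trend_filter_regression(X, y, k=3, lam=1.0):
    """Solve min_f sum_i [log(1 + exp(eta_i)) - y_i * eta_i] + lam * ||D^(k+1) f||_1, eta = X f."""
    p = X.shape[1]
    D = np.diff(np.eye(p), n=k + 1, axis=0)
    f = cp.Variable(p)
    eta = X @ f
    nll = cp.sum(cp.logistic(eta)) - y @ eta      # Bernoulli negative log-likelihood (logit link)
    cp.Problem(cp.Minimize(nll + lam * cp.norm1(D @ f))).solve()
    return f.value
```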
Another potential extension would be to combine two (or more) penalties in the optimization problem. This allows to estimate functions that exhibit a complex behaviour, typically piecewise polynomials of different order. The loss function in this context is
\[\mathbf{\hat{f}}=\arg\min_{\mathbf{f}\in\mathbb{R}^{p}}\lVert\mathbf{y}-\mathbf{X}\mathbf{f} \rVert_{2}^{2}+\lambda_{1}\lVert\mathbf{D}^{(k+1)}\mathbf{f}\rVert_{1}+\lambda_{2} \lVert\mathbf{D}^{(\ell+1)}\mathbf{f}\rVert_{1}, \tag{6}\]
where \(k\) and \(\ell\) are integers and \(\lambda_{1}\), \(\lambda_{2}\) are regularization parameters. This modification can be employed when additional scalar covariates are observed and/or when the distribution of the response is not Gaussian. In Section 4 we will illustrate how to solve problem (6) with the same toolbox used for the other cases.
### 3.2 Inference
In this section, we describe a strategy to build confidence intervals for most of the pointwise estimates introduced in the previous section. Given the complexity of functional regression models, inferential procedures have sometimes been overlooked, with the focus often being on pointwise estimation. Nonetheless, the introduced procedure represents a key component that improves the usability of the methodology in real world scenarios. In the case of the trend filtering framework, the construction of confidence intervals and confidence bands can be addressed via bootstrap procedures. Standard frequentist inference is not suitable, since the distribution of the trend filtering estimator is non-Gaussian, even when the observational noise is Gaussian (Politsch et al., 2020). Here we propose a Wild bootstrap procedure (Mammen, 1993), that is particularly appropriate in high-dimensional regression models when the noise distribution is unknown. Briefly, the idea behind the Wild bootstrap is to construct an auxiliary random variable with zero mean and unit variance (and ideally higher moments equal to \(1\)). This random variable is then used to define a transformation of the observed residuals that gives a valid bootstrap sample (see Algorithm 1).
A classical choice for the auxiliary random variable is the two point distribution suggested in Mammen (1993), that is
\[u_{i}^{*}=\begin{cases}\hat{\epsilon}_{i}(1+\sqrt{5})/2&\text{with probability }(\sqrt{5}-1)/(2\sqrt{5}),\\ \hat{\epsilon}_{i}(1-\sqrt{5})/2&\text{with probability }(\sqrt{5}+1)/(2\sqrt{5}).\end{cases} \tag{7}\]
Other examples are the Rademacher distribution, which takes values \(1,-1\) with equal probability, the Uniform distribution on the interval \([-\sqrt{3},\sqrt{3}]\), or various transformations of the Gaussian distribution (Mammen, 1993). In general, since it is not possible to define a random variable that has mean \(0\), variance \(1\), and all higher moments equal to \(1\), the different choices lead to different values for the third and the fourth moment. For instance, the third and fourth moments of the Rademacher distribution are \(0\) and \(1\), respectively, while the two-point distribution defined above has third and fourth moments \(1\) and \(2\), respectively. The specific choice is generally driven by considerations on the symmetry of the observed residuals.
Given the full bootstrap estimate set \(\{\widehat{\mathbf{f}}^{(b)}\}_{b=1}^{B}\), for any \(\alpha\in(0,1)\), we can define a \((1-\alpha)\) quantile-based pointwise variability band as
\[V_{1-\alpha}(f(\omega_{j}))=\left(\widehat{f}_{\alpha/2}(\omega_{j}),\widehat {f}_{1-\alpha/2}(\omega_{j})\right), \tag{8}\]
where
\[\widehat{f}_{\gamma}(\omega_{j})=\inf_{g}\left\{g:\frac{1}{B}\sum_{b=1}^{B} \mathbb{I}(\widehat{f}^{(b)}(\omega_{j})\leq g)\geq\gamma\right\},\quad\text {for all}\quad j=1,\ldots,p.\]
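A compact sketch of the resampling scheme and of the band in (8) is reported below. For brevity, the refitted estimator is a generic ridge-type penalized least-squares smoother acting as a stand-in for the trend filtering fit of Sections 3-4, while the auxiliary weights follow the two-point distribution in (7); the data used in the usage example are synthetic.

```python
import numpy as np

def mammen_weights(n, rng):
    """Two-point auxiliary variable with mean 0 and variance 1 (Mammen, 1993)."""
    a, b = (1 - np.sqrt(5)) / 2, (1 + np.sqrt(5)) / 2
    p_a = (np.sqrt(5) + 1) / (2 * np.sqrt(5))
    return np.where(rng.uniform(size=n) < p_a, a, b)

def fit(X, y, D, lam):
    """Stand-in penalized least-squares fit; in practice the trend filtering solver is used here."""
    return np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)

def wild_bootstrap_bands(X, y, D, lam, B=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    f_hat = fit(X, y, D, lam)
    resid = y - X @ f_hat
    boot = np.empty((B, X.shape[1]))
    for b in range(B):
        y_star = X @ f_hat + resid * mammen_weights(len(y), rng)   # wild bootstrap sample
        boot[b] = fit(X, y_star, D, lam)
    lower = np.quantile(boot, alpha / 2, axis=0)      # pointwise quantile-based band
    upper = np.quantile(boot, 1 - alpha / 2, axis=0)
    return f_hat, lower, upper

rng = np.random.default_rng(1)
n, p = 200, 100
X = rng.standard_normal((n, p))
D = np.diff(np.eye(p), n=4, axis=0)
f_true = np.sin(np.linspace(0, 3 * np.pi, p))
y = X @ f_true + rng.standard_normal(n)
f_hat, lo, hi = wild_bootstrap_bands(X, y, D, lam=1.0, B=200)
```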
## 4 Optimization procedure
In the literature, several algorithms for solving the original trend filtering problem have been proposed; see, among others, Kim et al. (2009), Tibshirani (2014) and Tibshirani and Taylor (2011). Some of these algorithms are not directly generalizable to our context, where the presence of the \(n\times p\) data matrix \(\mathbf{X}\) makes the optimization task more challenging. To solve problem (3), we rely on the Alternating Direction Method of Multipliers (ADMM) framework and consider an extension of the approach by Ramdas and Tibshirani (2016) where a specialized acceleration scheme is proposed.
ADMM algorithms form a broad class of methods, particularly useful for solving constrained problems of the form
minimize \[f(\mathbf{\alpha})+g(\mathbf{\delta})\,,\] (9) subject to \[\mathbf{A}\mathbf{\alpha}+\mathbf{B}\mathbf{\delta}+\mathbf{c}=0.\]
A general ADMM algorithm proceeds by minimizing the augmented Lagrangian of (9). Since the objective function is separable, minimization can take place in an alternating fashion. ADMM approaches are widely used in penalized estimation schemes, which can often be recast as in (9), leading to a faster optimization thanks to variable splitting. Specifically, the problem in (3) can be stated as
minimize \[\|\mathbf{y}-\mathbf{X}\mathbf{\alpha}\|_{2}^{2}+\lambda\|\mathbf{\delta}\|_{1}\,,\] (10) subject to \[\mathbf{D}^{(k+1)}\mathbf{\alpha}-\mathbf{\delta}=0.\]
where \(f(\mathbf{\alpha})=\|\mathbf{y}-\mathbf{X}\mathbf{\alpha}\|_{2}^{2}\) is the \(\ell_{2}\)-loss and \(g(\mathbf{\delta})=\lambda\|\mathbf{\delta}\|_{1}\) is the scaled \(\ell_{1}\)-norm penalty. As shown in Boyd et al. (2011), updates for the parameters are straightforward. In fact, since \(f\) is quadratic, the update step for \(\mathbf{\alpha}\) has a least squares form and the \(\mathbf{\delta}\) update amounts to soft-thresholding a given vector. Although these updating rules could be applied, an acceleration scheme for this problem is exploited. In fact, as demonstrated in Ramdas and Tibshirani (2016), a different parametrization of the ADMM can save computational time due to the existence of efficient algorithms for the constant-order trend filtering problem. The idea is to reformulate the problem as follows.
minimize \[\|\mathbf{y}-\mathbf{X}\mathbf{\alpha}\|_{2}^{2}+\lambda\|\mathbf{D}^{(1)}\mathbf{\delta}\|_{1}\,,\] (11) subject to \[\mathbf{D}^{(k)}\mathbf{\alpha}-\mathbf{\delta}=0.\]
where \(\mathbf{D}^{(1)}\) is the discrete difference matrix of order 1. The reader can verify the equivalence between problem (10) and problem (11). Hereafter, we derive the specialized parameter updates needed for the generic \(t+1\) iteration:
\[\mathbf{\alpha}^{t+1} =(\mathbf{X}^{\top}\mathbf{X}+\rho(\mathbf{D}^{(k)})^{\top}\mathbf{D}^{(k)})^{-1}(\mathbf{X}^{\top}\mathbf{y}+\rho(\mathbf{D}^{(k)})^{\top}(\mathbf{\delta}^{t}-\mathbf{u}^{t}))\,, \tag{12}\]
\[\mathbf{\delta}^{t+1} =\arg\min_{\mathbf{\delta}}\|\mathbf{D}^{(k)}\mathbf{\alpha}^{t+1}+\mathbf{u}^{t}-\mathbf{\delta}\|_{2}^{2}+\lambda/\rho\,\|\mathbf{D}^{(1)}\mathbf{\delta}\|_{1}\,, \tag{13}\]
\[\mathbf{u}^{t+1} =\mathbf{u}^{t}+\mathbf{D}^{(k)}\mathbf{\alpha}^{t+1}-\mathbf{\delta}^{t+1}\,. \tag{14}\]
The update for \(\mathbf{\delta}\) is much more involved than a simple soft-thresholding and requires solving a new constant-order trend filtering problem, that is, a one-dimensional fused lasso problem. However, fast solutions are available by employing the dynamic programming solver by Johnson (2013) or the proposal by Davies and Kovac (2001) based on the taut string principle. Ramdas and Tibshirani (2016) showed the superiority of this specialized ADMM formulation over the classical one in terms of convergence rates: the single operation is more expensive than the one in the usual parametrization, but convergence is achieved in fewer iterations, leading to an overall gain in terms of computational time.
The parameter \(\rho\) is sometimes made adaptive by allowing a different value at each iteration to speed up the learning process. Using a varying \(\rho\), one has to compute \((\mathbf{X}^{\top}\mathbf{X}+\rho^{t}\mathbf{D}^{\top}\mathbf{D})^{-1}\) at each iteration of the algorithm, and this can be prohibitive even for moderate dimensions. With a fixed \(\rho\) instead, one can precompute the quantity \((\mathbf{X}^{\top}\mathbf{X}+\rho\mathbf{D}^{\top}\mathbf{D})^{-1}\), which is never modified by the updating rules, at the expense of some more iterations. In our implementation, we found this last approach faster and, in particular, we followed Ramdas and Tibshirani (2016) by setting \(\rho=\lambda\), which led to stable solutions. From a practical point of view, often the entire solution path is needed as a function of \(\lambda\). In this case, a speed-up is obtained by using warm starts, i.e., by starting the algorithm from the solution obtained for the previous value of the regularization parameter.
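To make the scheme concrete, the sketch below implements the plain ADMM parametrization of problem (10), where the \(\mathbf{\delta}\)-update reduces to soft-thresholding; the specialized variant of (11) simply replaces that step with a call to a fast fused lasso solver. Following the discussion above, \(\rho\) is kept fixed so that the system matrix is factorized only once.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_trend_filter_regression(X, y, k=3, lam=1.0, rho=1.0, n_iter=500):
    """ADMM for min_f 0.5 * ||y - X f||_2^2 + lam * ||D^(k+1) f||_1 (plain parametrization)."""
    p = X.shape[1]
    D = np.diff(np.eye(p), n=k + 1, axis=0)
    delta = np.zeros(D.shape[0])
    u = np.zeros(D.shape[0])
    # fixed rho: the system matrix is inverted once and reused at every iteration
    A_inv = np.linalg.inv(X.T @ X + rho * D.T @ D)
    Xty = X.T @ y
    f = np.zeros(p)
    for _ in range(n_iter):
        f = A_inv @ (Xty + rho * D.T @ (delta - u))      # alpha-update (least squares form)
        delta = soft_threshold(D @ f + u, lam / rho)     # delta-update (soft-thresholding)
        u = u + D @ f - delta                            # dual update
    return f
```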
Lastly, note that slight modifications are needed in the presence of scalar covariates: the problem is stated as the minimization of \(\|\mathbf{y}-\tilde{\mathbf{X}}\mathbf{\alpha}\|_{2}^{2}+\|\mathbf{D}^{(1)}\mathbf{\delta}\|_{1}\) subject to \(\tilde{\mathbf{D}}^{(k)}\mathbf{\alpha}-\mathbf{\delta}=0\) and the updating rules are the same as ((12) - (14)) except for the substitution of \(\tilde{\mathbf{X}}\) and \(\tilde{\mathbf{D}}^{(k)}\) in place of \(\mathbf{X}\) and \(\mathbf{D}^{(k)}\).
For the generalized functional linear model, we develop an iterative reweighted penalized least squares approach based on the alternation of a Newton step and an ADMM step. Specifically, problem (5) can be written as
\[\text{minimize} L(\mathbf{y};\mathbf{X}\mathbf{\alpha})+\|\mathbf{D}^{(1)}\mathbf{\delta}\|_{1}\,,\] (15) subject to \[\mathbf{D}^{(k)}\mathbf{\alpha}-\mathbf{\delta}=0\,.\]
In the first step of the algorithm, given the current estimate \(\mathbf{\alpha}^{t}\) we approximate the generic loss function \(L(\mathbf{y};\mathbf{X}\mathbf{\alpha})\) around \(\mathbf{\alpha}^{t}\) by a quadratic loss \(\|\tilde{\mathbf{y}}^{t}-\tilde{\mathbf{X}}^{t}\mathbf{\alpha}\|_{2}^{2}\), where \(\tilde{\mathbf{y}}^{t}=(\mathbf{W}^{t})^{1/2}\mathbf{s}^{t}=(\mathbf{W}^{t})^{1/2}(\mathbf{X}\mathbf{ \alpha}^{t}+(\mathbf{V}^{t})^{-1}(\mathbf{y}-\mathbf{\mu}^{t}))\) and \(\tilde{\mathbf{X}}^{t}=(\mathbf{W}^{t})^{1/2}\mathbf{X}\), building a penalized least squares problem. This step is intended as a Fisher scoring update. The quantities \(\mathbf{\mu}=\mathbb{E}(\mathbf{y}|\mathbf{X})\), \(\mathbf{V}=V(\mathbf{\mu})\) and \(\mathbf{W}=V(\mathbf{\mu})^{-1}(g^{\prime}(\mathbf{\mu}))^{-2}\) depend on the random variable characterizing the response. For example, if \(y_{i}\sim Bernoulli(\pi_{i})\) we have \(\mu_{i}=\pi_{i}=\exp\{\int X_{i}(\omega)f(\omega)d\omega\}/(1+\exp\{\int X_{i }(\omega)f(\omega)d\omega\})\), \(V(\mu_{i})=\mu_{i}(1-\mu_{i})\), \(g^{\prime}(\mu_{i})=1/V(\mu_{i})\) and \(W_{ii}=V(\mu_{i})\) while if \(y_{i}\sim Poisson(\lambda_{i})\) we have \(\mu_{i}=\lambda_{i}=\exp\{\int X_{i}(\omega)f(\omega)d\omega\}\), \(V(\mu_{i})=\mu_{i}\), \(g^{\prime}(\mu_{i})=1/V(\mu_{i})\) and \(W_{ii}=V(\mu_{i})\).
In the second step we solve the penalized problem by applying ADMM updates ((12) - (14)) until convergence, just by replacing \(\mathbf{X}\) with \(\tilde{\mathbf{X}}^{t}\) and \(\mathbf{y}\) with \(\tilde{\mathbf{y}}^{t}\), thus obtaining \(\mathbf{\alpha}^{t+1}\). The two steps are repeated until some stopping criterion is achieved and the final estimator is obtained.
Lastly, note that it is possible to use the same machinery presented in this section for the multiple penalty approach too, by stacking the two matrices \(\mathbf{D}^{(k+1)}\) and \(\mathbf{D}^{(\ell+1)}\) to form \(\tilde{\mathbf{D}}\), which will replace the difference matrix in the ADMM updates.
## 5 Simulation study
In this section, we assess the performance of the proposed methods by means of simulations. We first generate a sample of functional data \(\{X_{i}(\omega)\}_{i=1}^{n}\) from a B-spline basis with 10 equispaced internal knots, drawing each coefficient from a standard normal distribution. The resulting functions are evaluated on an equispaced grid of \(p=100\) points in order to form the \(n\times p\) matrix \(\mathbf{X}\), and then kept fixed in all simulation repetitions. We define several scenarios that can be addressed with one of the methods previously described in the following way.
In scenario a), given the sample of functional data \(\{X_{i}(\omega)\}_{i=1}^{n}\) we generate a sample of scalar responses from a Gaussian distribution \(y_{i}\sim N(\mu_{i},\sigma^{2})\) where the expected value depends linearly on the functional covariate, i.e. \(\mu_{i}=\int X_{i}(\omega)f(\omega)d\omega\). We set \(n=250\) and a signal-to-noise ratio equal to 4.
In scenario b), in addition to the functional data sample, we also consider a set of scalar covariates \(z_{1i},\ldots,z_{ri}\) for each observational unit. These covariates are generated from a standard Gaussian distribution and are independent of each other. Then, a sample of scalar responses is
Figure 2: Estimated functional coefficient for \(f_{3}(\omega)\), with different combinations of \((\lambda_{1},\lambda_{2})\) in the mixed penalty case \(k=0,l=3\). The dashed-black lines correspond to the true function, while the solid-red lines correspond to estimates. From top to bottom, the estimated functional coefficient approaches a piecewise-constant function; from left to right, it approaches a cubic function.
obtained from a Gaussian distribution \(y_{i}\sim N(\mu_{i},\sigma^{2})\) where the expected value depends linearly on the functional and scalar covariates, i.e. \(\mu_{i}=\int X_{i}(\omega)f(\omega)d\omega+\sum_{j=1}^{r}z_{ji}\gamma_{j}\). We set \(n=250\), \(r=5\), \(\gamma=(2,-1,1,0,0)\) and a signal-to-noise ratio equal to 4.
In scenario c), given the functional data sample, we generate scalar responses from a Bernoulli distribution \(y_{i}\sim Bernoulli(\pi_{i})\) where \(g(\pi_{i})=\text{logit}\{\pi_{i}\}\) depends linearly on the functional covariate, i.e. \(\text{logit}\{\pi_{i}\}=\int X_{i}(\omega)f(\omega)d\omega\). We set \(n=250\).
We combine each described scenario with three different specifications of the unknown regression function \(f(\omega)\). In detail, \(f_{1}(\omega)\) is a piecewise cubic function in \([0,1]\) built from a cubic B-spline basis with 3 internal knots at \(0.2,0.75\) and \(0.9\), \(f_{2}(\omega)\) is the classical mexican hat function
\[f_{2}(\omega)=(1-\omega^{2})\text{exp}\{-\omega^{2}/2\}\quad\text{for $\omega \in[-5,5]$},\]
and \(f_{3}(\omega)\) is the same function with truncated peaks
\[f_{3}(\omega)=\begin{cases}f_{2}(\omega)&\text{if $f_{2}(\omega)\in[-0.3,0.5] $}\,,\\ 0.5&\text{if $f_{2}(\omega)>0.5$}\,,\\ -0.3&\text{if $f_{2}(\omega)<-0.3$}\,.\end{cases}\]
All functions are evaluated on the same equispaced grid of \(p=100\) points used to generate \(\{X_{i}(\omega)\}_{i=1}^{n}\).
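The functional covariates of the simulation study can be generated along the following lines; the sketch builds random curves from a cubic B-spline basis with 10 equispaced internal knots, evaluated on a grid of \(p=100\) points, with the boundary-knot handling being one possible choice rather than necessarily the one used here.

```python
import numpy as np
from scipy.interpolate import BSpline

def simulate_functional_covariates(n=250, p=100, n_internal=10, degree=3, seed=0):
    """Draw n random curves from a cubic B-spline basis with standard normal coefficients."""
    rng = np.random.default_rng(seed)
    omega = np.linspace(0.0, 1.0, p)
    internal = np.linspace(0.0, 1.0, n_internal + 2)[1:-1]           # equispaced internal knots
    knots = np.r_[[0.0] * (degree + 1), internal, [1.0] * (degree + 1)]
    n_basis = len(knots) - degree - 1                                # number of basis functions
    X = np.empty((n, p))
    for i in range(n):
        coefs = rng.standard_normal(n_basis)                         # N(0, 1) basis coefficients
        X[i] = BSpline(knots, coefs, degree)(omega)
    return omega, X

omega, X = simulate_functional_covariates()
```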
For all the \(B=100\) synthetic samples generated, we estimate the regression parameters with the trend filtering approach, penalizing the fourth derivative (_TF-4_), the first derivative (_TF-1_) and both of them (_MTF_). For comparison purposes we also employ the spline method (_SPL_) outlined in Goldsmith et al. (2011) penalizing the second derivative of the function. The tuning parameters for all the methods have been selected using a separate validation set. In Table 1 we present for our methods and the spline estimator the value of the Integrated Mean
\begin{table}
\begin{tabular}{l l l l} \hline _Function_ & \(f_{1}(\omega)\) & \(f_{2}(\omega)\) & \(f_{3}(\omega)\) \\ \hline Scenario a) & & & \\ _TF-4_ & 0.335 (0.401) & 0.123 (0.173) & 0.265 (0.073) \\ _TF-1_ & 1.969 (1.821) & 2.120 (0.680) & 0.595 (0.339) \\ _MTF_ & 0.588 (0.514) & 0.297 (0.296) & 0.184 (0.081) \\ _SPL_ & 0.669 (0.666) & 0.207 (0.860) & 0.357 (0.405) \\ Scenario b) & & & \\ _TF-4_ & 0.382 (0.434) & 0.139 (0.189) & 0.269 (0.083) \\ _TF-1_ & 1.584 (0.335) & 2.035 (0.189) & 0.561 (0.069) \\ _MTF_ & 0.513 (0.395) & 0.302 (0.299) & 0.194 (0.083) \\ _SPL_ & 1.101 (1.014) & 0.221 (0.941) & 0.368 (0.477) \\ Scenario c) & & & \\ _TF-4_ & 2.051 (1.861) & 0.822 (0.586) & 0.680 (0.472) \\ _TF-1_ & 4.334 (2.180) & 2.705 (1.091) & 0.907 (0.363) \\ _MTF_ & 3.713 (2.386) & 1.165 (0.974) & 0.579 (0.298) \\ _SPL_ & 2.698 (1.122) & 0.908 (1.676) & 0.630 (0.366) \\ \hline \end{tabular}
\end{table}
Table 1: Average Integrated Mean Squared Error (MISE) and its standard error (in parentheses) over 100 repetitions for the estimation of three regression functions (details in the text). TF-4: Trend filtering with penalization on fourth derivative only; TF-1: Trend filtering with penalization on the first derivative only; MTF: Trend filtering with penalization on both fourth and first derivative; SPL: Penalized splines as in Goldsmith et al. (2011).
Squared Error (MISE) defined as
\[\text{MISE}(\widehat{f})=\int\{f(\omega)-\widehat{f}(\omega)\}^{2}d\omega,\]
evaluated on the finite grid \(\mathbf{\omega}\), averaged over all simulation repetitions, and its standard error (in parentheses). The proposed approach shows superior performance in all the combinations of functions and scenarios considered, when compared to the spline methodology. In fact, this latter strategy is not well suited in situations where the regression function is spatially heterogeneous. Moreover, we observe that, among the different specifications of the trend filtering, the one penalizing the fourth derivative achieves the best results in estimating \(f_{1}(\omega)\) and \(f_{2}(\omega)\), in all considered scenarios. Unfortunately, penalizing the first derivative does not lead to satisfactory results, for two main reasons: the estimated regression function is not continuous, contrary to what is commonly assumed in functional data analysis, and the estimation error is large due to the inherent smoothness of the considered unknowns. However, adding the first derivative penalization to the plain trend filtering of order four leads to an improved performance if the regression function is particularly complex, as in the case of \(f_{3}(\omega)\). To elucidate the behaviour of double penalization in this scenario, in Figure 2 we graphically depict the unknown function and several estimates based on different values of the parameter \(\mathbf{\lambda}=(\lambda_{1},\lambda_{2})\). Starting from the upper left corner, where the impact of regularization is the lowest, we see that increasing \(\lambda_{1}\) while keeping \(\lambda_{2}\) fixed leads to almost piecewise constant solutions. By contrast, increasing \(\lambda_{2}\) while keeping \(\lambda_{1}\) fixed leads to almost piecewise cubic functions. However, since \(f_{3}(\omega)\) exhibits both features, a better reconstruction is obtained by combining the two penalties, as can be appreciated in the lower right corner of the figure. Lastly, note that this specification automatically includes the "marginal" models with only one of the two derivatives penalized.
## 6 Applications to milk spectral data
In the following, the proposed method is applied to the first set of data introduced in Section 2. Following suggestions from the literature, prior to running the analyses a variable aggregation step has been performed. In fact, it has been pointed out (see e.g., Murphy et al., 2010) that the aggregation of adjacent wavelengths implies almost negligible losses in terms of information and predictive abilities. This is coherent with the idea that, when dealing with spectra, the strong correlations among wavelengths allow one to work on data with slightly lower resolution while retaining most of the informative content. Accordingly, we aggregate four adjacent wavelengths, to reduce the overall computational cost, resulting in a dataset with \(n=622\) milk spectra and \(p=264\) wavelengths.
As briefly mentioned in Section 2, in the regression framework the proposed method has been used to predict the \(\kappa\)-casein content in the milk samples. The actual observed values for the response variable, expressed in grams per liter of milk, were collected using reverse-phase high performance liquid chromatography (HPLC), with an adaptation of the methodology considered in Visser et al. (1991). This technology is known to be expensive and time-consuming and is not considered suitable for modern large-scale applications; therefore, the calibration of statistical tools, used in conjunction with infrared spectroscopy, can be highly beneficial for research in the dairy framework and for the dairy production systems.
\(\kappa\)-casein has been selected as the milk trait to be predicted as it is one of the major components of milk, playing an essential role in cheese production systems, affecting both cheese yield and its characteristics (Wedholm et al., 2006). Moreover, \(\kappa\)-casein is also used as a food additive and it generally represents an important economic factor whose timely and precise prediction might increase the efficiency of the dairy production chain. For these reasons, milk casein content is nowadays also considered as one of the determinants to estimate the breeding values of the animals, inspiring research lines on genetic control and selective breeding (Bittante
and Cecchinato, 2013; Bittante et al., 2022). Exploratory analysis of this variable revealed a strong asymmetric behaviour of the empirical distribution. For this reason, we considered as a response variable for our model the logarithm of \(\kappa\)-casein.
In this section, the model in (6) has been considered, with \(k=3\) and \(l=0\), thus penalizing the fourth and the first derivative, respectively. This choice can be justified by the assumption of a regression function that is smooth in some parts of the domain and flat in some others. Note, as mentioned in Section 5, that the marginal formulations with a penalty only on the fourth or only on the first derivative are included as limiting models. The hyperparameters \(\lambda_{1}\) and \(\lambda_{2}\), which control the strength of the penalties, have been selected through a cross-validation scheme. In conjunction with the spectral variables, we also consider some scalar covariates, as per the extension of the model outlined in Section 3.1; in particular, information on the season when the milk samples were collected, the milking time (morning or afternoon milking), the cows' parity, and the number of days an animal has been in milk during the current lactation (days in milk). This implies the presence of \(r=6\) additional scalar variables, with a total of 270 covariates. The estimated regression function, together with the inferential results obtained by means of the procedure introduced in Section 3.2, is visually reported in Figure 3, while the results concerning the scalar variables are shown in Table 2.
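The cross-validation step can be sketched as follows; here `fit_fn` and `predict_fn` stand for hypothetical handles to the penalized estimator and its predictor (the actual estimator is not reproduced), and the grids, fold count, and error metric are assumptions.

```python
import numpy as np
from itertools import product
from sklearn.model_selection import KFold

def select_lambdas(X, y, fit_fn, predict_fn, grid1, grid2, n_splits=10, seed=1):
    """Choose (lambda1, lambda2) by K-fold cross-validated squared prediction error."""
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    best_pair, best_err = None, np.inf
    for lam1, lam2 in product(grid1, grid2):
        fold_errors = []
        for train_idx, test_idx in folds.split(X):
            model = fit_fn(X[train_idx], y[train_idx], lam1, lam2)
            pred = predict_fn(model, X[test_idx])
            fold_errors.append(np.mean((y[test_idx] - pred) ** 2))
        err = np.mean(fold_errors)
        if err < best_err:
            best_pair, best_err = (lam1, lam2), err
    return best_pair, best_err
```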
The method shows high prediction accuracy, with a cross-validated mean square prediction error equal to 0.04986. The result has been additionally compared with a PLS-based regression approach, unarguably representing the state-of-the-art when working with infrared spectroscopy data, which resulted in a cross-validated mean square prediction error of 0.05457.
As mentioned, our proposal introduces other relevant strengths while showing improved predictive performance. First, it respects and preserves the functional nature of the data, without mapping them into lower-dimensional latent spaces. Consequently, it provides richer insights on the studied phenomenon and, generally speaking, an easier interpretation of the results; this potentially sheds light on the chemical factors which are the main determinants of casein content in milk. In fact, a thorough analysis of the results depicted in Figure 3 highlights some interesting behaviours. First of all, the inferential routine outlined in Section 3.2 allows the detection of some spectral regions which are considered to be uninformative for the determination of the \(\kappa\)-casein content in the milk. For instance, our method considers as uninformative the spectral regions from 1619 cm\({}^{-1}\) to 1673 cm\({}^{-1}\) and from 3069 cm\({}^{-1}\) to 3663 cm\({}^{-1}\). In the literature, these highly noisy regions are designated as water absorption areas; they are usually considered uninformative and are thus removed from the data prior to the analyses (Rutten et al., 2011). Nonetheless, the determination of these regions remains controversial and somewhat ambiguous, as it can be influenced by spectrometer-specific characteristics. Interestingly, our method marks as uninformative additional wavelengths adjacent to these highly noisy regions; this is coherent with practitioners' experiences, which often point out that water may influence larger portions of the spectrum than those suggested in the literature.
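The confidence bands underlying these statements can be obtained with a wild bootstrap along the following lines; this is only a generic sketch in which `fit_fn` is a hypothetical handle to the penalized estimator and the scalar covariates are omitted for brevity, so it does not reproduce the exact procedure of Section 3.2.

```python
import numpy as np

def wild_bootstrap_bands(X, y, fit_fn, B=200, alpha=0.05, seed=None):
    """Wild-bootstrap pointwise bands for a functional coefficient.

    fit_fn(X, y) is a user-supplied routine returning the estimated
    coefficient vector (a placeholder for the penalized estimator).
    """
    rng = np.random.default_rng(seed)
    theta_hat = fit_fn(X, y)
    resid = y - X @ theta_hat
    boot = np.empty((B, theta_hat.size))
    for b in range(B):
        w = rng.choice([-1.0, 1.0], size=y.size)   # Rademacher weights
        y_star = X @ theta_hat + w * resid          # perturbed responses
        boot[b] = fit_fn(X, y_star)
    lower = np.percentile(boot, 100 * alpha / 2, axis=0)
    upper = np.percentile(boot, 100 * (1 - alpha / 2), axis=0)
    return theta_hat, lower, upper
```

Regions where the resulting band does not contain zero are then flagged as informative, as in Figure 3.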
Focusing on the variables regarded as significant, it has to be noted that the proposal suggests that \(\kappa\)-casein can be predicted using a relatively small portion of the infrared spectrum. This is
\begin{table}
\begin{tabular}{l r r r} \hline Covariate & Lower (0.025) & Estimate & Upper (0.975) \\ \hline Intercept & 0.257 & **0.438** & 0.627 \\ Spring & -0.222 & **-0.133** & -0.038 \\ Summer & -0.075 & -0.019 & 0.026 \\ Milk time (morning) & -0.186 & **-0.129** & -0.074 \\ Parity (2) & -0.061 & -0.019 & 0.030 \\ Parity (3) & -0.043 & 0.004 & 0.052 \\ DIM & -0.001 & -0.000 & 0.000 \\ \hline \end{tabular}
\end{table}
Table 2: Estimated coefficients, and 95% confidence intervals, for the scalar covariates.
consistent with the results obtained in Frizzarin et al. (2021), where standard predictive tools displayed good performance while exploiting fewer wavelengths than those used to predict other milk proteins and technological traits. These indications are particularly important for the dairy industry, where there is an increasing demand for cheaper and potentially portable instruments that scan only relevant portions of the spectrum.
A proper interpretation of the specific peaks shown in the estimated regression function is complex since, for composite materials such as milk, chemical constituents have absorption peaks at different wavelengths which often overlap (Soyeurt et al., 2006). Nonetheless, some interesting behaviours can be highlighted. In general, we see a strong influence of the wavelengths in the so-called _fingerprint region_, below 1400 cm\({}^{-1}\)(Hewavitharana and van Brakel, 1997); this region is often regarded as highly informative for the analysis of proteinaceous material, as absorption related to amide group bonds appears here (Van Der Ven et al., 2002). Coherently, since \(\kappa\)-casein is a protein, our method flags as influential, and with a positive effect on the \(\kappa\)-casein concentration, those wavelengths around 1550 cm\({}^{-1}\) and 1250 cm\({}^{-1}\) which are associated with the amide II and amide III bands (De Marchi et al., 2009). In the region around 1100 cm\({}^{-1}\) and between 1200 cm\({}^{-1}\) and 1300 cm\({}^{-1}\), the peaks often depend on the phosphate bands; interestingly, phosphorus in milk occurs only in casein and not in whey proteins (Hewavitharana and van Brakel, 1997), and our method seems to be able to detect the importance of these areas.
Lastly, some insights can be obtained by inspecting the results for the scalar covariates. For example, milk samples collected in spring appear to have a significant decrease in terms of \(\kappa\)-casein concentration; knowing that cows calve in the first months of the year, this is consistent with the suggestions in Sorensen et al. (2003) where it is stated that casein concentration is usually lower after calving. Moreover, Quist et al. (2008); Forsback et al. (2010) showed that casein content is higher for afternoon and evening milkings, with respect to the morning ones;
Figure 3: Top: Sample of 25 spectra, with dark (light) colors corresponding to low (high) values of \(\kappa\)-casein. Bottom: Estimated functional coefficient with \(95\%\) bootstrap bands. The regions that do not contain zero are highlighted in red in the bottom line.
as can be seen in Table 2, this is confirmed by our results.
Generally speaking, the devised procedure is capable of adequately predicting the content of \(\kappa\)-casein in milk while, at the same time, paving the way for a convenient interpretation of the results, which is often more cumbersome when other predictive tools are considered. Finally, it should be noted that interpretation must be paired with, and can be enriched by, close cooperation with experts in the dairy and animal science fields.
### Application to cow dietary treatments
In this section, one of the extensions discussed in Section 3.1 is considered to analyze the second dataset introduced in Section 2. More specifically, assuming that the response variable arises from a Bernoulli distribution, we employ the proposed strategy to predict the cows' dietary treatment, relying only on the spectral information. Coherently with the application in the previous section, a wavelength aggregation step has been performed. Moreover, consistently with Frizzarin et al. (2021), some outlying spectra have been removed. This results in a dataset with \(n=4261\) spectra, \(2893\) from pasture-fed cows and \(1368\) from TMR-fed ones, and \(p=264\) measured wavelengths.
Hereafter, model (5) has been employed; in particular, we considered \(k=0\), therefore penalizing the first derivative, with the loss function chosen to accommodate the binary nature of the response variable. The hyperparameter \(\lambda\) has again been selected by cross-validation. The application of our method produces highly satisfactory performance, resulting in a cross-validated misclassification error equal to \(2.98\%\). This result has again been compared with the one obtained by means of a PLS-based discriminant analysis strategy, which produced a comparable cross-validated error equal to \(2.60\%\). This provides a strong indication of the suitability of our proposal, also when considered for classification purposes.
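A minimal sketch of such a binary fit is given below, again with a generic convex solver standing in for the actual algorithm; the \(1/p\) scaling, the absence of scalar covariates, and the variable names are assumptions made only for illustration.

```python
import numpy as np
import cvxpy as cp

def fused_logistic(X, y, lam):
    """Sketch of a functional logistic fit with an l1 first-difference penalty.

    X : (n, p) spectra, y : (n,) binary labels in {0, 1}. This mirrors the
    classification variant only in spirit; the exact formulation and solver
    of the paper are not reproduced.
    """
    n, p = X.shape
    theta = cp.Variable(p)
    beta0 = cp.Variable()
    D1 = np.diff(np.eye(p), n=1, axis=0)
    eta = beta0 + X @ theta / p
    # Convex logistic negative log-likelihood: log(1 + exp(eta)) - y * eta
    loss = cp.sum(cp.logistic(eta) - cp.multiply(y, eta))
    cp.Problem(cp.Minimize(loss + lam * cp.norm1(D1 @ theta))).solve()
    return beta0.value, theta.value
```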
Note that the extension to model (5) of the procedure outlined in Section 3.2 is not trivial. Nevertheless, even if it is not possible to draw formal inferential conclusions on the estimated functional coefficient, a closer inspection of the result allows us to obtain relevant insights, which can be further explored and integrated with subject-matter knowledge. Firstly, the penalization on the first derivative yields an estimated functional coefficient that is flat and equal to zero, or of negligible magnitude, over the highly noisy spectral regions from \(1604\) cm\({}^{-1}\) to \(1712\) cm\({}^{-1}\) and from \(3039\) cm\({}^{-1}\) to \(3810\) cm\({}^{-1}\), which strongly overlap with the water absorption areas, deemed irrelevant for discrimination. Small discrepancies with the results obtained in the previous section highlight how the proposed method could represent a completely data-driven and application-oriented way to detect uninformative spectral regions. Further inspection of the most relevant wavelengths leads to indications coherent with those available in the literature (see e.g., Frizzarin et al., 2021). For example, the _fingerprint region_ is again useful to discriminate between diets. Moreover, wavelengths between \(2854\) cm\({}^{-1}\) and \(2977\) cm\({}^{-1}\) seem to have a strong impact on the classification of feeding regimens, thus agreeing with the suggestions in De Marchi et al. (2011); Lefevre and Subirade (2000), where it is highlighted that this region is often used to estimate the milk fatty acid composition, which is in turn known to be highly correlated with the dietary treatment.
Concluding, the proposed classification tool, while respecting and preserving the functional nature of the data, is able to compete with state-of-the-art discriminative methods. Moreover, the inspection of the estimated coefficients allows us to gain relevant insights from a chemical standpoint, which might deserve further exploration by experts in the field.
## 7 Conclusions and future directions
In this work, we presented an adaptive functional framework for spectroscopy data, that stems from the trend filtering literature. The proposed regression method is characterized by high
flexibility and adaptivity with respect to the complexity of the underlying data generating process. In particular, the method is capable of capturing different degrees of regularity of the slope function, while accounting for the high dimensionality and strong correlation among the wavelengths thanks to the \(\ell_{1}\)-regularization. The estimation is supported by a fast optimization procedure that leverages the alternating direction method of multipliers framework, with a specialized acceleration scheme that provides superior convergence rates. The method is suitable for both Gaussian and non-Gaussian responses, and allows for the inclusion of scalar covariates, whose addition is often overlooked in the spectroscopy framework even though it might lead to better predictive performance. Moreover, the estimation strategy is enriched by a newly developed inferential procedure which allows one to draw formal conclusions on both the functional and the scalar components. These are obtained with a nonparametric bootstrap approach, i.e. the wild bootstrap, which is particularly appropriate in high-dimensional regression models where the noise distribution is unknown.
The high adaptivity and the availability of an inferential procedure are key features that enhance not only the interpretability of the results, but also their usability in real-world scenarios. Indeed, spectroscopy data present peculiar statistical challenges, in particular the intrinsic high dimensionality of the inputs and the strong correlation structure among the wavelengths. It is therefore paramount, from a practical perspective, to have a viable and interpretable tool that allows one to carry out inference on specific regions of the spectrum, in order to gain relevant knowledge on specific properties of the samples (e.g. \(\kappa\)-casein content) or to highlight differences due to external factors (e.g. dietary treatments).
The proposed methodology showed satisfactory performance in simulations and, more importantly, very promising results in the two spectroscopy-based data analyses. In terms of prediction accuracy, the results were either superior or comparable to the ones obtained by means of state-of-the-art techniques. In terms of inference, the flexibility of the model allowed a correct identification of the highly-noisy water absorption areas, without the necessity to remove such portions of the data prior to the analysis. Moreover, in both the regression and classification framework, informative peaks (e.g., those in the fingerprint region) were highlighted, providing interesting insights into which spectral regions affect certain properties of milk. The inclusion of covariates has also constituted a relevant advantage that resulted in interesting observations on the effect, for example, of the seasonality. It should be stressed that, even if the proposed methodology has been applied to MIR spectroscopy data, it may be extended to other data sharing similar features.
A first direction for future research might be the development of inferential procedures for the non-Gaussian response cases. This would solidify the interpretability of the proposed methods even further, for instance in the classification framework. Moreover, this represents a particularly stimulating open problem that might be approached via an appropriate generalization of the nonparametric bootstrap procedures. Another possible extension could be the introduction of more complex penalties, which would extend the applicability of the method to a wider range of problems.
## Acknowledgements
This publication has emanated from research conducted with the financial support of Science Foundation Ireland (SFI) and the Department of Agriculture, Food and Marine on behalf of the Government of Ireland under grant number (16/RC/3835).
|
2309.10123 | On the generalization of the Kruskal-Szekeres coordinates: a global
conformal charting of the Reissner-Nordstrom spacetime | The Kruskal-Szekeres coordinates construction for the Schwarzschild spacetime
could be viewed geometrically as a squeezing of the $t$-line associated with
the asymptotic observer into a single point, at the event horizon $r=2M$.
Starting from this point, we extend the Kruskal charting to spacetimes with two
horizons, in particular the Reissner-Nordstr\"om manifold, $\mathcal{M}_{RN}$.
We develop a new method for constructing Kruskal-like coordinates and find two
algebraically distinct classes charting $\mathcal{M}_{RN}$. We pedagogically
illustrate our method by constructing two compact, conformal, and global
coordinate systems labeled $\mathcal{GK_{I}}$ and $\mathcal{GK_{II}}$ for each
class respectively. In both coordinates, the metric differentiability can be
promoted to $C^\infty$. The conformal metric factor can be explicitly written
in terms of the original $t$ and $r$ coordinates for both charts. | Ali Fawzi, Dejan Stojkovic | 2023-09-18T19:56:43Z | http://arxiv.org/abs/2309.10123v2 | On the Generalization of the Kruskal-Szekeres Coordinates: A Global Conformal Charting of the Reissner-Nordstrom Spacetime
###### Abstract
The Kruskal-Szekeres coordinates construction for the Schwarzschild spacetime could be viewed geometrically as a squeezing of the \(t\)-line associated with the asymptotic observer into a single point, at the event horizon \(r=2M\). Starting from this point, we extend the Kruskal charting to spacetimes with two horizons, in particular the Reissner-Nordstrom manifold, \(\mathcal{M}_{RN}\). We develop a new method for constructing Kruskal-like coordinates and find two algebraically distinct classes charting \(\mathcal{M}_{RN}\). We pedagogically illustrate our method by constructing two compact, conformal, and global coordinate systems labeled \(\mathcal{GK}_{\mathcal{I}}\) and \(\mathcal{GK}_{\mathcal{II}}\) for each class respectively. In both coordinates, the metric differentiability can be promoted to \(C^{\infty}\). The conformal metric factor can be explicitly written in terms of the original \(t\) and \(r\) coordinates for both charts.
## I Introduction
Reissner-Nordstrom (RN) spacetime is a unique, static, spherically symmetric and asymptotically flat solution to the coupled set of Maxwell equations and Einstein Field equations. It describes the spacetime with the mass \(M\), measured in the asymptotic region, and a static spherical electric field sourced by the charge \(Q\) in the background, with the corresponding non-zero stress-energy tensor. Spherical-like coordinates, \((t,r,\theta,\phi)\), known as the Reissner-Nordstrom coordinates are the natural coordinates to represent the metric tensor \(g_{\mu\nu}\)[1; 2; 3; 4; 5]. This chart could be assigned to an asymptotic observer, named Bob, at \(r\rightarrow\infty\) equipped with a clock measuring the time \(t\). The RN metric in units (\(c=G=1\)) can be written as
\[\mathrm{d}S_{RN}^{2}=-\left(1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}\right)\mathrm{ d}t^{2}+\left(1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}\right)^{-1}\,\mathrm{d}r^{2}+r^{2 }\left(\,\mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\phi^{2}\right). \tag{1}\]
This coordinate system is ill-defined at two hypersurfaces (horizons). Similar to the Schwarzschild spacetime, the coordinate singularity \(g_{tt}=0\) locates the Killing horizons of the spacetime related to the Killing vector \(\partial_{t}\).
\[\begin{gathered} g_{tt}\left(r_{\pm}\right)=0,\\ r_{\pm}=M\pm\sqrt{M^{2}-Q^{2}}.\end{gathered} \tag{2}\]
For the non-extremal case, \(M>Q\), the Reissner-Nordstrom black hole has an inner \(r_{-}\) and outer \(r_{+}\) horizon, which makes its interior somewhat similar to the interior of the Kerr
spacetime [6; 7]. Further, in the region \(E_{-}=\{r|0<r<r_{-}\}\) the metric has the same signature as in the region \(E_{+}=\{r|r_{+}<r<\infty\}\). Consequently, the physical point-like singularity at \(r=0\) is timelike in nature, in sharp contrast to the spacelike Schwarzschild singularity. The metric is dynamical in the trapped and anti-trapped regions \(E=\{r|r_{-}<r<r_{+}\}\), since the \(r\) coordinate is timelike there due to the flip of the metric signature in these coordinates [4].
One way to illustrate the incompleteness of this chart around the black hole horizons is to use Bob's clock to time his girlfriend Alice who is, for unclear reasons, freely falling towards the RN black hole's outer horizon. While Alice measures a finite amount of time, \(\Delta\tau\), using her own clock in her rest frame, Bob measures a significantly dilated duration of time, \(\Delta t\), when timing Alice's worldline. In other words, Bob will never see Alice cross the outer event horizon in his lifetime. Therefore, better charts are needed there [8].
Finding new charting systems that penetrate null hypersurfaces in different spacetimes is a long-standing endeavor. Novikov coordinates [9], Lemaitre coordinates [10], Gullstrand-Painleve coordinates [11; 12], Eddington-Finkelstein coordinates [13], and Kruskal-Szekeres coordinates [14; 15; 16] are all examples of charts developed to overcome the shortcomings of the Schwarzschild coordinates near the horizon. Some of them have been generalized to the Reissner-Nordstrom [3] and Kerr spacetimes [17; 18]. Most of them were constructed by studying timelike and null geodesic behavior around those black holes. However, here we will adopt a more algebraic approach based on a geometrical interpretation of the problem analogous to the one found in [8].
Large astrophysical black holes are expected to be electrically neutral, given that our universe is electrically neutral at large scales [19]. One exception may be small primordial black holes that did not live long enough to get neutralized. Another exception might be black holes charged under some other hidden \(U(1)\) gauge group different from electromagnetism [20]. In addition, even a small amount of charge on a large black hole could be important when we encounter certain phenomena such as cosmic rays. This provides enough motivation to study RN black holes not only out of academic interest but also from a phenomenological point of view [21]. On the other hand, studying the causal structure of the RN black hole, which is entirely different from the one associated with the Schwarzschild spacetime, is important since it shares some generic features with other types of black holes with two horizons, e.g. the Kerr black hole which is much more relevant in astrophysical situations [7; 22].
Since rotating and charged black holes share a similar interior structure, this makes constructing Penrose diagrams for the RN metric a cumbersome task on its own. For example, Klosch and Strobl managed to provide _non-conformal_ global coordinates for the extreme
and non-extreme RN spacetimes [23]. However, the attempts to construct conformal global coordinates were so far based on patching two sets of the Kruskal-Szekeres coordinates \(\mathcal{K}_{\pm}\), where each set is well-behaved on one horizon \(r_{\pm}\) while it fails on the other one \(r_{\mp}\). This makes the region of validity for each chart \(\mathcal{E}_{+}=E_{+}\cup\{r|r=r_{+}\}\) and \(\mathcal{E}_{-}=E_{-}\cup\{r|r=r_{-}\}\) respectively. Switching between the two charts was the key to covering the whole RN manifold and constructing a global Penrose diagram [22; 24; 25]. Such patched Penrose diagrams, found in [4] for example, will still prove inconvenient if we want to study geodesics across the horizons [26]. To overcome this obstacle, we need to construct a global conformal coordinate system.
Recently in [27], Farshid proposed a _smoothing_ technique that could be used to provide a \(C^{2}\)-conformal global chart for the RN spacetime, and pointed out the possibility of generalizing the method to spherically symmetric spacetimes. The method used was reported to be a generalization of the one used in [22], aiming to promote the differentiability of the map. One can also find Penrose diagrams constructed using this method in [28]. The central idea of this work was to find coordinates that extrapolate to each of the Kruskal-Szekeres coordinates \(\mathcal{K}_{\pm}\) when approaching the horizon located at \(r=r_{\pm}\). In addition, the smoothing was achieved through the use of bump functions [29; 30]. A similar technique was used by Schindler in [31; 32], designed to provide a regular global chart for a special class of spherically symmetric spacetimes with multiple horizons. The reader can also find a comprehensive summary of the Penrose diagram theory in chapter one of Schindler's doctoral thesis [33].
In this work, we will define a new procedure that can produce compact, conformal, and global (CCG) charts that are valid at both the inner and outer horizons of RN spacetime, and for which the metric tensor is \(C^{\infty}\). Using this procedure we will provide two CCG coordinate systems for the RN spacetime, which we label as type-I and type-II coordinates, based on their class. Moreover, coordinates provided in [27] could be thought of as coordinates of type-II. Our method makes no underlying assumptions about the nature of the spacetime, other than it possesses two horizons. Therefore, to facilitate future applications of this procedure, we will present here a detailed pedagogical approach.
The structure of this paper is as follows. In section (II.1), we begin by reformulating the core idea of the Kruskal chart, and then revisit the Kruskal charting of the Schwarzschild (II.2) and the RN (II.4) spacetimes. In section (III), the main procedure for constructing generalized Kruskal charts is presented. The type-I and type-II coordinates as well as their relaxed versions for RN spacetime are given in (IV.1) and (IV.2). Finally, we discuss the outcome of the analysis and possible future work in section (V).
## II Preliminaries
### Kruskal-Szekeres coordinates
Kruskal-Szekeres coordinates represent a maximal CCG chart for the Schwarzschild metric and have been studied extensively in the literature [14; 15; 16; 34; 35]. Their global nature is attributed to two features: (i) they cover the null sphere located at radius \(r=2M\), which Bob fails to chart, and (ii) they are a maximal extension of the Schwarzschild chart, representing two copies of the Schwarzschild universe. The metric written in the spherical-like coordinates known as the Schwarzschild coordinates \(\left(t,r,\theta,\phi\right)\)1 where \(t\in\mathbb{R}\), \(r\in\mathbb{R}_{+}\backslash\{0\}\), \(\theta\in\left(0,\pi\right)\), and \(\phi\in\left[0,2\pi\right)\) takes the well-known form
Footnote 1: Since examining the behavior and possible problems of the spherical coordinates as \(r\to\infty\) falls beyond the scope of this work, the angular dependence \(\left(\theta,\phi\right)\) will be neglected from now on for simplicity.
\[dS^{2}_{Sch}=\left(\frac{r-2M}{r}\right)\left\{-dt^{2}+dr_{*}^{2}\right\}= \frac{1}{r(r_{*})}\left(r(r_{*})-2M\right)dS^{2}_{Con}, \tag{3}\]
where \(dS^{2}_{Sch}\) and \(dS^{2}_{Con}\) stand for the Schwarzschild and conformal metric respectively. Here, \(r_{*}\) is defined2 as follows
Footnote 2: Usually, the constant of integration in defining the tortoise coordinate, \(r_{*}\), is chosen to be \(-2Mln(2M)\) in order to maintain dimensionless quantity inside the natural logarithm. Here, for simplicity we omit this step.
\[exp\left(r_{*}\right)=exp\left(r\right)\left|r-2M\right|^{2M}. \tag{4}\]
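Indeed, integrating \(dr_{*}/dr=\left(1-2M/r\right)^{-1}\) gives

\[\frac{dr_{*}}{dr}=1+\frac{2M}{r-2M}\qquad\Longrightarrow\qquad r_{*}=r+2M\ln\left|r-2M\right|+\text{const},\]

which, once the constant is dropped as in the footnote, exponentiates to the relation above.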
It is worth emphasizing that the map from \(r\)-coordinate to its tortoise version \(r_{*}\) is bijective and its inverse is differentiable on each of \(\mathcal{S}_{+}\) and \(\mathcal{S}_{-}\) separately. This is obviously due to the modulus included in the definition of these coordinates in equation (4).
A rigorous procedure would involve solving the Einstein Field Equations in Kruskal coordinates (which is the _top-down_ approach as in [8; 36]3) by means of null-casting and redefining the null coordinates. Since the Schwarzschild coordinates cover only the regions \(\mathcal{S}_{-}=\{r|0<r<2M\}\) and \(\mathcal{S}_{+}=\{r|2M<r<\infty\}\) of one universe of the Kruskal metric, trying to map the local chart to the global one (i.e. the _bottom-up_ approach) is not quite rigorous, because the map between the two charts as well as the Jacobian, Hessian, and the higher-versions of it will be singular at the event horizon \(r=2M\)[37].
Footnote 3: The conformal factor in these references is written in terms of \(r\), however, it is more instructive to think of \(r(U,V)\) as a function of \(U\) and \(V\), and not as the areal coordinate \(r\).
Nevertheless, we seek a global chart in which the metric is at least \(C^{2}\) everywhere on the manifold in order to satisfy the coupled Field Equations which contain first and second derivatives of the metric. Thus, we can apply this bottom-up approach (as in most of the General Relativity textbooks [2; 38]) by studying the limit at \(r=2M\) and analytically continuing the metric there. At the end, the metric \(g_{\mu\nu}\) must be written explicitly in the Kruskal coordinates \(\left(T,R,\theta,\phi\right)\) only. In this paper, we will follow the bottom-up approach
to find the generalized Kruskal coordinates which chart the whole RN spacetime. Taking the Kruskal charting of the Schwarzschild black hole as our guide, we review the traditional derivation of the Kruskal coordinates.
### Construction of the Kruskal coordinates: Schwarzschild Spacetime
We begin by mapping the Schwarzschild coordinates to intermediate null coordinates, in particular the retarded (\(u\)) and advanced (\(v\)) time coordinates, defined as \(u=t-r_{*}\) and \(v=t+r_{*}\). To handle the coordinate singularity at the horizon, \(r=2M\), the null freedom is used to map this set to another set of null coordinates via \(u\to U\equiv h(u)\) and \(v\to V\equiv k(v)\). This gives
\[dS^{2}_{con}=-dudv=-\frac{dUdV}{\frac{dh}{du}\frac{dk}{dv}}\equiv-\frac{Q(U,V) dUdV}{r(U,V)-2M}, \tag{5}\]
where \(Q(U,V)\) is at least a \(C^{2}\) function on \(\mathcal{S}=\mathcal{S}_{+}\cup\mathcal{S}_{-}\cup\{r|r=2M\}\). This is achieved by employing the definition of \(r_{*}\). A sufficient coordinate transformation is given by
\[U\equiv\nu exp\left(\frac{-u}{4M}\right),\ \ \ V\equiv\nu exp\left(\frac{v}{4M} \right), \tag{6}\]
where
\[\nu=\begin{cases}+1&r>2M\\ -1&r<2M\end{cases}, \tag{7}\]
The signs \(\pm\) are included to achieve the maximal analytical extension of the metric. The product \(UV\) is positive in the regions II and III, and negative in the regions I and IV, following the convention given in [2]. The \(r\) coordinate is defined implicitly as
\[UV=exp\left(\frac{r}{2M}\right)(r-2M). \tag{8}\]
This equation can be explicitly solved for \(r\) by employing the multi-valued Lambert function \(W\)[39; 40],
\[r=2M\left[W\left(\frac{UV}{-2Me}\right)+1\right]. \tag{9}\]
Then, the Schwarzchild metric will have the following form in the new double null coordinates
\[dS^{2}_{Sch}=-\frac{16M^{2}e^{-\frac{r(U,V)}{2M}}}{r(U,V)}dUdV+r^{2}(U,V)d\Omega ^{2}. \tag{10}\]
Finally, the Kruskal coordinates \(T_{KS}\) and \(R_{KS}\) are related to the new null coordinates through the following transformations
\[\begin{split} U\equiv\frac{1}{2}\left(T_{KS}-R_{KS}\right),\\ V\equiv\frac{1}{2}\left(T_{KS}+R_{KS}\right).\end{split} \tag{11}\]
It is worth writing the final version of the metric in the Kruskal coordinates as
\[\begin{array}{c}dS_{Sch}^{2}=\frac{8Mexp\left(-W\left(T_{KS},R_{KS} \right)-1\right)}{W\left(T_{KS},R_{KS}\right)+1}(-dT_{KS}^{2}+dR_{KS}^{2})\\ \hskip 14.226378pt+4M^{2}\left(W\left(T_{KS},R_{KS}\right)+1\right)^{2}d\Omega ^{2}.\end{array} \tag{12}\]
As a cross-check, one could verify that the Einstein tensor \(G_{\mu\nu}\) corresponding to the Kruskal metric is zero everywhere on the Schwarzschild manifold, thus confirming that the stress-energy tensor \(T_{\mu\nu}\) is identically zero (as it must be for the Schwarzschild solution). This is true despite the fact that taking the derivatives of the metric with respect to the coordinates \((T,R)\) (using implicit differentiation with respect to \((t,r)\)) will be ill-defined at the event horizon. One could also verify that the maps between the Kruskal and the Schwarzschild chart are diffeomorphic in the regions \(S_{+}\) and \(S_{-}\)[37].
### A geometric picture of the Kruskal charting
The procedure of constructing Kruskal coordinates for Schwarzschild spacetime outlined in the previous section becomes limited when applied to spacetimes with more than one horizon. To be able to resolve this obstacle, we re-interpret the main premise of the construction. If Bob lived in a four-dimensional Minkowski spacetime, his clock would be able to properly time the events taking place there globally. However, once the spacetime is only asymptotically Minkowskian, the chart will fail near the null hypersurfaces. _But what if we start with a chart in the flat spacetime which is ill-defined at the locations defining these null hypersurfaces?_ For example, we can define a "bad" chart \(\mathcal{Z}\) in the conformal spacetime with the metric \(g_{\mu\nu}^{Con}\), in which any given time duration \(\Delta\tau\) of Alice's trip to the \(r=2M^{4}\) is mapped to \(\Delta\tilde{t}\to 0\).
Apparently, there is a family of these "_bad_" charts \(\mathcal{Z}\) that would be well defined on the physical spacetime, with the metric \(g_{\mu\nu}=\omega^{2}(x)g_{\mu\nu}^{Con}\), where \(\omega(x)\) is the conformal factor. They are only conditioned to contract the time interval \(\Delta\tau\) at the same rate as the dilation of time in Bob's frame. One can find an equivalent argument in [8] that we quote here "_A better coordinate system, one begins to believe, will take these two points at infinity and spread them out into a line in a new (\(r_{new},t_{new}\))-plane; and will squeeze the line (\(r=2M,t\) from \(-\infty\) to \(\infty\)) into a single point in the (\(r_{new},t_{new}\))-plane_".
As we will show here later, applying this simple argument to spacetimes with more than one horizon would be a tedious algebraic task. Mathematically, the fundamental premise of the construction is to find conformal coordinates \(\mathcal{Z}\) that generate poles of the same rank as zeros of the conformal factor. Then as the zeros and poles cancel out, the physical metric in
\(\mathcal{Z}\) will be CCG, in light of the bottom-up approach. In the next subsections, we will review the Kruskal charting of the RN spacetime following the notation in [3; 22].
### Outer and inner Kruskal coordinates: Reissner-Nordstrom spacetime
One example where the standard Schwarzschild-like Kruskal charting fails to produce a CCG chart is the RN spacetime.
\[\begin{split} dS_{RN}^{2}=\frac{\left(r-r_{+}\right)\left(r-r_{-} \right)}{r^{2}}\left\{-dt^{2}+dr_{*}^{2}\right\}=\frac{\left(r-r_{+}\right) \left(r-r_{-}\right)}{r^{2}}dS_{Con}^{2},\\ dS_{Con}^{2}=-dudv,\end{split} \tag{13}\]
where \(dS_{RN}^{2}\) stands for the RN metric, while \(\left(u,v\right)\) represent the double null coordinates constructed in the same manner as in the Schwarzschild case. The RN radial tortoise coordinate \(r_{*}\) is defined as
\[\begin{split} exp\left(r_{*}\right)=exp\left(r\right)\left|r-r_{+}\right|^{\frac{\alpha_{+}}{2}}\left|r-r_{-}\right|^{-\frac{\alpha_{-}}{2}},\\ \alpha_{+}\equiv\frac{2r_{+}^{2}}{r_{+}-r_{-}},\\ \alpha_{-}\equiv\frac{2r_{-}^{2}}{r_{+}-r_{-}},\end{split} \tag{14}\]
where \(\alpha_{+}\) and \(\alpha_{-}\) are the inverse surface gravities associated with \(r_{+}\) and \(r_{-}\), respectively.
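Indeed, a partial-fraction decomposition gives

\[\frac{dr_{*}}{dr}=\frac{r^{2}}{\left(r-r_{+}\right)\left(r-r_{-}\right)}=1+\frac{\alpha_{+}}{2}\,\frac{1}{r-r_{+}}-\frac{\alpha_{-}}{2}\,\frac{1}{r-r_{-}},\]

which integrates to the expression above, up to a constant that we again omit.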
Similar to the Schwarzschild tortoise coordinate, \(r_{*}(r)\) is bijective and its inverse is differentiable on \(E_{+}\), \(E\), and \(E_{-}\) separately. However, it should be possible to solve explicitly for \(r\) by employing generalized Lambert functions \(\mathcal{W}\)[41; 42; 43; 44]. Since this is a tedious task on its own, we confine our analysis to the main objective, while this step could be addressed in future work.
By examining the tortoise coordinate definition, it is obvious that a zero at \(r_{\pm}\) is always coupled with a pole at \(r_{\mp}\); hence it is not straightforward to factor out a product of simple poles at \(r_{+}\) and \(r_{-}\) in the conformal metric. Nevertheless, it remains possible to construct charts that are regular at one horizon but ill-defined at the other. These coordinates are regular in the domains \(\mathcal{E}_{+}\) and \(\mathcal{E}_{-}\), respectively. The outer \(\mathcal{K}_{+}\) and inner \(\mathcal{K}_{-}\) Kruskal coordinates are related to the "\(+\)" null-chart \((\mathcal{U}_{+},\mathcal{V}_{+})\) and the "\(-\)" null-chart \((\mathcal{U}_{-},\mathcal{V}_{-})\), respectively, following the same definition in (11). We will work with the following sign convention
\[\begin{split}\mathcal{U}_{+}&=\nu_{+}U_{+}, \mathcal{V}_{+}=\nu_{+}V_{+}\\ U_{+}&=exp\left(\frac{-u}{\alpha_{+}}\right), \mathcal{V}_{+}=exp\left(\frac{v}{\alpha_{+}}\right),\end{split} \tag{15}\]
where
\[\nu_{+}=\begin{cases}+1&r>r_{+}\\ -1&r<r_{+}\end{cases}, \tag{16}\]
to represent the maximal analytical extension of these coordinates. Then the \(t\) and \(r\) coordinates are characterized by the following curves in the \(\left(\mathcal{U}_{+},\mathcal{V}_{+}\right)\)-plane:
\[\begin{split}\mathcal{U}_{+}\mathcal{V}_{+}&=exp\left(\frac{2r}{\alpha_{+}}\right)\left(r-r_{+}\right)\left|r-r_{-}\right|^{-\alpha},\\ &\frac{\mathcal{V}_{+}}{\mathcal{U}_{+}}=\pm exp\left(+\frac{2t}{\alpha_{+}}\right).\end{split} \tag{17}\]
Similarly,
\[\begin{split}\mathcal{U}_{-}&=\nu_{-}U_{-},\qquad \qquad\mathcal{V}_{-}=\nu_{-}V_{-},\\ U_{-}&=exp\left(\frac{u}{\alpha_{-}}\right),\qquad V _{-}=exp\left(\frac{-v}{\alpha_{-}}\right)\end{split} \tag{18}\]
where
\[\nu_{-}=\begin{cases}+1&r>r_{-}\\ -1&r<r_{-}.\end{cases} \tag{19}\]
The \(t\) and \(r\) curves in the \(\left(\mathcal{U}_{-},\mathcal{V}_{-}\right)\)-plane are defined as
\[\begin{split}\mathcal{U}_{-}\mathcal{V}_{-}&=exp\left(-\frac{2r}{\alpha_{-}}\right)\left(r-r_{-}\right)\left|r-r_{+}\right|^{-\bar{\alpha}},\\ &\frac{\mathcal{V}_{-}}{\mathcal{U}_{-}}=\pm exp\left(-\frac{2t}{\alpha_{-}}\right),\end{split} \tag{20}\]
Consequently, the metric in these "\(+\)" or "-" null-charts becomes
\[\begin{split} dS_{RN}^{2}&=-\alpha_{\pm}\frac{\left( r-r_{+}\right)\left(r-r_{-}\right)}{r^{2}}\frac{d\mathcal{U}_{\pm}d\mathcal{V}_{\pm}}{ \mathcal{U}_{\pm}\mathcal{V}_{\pm}}\\ &=-\alpha_{+}\frac{exp\left(-\frac{2r}{\alpha_{+}}\right)}{r^{2}} \left(r-r_{-}\right)^{1+\alpha}d\mathcal{U}_{+}d\mathcal{V}_{+}\\ &=-\alpha_{-}\frac{exp\left(\frac{2r}{\alpha_{-}}\right)}{r^{2}} \left(r_{+}-r\right)^{1+\bar{\alpha}}d\mathcal{U}_{-}d\mathcal{V}_{-},\end{split} \tag{21}\]
where5
Footnote 5: The extreme cases (\(Q=M\) and \(Q>M\)) of the RN metric are not considered here.
\[\begin{split}\alpha&\equiv\frac{\alpha_{-}}{\alpha _{+}}=\left(\frac{r_{-}}{r_{+}}\right)^{2}\rightarrow\ \ \ 0<\alpha<1,\\ \bar{\alpha}&\equiv\frac{\alpha_{+}}{\alpha_{-}}= \left(\frac{r_{+}}{r_{-}}\right)^{2}\rightarrow\ \ \ \ \ \ \ \ 1<\bar{\alpha}.\end{split} \tag{22}\]
It is easy to check that the metric in the "\(+\)" ("\(-\)") null-coordinates is regular at the outer (inner) horizon \(r_{+}\) (\(r_{-}\)). However, the coordinates fail 6 at the inner (outer) horizon \(r_{-}\) (\(r_{+}\)). Moreover, the metric in the "\(+\)" null coordinates is not asymptotically flat, in agreement with the Schwarzschild induced metric defined on the hypersurfaces of fixed \(\theta\) and \(\phi\) in equation (12), where the conformal factor approaches zero as \(r\rightarrow\infty\). Nevertheless, global Kruskal coordinates could be built by combining the two definitions in (15, 18) (see e.g. the work of Carter [24], Hamilton [22], Schindler [31], and Farshid [27]). Although they all managed to find a regular metric across the horizons, the metric is still only \(C^{2}\) in the former case.
Footnote 6: \(dS_{RN}^{2}=0\) at \(r=r_{-}\) (\(r_{+}\)) in the “\(+\)” (“\(-\)”) coordinates according to equation (21).
## III Global conformal chart criteria
We start our analysis by studying the conditions needed for a valid conformal global chart. We want to map the double null coordinates \((u,v)\) to the global double null coordinates \((\tilde{u},\tilde{v})\), while still maintaining the light-cone invariance in the new coordinates \((\tilde{u},\tilde{v})\). The most direct way to achieve that will be to use only the null-freedom as
\[\begin{array}{ccc}\tilde{u}\equiv h(u)&\rightarrow&du=\frac{1}{\frac{dh}{du}}d\tilde{u},\\ \tilde{v}\equiv k(v)&\rightarrow&dv=\frac{1}{\frac{dk}{dv}}d\tilde{v}.\end{array} \tag{23}\]
To construct a well-defined chart on the entire Reissner-Nordstrom manifold, we identify three distinct possibilities with reference to the singularity structure of the term \(\frac{dh}{du}\frac{dk}{dv}\), focusing on its behavior at \(r=r_{-}\) and \(r=r_{+}\). The three options are:
1. _Type-O_: \(\frac{dh}{du}\frac{dk}{dv}\) has a zero either at \(r=r_{-}\) or \(r=r_{+}\). The regularity of the metric in the new \((\tilde{u},\tilde{v})\) coordinates would be achieved at \(r=r_{-}\) or \(r=r_{+}\) but not simultaneously. "\(\pm\)" null coordinates are examples for this case. However, generating nontrivial coordinates out of \((U_{\pm},V_{\pm})\) is possible7 Footnote 7: The transformations that lead to such coordinates are expected to be more complicated as they are restricted by the requirement to leave the singularity structure invariant or to generate a decoupled zero at the other horizon. This condition could be formulated as follows \[\frac{dh}{du}\frac{dk}{dv}=\left(r-r_{\pm}\right)\zeta\left(r_{*},t\right)\] (24)
2. _Type-I_: \(\frac{dh}{du}\frac{dk}{dv}\) has a product of zeros at \(r=r_{-}\) and \(r=r_{+}\). If we manage to factor out this product of zeros while keeping the associated poles decoupled, then we will have a conformal global coordinate for the RN spacetime. We will illustrate this case with an example in IV.1. This condition can be formulated as \[\frac{dh}{du}\frac{dk}{dv}=\left(r-r_{+}\right)\left(r-r_{-}\right)\gamma\left( r_{*},t\right)\] (25)
3. _Type-II_: a sum of decoupled simple zeros at \(r=r_{+}\) and \(r=r_{-}\), each coupled to a pole, and possibly zeros of constrained rank at \(r=r_{-}\) for the former and \(r=r_{+}\) for the latter. In principle, this mixture of poles and zeros might be easier to find compared to _Type-I_; however, the metric is expected to take a more complicated form. We will illustrate this case with an example in IV.2. This condition can be formulated as \[\frac{dh}{du}\frac{dk}{dv}=(r-r_{+})M_{+}(r_{*},t)+(r-r_{-})M_{-}(r_{*},t)+\beta(r_{*},t),\] (26)
The three differential equations listed above are sufficient to construct the desired singularity structure in each case, while the constraints are encoded within \(\zeta\), \(\gamma\), \(M_{\pm}\) and \(\beta\).
## IV Constructing CCG Systems for Reissner-Nordstrom Spacetime
### Type-I CCG Global Chart
As we mentioned before, just by looking at the definition of \(r_{*}\), there is no simple way of factorizing the zeros \((r-r_{+})\) or \((r-r_{-})\) without invoking poles at \(r_{-}\) or \(r_{+}\). Still, we can consider combining equations (20) and (17)
\[\mathcal{U}_{+}\mathcal{U}_{-}\mathcal{V}_{+}\mathcal{V}_{-}=\frac{(r-r_{+})}{\left|r-r_{+}\right|^{\bar{\alpha}}}\frac{(r-r_{-})}{\left|r-r_{-}\right|^{\alpha}}exp\left(\frac{2r}{\alpha_{+}}\right)exp\left(-\frac{2r}{\alpha_{-}}\right) \tag{27}\]
This may give us a hint for how to find \((\tilde{u},\tilde{v})\) with the desired map to fulfill the singularity structure of type-I. For example, we can start with the following definitions of \(\mathcal{GK}_{\mathcal{I}}\)
\[\frac{dh}{du}=\frac{\mu}{U_{+}^{-1}+U_{-}^{-1}},\hskip 28.452756pt\frac{dk}{ dv}=\frac{\mu}{V_{+}^{-1}+V_{-}^{-1}}, \tag{28}\]
where
\[\mu=\begin{cases}+1&r>r_{+}\mid\mid r<r_{-}\\ -1&r_{-}<r<r_{+}.\end{cases} \tag{29}\]
The definition we give in (28) reduces to evaluating the \(I_{1}\)-integration given here
\[I_{1}=\int\frac{1}{x^{q}+1}dx, \tag{30}\]
where \(q>1\). This integral has an upper and a lower bound; hence the sign convention we use here will locate the inner horizon \(r_{-}\), the outer horizon \(r_{+}\), and the asymptotically flat region \(r\rightarrow\infty\) according to the choice of the reference point. We choose that point to be the outer horizon \(u\rightarrow-\infty\) (\(v\rightarrow\infty\)). Accordingly, we have a monotonic map from \(u\) (\(v\)), defined in any of the regions \(E_{\pm}\) and \(E\), to \(\tilde{u}\) (\(\tilde{v}\)). Moreover, the map from \((t,r)\) to the latter coordinates is also _globally monotonic_. In addition, those coordinates have a built-in _compact_ domain and, hence, could be used directly to construct Penrose diagrams for the RN spacetime. The choice of signs \(\mu\) does not spoil the continuity or differentiability of the map; it simply results in a uniform signature of the generalized Kruskal coordinates of type-I. Nevertheless, it is sufficient if we have only a semi-positive conformal factor. Accordingly, the metric in the CCG type-I coordinates will take the following form.
\[\begin{split} dS_{RN}^{2}&=-\frac{1}{r^{2}}\left\{ \left|r-r_{-}\right|^{\alpha+1}exp\left(-\frac{2r}{\alpha_{+}}\right)+\left|r -r_{+}\right|^{\tilde{\alpha}+1}exp\left(\frac{2r}{\alpha_{-}}\right)\right. \\ &\left.+2\cosh\left[t\left(\frac{1}{\alpha_{+}}+\frac{1}{\alpha_{ -}}\right)\right]exp\left(r\left[\frac{-1}{\alpha_{+}}+\frac{1}{\alpha_{-}} \right]\right)\left|r-r_{+}\right|^{\frac{\alpha+1}{2}}\left|r-r_{-}\right|^{ \frac{\alpha+1}{2}}\right\}d\tilde{u}d\tilde{v}\end{split} \tag{31}\]
The metric possesses a conformal factor resembling the sum of the conformal factors of the \(\mathcal{K}_{\pm}\) in addition to a new time-dependent term that vanishes on both horizons. The conformal factor shown in equation (31) is a semi-positive function over the domain of
the \(r\)-coordinate, hence the metric is well-behaved on both of the horizons and takes the following asymptotic behavior as \(r\to r_{+}\),
\[dS_{RN}^{2}(r\to r_{+})\rightarrow-\frac{exp\left(-\frac{2r}{\alpha_{+}} \right)}{r^{2}}\left(r-r_{-}\right)^{1+\alpha}d\tilde{u}d\tilde{v}, \tag{32}\]
Similarly as \(r\to r_{-}\),
\[dS_{RN}^{2}(r\to r_{-})\rightarrow-\frac{exp\left(\frac{2r}{\alpha_{-}} \right)}{r^{2}}\left(r_{+}-r\right)^{1+\tilde{\alpha}}d\tilde{u}d\tilde{v}, \tag{33}\]
Executing the integration in \(I_{1}\) might be doable through the use of hypergeometric functions \({}_{2}F_{1}(a,b;c;x)\)[45; 46; 47] or, more generically, by expanding the integrand and integrating term by term.
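One such closed form, which can be checked by expanding the integrand in powers of \(x^{q}\) (and extended by analytic continuation), is

\[\int_{0}^{x}\frac{dy}{1+y^{q}}=x\,{}_{2}F_{1}\!\left(\frac{1}{q},1;1+\frac{1}{q};-x^{q}\right),\qquad\int_{0}^{\infty}\frac{dy}{1+y^{q}}=\frac{\pi}{q\sin\left(\pi/q\right)},\qquad q>1,\]

and the finiteness of the second integral is precisely what renders the resulting chart compact.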
However, this is not the end of the story. Similar to the Schwarzschild case, the Jacobian and its higher-order analogues will be undefined at the horizons; thus it is not reliable to take the derivatives of the conformal factor implicitly. One can also argue that there should be two kinks present at \(r=r_{+}\) and \(r=r_{-}\), due to the absolute values appearing in the conformal factor as well as the monotonic map between \(r\) and \((\tilde{u},\tilde{v})\). However, we can get rid of these kinks, for instance, through the use of relaxation functions.
In short, another set of coordinates \((\tilde{u},\tilde{v})\) can be introduced which will inherit the properties mentioned above and possess a relaxed (well-behaved and smooth) conformal factor at \(r_{+}\) and \(r_{-}\). As a consequence, the metric will be guaranteed to be \(C^{\infty}\) in these new coordinates. We choose the function \(\tanh x\) to do this job. The relaxed coordinate transformation is
\[\begin{split}\frac{dh}{du}&=\frac{\mu}{\tanh\left[U_{-}^{2}\right]U_{+}^{-1}+\tanh\left[U_{+}^{2}\right]U_{-}^{-1}},\\ \frac{dk}{dv}&=\frac{\mu}{\tanh\left[V_{-}^{2}\right]V_{+}^{-1}+\tanh\left[V_{+}^{2}\right]V_{-}^{-1}}.\end{split} \tag{34}\]
The metric now becomes
\[\begin{split} dS_{RN}^{2}&=-\frac{1}{r^{2}}\left\{ Q_{-}(r,t)\left|r-r_{-}\right|^{\alpha+1}exp\left(-\frac{2r}{\alpha_{+}} \right)+Q_{+}(r,t)\left|r-r_{+}\right|^{\tilde{\alpha}+1}exp\left(\frac{2r}{ \alpha_{-}}\right)\right.\\ &\left.+\tilde{Q}(r,t)exp\left(r\left[\frac{-1}{\alpha_{+}}+ \frac{1}{\alpha_{-}}\right]\right)\left|r-r_{+}\right|^{\frac{\alpha+1}{2}} \left|r-r_{-}\right|^{\frac{\alpha+1}{2}}\right\}d\tilde{u}d\tilde{v},\end{split} \tag{35}\]
where \(Q_{-}\), \(Q_{+}\), and \(\tilde{Q}\) are defined as
\[\begin{split} Q_{-}(r,t)&=\tanh\left[exp\left( \frac{2(t-r)}{\alpha_{-}}\right)\frac{\left|r-r_{-}\right|}{\left|r-r_{+} \right|^{\tilde{\alpha}}}\right]\tanh\left[exp\left(\frac{-2(t+r)}{\alpha_{-} }\right)\frac{\left|r-r_{-}\right|}{\left|r-r_{+}\right|^{\tilde{\alpha}}} \right]\\ Q_{+}(r,t)&=\tanh\left[exp\left(\frac{2(r-t)}{\alpha_ {+}}\right)\frac{\left|r-r_{+}\right|}{\left|r-r_{-}\right|^{\tilde{\alpha}}} \right]\tanh\left[exp\left(\frac{-2(-t+r)}{\alpha_{+}}\right)\frac{\left|r-r_ {+}\right|}{\left|r-r_{-}\right|^{\tilde{\alpha}}}\right]\\ \tilde{Q}(r,t)&=Q_{1}(r,t)exp\left(t\left(\frac{1}{ \alpha_{+}}+\frac{1}{\alpha_{-}}\right)\right)+Q_{2}(r,t)exp\left(-t\left( \frac{1}{\alpha_{+}}+\frac{1}{\alpha_{-}}\right)\right)\\ Q_{1}(r,t)&=\tanh\left[exp\left(\frac{2(t-r)}{ \alpha_{-}}\right)\frac{\left|r-r_{-}\right|}{\left|r-r_{+}\right|^{\tilde{ \alpha}}}\right]\tanh\left[exp\left(\frac{-2(-t+r)}{\alpha_{+}}\right)\frac{ \left|r-r_{+}\right|}{\left|r-r_{-}\right|^{\alpha}}\right]\\ Q_{2}(r,t)&=\tanh\left[exp\left(\frac{-2(t+r)}{ \alpha_{-}}\right)\frac{\left|r-r_{-}\right|}{\left|r-r_{+}\right|^{\tilde{ \alpha}}}\right]\tanh\left[exp\left(\frac{2(r-t)}{\alpha_{+}}\right)\frac{ \left|r-r_{+}\right|}{\left|r-r_{-}\right|^{\tilde{\alpha}}}\right]\end{split} \tag{36}\]
This relaxed version of the conformal factor is guaranteed to be smooth and semi-positive everywhere in the coordinates \((\tilde{u},\tilde{v})\). Before we move on to construct the type-II coordinates \(\mathcal{GK_{II}}\), there are three features of the metric worth commenting on. First, the metric is not asymptotically flat and is different from the \(\mathcal{K_{\pm}}\) coordinates, where the induced metric on the submanifold \(M_{2}=\mathcal{M}\backslash SO(3)\) is asymptotically vanishing. In \(\mathcal{GK_{I}}\) coordinates the induced metric on \(M_{2}\) blows up. This is completely natural as the coordinates are compact, hence the proper distance is invariant. Second, the \(\mathcal{GK_{I}}\) coordinates cast the metric in a dynamical form, since the conformal factor includes explicit time dependence both before and after the relaxation. This prevents \(r\) and \(t\) from being related to \((\tilde{u},\tilde{v})\) by a simple transformation similar to (17, 20). Third, the integral \(I_{2}\) defining \((\tilde{u},\tilde{v})\) is given by
\[I_{2}=\int\frac{dx}{\tanh\left(x^{2}\right)x^{q+1}+\tanh\left(x^{-2q}\right)}, \tag{37}\]
The \(q>1\) cases could be evaluated numerically; however, analytical methods could still be helpful in studying the relation between \(\mathcal{K_{\pm}}\) and \(\mathcal{GK_{I}}\) at any point. This could be achieved, for example, by employing a series expansion, as mentioned earlier. Moreover, if we manage to invert equations (34) to solve explicitly for the null coordinates in terms of \(\mathcal{GK_{I}}\), then we could employ the generalized Lambert function to solve for \((t,r)\) explicitly as well. Such an expansion is expected to recover equations (20) and (17) near the horizons \(r=r_{-}\) and \(r=r_{+}\), respectively.
### Type-II Global Chart
While constructing \(\mathcal{GK_{I}}\), a simple zero at each horizon \(r_{\pm}\) was coupled to one at the other horizon \(r_{\mp}\). This product of zeros had a semi-positive, regular amplitude everywhere, as shown in equation (31) or (35). However, for \(\mathcal{GK_{II}}\) we will have a different singularity structure that serves the same purpose: a sum of two zeros, one at each horizon \(r_{\pm}\), each coupled to a semi-positive amplitude that is singular at the other horizon \(r_{\mp}\). In principle, this class of charts should contain families of coordinates that interpolate between the outer and inner Kruskal coordinates. In light of this statement, the chart given in [27] plausibly belongs to that class.
The conformal metric will have a simple pole at \(r=r_{\pm}\) coupled to \(M_{\pm}(r_{*},t)\), while \(\beta\left(r_{*},t\right)\) is effectively a residual term included for completeness. \(M_{\pm}(r_{*},t)\) and \(\beta(r_{*},t)\) satisfy the following constraints. As \(r\to r_{\pm}\)
\[\begin{split} M_{\pm}\left(r_{*},t\right)&\to constant,\\ M_{\mp}\left(r_{*},t\right)&\to 0,\\ \beta\left(r_{*},t\right)&\to 0.\end{split} \tag{38}\]
Alternatively, we can restate the first constraint as: \(M_{\pm}\) must have no overall pole at \(r_{+}\) (\(r_{-}\)). Later, through this analysis, we will learn that \(\beta\) will be the key to finding the global conformal charts in this procedure for the type-II coordinates. Given equation (14), we can rewrite this in terms of the \(\mathcal{K}_{\pm}\) or the double null coordinates as follows
\[\begin{split}\frac{dh}{du}\frac{dk}{dv}=\frac{exp\left(\frac{2(r_ {*}-r_{+})}{\alpha_{+}}\right)}{\left|r-r_{-}\right|^{\alpha}}M_{+}(r_{*},t)+ \frac{exp\left(-\frac{2(r_{*}-r_{-})}{\alpha_{-}}\right)}{\left|r-r_{+} \right|^{\tilde{\alpha}}}M_{-}(r_{*},t)+\beta(r_{*},t),\end{split} \tag{39}\]
Revisiting the condition in equation (38), \(M_{+}\) (\(M_{-}\)) must have zeros at \(r=r_{-}\) (\(r=r_{+}\)) of rank higher than \(\alpha\) (\(\bar{\alpha}\)) respectively. Searching for solutions for equation (39) could be more fruitful if we were able to find functions \(M_{\pm}\) and \(\beta\) with (\(r_{*}\pm t\)) dependence. Accordingly, the residual term \(\beta\) could be used to easily factorize the right-hand side of the equation (39) into a product of \(u\)- and \(v\)-dependent functions. The task of generating a solution to equation (39) is not trivial, but if we find \(M_{\pm}(u,v)\) and \(\beta(u,v)\), this will boost our progress towards achieving this task. The easiest hint we can get from the form of that equation is to try to construct \(M_{+}(M_{-})\) from the \(\mathcal{K}_{+}\) (\(\mathcal{K}_{-}\)). Following this logic, using the trial and error method, we learn that if we define \(M_{\pm}\) as
\[M_{+}\equiv\frac{\mu\mu}{\left(1+U_{+}^{1+2\tilde{\alpha}}\right)\left(1+V_{+ }^{1+2\tilde{\alpha}}\right)},\hskip 28.452756ptM_{-}\equiv\frac{\mu\mu}{ \left(1+U_{-}^{2}\right)\left(1+V_{-}^{2}\right)}, \tag{40}\]
we can find \(\beta\) that can do the factorization for us
\[\beta\equiv\frac{\mu\mu U_{+}V_{-}}{\left(1+U_{+}^{1+2\tilde{\alpha}}\right) \left(1+V_{-}^{2}\right)}+\frac{\mu\mu U_{-}V_{+}}{\left(1+V_{+}^{1+2\tilde{ \alpha}}\right)\left(1+U_{-}^{2}\right)}. \tag{41}\]
This will leave us eventually with the following choices for \(\frac{dh}{du}\) and \(\frac{dk}{dv}\)
\[\begin{split}\frac{dh}{du}&=\mu\left[\frac{U_{+}}{1 +U_{+}^{1+2\tilde{\alpha}}}+\frac{U_{-}}{1+U_{-}^{2}}\right]\\ \frac{dk}{dv}&=\mu\left[\frac{V_{+}}{1+V_{+}^{1+2 \tilde{\alpha}}}+\frac{V_{-}}{1+V_{-}^{2}}\right]\end{split} \tag{42}\]
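Expanding the product of the two brackets in (42) makes the structure of (26) manifest (using \(\mu^{2}=1\)):

\[\frac{dh}{du}\frac{dk}{dv}=\frac{U_{+}V_{+}}{\left(1+U_{+}^{1+2\bar{\alpha}}\right)\left(1+V_{+}^{1+2\bar{\alpha}}\right)}+\frac{U_{-}V_{-}}{\left(1+U_{-}^{2}\right)\left(1+V_{-}^{2}\right)}+\frac{U_{+}V_{-}}{\left(1+U_{+}^{1+2\bar{\alpha}}\right)\left(1+V_{-}^{2}\right)}+\frac{U_{-}V_{+}}{\left(1+V_{+}^{1+2\bar{\alpha}}\right)\left(1+U_{-}^{2}\right)},\]

where the first two terms are \(U_{+}V_{+}M_{+}\) and \(U_{-}V_{-}M_{-}\), vanishing linearly at \(r=r_{+}\) and \(r=r_{-}\) by (17) and (20), and the last two reproduce \(\beta\) in (41).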
Once again, the choice we made for the \(\mathcal{GK_{II}}\) coordinates has a naturally compact domain, which means we can use those coordinates directly to build Penrose diagrams. Moreover, the integration in terms of \((u,v)\) is significantly simpler than the one appearing in the example we gave for \(\mathcal{GK_{I}}\). Again, our choice of integration reference point will be the outer horizon \(r_{+}\). We can now write the metric
\[dS_{RN}^{2}=-\frac{1}{r^{2}}\left\{\right.\left.\left.\left.A_{+}^{-1}(r,t)+A_{ -}^{-1}(r,t)+A^{-1}(r,t)\right\}^{-1}d\tilde{u}d\tilde{v}\right. \tag{43}\]
where \(A_{\pm}(r,t)\) and \(A(r,t)\) are defined as follows
\[\begin{split} A_{+}(r,t)&\equiv exp\left(-\frac{2r}{ \alpha_{+}}\right)\left|r-r_{-}\right|^{\alpha+1}+exp\left(\frac{2r}{\alpha_{-} }\right)\left|r-r_{+}\right|^{1+2\bar{\alpha}}\left|r-r_{-}\right|^{-1}\\ &\qquad 2\cosh\left[t\left(\frac{1}{\alpha_{+}}+\frac{2}{\alpha_{-}} \right)\right]exp\left(r\left[-\frac{1}{\alpha_{+}}+\frac{2}{\alpha_{-}}\right] \right)\left|r-r_{+}\right|^{\bar{\alpha}+\frac{1}{2}}\left|r-r_{-}\right|^{ \frac{\alpha}{2}}\\ A_{-}(r,t)&\equiv exp\left(\frac{2r}{\alpha_{-}} \right)\left|r-r_{+}\right|^{\bar{\alpha}+1}+exp\left(-\frac{2r}{\alpha_{-}} \right)\left|r-r_{-}\right|^{2}\left|r-r_{+}\right|^{-\bar{\alpha}+1}\\ &+2\cosh\left[\frac{2t}{\alpha_{-}}\right]\left|r-r_{-}\right| \left|r-r_{+}\right|\\ A(r,t)&\equiv 2\cosh\left[\kappa t\right]\exp(-\bar{ \kappa}r)\left|r-r_{+}\right|^{\frac{1+\bar{\alpha}}{2}}\left|r-r_{-}\right|^{ \frac{1+\bar{\alpha}}{2}}\\ &+2\cosh\left[\frac{-t}{\alpha_{-}}\right]exp\left(\frac{3r}{ \alpha_{-}}\right)\left|r-r_{+}\right|^{\frac{2+3\bar{\alpha}}{2}}\left|r-r_{ -}\right|^{\frac{1}{2}}\\ &+2\cosh\left[\frac{-3t}{\alpha_{-}}\right]exp\left(\frac{r}{ \alpha_{-}}\right)\left|r-r_{+}\right|^{\frac{2+4\bar{\alpha}}{2}}\left|r-r_{ -}\right|^{\frac{1}{2}},\end{split} \tag{44}\]
with the following limits
\[\begin{split} A_{+}\left(r\to r_{+},t\right)& \to exp\left(-\frac{2r}{\alpha_{+}}\right)\left|r-r_{-}\right|^{ \alpha+1},\\ A_{+}\left(r\to r_{\pm},t\right)&\to exp\left(\frac{2r}{ \alpha_{-}}\right)\left|r-r_{+}\right|^{\bar{\alpha}+1},\\ A_{\pm}\left(r\to r_{\mp},t\right)&\to\infty,\\ A\left(r_{\to}r_{\pm},t\right)&\to\infty.\end{split} \tag{45}\]
Consequently, the metric will take the following asymptotic limit
\[\begin{split} dS_{RN}^{2}(r\to r_{+})&=-\frac{e^{ \frac{-2r}{\alpha_{+}}}|r-r_{-}|^{1+\alpha}}{r^{2}}d\bar{u}d\bar{v},\\ dS_{RN}^{2}(r\to r_{-})&=-\frac{e^{\frac{2r}{ \alpha_{-}}}|r-r_{+}|^{1+\bar{\alpha}}}{r^{2}}d\bar{u}d\bar{v}.\end{split} \tag{46}\]
## V Discussion and Conclusion
After reinterpreting the premises of the Kruskal charting of the Schwarzschild spacetime, we were able to provide a new approach to charting the Reissner-Nordstrom spacetime, which features two horizons. The technique itself proved to be employable in two distinct ways, resulting in two families of charting systems: conformal global _type-I_ and _type-II_ charts. In both cases, the asymptotic form of the metric approaches the form of the metric written in terms of _Type-O_ charts. We illustrated the success of the provided technique by constructing compact conformal global coordinates of type-I \(\mathcal{GK}_{\mathcal{I}}\) and of type-II \(\mathcal{GK}_{\mathcal{I}\mathcal{I}}\) for the RN spacetime. The price we pay for covering the whole spacetime with two horizons with only one chart is time dependence.
After the construction, one could conclude that the metric is only \(C^{1}\), since the map between \(r\) and each of the type-I \(\mathcal{GK}_{\mathcal{I}}\) and type-II \(\mathcal{GK}_{\mathcal{I}\mathcal{I}}\) coordinates is monotonic and smooth, and also because of the behavior of the conformal map written in terms of the \(t\) and \(r\) functions. Consequently, for both type-I and type-II charts, the metric becomes \(C^{\infty}\) only if we add an extra step to the procedure. This additional step restores differentiability where the metric has kinks, and was implemented by employing relaxing functions to modify the charts at the kinks' locations. As expected, it is complicated to write the generalized Kruskal coordinates \((\tilde{u},\tilde{v})\) explicitly in terms of the RN coordinates \((t,r)\), to which they are related through the defining integral relations above. However, we hinted that this could be achieved by utilizing the generalized Lambert function \(\mathcal{W}\), in a similar manner to the use of the Lambert function \(W\) in the Schwarzschild case.
We proved that the domain of \(r\) can be globally and monotonically mapped to \(\mathcal{GK}_{\mathcal{I}\mathcal{I}}\) and \(\mathcal{GK}_{\mathcal{I}\mathcal{I}}\) for any curve of constant \(t\). We analyzed some aspects of the integral equation relating the null coordinates to \(\mathcal{GK}_{\mathcal{I}}\) and \(\mathcal{GK}_{\mathcal{I}\mathcal{I}}\). Finally, we demonstrated that the smoothing technique developed in [27] could be thought of as a special case of the type-II family of coordinates. Since the Kerr and RN spacetimes share some similarities (i.e. two horizons), our technique might be applicable to the former case given that there is no underlying assumption about the type of spacetime in hand.
###### Acknowledgements.
We wish to thank Mahmoud Mansour9 and Wei-Chen10 Lin for the useful discussions on various aspects of the analysis provided in this article. We are also grateful to Sam Powers for many valuable comments on the draft. D.S. is partially supported by the US National Science Foundation, under Grants No. PHY-2014021 and PHY-2310363.
Footnote 9: [email protected]
Footnote 10: [email protected]
|
2309.16234 | Analyzing Political Figures in Real-Time: Leveraging YouTube Metadata
for Sentiment Analysis | Sentiment analysis using big data from YouTube videos metadata can be
conducted to analyze public opinions on various political figures who represent
political parties. This is possible because YouTube has become one of the
platforms for people to express themselves, including their opinions on various
political figures. The resulting sentiment analysis can be useful for political
executives to gain an understanding of public sentiment and develop appropriate
and effective political strategies. This study aimed to build a sentiment
analysis system leveraging YouTube videos metadata. The sentiment analysis
system was built using Apache Kafka, Apache PySpark, and Hadoop for big data
handling; TensorFlow for deep learning handling; and FastAPI for deployment on
the server. The YouTube videos metadata used in this study is the video
description. The sentiment analysis model was built using LSTM algorithm and
produces two types of sentiments: positive and negative sentiments. The
sentiment analysis results are then visualized in the form a simple web-based
dashboard. | Danendra Athallariq Harya Putra, Arief Purnama Muharram | 2023-09-28T08:15:55Z | http://arxiv.org/abs/2309.16234v1 | # Analyzing Political Figures in Real-Time: Leveraging YouTube Metadata for Sentiment Analysis
###### Abstract
Sentiment analysis using big data from YouTube videos metadata can be conducted to analyze public opinions on various political figures who represent political parties. This is possible because YouTube has become one of the platforms for people to express themselves, including their opinions on various political figures. The resulting sentiment analysis can be useful for political executives to gain an understanding of public sentiment and develop appropriate and effective political strategies. This study aimed to build a sentiment analysis system leveraging YouTube videos metadata. The sentiment analysis system was built using Apache Kafka, Apache PySpark, and Hadoop for big data handling; TensorFlow for deep learning handling; and FastAPI for deployment on the server. The YouTube videos metadata used in this study is the video description. The sentiment analysis model was built using the LSTM algorithm and produces two types of sentiment: positive and negative. The sentiment analysis results are then visualized in the form of a simple web-based dashboard.
sentiment analysis, big data, politics, YouTube
## I Introduction
General Elections (Pemilu) is one of the concrete manifestations of a democratic system. Through Pemilu, the public has the opportunity to participate in governance by electing their representatives who will represent them in the government structure [1]. Among the various types of elections, the Presidential Election (Pilpres) is always a highly anticipated moment and dubbed as the largest "democratic party". In 2024, Indonesia will hold a Pilpres to determine the candidate for the presidency who will lead Indonesia for the next 5 years.
Welcoming the Pilpres 2024, every political party is competing to determine the best presidential and vice-presidential candidate to be endorsed. For political parties, Pilpres is not only about the positions of President and Vice President, but also determines their seats in the future government structure. Therefore, it is crucial for political parties to devise the best political campaign strategies to win the hearts of the public. One of the efforts that political parties can undertake to evaluate the quality of their political figures is through public sentiment analysis.
Sentiment analysis, also known as opinion mining, is a field that studies the analysis of opinions, sentiments, evaluations, judgments, attitudes, and emotions of people towards entities such as products, services, organizations, individuals, issues, events, and others related [2]. Public sentiment analysis can be used as a tool for political parties to gain a better understanding of the opinions and views of the public towards their endorsed political candidates. With public sentiment analysis, political parties can design effective and responsive campaign strategies to meet the needs of the public.
In the current digital era, social media has become a platform for the public to express various things, including their views towards various political figures. Such expressions can be in the form of support or rejection, and can be expressed in various media such as text, audio, or video. Such expressions can be used as indicators of public sentiment for political parties to assess the quality of their endorsed political figures.
This research aims to build a 'real-time' sentiment analysis system for political figures in Pilpres 2024 from various videos on YouTube, using the video description as the metadata. YouTube was selected as the big data source because it has become one of the means of political expression in the form of videos with various purposes [3, 4, 5]. The system is designed to be 'real-time' in order to perform sentiment analysis on YouTube video metadata as it arrives. The resulting system is intended for political executives to help them gain an understanding of public sentiment towards their endorsed political figures so that they can devise appropriate and effective political strategies.
## II Methodology
### _System Design_
The sentiment analysis system was built using Apache Kafka, Apache PySpark, and Hadoop for handling big data; TensorFlow for deep learning; and FastAPI for deployment on the server. In terms of architecture, the system was built using a module-based approach and consists of modules including producer, streamer, HDFS, inference, and visualizer (Table
I). The system's workflow involves data retrieval (crawling) through the YouTube API by the producer module; internal data streaming by the streamer module and storing it into the Hadoop Distributed File System (HDFS) by the HDFS module; sentiment inference by the inference module; and displaying the sentiment inference results in a simple web dashboard by the visualizer module (Figure 1). The producer module can be set to perform data crawling on a scheduled and regular basis and then store the results into HDFS to ensure that the data in the system is always up-to-date in real-time.
#### II-A1 **Data Gathering via YouTube API**
To retrieve data streams from the YouTube API, a Kafka producer publishing to a dedicated topic is needed to send the metadata of each YouTube video. This task is performed by the producer module. Metadata is obtained using the search method provided by the YouTube search API, and the result is in JSON format.
The search method requires a keyword to perform the search, playing the same role as the query one types when searching for videos on YouTube. The keywords are chosen per political figure, so a single figure can be associated with several keywords, as desired by the user. In this study, only the name of the political figure was used as a keyword ('anies' for Anies Rasyid Baswedan, 'ganjar' for Ganjar Pranowo, 'prabowo' for Prabowo Subianto, and 'puan' for Puan Maharani).
When using the search method of the YouTube API, a single call does not return all videos related to the given keyword. Instead, the related videos are divided into several sections (pages). Each page contains at most 50 related videos, so an iterative method is needed to continue from the results obtained so far. This is done by passing the pageToken parameter when searching. The pageToken parameter is obtained from the metadata returned by the search method, specifically the nextPage field. Therefore, the search for a keyword is iterated as long as nextPage from the previous response is not None.
The metadata properties extracted from the response data are videoId, channelId, title, description, and uri. The videoId is used to distinguish videos, so no duplicate records are saved. When saving to HDFS, the metadata is combined with the political figure and the keyword that was searched. Although several metadata items are saved, only the description is used further for sentiment analysis. Figure 2 illustrates this entire process.
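A minimal sketch of this producer logic is shown below, using the `google-api-python-client` and `kafka-python` libraries as one possible realization; the API key, broker address, and Kafka topic name are placeholder assumptions rather than values from the paper.

```python
import json
from googleapiclient.discovery import build
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                     # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")  # placeholder API key

def crawl_keyword(figure, keyword, topic="youtube-metadata"):
    """Iterate over all result pages for one keyword and push metadata to Kafka."""
    page_token = None
    while True:
        response = youtube.search().list(
            q=keyword, part="snippet", type="video",
            maxResults=50, pageToken=page_token,
        ).execute()
        for item in response.get("items", []):
            snippet = item["snippet"]
            video_id = item["id"].get("videoId")
            producer.send(topic, {
                "figure": figure,
                "keyword": keyword,
                "videoId": video_id,
                "channelId": snippet.get("channelId"),
                "title": snippet.get("title"),
                "description": snippet.get("description"),
                "uri": f"https://www.youtube.com/watch?v={video_id}",
            })
        page_token = response.get("nextPageToken")
        if page_token is None:   # stop once there is no next page
            break

crawl_keyword("Anies Rasyid Baswedan", "anies")
```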
#### II-A2 **Storing Stream Data to HDFS**
The output of the previous process (by the producer module) will be captured using Kafka in the form of dstream data. First, this dstream data must be converted into a Spark DataFrame format. Next, the Spark DataFrame is then saved in the form of a parquet file in the HDFS folder. This task is performed by the streamer module.
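The streamer step could look roughly like the following PySpark sketch. The paper describes a DStream-based flow; this version uses Spark Structured Streaming (which requires the Kafka connector package on the classpath), and the broker address, topic name, and HDFS paths are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.appName("youtube-streamer").getOrCreate()

schema = StructType([StructField(name, StringType()) for name in
                     ("figure", "keyword", "videoId", "channelId", "title", "description")])

# Read the metadata stream from the Kafka topic written by the producer module.
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")   # assumed broker
       .option("subscribe", "youtube-metadata")
       .load())

parsed = (raw.select(from_json(col("value").cast("string"), schema).alias("m"))
             .select("m.*"))

# Persist the parsed records as parquet files on HDFS.
query = (parsed.writeStream.format("parquet")
         .option("path", "hdfs://namenode:9000/youtube/metadata")       # assumed path
         .option("checkpointLocation", "hdfs://namenode:9000/youtube/_chk")
         .start())
query.awaitTermination()
```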
#### II-A3 **Sentiment Analysis Model**
To perform sentiment inference on the YouTube video metadata, a sentiment analysis model is required; the metadata used is only the video description. The model used in this research is based on the Long Short-Term Memory (LSTM) algorithm [6], built using the TensorFlow library. LSTM is used because it is one of the popular methods for text-related problems: it is a type of Recurrent Neural Network (RNN) that takes the context of previous inputs into account when processing the next input. This property matches the nature of text, in which the context of previous words must be considered to understand the meaning of the current word in a sentence.

Fig. 1: System design
The model is trained using the sentiment dataset from the 2017 Jakarta gubernatorial election (Pilkada) [7]. The dataset was collected from Twitter posts related to the 2017 Jakarta gubernatorial election and consists of two types of sentiment, positive and negative, in equal numbers. The dataset has undergone two preprocessing steps, namely replacing emojis with special markers and removing stopwords. The dataset is divided into two parts: training data and validation data. The training data is used to train the sentiment analysis model, while the validation data is used to test the performance of the model on data the model has not seen before. We used a training-to-validation ratio of 0.8:0.2 and trained our model on Google Colab.
Before training or inference on the model, the data used needs to undergo preprocessing first. The necessary preprocessing includes cleaning the text by removing URLs, hashtags, mentions, and emojis. The cleaned data will then be tokenized using text vectorization from TensorFlow. In addition to text preprocessing, label conversion to one-hot encoding is also required. The cleaned data will then be fed into the model for training.
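A possible implementation of this preprocessing step is sketched below; the vocabulary size, sequence length, and the variable names `train_descriptions` and `train_labels` are illustrative assumptions.

```python
import re
import tensorflow as tf

URL_RE     = re.compile(r"https?://\S+")
HASHTAG_RE = re.compile(r"#\w+")
MENTION_RE = re.compile(r"@\w+")
EMOJI_RE   = re.compile(r"[\U00010000-\U0010FFFF]")

def clean_text(text):
    """Remove URLs, hashtags, mentions and emojis from a video description."""
    for pattern in (URL_RE, HASHTAG_RE, MENTION_RE, EMOJI_RE):
        text = pattern.sub(" ", text)
    return text.lower().strip()

# Tokenization with TensorFlow's TextVectorization layer (vocabulary size and
# sequence length are illustrative choices, not values from the paper).
vectorizer = tf.keras.layers.TextVectorization(max_tokens=10_000,
                                               output_sequence_length=100)
# train_descriptions / train_labels are assumed to hold the Pilkada DKI 2017
# texts and their 0 (negative) / 1 (positive) labels.
vectorizer.adapt(tf.constant([clean_text(d) for d in train_descriptions]))

# Labels converted to one-hot vectors for the two-class softmax output.
y_train = tf.keras.utils.to_categorical(train_labels, num_classes=2)
```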
The architecture of the sentiment analysis model used is as follows: First, the data will enter an embedding layer to convert the tokenized text into an embedding vector. Then, this embedding vector will be passed through an LSTM layer and two dense layers. The last dense layer will be used for classification.
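Continuing the preprocessing sketch above, the described architecture could be assembled in Keras as follows; the embedding dimension and layer widths are assumptions, while the 0.8:0.2 split and 8 training epochs follow the text.

```python
# Vectorize the cleaned descriptions into integer sequences.
X_train = vectorizer(tf.constant([clean_text(d) for d in train_descriptions]))

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=vectorizer.vocabulary_size(), output_dim=64),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),   # positive / negative classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# 80/20 training/validation split and 8 epochs, as reported in the paper.
model.fit(X_train, y_train, validation_split=0.2, epochs=8, batch_size=32)
```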
Fig. 2: Data gathering

Fig. 3: Dashboard design
#### II-A4 **The Visualization of Sentiment Inference Results**
The aggregation of sentiment inference results is displayed through a simple web dashboard by the visualizer module. The visualizer module consists of two submodules, namely backend and frontend. The backend module is responsible for preparing and aggregating the required data by communicating directly with HDFS, while the frontend module is responsible for visualizing the data that has been prepared by the backend into the dashboard page. The backend module is developed using FastAPI, while the frontend module is developed using Bootstrap and Chart.js. On the dashboard page, the sentiment results will be displayed in the form of a doughnut chart comparing the number of positive and negative sentiments to facilitate readability (Figure 3). In the production stage implementation, the frontend can be placed on the app server, while the backend can be placed on the big data server (Figure 4).
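A minimal sketch of one backend endpoint is given below; the HDFS path and column names are assumptions, and in practice the aggregation would read the sentiment-annotated records produced by the inference module.

```python
from fastapi import FastAPI
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

app = FastAPI()
spark = SparkSession.builder.appName("sentiment-backend").getOrCreate()

@app.get("/sentiment/{figure}")
def sentiment_counts(figure: str):
    """Aggregate positive/negative counts for one political figure from HDFS."""
    df = spark.read.parquet("hdfs://namenode:9000/youtube/sentiment")   # assumed path
    counts = (df.filter(F.col("figure") == figure)
                .groupBy("sentiment")
                .count()
                .collect())
    # Returned JSON feeds the doughnut chart rendered by the frontend.
    return {row["sentiment"]: row["count"] for row in counts}
```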
### _Evaluation Strategy_
#### II-B1 **Sentiment Model Evaluation**
The evaluation of the sentiment model is performed using the validation dataset. The evaluation metrics used are precision (1), recall (2), and F1-score (3) for each type of sentiment, and accuracy (4) for overall performance.
\[precision=\frac{TP}{FP+TP} \tag{1}\]
\[recall=\frac{TP}{FN+TP} \tag{2}\]
\[F1score=\frac{2\times precision\times recall}{precision+recall} \tag{3}\]
\[accuracy=\frac{TP+TN}{TP+FP+FN+TN} \tag{4}\]
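Given held-out predictions, these metrics can be computed, for example, with scikit-learn; `X_val` and `y_val` denote the held-out validation inputs and one-hot labels from the training sketch above and are assumptions here.

```python
import numpy as np
from sklearn.metrics import accuracy_score, classification_report

# X_val / y_val: held-out validation sequences and one-hot labels (assumed available).
y_true = np.argmax(y_val, axis=1)
y_pred = np.argmax(model.predict(X_val), axis=1)

print("accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=["negative", "positive"]))
```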
Fig. 4: Visualizer implementation design
Fig. 5: Dashboard page testing results using production data
#### II-B2 **System Evaluation**
The evaluation of the developed system is conducted through a usability testing approach. In this evaluation, the system is locally deployed and assessed for its functionality and potential weaknesses.
## III Result
Using the validation data from the 2017 Jakarta gubernatorial election dataset, the F1-scores were obtained as shown in Table 3. Training the LSTM model for 8 epochs yielded an overall accuracy of 0.7056. In terms of per-label evaluation, the resulting model has the same precision for the two labels, but the recall for the negative label is higher than for the positive label. This difference in recall carries over to the F1-score, where the negative label attains a better F1-score than the positive label.
We then tested the system by deploying it locally. The results of sentiment inference are displayed in the form of aggregated numbers of negative and positive sentiments for each political figure up to the date of the page request. This information is then presented in the form of a doughnut chart. Figure 5 illustrates the system. Positive sentiment is given a green color, while negative sentiment is given a red color.
## IV Discussion
The sentiment analysis model used in this study is an LSTM trained on the Pilkada DKI 2017 sentiment dataset [7]. Testing on that dataset produced an overall accuracy of 0.7056. In the per-label analysis, although both labels have the same precision (0.72), there is a sizable gap between the recall for negative sentiment (0.82) and for positive sentiment (0.59). By the recall formula (2), a higher recall means the model correctly identifies a larger share of instances of that label. The high recall for negative sentiment therefore indicates that the model detects negative sentiment more reliably, and tends to predict the negative label more often than the positive one.
However, despite the acceptable performance of the model, further studies are needed to improve it, as several other factors might affect the system and constitute limitations of this study:
* The video search process is highly influenced by the selected keywords, so it is necessary to choose the appropriate keywords to increase the expected number of relevant video searches.
* The video search process is limited by the YouTube API call limitations on the free tier, which is 2,000 calls per day.
* The model inference only used the video description, assuming there is a correspondence between the content and the video description (not clickbait).
## V Conclusion
YouTube, as the largest video platform, has become a medium of political expression for the public. This study has successfully developed a political sentiment analysis system that leverages YouTube as a big data source. Using the LSTM algorithm, the resulting inference model achieves an accuracy of 0.7056. Better keyword selection and the use of other model architectures can be considered in future research to improve the performance of the inference model.
## Acknowledgment
We would like to express our gratitude to the lecturer of IF5270 Big Data Systems course at the Master of Informatics Study Program, School of Electrical Engineering and Informatics, Institut Teknologi Bandung, who has taught us the fundamental concepts and technologies of big data systems. This research was initiated as a task for the course. We also would like to thank our colleagues who have supported our learning process during the class.
|
2307.16798 | Forster-Warmuth Counterfactual Regression: A Unified Learning Approach | Series or orthogonal basis regression is one of the most popular
non-parametric regression techniques in practice, obtained by regressing the
response on features generated by evaluating the basis functions at observed
covariate values. The most routinely used series estimator is based on ordinary
least squares fitting, which is known to be minimax rate optimal in various
settings, albeit under stringent restrictions on the basis functions and the
distribution of covariates. In this work, inspired by the recently developed
Forster-Warmuth (FW) learner, we propose an alternative series regression
estimator that can attain the minimax estimation rate under strictly weaker
conditions imposed on the basis functions and the joint law of covariates, than
existing series estimators in the literature. Moreover, a key contribution of
this work generalizes the FW-learner to a so-called counterfactual regression
problem, in which the response variable of interest may not be directly
observed (hence, the name ``counterfactual'') on all sampled units, and
therefore needs to be inferred in order to identify and estimate the regression
in view from the observed data. Although counterfactual regression is not
entirely a new area of inquiry, we propose the first-ever systematic study of
this challenging problem from a unified pseudo-outcome perspective. In fact, we
provide what appears to be the first generic and constructive approach for
generating the pseudo-outcome (to substitute for the unobserved response) which
leads to the estimation of the counterfactual regression curve of interest with
small bias, namely bias of second order. Several applications are used to
illustrate the resulting FW-learner including many nonparametric regression
problems in missing data and causal inference literature, for which we
establish high-level conditions for minimax rate optimality of the proposed
FW-learner. | Yachong Yang, Arun Kumar Kuchibhotla, Eric Tchetgen Tchetgen | 2023-07-31T16:05:57Z | http://arxiv.org/abs/2307.16798v4 | # Forster-Warmuth Counterfactual Regression: A Unified Learning Approach
###### Abstract
Series or orthogonal basis regression is one of the most popular non-parametric regression techniques in practice, obtained by regressing the response on features generated by evaluating the basis functions at observed covariate values. The most routinely used series estimator is based on ordinary least squares fitting, which is known to be minimax rate optimal in various settings, albeit under fairly stringent restrictions on the basis functions and the distribution of covariates. In this work, inspired by the recently developed Forster-Warmuth (FW) learner (Forster and Warmuth, 2002), we propose an alternative series regression estimator that can attain the minimax estimation rate under strictly weaker conditions imposed on the basis functions and the joint law of covariates, than existing series estimators in the literature. Moreover, a key contribution of this work generalizes the FW-learner to a so-called counterfactual regression problem, in which the response variable of interest may not be directly observed (hence, the name "counterfactual") on all sampled units, and therefore needs to be inferred in order to identify and estimate the regression in view from the observed data. Although counterfactual regression is not entirely a new area of inquiry, we propose the first-ever systematic study of this challenging problem from a unified pseudo-outcome perspective. In fact, we provide what appears to be the first generic and constructive approach for generating the pseudo-outcome (to substitute for the unobserved response) which leads to the estimation of the counterfactual regression curve of interest with small bias, namely bias of second order. Several applications are used to illustrate the resulting FW-learner including many nonparametric regression problems in missing data and causal inference literature, for which we establish high-level conditions for minimax rate optimality of the proposed FW-learner.
## 1 Introduction
### Nonparametric regression
Nonparametric estimation plays a central role in many statistical contexts where one wishes to learn conditional distributions by means of, say, a conditional mean function \(\mathbb{E}[Y|X=x]\) without a priori restriction on the model. Several other functionals of the conditional distribution can likewise be written based on conditional means, which makes the conditional mean an important problem to study. For example, the conditional cumulative distribution function of a univariate response \(Y\) given \(X=x\) can be written as \(\mathbb{E}[\mathbf{1}\{Y\leq t\}|X=x].\) This, in turn, leads to conditional quantiles. In general, any conditional function defined via \(\theta^{\star}(x)=\arg\min_{\theta\in\mathbb{R}}\mathbb{E}[\rho((X,Y);\theta)|X=x]\) for any loss function \(\rho(\cdot;\cdot)\) can be learned using conditional means.
Series, or more broadly, sieve estimation provides a solution by approximating an unknown function based on \(k\) basis functions, where \(k\) may grow with the sample size \(n\), ideally at a rate carefully tuned in order to balance bias and variance to the extent possible. The most straightforward approach to construct a series estimator is by the method of least squares, large sample properties of which have been studied extensively both in statistical and econometrics literature in nonparametric settings. To briefly describe the standard least squares series estimator, let \(m^{\star}(x):=\mathbb{E}[Y|X=x]\) denote the true conditional expectation where \(m^{\star}(\cdot)\) is an unrestricted unknown function of \(x\). Also consider a vector of approximating basis functions \(\bar{\phi}_{k}(x)=(\phi_{1}(x),\ldots,\phi_{k}(x))^{\top}\), which has the property that any square integrable \(m^{\star}(\cdot)\) can be approximated arbitrarily well, with sufficiently large \(k\), by some linear combination of \(\bar{\phi}_{k}(\cdot)\). Let \((X_{i},Y_{i}),i=1,\ldots,n\) denote an observed sample of data. The least squares series estimator of \(m^{\star}(x)\) is defined as \(\widehat{m}(x)=\bar{\phi}_{k}^{\top}(x)\widehat{\beta}\), where \(\widehat{\beta}=(\Phi_{k}^{\top}\Phi_{k})^{-1}\Phi_{k}^{\top}\mathbf{Y}\), and \(\Phi_{k}\) is the \(n\times k\) matrix \([\bar{\phi}_{k}(X_{1}),\ldots,\bar{\phi}_{k}(X_{n})]^{\top}\) with \(\mathbf{Y}=(Y_{1},\ldots,Y_{n})^{\top}\). Several existing works in the literature provide sufficient conditions for consistency, corresponding convergence rates, and asymptotic normality of this estimator, along with illustrations of these conditions in the case of polynomial series and regression splines, see, for example, Chen (2007), Newey (1997), Gyorfi et al. (2002). Under these conditions, the optimal rate of convergence are well-established for certain bases functions, such as the local polynomial kernel estimator (Chapter 1.6 of Tsybakov (2009)) and the local polynomial partition series (Cattaneo and Farrell (2013)). Belloni et al. (2015) relaxed some of these assumptions while applying this estimation procedure to statistical estimation problems and
provided uniform convergence rates. For instance, they weakened the requirement in Newey (1997) that the number \(k\) of approximating functions has to satisfy \(k^{2}/n\to 0\) to \(k/n\to 0\) for bounded (for example Fourier series) or local bases (such as splines, wavelets or local polynomial partition series), which was previously established only for splines (Huang (2003)) and local polynomial partitioning estimators (Cattaneo and Farrell (2013)); therefore presumably allowing for improved approximation of the function in view by using a larger number of basis functions to estimate the latter. One important limitation of least squares series estimator is that the rate of convergence heavily depends on stringent assumptions imposed on the bases functions. To be specific, a key quantity that plays a crucial role in all of these previous works, is given by \(\xi_{k}:=\sup_{x\in\mathcal{X}}\|\phi_{k}(x)\|\), where \(\mathcal{X}\) is the support of the covariates \(X\) and \(\|\cdot\|\) denote the \(l_{2}\) norm of a vector. They require \(\xi_{k}^{2}\log k/n\to 0\), so that for bases functions such as Fourier, splines, wavelets, and local polynomial partition series, \(\xi_{k}\leq\sqrt{k}\), yielding \(k\log k/n\to 0\). For other bases functions such as polynomial series, \(\xi_{k}\lesssim k\) corresponds to \(k^{2}\log k/n\to 0\), which is more restrictive.
In this paper, we develop a new type of series regression estimator that in principle can attain well-established minimax nonparametric rates of estimation in settings where covariates and outcomes are fully observed, under weaker conditions compared to existing literature (e.g. Belloni et al. (2015)) on the distribution of covariates and bases functions. The approach builds on an estimator we refer to as _Forster-Warmuth Learner_ (FW-Learner) originating in the online learning literature, which is obtained via a careful modification of the renowned non-linear Vovk-Azoury-Warmuth forecaster (Vovk, 2001; Forster and Warmuth, 2002). In particular, our method is optimal in that its error matches the well-established minimax rate of estimation for a large class of smooth nonparametric regression functions, provided that \(\mathbb{E}[Y^{2}|X]\) is bounded almost surely, regardless of the basis functions used, as long as the approximation error/bias with \(k\) bases decays optimally; see Theorem 1 for more details. This result is more general than the current literature whose rate of convergence depends on the type of basis. For example, Belloni et al. (2015) established that using the polynomials basis would imply a slower convergence rate compared to using a wavelet basis, although both have the same approximation error decay rate for the common Holder/Sobolev spaces. Theorem 1 provides the expected \(L_{2}\)-error of our FW-Learner under the full data setting, which is a non-trivial extension of the vanilla Forster-Warmuth estimator and is agnostic to the underlying choice of bases functions. The sharp upper bound on the error rate matches the minimax lower bound of this problem, demonstrating the optimality of the FW-Learner.
### Counterfactual regression
Moving beyond the traditional conditional mean estimation problem, we also develop a unified approach to study a more challenging class of problems we name nonparametric _counterfactual regression_, where the goal is still to estimate \(m^{\star}(x)=\mathbb{E}[Y|X=x]\) but now the response \(Y\) may not be fully/directly observed.
Prominent examples include nonparametric regression of an outcome prone to missingness, a canonical problem in missing data literature, as well as nonparametric estimation of the so-called conditional average treatment effect (CATE) central to causal inference literature. Thus, the key contribution of this work, is to deliver a unified treatment of such counterfactual regression problems with a generic estimation approach which essentially consists of two steps: (i) generate for all units a carefully constructed pseudo-outcome of the counterfactual outcome of interest; (ii) apply the FW-Learner directly to the counterfactual pseudo-outcome, in order to obtain an estimator of the counterfactual regression in view. The counterfactual pseudo-outcome in step (i) is motivated by modern semiparametric efficiency theory and may be viewed as an element of the orthogonal complement of the nuisance tangent space for the statistical model of the given counterfactual regression problem, see, e.g., Bickel et al. (1993), Van Der Vaart (1991), Newey (1990), Tsiatis (2006) for some references; as such the pseudo-outcome endows the FW-Learner with a "small bias" property that its bias is at most of a second order. In some key settings, the bias of the pseudo-outcome might be sufficiently small, occasionally it might even be exactly zero, so that it might altogether be ignored without an additional condition. This is in fact the case if the outcome were a priori known to be missing completely at random, such as in some two-stage sampling problems where missingness is by design, e.g. (Breslow and Cain, 1988); or if estimating the CATE in a randomized experiment where the treatment mechanism is known by design. More generally, the pseudo-outcome often requires estimating certain nuisance functions nonparametrically, however, for a large class of such problems considered in this paper, the bias incurred from such estimation is of product form, also known as mixed bias (Rotnitzky et al. (2021)). In this context, a key advantage of the mixed bias is that one's ability to estimate one of the nuisance functions well, i.e. relatively "fast rates", can potentially make up for slower rates in estimating another, so that, estimation bias of the pseudo-outcome can be negligible relative to the estimation risk of an oracle with ex ante knowledge of nuisance functions. In such cases, the FW-Learner is said to be _oracle optimal_ in the sense that its risk matches that of the oracle (up to a
multiplicative constant).
Our main theoretical contribution is a unified analysis of the FW-Learner described above, thereby establishing that it attains the oracle optimality property, under appropriate regularity conditions, in several important counterfactual regression problems, including (1) nonparametric regression under outcome missing at random, (2) nonparametric CATE estimation under unconfoundedness, (3) nonparametric regression under outcome missing not at random leveraging a so-called shadow variable (Li et al., 2021; Miao et al., 2023), and (4) nonparametric CATE estimation in the presence of residual confounding leveraging proxies under the proximal causal inference framework (Miao et al., 2018; Tchetgen Tchetgen et al., 2020).
### Literature review, organization, and notation
Organization.The remainder of the paper is organized as follows. Section 1.4 introduces the notation that is going to be used throughout the paper. Section 2 formally defines our estimation problem and the Forster-Warmuth estimator, where Section 2.2 builds upon Section 2.1 going beyond the full data problem to counterfactual settings where the outcome of interest may not be fully observed. Section 3 applies the proposed methods to the canonical nonparametric regression problem subject to missing outcome data, where in Section 3.1 the outcome is assumed to be Missing At Random (MAR) given fully observed covariates Robins et al. (1994); while in Section 3.2 the outcome may be Missing Not At Random (MNAR) and identification hinges upon having access to a fully observed shadow variable (Miao et al., 2023; Li et al., 2021). Both of these examples may be viewed as nonparametric counterfactual regression models, whereby one seeks to estimate the nonparametric regression function under a hypothetical intervention that would in principle prevent missing data. Section 4 presents another application of the proposed methods to a causal inference setting, where the nonparametric counterfactual regression parameter of interest is the Conditional Average Treatment Effect (CATE); Section 4.1 assumes the so-called ignorability or unconfoundedness given fully observed covariates, while Section 4.2 accommodates unmeasured confounding for which proxy variables are observed under the recently proposed proximal causal inference framework (Miao et al., 2018; Tchetgen Tchetgen et al., 2020). Section 5 reports results from a simulation study comparing our proposed FW-Learner to a selective set of existing methods under a range of conditions, while Section 6 illustrates FW-Learner for the CATE in an analysis of the SUPPORT observational study (Conners et al. (1996)) to estimate the causal effect of right heart catheterization (RHC) on 30-day survival, as a function of a
continuous baseline covariate which measures a _patient's potential survival probability at hospital admission_, both under standard unconfoundedness conditions assumed in prior causal inference papers, including Tan (2006), Vermeulen and Vansteelandt (2015) and Cui and Tchetgen Tchetgen (2019), and proximal causal inference conditions recently considered in Cui et al. (2023) in the context of estimating marginal treatment effects.
Literature Review.There is growing interest in nonparametric/semiparametric regression problems involving high dimensional nuisance functions. Notable general frameworks recently proposed to address rich classes of such problems include Ai and Chen (2003) and Foster and Syrgkanis (2019), with the latter providing an oracle inequality for empirical risk minimization under the condition that an estimated loss function uniquely characterizing a nonparametric regression function of interest satisfies a form of orthogonality property, more precisely, that the estimated loss function admits second order bias. In another strand of work related to nonparametric regression with missing data on the outcome, Muller and Schick (2017) investigated the efficiency of a complete-case nonparametric regression under an outcome missing at random assumption (MAR); relatedly, Efromovich (2011) proposed a nonparametric series estimator that is shown to be minimax when predictors are missing completely at random (MCAR), and Wang et al. (2010) proposed an augmented inverse probability weighted nonparametric regression kernel estimator using parametric specifications of nuisance functions in the setting of an outcome missing at random. In the context of CATE estimation for causal inference, in a setting closely related to ours, Kennedy (2020) proposed a doubly robust two-stage CATE estimator, called the DR-Learner, and provided a general oracle inequality for nonparametric regression with estimated outcomes. In the same paper, he also proposed a local polynomial adaptation of the R-Learner (Nie and Wager (2021), Robinson (1988)), and characterized its (in-probability) point-wise error rate. He referred to this new estimator as Local Polynomial R-Learner (lp-R-Learner). Notably, the lp-R-Learner was shown to attain the corresponding oracle rate under weaker smoothness conditions for nuisance functions and the CATE than analogous estimators in Nie and Wager (2021) and Chernozhukov et al. (2017). The recent work of Kennedy et al. (2022) studied the minimax lower bound for the rate of estimation of the CATE under unconfoundedness (in terms of mean squared error) and proposed higher order estimators using recent estimation theory of higher-order influence functions (Robins et al., 2008, 2017) that is minimax optimal provided the covariate distribution is sufficiently smooth that it can be estimated at fast enough rates so that
estimation bias is negligible. Another related strand of work has focused on so-called meta-Learners based on generic machine learning estimators. For instance, Kunzel et al. (2019) proposed two learners (X-Learner and U-Learner) for CATE estimation through generic machine learning. In Section 5, we provide a simulation study comparing our proposed method to the X-Learner, the DR-Learner and to an oracle DR-Learner which uses the _Oracle pseudo-outcome_ with known nuisance functions in the second-stage regression.
In Section 4 we apply our method to estimating CATE, the average effect of the treatment for individuals who have specific values of a set of baseline covariates. By inferring CATE, researchers can potentially identify subgroups of the population that may benefit most from the treatment; information that is crucial for designing effective interventions tailored to the individual. Similar to Kennedy (2020) and Kennedy et al. (2022), we study this problem under the unconfoundedness assumption in Section 4.1. While their proposed lp-learner, which leverages the careful use of local polynomials to estimate the CATE, was shown to match an oracle estimator with complete knowledge of all nuisance parameters under certain smoothness conditions, our proposed FW-Learner is shown to match the oracle estimator for more general bases functions under minimal conditions on the latter. Therefore, in this light, our estimator affords the analyst with the freedom to use an arbitrary bases functions of choice to model the CATE.
In many non-experimental practical settings, un-confoundedness may not be credible on the basis of measured covariates, in which case, one may be concerned that residual confounding due to hidden factors may bias inferences about the CATE using the above methods. To address such concerns, the recent so-called "proximal causal inference" approach acknowledges that measured covariates are unlikely to fully control for confounding and may at best be viewed as proxies of known but unmeasured sources of confounding, see, e.g., Miao et al. (2018) and Tchetgen Tchetgen et al. (2020), where they formally leverage proxies for nonparametric identification of causal effects in the presence of hidden confounders. In Section 4.2, we develop an FW-proximal learner of the CATE using the proposed pseudo-outcome approach in which we leverage a characterization of the ortho-complement to the nuisance tangent space for the underlying proximal causal model derived in Cui et al. (2023), also see Ghassami et al. (2022). It is worth mentioning that recent concurrent work Sverdrup and Cui (2023) also estimates CATE under the proximal causal inference context with what they call a P-Learner using a two-stage loss function approach inspired by the R-Learner proposed in Nie and Wager (2021), which, in order to be oracle optimal, requires that the nuisance functions are estimated
at rates faster than \(n^{-1/4}\), a requirement we do not impose.
### Notation
We define some notation we use throughout the paper: \(a\lesssim b\) means \(a\leq Cb\) for a universal constant \(C\), and \(a\sim b\) means \(a\lesssim b\) and \(b\lesssim a\). We call a function \(\alpha\)-smooth if it belongs to the class of Holder smoothness order \(\alpha\), which will be introduced using similar language as Belloni et al. (2015): For \(\alpha\in(0,1]\), the Holder class of smoothness order \(\alpha,\Sigma_{\alpha}(\mathcal{X})\), is defined as the set of all functions \(f:\mathcal{X}\rightarrow\mathbb{R}\) such that for \(C>0\),
\[|f(x)-f(\widetilde{x})|\leq C\Bigl{(}\sum_{j=1}^{d}\bigl{(}x_{j}-\widetilde{x }_{j}\bigr{)}^{2}\Bigr{)}^{\alpha/2}\]
for all \(x=\left(x_{1},\ldots,x_{d}\right)^{\top}\) and \(\widetilde{x}=\left(\widetilde{x}_{1},\ldots,\widetilde{x}_{d}\right)^{\top}\) in \(\mathcal{X}\). The smallest \(C\) satisfying this inequality defines a norm of \(f\) in \(\Sigma_{\alpha}(\mathcal{X})\), which we denote by \(\|f\|_{s(\alpha)}.\) For \(\alpha>1,\Sigma_{\alpha}(\mathcal{X})\) can be defined as follows. For a \(d\)-tuple \(\bar{\alpha}=\left(\alpha_{1},\ldots,\alpha_{d}\right)\) of non-negative integers, let \(D^{\bar{\alpha}}=\partial_{x_{1}}^{\alpha_{1}}\ldots\partial_{x_{d}}^{\alpha_ {d}}\) be the multivariate partial derivative operator. Let \(\lfloor\alpha\rfloor\) denote the largest integer strictly smaller than \(\alpha\). Then \(\Sigma_{\alpha}(\mathcal{X})\) is defined as the set of all functions \(f:\mathcal{X}\rightarrow\mathbb{R}\) such that \(f\) is \(\lfloor\alpha\rfloor\) times continuously differentiable and for some \(C>0\),
\[\bigl{|}D^{\bar{\alpha}}f(x)-D^{\bar{\alpha}}f(\widetilde{x})\bigr{|}\leq C \Bigl{(}\sum_{j=1}^{d}\bigl{(}x_{j}-\widetilde{x}_{j}\bigr{)}^{2}\Bigr{)}^{ \left(\alpha-\lfloor\alpha\rfloor\right)/2}\text{ and }\bigl{|}D^{\bar{\beta}}f(x) \bigr{|}\leq C\]
for all \(x=\left(x_{1},\ldots,x_{d}\right)^{\prime}\) and \(\widetilde{x}=\left(\widetilde{x}_{1},\ldots,\widetilde{x}_{d}\right)^{\prime}\) in \(\mathcal{X}\) and for all \(d\)-tuples \(\bar{\alpha}=\left(\alpha_{1},\ldots,\alpha_{d}\right)\) and \(\bar{\beta}=\left(\beta_{1},\ldots,\beta_{d}\right)\) of non-negative integers satisfying \(\alpha_{1}+\cdots+\alpha_{d}=\lfloor\alpha\rfloor\) and \(\beta_{1}+\cdots+\beta_{d}\leq\lfloor\alpha\rfloor.\) Again, the smallest \(C\) satisfying these inequalities defines a norm of \(f\) in \(\Sigma_{\alpha}(\mathcal{X})\), denoted as \(\|f\|_{s(\alpha)}\). For any integer \(k\geq 2\), let \(\|f(\cdot)\|_{k}\) denote the function \(L_{k}\) norm such that \(\|f(O)\|_{k}:=(\mathbb{E}_{O}[f^{k}(O)])^{1/k}\), where \(O\) is any data that is the input of \(f\).
## 2 The Forster-Warmuth Nonparametric Counterfactual Regression Estimator
We introduce the Forster-Warmuth learner, which is a nonparametric extension of an estimator first proposed in the online learning literature (Forster and Warmuth, 2002). In Section 2.1, we study
the properties of FW-Learner in the standard nonparametric regression setting where data are fully observed, before considering the counterfactual setting of primary interest in Section 2.2 where the responses may only be partially observed.
### Full data nonparametric regression
Suppose that one observes independent and identically distributed observations \(\left(X_{i},Y_{i}\right),1\leq i\leq n\) on \(\mathcal{X}\times\mathbb{R}\). Let \(\mu\) be a base measure on the covariate space \(\mathcal{X}\); this could, for example, be the Lebesgue measure or the countable measure. The most common nonparametric regression problem aims to infer the conditional mean function \(m^{\star}(x):=\mathbb{E}\left[Y_{i}\mid X_{i}=x\right]\) as a function of \(x\). Let \(\Psi:=\{\phi_{1}(\cdot)\equiv 1,\phi_{2}(\cdot),\phi_{3}(\cdot),\ldots\}\) be a fundamental sequence of functions in \(L_{2}(\mu)\) i.e., linear combinations of these functions are dense in \(L_{2}(\mu)\)(Lorentz, 1966; Yang and Barron, 1999). Note that a fundamental sequence of functions need not be orthonormal.
For any \(f\in L_{2}(\mu)\) and any \(J\geq 1\), let
\[E_{J}^{\Psi}(f)\ :=\ \min_{a_{1},a_{2},\ldots,a_{J}}\left\|f-\sum_{k=1}^{J}a_{k }\phi_{k}\right\|_{L_{2}(\mu)}\]
denote the \(J\)-th degree approximation error of the function \(f\) by the first \(J\) functions in \(\Psi\). By definition of the fundamental sequence, \(E_{J}^{\Psi}(f)\to 0\) as \(J\to\infty\) for any function \(f\in L_{2}(\mu)\). This fact motivates the traditional series estimators of \(m^{\star}\), which estimate the minimizing coefficients \(a_{1},\ldots,a_{J}\) using ordinary least squares linear regression. Motivated by an estimator in the linear regression setting studied in Forster and Warmuth (2002), we define the FW-Learner of \(m^{\star}(\cdot)\), which we denote \(\widehat{m}_{J}(\cdot)\), trained on data \(\left\{\left(X_{i},Y_{i}\right),1\leq i\leq n\right\}\), using the first \(J\) elements of the fundamental sequence \(\bar{\phi}_{J}(x)=\big{(}\phi_{1}(x),\ldots,\phi_{J}(x)\big{)}^{\top}\):
\[\widehat{m}_{J}(x):=\big{(}1-h_{n}(x)\big{)}\bar{\phi}_{J}^{\top}(x)\Big{(} \sum_{i=1}^{n}\bar{\phi}_{J}(X_{i})\bar{\phi}_{J}^{\top}(X_{i})+\bar{\phi}_{J} (x)\bar{\phi}_{J}^{\top}(x)\Big{)}^{-1}\sum_{i=1}^{n}\bar{\phi}_{J}(X_{i})Y_{ i}, \tag{1}\]
where
\[h_{n}(x):=\bar{\phi}_{J}^{\top}(x)\Big{(}\sum_{i=1}^{n}\bar{\phi}_{J}(X_{i}) \bar{\phi}_{J}^{\top}(X_{i})+\bar{\phi}_{J}(x)\bar{\phi}_{J}^{\top}(x)\Big{)} ^{-1}\bar{\phi}_{J}(x)\ \in\ [0,1]. \tag{2}\]
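To make the estimator concrete, a minimal `numpy` sketch of (1)-(2) reads as follows; the polynomial basis, sample size, and data-generating process in the example are purely illustrative assumptions.

```python
import numpy as np

def fw_learner(phi, X, Y, x_new):
    """Forster-Warmuth estimate of eqs. (1)-(2).

    phi   : callable mapping a covariate value to the J basis evaluations phi_J(x)
    X, Y  : training covariates and responses
    x_new : points at which to evaluate the estimator
    """
    Phi = np.stack([phi(x) for x in X])       # n x J design matrix
    G = Phi.T @ Phi                           # sum_i phi(X_i) phi(X_i)^T
    b = Phi.T @ np.asarray(Y)                 # sum_i phi(X_i) Y_i
    preds = []
    for x in x_new:
        p = phi(x)
        A = G + np.outer(p, p)                # Gram matrix augmented by the query point
        sol = np.linalg.solve(A, np.column_stack([b, p]))
        h = p @ sol[:, 1]                     # leverage h_n(x) in [0, 1], eq. (2)
        preds.append((1.0 - h) * (p @ sol[:, 0]))  # eq. (1)
    return np.array(preds)

# Illustrative example: polynomial basis of degree J-1 on scalar covariates.
phi = lambda x, J=8: np.array([x**k for k in range(J)])
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
Y = np.sin(3 * X) + rng.normal(0, 0.5, 200)
m_hat = fw_learner(phi, X, Y, np.linspace(-1, 1, 50))
```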
The following result provides a finite-sample result on the estimation error of \(\widehat{m}_{J}\) as a function of \(J\).
**Theorem 1**.: _Suppose \(\mathbb{E}\big{[}Y^{2}|X\big{]}\leq\sigma^{2}\) almost surely \(X\) and suppose \(X\) has a density with respect to \(\mu\)
_that is upper bounded by \(\kappa\). Then the FW-Learner satisfies_
\[\big{\|}\widehat{m}_{J}-m^{\star}\big{\|}_{2}^{2}=\mathbb{E}\Big{[}\big{(} \widehat{m}_{J}(X)-m^{\star}(X)\big{)}^{2}\Big{]}\ \leqslant\ \frac{2\sigma^{2}J}{n}+\kappa(E_{J}^{\Psi}(m^{\star}))^{2}.\]
_Moreover, if \(\Gamma=\{\gamma_{1},\gamma_{2},\ldots\}\) is a non-increasing sequence and if \(m^{\star}\in\mathcal{F}(\Psi,\Gamma)=\{f\in L_{2}(\mu):E_{k}^{\Psi}(f)\leqslant\gamma_{k}\,\forall\,k\geqslant 1\}\), then for \(J_{n}:=\min\{k\geqslant 1:\,\gamma_{k}^{2}\leqslant\sigma^{2}k/n\}\), we obtain_
\[\|\widehat{m}_{J_{n}}-m^{\star}\|_{2}^{2}\ \leqslant\ (2+\kappa)\frac{\sigma^{2}J_{n}}{n}.\]
See Section S.2 of the supplement for proof of this result. Note that Belloni et al. (2015, Theorem 4.1) established a similar result for the least squares series estimator implying that it yields the same oracle risk under more stringent conditions imposed on the bases functions as discussed in the introduction. The sets of functions \(\mathcal{F}(\Psi,\Gamma)\) are called _full approximation sets_ in Lorentz (1966) and Yang and Barron (1999, Section 4). If the sequence \(\Gamma\) also satisfies the condition \(0<c^{\prime}\leqslant\gamma_{2k}/\gamma_{k}\leqslant c\leqslant 1\) for all \(k\geqslant 1\), then Theorem 7 of Yang and Barron (1999) proves that the minimax rate of estimation of functions in \(\mathcal{F}(\Psi,\Gamma)\) is given by \(k_{n}/n\), where \(k_{n}\) is chosen so that \(\gamma_{k_{n}}^{2}\asymp k_{n}/n\). The upper bound in Theorem 1 matches this rate under the assumption \(c^{\prime}\leqslant\gamma_{2k}/\gamma_{k}\leqslant c\). This can be proved as follows: by definition of \(J_{n}\), \(\gamma_{J_{n}-1}^{2}\geqslant\sigma^{2}(J_{n}-1)/n\). Then using \(J_{n}-1\geqslant J_{n}/2\) and \(\gamma_{J_{n}-1}\leqslant\gamma_{J_{n}/2}\leqslant\gamma_{J_{n}}/c^{\prime}\), we get \(\gamma_{J_{n}}^{2}\geqslant(c^{\prime})^{2}\sigma^{2}J_{n}/(2n)\). Hence, \(\gamma_{J_{n}}^{2}\asymp\sigma^{2}J_{n}/n\). Therefore, Theorem 1 proves that the FW-Learner with a properly chosen \(J\) is minimax optimal for approximation sets.
Note that Theorem 1 does not require the fundamental sequence of functions \(\Psi\) to form an orthonormal bases. This is a useful feature when considering sieve-based estimators (Shen and Wong, 1994, Example 3), or partition-based estimators (Cattaneo and Farrell, 2013) or random kitchen sinks (Rahimi and Recht, 2008) or neural networks (Klusowski and Barron, 2018), just to name a few.
As a special case of Theorem 1 that is of particular interest for Holder or Sobolev spaces, suppose \(\gamma_{J}\leqslant C_{m}J^{-2\alpha_{m}/d}\) for some constant \(C_{m},\alpha_{m}>0\), and \(d\) is the intrinsic dimension1 of the covariates \(X\), then choosing \(J=\big{[}(n\alpha_{m}\kappa C_{m}/(d\sigma^{2}))^{d/(2\alpha_{m}+d)}\big{]}\) gives
Footnote 1: We say intrinsic dimension rather than the true dimension of covariates because some bases can take into account of potential manifold structure of the covariates to yield better decay depending on the manifold (or intrinsic) dimension.
\[\|\widehat{m}_{J}-m^{\star}\|_{2}^{2}\leqslant C\left(\frac{\sigma^{2}}{n} \right)^{2\alpha_{m}/(2\alpha_{m}+d)}, \tag{3}\]
where \(C\) is a constant; See S.2 for a proof. The decay condition \(\gamma_{J}\leqslant C_{m}J^{-2\alpha_{m}/d}\) is satisfied by functions in Holder and Sobolev spaces for the classical polynomial, Fourier/trigonometric bases (DeVore and Lorentz, 1993; Belloni et al., 2015)
From the discussion above, it is clear that the choice of the number of functions \(J\) used is crucial for attaining the minimax rate. In practice, we propose the use of split-sample cross-validation to determine \(J\) (Gyorfi et al., 2002, Section 7.1). Our simulations presented in Section 5 show good performance of such an approach. We refer interested readers to Gyorfi et al. (2002, Chapter 7) and Vaart et al. (2006) for the theoretical properties of split-sample cross-validation. The application of these results to the FW-Learner is beyond the scope of the current paper and will be explored elsewhere.
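As an illustration of such split-sample selection of \(J\), one could proceed as in the following sketch, which reuses the `fw_learner` helper defined above; the grid of candidate values and the 50/50 split are illustrative assumptions.

```python
def choose_J_by_validation(make_phi, X, Y, J_grid, train_frac=0.5, seed=0):
    """Split-sample selection of the number of basis functions J (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(train_frac * len(X))
    tr, va = idx[:cut], idx[cut:]
    errors = {}
    for J in J_grid:
        phi = make_phi(J)
        m_hat = fw_learner(phi, X[tr], Y[tr], X[va])   # fit on one half, evaluate on the other
        errors[J] = np.mean((Y[va] - m_hat) ** 2)
    return min(errors, key=errors.get)                 # J with smallest held-out error

# Using the polynomial basis from the previous example.
make_phi = lambda J: (lambda x: np.array([x**k for k in range(J)]))
J_star = choose_J_by_validation(make_phi, X, Y, J_grid=range(2, 16))
```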
### Forster-Warmuth Counterfactual Regression: The Pseudo-Outcome Approach
In many practical applications in health and social sciences it is not unusual for an outcome to be missing on some subjects, either by design, say in two-stage sampling studies where the outcome can safely be assumed to be missing at random with known non-response mechanism, or by happenstance, in which case the outcome might be missing not at random. An example of the former type might be a study (Cornelis et al., 2009) in which one aims to develop a polygenic risk prediction model for type-2 diabetes based on stage 1 fully observed covariate data on participants including high dimensional genotype (i.e., SNPs), age, and gender, while costly manual chart review by a panel of physicians yield reliable type-2 diabetes labels on a subset of subjects with known selection probability based on stage-1 covariates. In contrast, an example of the latter type might be a household survey in Zambia (Marden et al., 2018) in which eligible household members are asked to test for HIV, however, nearly 30% decline the test and thus have missing HIV status. The concern here might be that participants who decline to test might not be a priori exchangeable with participants who agree to test for HIV with respect to key risk factors for HIV infection, even after adjusting for fully observed individual and household characteristics collected in the household survey. Any effort to build an HIV risk regression model that generalizes to the wider population of Zambia requires carefully accounting for HIV status possibly missing not at random for a non-negligible fraction of the sample.
Beyond missing data, counterfactual regression also arises in causal inference where one might be interested in the CATE, the average causal effect experienced by a subset of the population defined
in terms of observed covariates. Missing data, in this case, arises as the causal effect defined at the individual level as a difference between two potential outcomes - one for each treatment value - can never be observed. This is because under the consistency assumption (Hernan and Robins, 2010, Section 3.4) the observed outcome for subjects who actually received treatment matches their potential outcome under treatment, while their potential outcome under no treatment is missing, and vice-versa for the untreated.
A major contribution of this paper is to propose a generic construction of a so-called pseudo-outcome which, as its name suggests, replaces the unobserved outcome with a carefully constructed response variable that (i) only depends on the observed data, possibly involving high dimensional nuisance functions that can nonetheless be identified from the observed data (e.g. propensity score), and therefore can be evaluated for all subjects in the sample and; (ii) has conditional expectation given covariates that matches the counterfactual regression of interest if as for an oracle, nuisance functions were known. The proposed pseudo-outcome approach applies to a large class of counterfactual regression problems including the missing data and causal inference problems described above. The proposed approach recovers in specific cases such as the CATE under unconfoundedness, previously proposed forms of pseudo-outcomes (Kennedy, 2020, Section 4.2), while offering new pseudo-outcome constructions in other examples (e.g., Proximal CATE estimation in Section 4.2). See Section 2.3 for details on constructing pseudo-outcomes.
Before describing the explicit construction of the pseudo-outcome, we first provide a key high-level corollary (assuming that a pseudo-outcome is given) which is the theoretical backbone of our approach. Suppose \(\widetilde{O}_{i},1\leq i\leq n\) represents independent and identically distributed random vectors of unobserved data of primary interest that include fully observed covariates \(X_{i},1\leq i\leq n\) as subvectors. Let \(O_{i},1\leq i\leq n\) be the observed data which are obtained from \(\widetilde{O}_{i},1\leq i\leq n\) through some coarsening operation. For concrete examples of \(\widetilde{O}_{i}\) and \(O_{i}\) in missing data and causal inference, see Table 1; more examples can be found in Sections 3 and 4. The quantity of interest is \(m^{\star}(x)=\mathbb{E}[\widetilde{f}(\widetilde{O}_{i})|X_{i}=x]\) for some known function \(\widetilde{f}(\cdot)\) operating on \(\widetilde{O}_{i}\). For example, in the context of missing data, we could be interested in \(\mathbb{E}[Y_{i}|X_{i}]\) so that \(\widetilde{f}(\widetilde{O}_{i})=f(X_{i},Z_{i},Y_{i})=Y_{i}\). Because \(\widetilde{O}_{i},1\leq i\leq n\) are unobserved, \(\widetilde{f}(\widetilde{O}_{i})\) may not be fully observed. The pseudo-outcome approach that we propose involves two steps:
* **(Step A)** Find some identifying conditions such that the quantity of interest \(m^{\star}(x)=\mathbb{E}[\widetilde{f}(\widetilde{O}_{i})|X_{i}=x]\) can be rewritten as \(m^{\star}(x)=\mathbb{E}[f(O_{i})|X_{i}=x]\) for some (estimable) unknown function \(f(\cdot)\) applied to the observations \(O_{i}\). There may be several such \(f\) under the identifying assumptions, and the choice of \(f\) plays a crucial role in the rate of convergence of the proposed estimator; see Section 2.3 for more details on finding a "good" \(f\).
* **(Step B)** Split \(\{1,2,\ldots,n\}\) into two (non-overlapping) parts \(\mathcal{I}_{1},\mathcal{I}_{2}\). From \(O_{i},i\in\mathcal{I}_{1}\), obtain an estimator \(\widehat{f}(\cdot)\) of \(f(\cdot)\). Now, with the fundamental sequence of functions \(\Psi\), create the data \((\bar{\phi}_{J}(X_{i}),\widehat{f}(O_{i})),i\in\mathcal{I}_{2}\) and obtain the FW-Learner: \[\widehat{m}_{J}(x):=(1-h_{\mathcal{I}_{2}}(x))\bar{\phi}_{J}^{\top}(x)\left(\sum_{i\in\mathcal{I}_{2}}\bar{\phi}_{J}(X_{i})\bar{\phi}_{J}^{\top}(X_{i})+\bar{\phi}_{J}(x)\bar{\phi}_{J}^{\top}(x)\right)^{-1}\sum_{i\in\mathcal{I}_{2}}\bar{\phi}_{J}(X_{i})\widehat{f}(O_{i}),\] with \[h_{\mathcal{I}_{2}}(x)=\bar{\phi}_{J}^{\top}(x)\left(\sum_{i\in\mathcal{I}_{2}}\bar{\phi}_{J}(X_{i})\bar{\phi}_{J}^{\top}(X_{i})+\bar{\phi}_{J}(x)\bar{\phi}_{J}^{\top}(x)\right)^{-1}\bar{\phi}_{J}(x),\] defined, similarly, as in (2).
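Schematically, steps (A) and (B) amount to the following sketch, reusing the `fw_learner` helper from Section 2.1; the `fit_pseudo_outcome` routine is a placeholder for whichever pseudo-outcome construction (for instance, one of those developed in Sections 3 and 4) is appropriate for the problem at hand.

```python
def fw_counterfactual(phi, X, O, fit_pseudo_outcome, x_new, seed=0):
    """Steps (A)-(B): learn the pseudo-outcome on I1, run the FW-Learner on I2.

    X, O               : arrays of covariates and observed-data records
    fit_pseudo_outcome : procedure taking (X[I1], O[I1]) and returning a callable
                         f_hat that maps an observation O_i to its pseudo-outcome
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    I1, I2 = idx[: len(X) // 2], idx[len(X) // 2:]
    f_hat = fit_pseudo_outcome(X[I1], O[I1])        # nuisance estimation on the first half
    pseudo = np.array([f_hat(o) for o in O[I2]])    # estimated pseudo-outcomes on the second half
    return fw_learner(phi, X[I2], pseudo, x_new)    # FW regression of pseudo-outcomes on X
```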
| | Full data \(\widetilde{O}_{i}\) | Observed data \(O_{i}\) |
| --- | --- | --- |
| Missing data | \((X_{i},Z_{i},Y_{i})\): \(Y_{i}\) is the response of interest, \(Z_{i}\) is an additional covariate vector of no scientific interest. | \((X_{i},Z_{i},R_{i},Y_{i}R_{i})\): \(R_{i}=1\) if \(Y_{i}\) is observed, and \(R_{i}=0\) if \(Y_{i}\) is unobserved. |
| Causal inference | \((X_{i},A_{i},Y_{i}^{1},Y_{i}^{0})\): \(A_{i}\) is the treatment assignment, \(Y_{i}^{1}\) is the counterfactual response if the subject is in the treatment group, and \(Y_{i}^{0}\) is the counterfactual response if the subject is in the control group. | \((X_{i},A_{i},Y_{i})\): \(Y_{i}=A_{i}Y_{i}^{1}+(1-A_{i})Y_{i}^{0}\) is the observed response given the observed treatment \(A_{i}\). |

Table 1: Examples of unobserved full data and observed data.

The following corollary (proved in Section S.2) states the error bound of the FW-Learner \(\widehat{m}_{J}\) that holds for any pseudo-outcome \(\widehat{f}\).

**Corollary 1**.: _Let \(\sigma^{2}\) be an upper bound on \(\mathbb{E}[\widehat{f}^{2}(O)|X,\widehat{f}]\) almost surely \(X\), and suppose \(X\) has a density with respect to \(\mu\) that is bounded by \(\kappa\). Define \(H_{f}(x)=\mathbb{E}[\widehat{f}(O)|X=x,\widehat{f}]\). Then the FW-Learner \(\widehat{m}_{J}\) satisfies_
\[\Big{(}\mathbb{E}[(\widehat{m}_{J}(X)-m^{\star}(X))^{2}|\widehat{f}]\Big{)}^{1/2 }\leq\sqrt{\frac{2\sigma^{2}J}{|\mathcal{I}_{2}|}}+\sqrt{2\kappa}E_{J}^{\Psi}(m ^{\star})+\sqrt{6}\left(\mathbb{E}[(H_{f}(X)-m^{\star}(X))^{2}|\widehat{f}] \right)^{1/2}. \tag{4}\]
The first two terms of (4) represent the upper bound on the error of the FW-Learner that had access to the data \((X_{i},f(O_{i})),i\in\mathcal{I}_{2}\). The last term of (4), \(H_{f}-m^{\star}\), is the bias incurred from estimating the oracle pseudo-outcome \(f\) with the empirical pseudo-outcome \(\widehat{f}\). Here the choice of estimator of the oracle pseudo-outcome is key to rendering this bias term negligible relative to the leading two terms of equation (4). We return to this below.
If \(|\mathcal{I}_{1}|=|\mathcal{I}_{2}|=n/2\), \(m^{\star}\in\mathcal{F}(\Psi,\Gamma)\), the full approximation set discussed in Theorem 1, and we set \(J=J_{n}=\min\{k\geq 1:\,\gamma_{k}^{2}\leq\sigma^{2}k/n\}\), then Corollary 1 implies that \(\|\widehat{m}_{J}-m^{\star}\|_{2}\leq 2(1+\sqrt{\kappa})\sqrt{\sigma^{2}J_{n}/n}+\sqrt{6}\|H_{f}-m^{\star}\|_{2}.\) Because \(\sqrt{J_{n}/n}\) is the minimax rate in \(L_{2}\)-norm for functions in \(\mathcal{F}(\Psi,\Gamma)\), the FW-Learner with pseudo-outcome \(\widehat{f}(O)\) is minimax rate optimal as long as \(\|H_{f}-m^{\star}\|_{2}=O(\sqrt{J_{n}/n})\). In such a case, we call \(\widehat{m}_{J}\) _oracle minimax_ in that it matches the minimax rate achieved by the FW-Learner that has access to \(f(\cdot)\).
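The truncation rule above depends on the (typically unknown) approximation-error sequence \(\gamma_{k}\); the following small sketch merely spells out the rule itself, with names of our own choosing, and is not how \(J\) would be chosen in practice (cross-validation is used for that later).

```python
def theoretical_truncation_level(gammas, sigma2, n):
    """Return J_n = min{k >= 1 : gamma_k^2 <= sigma^2 * k / n} (a sketch of the rule above)."""
    for k, gamma_k in enumerate(gammas, start=1):
        if gamma_k ** 2 <= sigma2 * k / n:
            return k
    return len(gammas)  # no index qualified; fall back to the largest index supplied
```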
**Remark 2.1** Section 3 of Kennedy (2020) provides a result similar to Corollary 1 but for a more general regression procedure \(\widehat{\mathbb{E}}_{n}(\cdot)\) in the form of a weighted linear estimator; however, the assumptions that the weights of the estimator must satisfy require a case-by-case analysis, which may not be straightforward, whereas our result is tailored to the Forster-Warmuth estimator, which applies more broadly under minimal conditions. \(\diamond\)
**Remark 2.2** It is worth noting that cross-fitting rather than simple sample splitting can be used to improve efficiency. Specifically, by swapping the roles of \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) in (Step B), we can obtain two pseudo-outcomes \(\widehat{f}_{1}(\cdot),\widehat{f}_{2}(\cdot)\), and also two FW-Learners \(\widehat{m}_{J}^{(1)}(\cdot),\widehat{m}_{J}^{(2)}(\cdot)\). Instead of using only one of \(\widehat{m}_{J}^{(j)},j=1,2\), one can consider \(\widehat{m}_{J}(x)=2^{-1}\sum_{j=1}^{2}\widehat{m}_{J}^{(j)}\) and by Jensen's inequality, we obtain
\[\|\widehat{m}_{J}-m^{\star}\|_{2}\leq\sqrt{\frac{2\sigma^{2}J}{n}}+\sqrt{2 \kappa}E_{J}^{\Psi}(m^{\star})+\sqrt{\frac{3}{2}}\Big{(}\|H_{f_{1}}-m^{\star} \|_{2}+\|H_{f_{2}}-m^{\star}\|_{2}\Big{)},\]
where \(H_{f_{j}}(x)=\mathbb{E}[\widehat{f}_{j}(O)|X=x,\widehat{f}_{j}].\) A similar guarantee also holds for the average estimator obtained by repeating the sample splitting procedure.
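As a schematic illustration of the cross-fitting described in this remark, one might organize the computation as follows; the helper callables `fit_nuisance`, `make_pseudo`, and `fit_fw` are placeholders for the user's nuisance-estimation, pseudo-outcome, and second-stage routines, respectively, and are not part of any package.

```python
import numpy as np

def cross_fit_fw(x, obs, fit_nuisance, make_pseudo, fit_fw, seed=0):
    """Two-fold cross-fitted FW-Learner (a schematic sketch, not a library routine).

    x            : (n, d) numpy array of covariates
    obs          : length-n sequence of observed-data records O_i
    fit_nuisance : obs_subset -> fitted nuisance functions
    make_pseudo  : (obs_subset, nuisance) -> array of pseudo-outcomes
    fit_fw       : (x_subset, pseudo) -> callable returning predictions at new x
    """
    rng = np.random.default_rng(seed)
    n = len(obs)
    perm = rng.permutation(n)
    folds = [perm[: n // 2], perm[n // 2:]]
    learners = []
    for train, est in [(folds[0], folds[1]), (folds[1], folds[0])]:
        nuisance = fit_nuisance([obs[i] for i in train])        # fit nuisances on one split
        pseudo = make_pseudo([obs[i] for i in est], nuisance)   # pseudo-outcomes on the other
        learners.append(fit_fw(x[est], pseudo))                 # second-stage FW fit
    # average the two FW-Learners, as in the remark above
    return lambda x_new: 0.5 * (learners[0](x_new) + learners[1](x_new))
```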
### Construction of Pseudo-outcome (Step A)
For a given counterfactual regression problem, we construct the counterfactual pseudo-outcome using the efficient influence function (more precisely the non-centered gradient) of the functional formally defined as the "marginal" instance of the non-parametric counterfactual regression model in view, under given identifying assumptions. For instance, in the missing data regression problem, our quantity of interest is \(m^{\star}(x)=\mathbb{E}[Y|X=x]\) and so, the marginal functional is simply \(\psi=\mathbb{E}[Y]\), the mean outcome in the underlying target population; both conditional and marginal parameters are identified from the observed data under the MAR or shadow variable model assumptions. Likewise, in the case of the CATE, our quantity of interest is \(m^{\star}(x)=\mathbb{E}[Y^{1}-Y^{0}|X=x]\) and so, the marginal functional is simply \(\psi=\mathbb{E}[Y^{1}-Y^{0}]\), the population average treatment effect, both of which are identified under unconfoundedness or the proximal causal inference assumptions. Importantly, although the nonparametric regression of interest \(m^{\star}(x)\) might not generally be pathwise-differentiable (see the definition in Section S.5 of the supplement), and therefore might not admit an influence function, under our identifying conditions and additional regularity conditions, the corresponding marginal functional \(\psi\) is a well-defined pathwise-differentiable functional that admits an influence function. Note that a non-parametric regression function that is absolutely continuous with respect to the Lebesgue measure will in general fail to be pathwise-differentiable without an additional modeling restriction (Bickel et al., 1993, Chapter 3).
Influence functions for marginal functionals \(\psi\) are in fact well-established in several semiparametric models. Furthermore, unless the model is fully nonparametric, there are infinitely many such influence functions and there is one efficient influence function that has the minimum variance. For example, in the setting of missing data with \(O=(X,Z,R,YR)\), under only missing at random (MAR) assumption (i.e., \(R_{i}\perp Y_{i}|(X_{i},Z_{i})\)), the model is well-known to be fully nonparametric in the sense that the assumption does not restrict the observed data tangent space, formally the closed linear span of the observed data scores of the model. The efficient influence function is given by
\[\mathrm{IF}(O;\psi):=\frac{R}{\pi^{\star}(X,Z)}Y-\left(\frac{R}{\pi^{\star}(X,Z )}-1\right)\mu^{\star}(X,Z)-\psi,\]
where \(\pi^{\star}(X,Z):=\mathbb{P}(R=1|X,Z)\) and \(\mu^{\star}(X,Z):=\mathbb{E}[Y|X,Z,R=1]\). An estimator of \(\psi\) can be obtained by solving the empirical version of the estimating equation \(\mathbb{E}[\mathrm{IF}(O;\psi)]=0\). Interestingly, this influence function also satisfies \(m^{\star}(x)=\mathbb{E}[(\mathrm{IF}(O;\psi)+\psi)|X=x]\). Because \(\mathrm{IF}(O;\psi)+\psi\) is only a function of \(O\), it can be used as \(f(O)\) for counterfactual regression. In this setting, one can easily construct other pseudo-outcomes. Namely, \(f_{1}(O):=RY/\pi^{\star}(X,Z)\) and \(f_{2}(O):=\mu^{\star}(X,Z)\) both satisfy \(\mathbb{E}[f_{j}(O)|X=x]=m^{\star}(x)\). The oracle pseudo-outcome \(\mathrm{IF}(O;\psi)+\psi\) is the only one among those discussed that yields a mixed bias and has the double robustness property. This is our general strategy for constructing a pseudo-outcome that has a smaller "bias" \(H_{f}-m^{\star}\). Spelled out, the steps for finding a "good" pseudo-outcome for estimating \(m^{\star}(x)=\mathbb{E}[\widetilde{f}(\widetilde{O})|X=x]\) are:
1. Derive an influence function \(\mathrm{IF}(O;\eta^{\star},\psi)\) for the marginal functional \(\psi=\mathbb{E}[\widetilde{f}(\widetilde{O})]\). Here \(\eta^{\star}\) represents a nuisance component under a given semiparametric model for which identification of the regression curve is established. Note that by definition of an influence function, \(\mathbb{E}[\mathrm{IF}(O;\eta^{\star},\psi)]=0\).
2. Because \(\mathrm{IF}(O;\eta^{\star},\psi)+\psi\) is only a function of \(O\) and \(\eta^{\star}\), we set \(f(O)=\mathrm{IF}(O;\eta^{\star},\psi)+\psi\). Clearly, \(\mathbb{E}[f(O)]=\psi\). Verify that \(\mathbb{E}[f(O)|X=x]=m^{\star}(x)\); this holds true in a large class of semiparametric models, see Theorem 2 below.
3. Construct \(\widehat{f}(O)=\mathrm{IF}(O;\widehat{\eta},\psi)+\psi\), an estimate of the uncentered influence function, with the nuisance estimate \(\widehat{\eta}\) obtained from the first split of the data.
The influence functions for both the marginal outcome mean and average treatment effect under MAR and unconfoundedness conditions, respectively, are well-known, the former is given above and studied in Section 3; while the latter is given and studied in Section 4 along with their analogs under MNAR with a shadow variable and unmeasured confounding using proxies, respectively. A more general result which formalizes the approach for deriving a pseudo-outcome in a given counterfactual regression problem is as follows.
**Theorem 2**.: _Suppose that the counterfactual regression function of interest \(m^{\star}(x)=\mathbb{E}[\widetilde{f}(\widetilde{O})\,|X=x]\) is identified in terms of the observed data \(O\) (distributed as \(F^{\ast}\in\mathcal{M}\)) by \(n^{\star}\left(x;\eta\right)=\mathbb{E}_{\eta}\left[r\left(O;\eta\right)|X=x\right]\) for a known function \(r\left(\cdot;\eta\right)\) in \(L^{2}\) indexed by an unknown, possibly infinite dimensional, nuisance parameter \(\eta\in\mathbb{B}\) (for a normed metric space \(\mathbb{B}\) with norm \(\|\cdot\|\)). Furthermore, suppose that there exists a function \(R(\cdot;\eta,n^{\star}\left(\eta\right)):O\mapsto R(O;\eta,n^{\star}\left(\eta\right))\) in \(L^{2}\) such that for any regular parametric submodel_
\(F_{t}\) in \(\mathcal{M}\) with parameter \(t\in\left(-\varepsilon,\varepsilon\right)\) satisfying \(F_{0}=F^{*}\) and corresponding score \(S(\cdot)\), the following holds:_
\[\frac{\partial\mathbb{E}\left[r\left(O;\eta_{t}\right)\,|\,X=x\right]}{\partial t}\bigg{|}_{t=0}=\mathbb{E}\left[R(O;\eta,n^{*}\left(\eta\right))S\left(O\right)\,|\,X=x\right],\]
_(we also assume that this derivative is continuous in \(t\))_
_with \(\mathbb{E}\left[R(O;\eta,n^{*}\left(\eta\right))|X\right]=0\), then_
\[\left.\left\|\mathbb{E}\left[R(O;\eta^{\prime},n^{*}\left(\eta^{\prime}\right) )+r\left(O;\eta^{\prime}\right)\left|X\right]-n^{*}\left(X;\eta\right)\right\| _{2}=O\left(\left\|\eta^{\prime}-\eta\right\|_{2}^{2}\right),\right.\]
_for any \(\eta^{\prime}\in\mathbb{B}\), and_
\[R(O;\eta,n^{*}\left(\eta\right))+r\left(O;\eta\right)-\psi\left(\eta\right)\]
_is an influence function of the functional \(\psi\left(\eta\right)=\mathbb{E}\left[r\left(O;\eta\right)\right]\) under \(\mathcal{M}\)._
The proof is in Section S.3 of the supplement.
Theorem 2 formally establishes that a pseudo-outcome for a given counterfactual regression \(\mathbb{E}_{\eta}\left[r\left(O;\eta\right)|X=x\right]\) can be obtained by effectively deriving an influence function of the corresponding marginal functional \(\psi=\mathbb{E}_{X}\{\mathbb{E}_{\eta}\left[r\left(O;\eta\right)|X\right]\}\) under a given semiparametric model \(\mathcal{M}\). The resulting influence function is given by \(R(O;\eta)+r(O;\eta)-\psi\) and the oracle pseudo-outcome may appropriately be defined as \(f(O)=R(O;\eta)+r(O;\eta).\) Theorem 2 is quite general as it applies to the most comprehensive class of non-parametric counterfactual regressions studied to date. The result thus provides a unified solution to the problem of counterfactual regression, recovering several existing methods, and more importantly, providing a number of new results. Namely, the theorem provides a formal framework for deriving a pseudo-outcome which by construction is guaranteed to satisfy the so-called "Neyman orthogonality" property, i.e. that the bias incurred by estimating nuisance functions is at most of the second order (Chernozhukov et al., 2017). In the following sections, we apply Theorem 2 to key problems in missing data and causal inference for which we give a precise characterization of the resulting second-order bias. The four use-cases we discuss in detail below share a common structure in that the influence function of the corresponding marginal functional is linear in the regression function of interest, and falls within a broad class of so-called mixed-bias functionals introduced by Ghassami et al. (2022).
To further demonstrate broader applicability of Theorem 2, we additionally apply our approach to problems for which the counterfactual regression curve of interest operates on a "non-linear" scale
in Appendix S.1, in the sense that the influence function for the corresponding marginal functional depends on the counterfactual regression of interest on a nonlinear scale, and as a result, might not strictly belong to the mixed-bias class. Nonetheless, as guaranteed by our theorem, the bias of the resulting pseudo-outcome is indeed of second order, albeit not of mixed-bias form. These additional applications include the conditional quantile causal effect under unconfoundedness conditions; the CATE for generalized nonparametric regressions incorporating a possibly nonlinear link function, such as the log or logit links, to appropriately account for the restricted support of count and binary outcomes respectively; the CATE for the treated, the compliers, and for the overall population, each of which can be identified uniquely in the presence of unmeasured confounding under certain conditions by the so-called conditional Wald estimand, by carefully leveraging a binary instrumental variable (Wang and Tchetgen Tchetgen, 2018); and the nonparametric counterfactual outcome mean for a continuous treatment both under unconfoundedness and proximal causal identification conditions, respectively.
The pseudo-outcomes mentioned in Theorem 2 have several attractive statistical properties, as they naturally account for the first-stage estimation of nuisance parameters in a manner that minimizes their impact on the second-stage FW-Learner. Specifically, the proposed pseudo-outcomes have product/mixed or second-order bias. In some cases with two or more nuisance functions, they can also be doubly/multiply robust with respect to the estimated nuisance functions. An important class of such influence functions for \(\psi\) that includes the four examples considered in detail in the main text of the paper is the mixed-bias class studied in Ghassami et al. (2022). Specifically, hereafter we will assume that the influence function of the marginal functional \(\psi\), corresponding to our counterfactual regressions, is of the form
\[\text{IF}_{\psi}(O)=q^{\star}(O_{q})h^{\star}(O_{h})g_{1}(O)+q^{\star}(O_{q})g _{2}(O)+h^{\star}(O_{h})g_{3}(O)+g_{4}(O)-\psi, \tag{5}\]
where \(O_{q}\) and \(O_{h}\) are (not necessarily disjoint) subsets of the observed data vector \(O\), \(g_{1},g_{2},g_{3}\), and \(g_{4}\) are known functions, and \(\eta^{\star}=(h^{\star},q^{\star})\) represents nuisance functions that need to be estimated. Then, we can set the oracle pseudo-outcome function as \(f(O)=q^{\star}(O_{q})h^{\star}(O_{h})g_{1}(O)+q^{\star}(O_{q})g_{2}(O)+h^{\star}(O_{h})g_{3}(O)+g_{4}(O)\), and the empirical pseudo-outcome as \(\widehat{f}(O)=\widehat{q}(O_{q})\widehat{h}(O_{h})g_{1}(O)+\widehat{q}(O_{q})g_{2}(O)+\widehat{h}(O_{h})g_{3}(O)+g_{4}(O)\), where \(\widehat{h},\widehat{q}\) are estimators of the nuisance functions \(h^{\star}\) and \(q^{\star}\) using any nonparametric method; see Appendix S.4 for some nonparametric estimators that can adapt to the low-dimensional structure of \(\eta^{\star}\) when it is a conditional expectation. Using an argument similar to the proof of
Theorem 2 of Ghassami et al. (2022), it can be shown that, conditional on the training sample used to estimate the nuisance functions \(h^{\star}\) and \(q^{\star}\) with \(\widehat{h}\) and \(\widehat{q}\), the bias term \(H_{f}-m^{\star}\) above is equal to
\[\mathbb{E}\big{\{}g_{1}(O)(q^{\star}-\widehat{q})(O_{q})(h^{\star}- \widehat{h})(O_{h})|X,\widehat{q},\widehat{h}\big{\}}, \tag{6}\]
and therefore the bias term is of second order with product form. The proof is in Section S.5 of the supplement. The following sections elaborate these results in the four specific applications of interest.
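Before turning to the applications, the following small template shows how the empirical pseudo-outcome for the mixed-bias class (5) can be assembled generically; the missing-data example of Section 3.1 is recovered with \(\widehat{h}=1/\widehat{\pi}\), \(\widehat{q}=\widehat{\mu}\), \(g_{1}=-R\), \(g_{2}=1\), \(g_{3}=RY\) and \(g_{4}=0\). This is only an organizational sketch; the function names are ours.

```python
def mixed_bias_pseudo_outcome(o, q_hat, h_hat, g1, g2, g3, g4):
    """Empirical pseudo-outcome for the mixed-bias class (5) (a sketch).

    q_hat and h_hat act on the relevant sub-vectors of the observation o;
    g1, g2, g3, g4 are the known functions of the full observation.
    """
    q, h = q_hat(o), h_hat(o)
    return q * h * g1(o) + q * g2(o) + h * g3(o) + g4(o)
```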
## 3 FW-Learner for Missing Outcome
In this section, we suppose that a typical observation is given by \(O=(YR,R,X,Z)\), where \(R\) is a nonresponse indicator with \(R=1\) if \(Y\) is observed, otherwise \(R=0\). Here \(Z\) are fully observed covariates not directly of scientific interest but may be helpful to account for selection bias induced by the missingness mechanism. Specifically, Section 3.1 considers the MAR setting where the missingness mechanism is assumed to be completely accounted for by conditioning on the observed covariates \((X,Z)\)4, while Section 3.2 relaxes this assumption, allowing for outcome data missing not at random (MNAR) leveraging a shadow variable for identification.
Footnote 4: In the special case where assumption (**MAR**) holds upon conditioning on \(X\) only, complete-case estimation of \(m^{\star}\) is known to be minimax rate optimal (Efromovich, 2011, 2014; Müller and Schick, 2017).
### FW-Learner under MAR
Here, we make the MAR assumption that \(Y\) and \(R\) are conditionally independent given \((X,Z)\), and we aim to estimate the conditional mean of \(Y\) given \(X\), which we denote \(m^{\star}(x):=\mathbb{E}[Y\mid X=x]\).
**(MAR)**: \(O_{i}=(X_{i},Z_{i},R_{i},Y_{i}R_{i}),1\leq i\leq n\) are independent and identically distributed random vectors satisfying \(R_{i}\perp Y_{i}\mid(X_{i},Z_{i})\).
Under the missing at random assumption (**MAR**), the efficient influence function for the marginal functional \(\psi=\mathbb{E}[Y]\) is the well-known one that leads to the augmented inverse probability weighted (AIPW) estimator; see e.g. Robins et al. (1994). Following (Step B), we now define the empirical pseudo-outcome as follows. Split \(\{1,2,\ldots,n\}\) into two parts: \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). Use the first split to estimate the nuisance
functions based on data \(\{(Y_{i}R_{i},R_{i},X_{i},Z_{i}),i\in\mathcal{I}_{1}\}\), denoted as \(\widehat{\pi}\) and \(\widehat{\mu}\). Use the second split and define the empirical pseudo-outcome
\[\widehat{f}(O)=\widehat{f}(YR,R,X,Z) :=\frac{R}{\widehat{\pi}(X,Z)}(YR)-\left(\frac{R}{\widehat{\pi}(X,Z)}-1\right)\widehat{\mu}(X,Z), \tag{7}\] \[=\frac{R}{\widehat{\pi}(X,Z)}Y-\left(\frac{R}{\widehat{\pi}(X,Z) }-1\right)\widehat{\mu}(X,Z),\]
Note that this corresponds to a member of the mixed-bias class of influence functions (5) with \(h_{0}(O_{h})=1/\pi^{\star}(X,Z)\), \(q_{0}(O_{q})=\mu^{\star}(X,Z),g_{1}=-R,g_{2}=1,g_{3}=RY\) and \(g_{4}=0\). Recall \(\pi^{\star}(X,Z)=\mathbb{P}(R=1|X,Z)\) and \(\mu^{\star}(X,Z)=\mathbb{E}[Y|X,Z]\).
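For instance, a vectorized sketch of the empirical pseudo-outcome (7), assuming \(\widehat{\pi}\) and \(\widehat{\mu}\) have already been fitted on \(\mathcal{I}_{1}\) and are supplied as callables, might look as follows; the clipping guard is our own practical addition, not part of the formula.

```python
import numpy as np

def aipw_pseudo_outcome(y, r, x, z, pi_hat, mu_hat, eps=1e-3):
    """Empirical pseudo-outcome (7) under MAR (a sketch)."""
    pi = np.clip(pi_hat(x, z), eps, 1.0)   # guard against near-zero estimated propensities
    mu = mu_hat(x, z)
    y_filled = np.where(r == 1, y, 0.0)    # unobserved outcomes are multiplied by r anyway
    return r * y_filled / pi - (r / pi - 1.0) * mu
```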
Let \(\widehat{m}_{J}(\cdot)\) represent the FW-Learner computed from the dataset \(\{(\bar{\phi}_{J}(X_{i}),\widehat{f}(O_{i})),i\in\mathcal{I}_{2}\}\), as in (Step B) and Corollary 1 guarantees the following result
\[(\mathbb{E}[(\widehat{m}_{J}(X)-m^{\star}(X))^{2}|\widehat{f}])^{1/2}\leq \sqrt{\frac{2\sigma^{2}J}{|\mathcal{I}_{2}|}}+\sqrt{2\kappa}E_{J}^{\Psi}(m^{ \star})+\sqrt{6}(\mathbb{E}[(H_{f}(X)-m^{\star}(X))^{2}|\widehat{f}])^{1/2}, \tag{8}\]
where \(\sigma^{2}\) is an upper bound on \(\mathbb{E}[\widehat{f}^{2}(O)\mid X,\widehat{f}]\) and \(H_{f}(x):=\mathbb{E}[\widehat{f}(O)|X=x,\widehat{f}].\) The following lemma states the mixed bias structure of \(H_{f}-m^{\star}\).
**Lemma 1**.: _With (7) as the empirical pseudo-outcome, under (\(\mathtt{MAR}\)), we have_
\[H_{f}(x)-m^{\star}(x)=\mathbb{E}\bigg{\{}R\left(\frac{1}{\widehat{\pi}(X,Z)}-\frac{1}{\pi^{\star}(X,Z)}\right)\left(\mu^{\star}(X,Z)-\widehat{\mu}(X,Z)\right)\bigg{|}\,X=x,\widehat{\pi},\widehat{\mu}\bigg{\}}.\]
This result directly follows from the mixed bias form (6) in the general class studied by Ghassami et al. (2022); also see Rotnitzky et al. (2021) and Robins et al. (2008); for completeness, we provide a direct proof in Section S.6.2 of the supplement. Lemma 1 combined with (8) gives the following error bound for the FW-Learner computed with pseudo-outcome (7).
**Theorem 3**.: _Let \(\sigma^{2}\) denote an almost sure upper bound on \(\mathbb{E}[\widehat{f}^{2}(O)|X,\widehat{\pi},\widehat{\mu}]\). Then, under (\(\mathtt{MAR}\)),_
the FW-Learner \(\widehat{m}_{J}(x)\) satisfies_
\[(\mathbb{E}[(\widehat{m}_{J}(X)-m^{\star}(X))^{2}|\widehat{f}])^{1/2}\leqslant \sqrt{\frac{2\sigma^{2}J}{|\mathcal{I}_{2}|}}+\sqrt{2\kappa}E_{J}^{ \Psi}(m^{\star}) \tag{9}\] \[+\sqrt{6}\mathbb{E}^{1/4}\left[\left(\frac{\pi^{\star}(X,Z)}{ \widehat{\pi}(X,Z)}-1\right)^{4}|\widehat{\pi}\right]\mathbb{E}^{1/4}[(\mu^{ \star}(X,Z)-\widehat{\mu}(X,Z))^{4}|\widehat{\mu}]\] \[\leqslant\sqrt{\frac{2\sigma^{2}J}{|\mathcal{I}_{2}|}}+\sqrt{2 \kappa}E_{J}^{\Psi}(m^{\star})+\sqrt{6}\mathbb{E}^{1/4}\Big{[}\big{(}\frac{1}{ \widehat{\pi}}-\frac{1}{\pi^{\star}}\big{)}^{4}(X,Z)\big{|}\widehat{\pi}\Big{]} \ \mathbb{E}^{1/4}\big{[}(\mu^{\star}-\widehat{\mu})^{4}(X,Z)\big{|}\widehat{\mu} \Big{]}.\]
The proof of this result is in Section S.6.2 of the supplement. Note that, because \(\widehat{f}(O)\) involves \(\widehat{\pi}\) in the denominator, the condition that \(\sigma^{2}\) is finite requires \(\widehat{\mu}\) and \(1/\widehat{\pi}\) to be bounded.
**Corollary 2**.: _Let \(d\) denote the intrinsic dimension of \((X,Z)\), if_
1. _The propensity score_ \(\pi^{\star}(x,z)\) _is estimated at an_ \(n^{-2\alpha_{\pi}/(2\alpha_{\pi}+d)}\) _rate in the_ \(L_{4}\)_-norm,_
2. _The regression function_ \(\mu^{\star}(x,z)\) _is estimated at an_ \(n^{-2\alpha_{\mu}/(2\alpha_{\mu}+d)}\) _rate in the_ \(L_{4}\)_-norm, and_
3. _The conditional mean function_ \(m^{\star}(\cdot)\) _with respect to the fundamental sequence_ \(\Psi\) _satisfies_ \(E_{J}^{\Psi}(m^{\star})\leqslant CJ^{-\alpha_{m}/d}\) _for some constant_ \(C\)_,_
_then_
\[\Big{(}\mathbb{E}[(\widehat{m}_{J}(X)-m^{\star}(X))^{2}|\widehat{ \pi},\widehat{\mu}]\Big{)}^{1/2}\lesssim\sqrt{\frac{\sigma^{2}J}{n}}+J^{- \alpha_{m}/d}+n^{-\frac{\alpha_{\pi}}{2\alpha_{\pi}+d}-\frac{\alpha_{\mu}}{2 \alpha_{\mu}+d}}. \tag{10}\]
When the last term of (10) is smaller than the oracle rate \(n^{-\frac{\alpha_{m}}{2\alpha_{m}+d}}\), the oracle minimax rate can be attained by balancing the first two terms. Therefore, the FW-Learner is oracle efficient if \(\alpha_{\mu}\alpha_{\pi}\geqslant d^{2}/4-(\alpha_{\pi}+\frac{d}{2})(\alpha_{\mu}+\frac{d}{2})/(1+\frac{2\alpha_{m}}{d})\). In the special case when \(\alpha_{\mu}\) and \(\alpha_{\pi}\) are equal, if we let \(s=\alpha_{\mu}/d=\alpha_{\pi}/d\) and \(\gamma=\alpha_{m}/d\) denote the effective smoothness, then when \(s\geqslant\frac{\alpha_{m}/2}{\alpha_{m}+d}=\frac{\gamma/2}{\gamma+1}\), the last term in (9) (the bias term that comes from the pseudo-outcome) is smaller than the oracle minimax rate of estimation \(n^{-\alpha_{m}/(2\alpha_{m}+d)}\), and the FW-Learner is oracle efficient.
### FW-Learner under MNAR: shadow variables
In the previous section, we constructed an FW-Learner for a nonparametric mean regression function under MAR. The MAR assumption may be violated in practice, for instance if there are unmeasured
factors that are both predictive of the outcome and nonresponse, in which case outcome data are said to be missing not at random and the regression may generally not be identified from the observed data only. In this section, we continue to consider the goal of estimating a nonparametric regression function, however allowing for outcome data to be missing not at random, by leveraging a so-called shadow variable for identification (Miao et al., 2023). In contrast to the MAR setting, the observed data we consider here is \(O_{i}=(X_{i},W_{i},R_{i},Y_{i}R_{i}),1\leqslant i\leqslant n\), where \(W_{i}\) is the shadow variable allowing identification of the conditional mean. Specifically, a shadow variable is a fully observed variable, that is (i) associated with the outcome given fully observed covariates and (ii) is independent of the missingness process conditional on fully observed covariates and the possibly unobserved outcome variable. Formally, a shadow variable \(W\) has to satisfy the following assumption.
**(SV)**: \(W\perp R\;\big{|}\;(X,Y)\) and \(W\not\perp Y\;\big{|}\;X\).
This assumption formalizes the idea that the missingness process may depend on \((X,Y)\), but not on the shadow variable \(W\) after conditioning on \((X,Y)\) and therefore, allows for missingness not at random.5 Under this condition, it holds (from Bayes' rule) that
Footnote 5: The assumption can be generalized somewhat, by further conditioning on fully observed covariates \(Z\) in addition to \(X\) and \(Y\) in the shadow variable conditional independence statement, as well as in the following identifying assumptions.
\[\mathbb{E}\Big{\{}\frac{1}{\mathbb{P}(R=1|X,Y)}\;\Big{|}\;R=1,X,W\Big{\}}= \frac{1}{\mathbb{P}(R=1|X,W)}. \tag{11}\]
Let \(e^{\star}(X,Y):=\mathbb{P}[R=1|X,Y]\) denote the _extended_ propensity score, which consistent with MNAR, will generally depend on \(Y\). Likewise, let \(\pi^{\star}(X,W):=\mathbb{P}[R=1|X,W]\). Clearly \(e^{\star}(X,Y)\) cannot be estimated via standard regression of \(R\) on \(X,Y\) given that \(Y\) is not directly observed for units with \(R=0\). Identification of the extended propensity score follows from the following completeness condition (Miao et al. (2023), Tchetgen Tchetgen et al. (2023)): define the map \(D:L_{2}\to L_{2}\) by \([Dg](x,w)=\mathbb{E}\big{\{}g(X,Y)|R=1,X=x,W=w\big{\}}\).
**(CC)**: \([Dg](X,W)=0\) almost surely if and only if \(g(X,Y)=0\) almost surely.
Given a valid shadow variable, suppose also that there exist a so-called outcome bridge function that satisfies the following condition (Li et al. (2021), Tchetgen Tchetgen et al. (2023)).
**(BF)**: There exists a function \(\eta^{\star}(x,w)\) that satisfies the integral equation
\[y=\mathbb{E}\{\eta^{\star}(X,W)|Y=y,X=x,R=1\}. \tag{12}\]
The assumption may be viewed as a nonparametric measurement error model, whereby the shadow variable \(W\) can be viewed as an error-prone proxy or surrogate measurement of \(Y\), in the sense that there exists a transformation (possibly nonlinear) of \(W\) which is conditionally unbiased for \(Y\). In fact, the classical measurement model which posits \(W=Y+\epsilon\) where \(\epsilon\) is a mean zero independent error clearly satisfies the assumption with \(\eta^{\star}\) given by the identity map. Li et al. (2021) formally established that existence of a bridge function satisfying the above condition is a necessary condition for pathwise differentiation of the marginal mean \(\mathbb{E}(Y)\) under the shadow variable model, and therefore, a necessary condition for the existence of a root-n estimator for the marginal mean functional in the shadow variable model. From our viewpoint, the assumption is sufficient for existence of a pseudo-outcome with second order bias.
Let \(\widehat{e}(\cdot)\) denote a consistent estimator of \(e^{\star}(\cdot)\) that solves an empirical version of its identifying equation (11). Similarly, let \(\widehat{\eta}(\cdot)\) be an estimator for \(\eta^{\star}(\cdot)\) that solves an empirical version of the integral equation (12); see e.g. Ghassami et al. (2022), Li et al. (2021) and Tchetgen Tchetgen et al. (2023). Following the pseudo-outcome construction of Section 2.2, the proposed shadow variable oracle pseudo-outcome follows from the (uncentered) locally efficient influence function of the marginal outcome mean \(\mathbb{E}(Y)\) under the shadow variable model, given by \(f(O)=RY/e^{\star}(X,Y)-\big{(}R/e^{\star}(X,Y)-1\big{)}\eta^{\star}(X,W)\); see Li et al. (2021), Ghassami et al. (2022), and Tchetgen Tchetgen et al. (2023). It is easily verified that \(\mathbb{E}[f(O)|X=x]=m^{\star}(x)\) under **(SV)**, **(CC)**, and **(BF)**. Note that this pseudo-outcome is a member of the mixed-bias class of influence functions (5) with \(h^{\star}=1/e^{\star}\), \(q^{\star}=\eta^{\star},g_{1}=-R,g_{2}=1,g_{3}=RY\) and \(g_{4}=0\). The corresponding empirical pseudo-outcome is given by
\[\widehat{f}(O)=\frac{R}{\widehat{e}(X,Y)}Y-\left(\frac{R}{\widehat{e}(X,Y)}- 1\right)\widehat{\eta}(X,W), \tag{13}\]
with \(\widehat{e}(\cdot,\cdot)\) and \(\widehat{\eta}(\cdot,\cdot)\) obtained from the first split of the data.
Following (Step B), we obtain the FW-Learner \(\widehat{m}_{J}(X)\). In practice, similar to Algorithm 1, cross-validation may be used to tune the truncation parameter \(J\). Set \(H_{f}(x)=\mathbb{E}[\widehat{f}(O)|X=x,\widehat{f}]\). The following lemma gives the form of the mixed-bias for \(\widehat{f}(\cdot)\).
**Lemma 2**.: _Under_ **(SV)**_,_ **(CC)**_,_ **(BF)**_, the pseudo-outcome_ (13) _satisfies_
\[H_{f}(x)-m^{\star}(x)\ =\ \mathbb{E}\bigg{\{}R\bigg{(}\frac{1}{\widehat{e}(X,Y)} -\frac{1}{e^{\star}(X,Y)}\bigg{)}(\eta^{\star}-\widehat{\eta})(X,W)\;\Big{|} \;X=x,\widehat{e},\widehat{\eta}\bigg{\}}.\]
This result directly follows from the mixed bias form (6) in the general class studied by Ghassami et al. (2022) in the shadow variable nonparametric regression setting. The proof is in Section S.6.3 of the supplement. Plugging this into Corollary 1 leads to the error rate of the FW-Learner \(\widehat{m}_{J}(x)\).
**Theorem 4**.: _Under the same notation as Theorem 3, and under (**SV**), (**CC**), (**BF**), the FW-Learner \(\widehat{m}_{J}(x)\) satisfies_
\[(\mathbb{E}[(\widehat{m}_{J}(X)-m^{\star}(X))^{2}|\widehat{f}])^{1/2}\leq\sqrt{\frac{2\sigma^{2}J}{|\mathcal{I}_{2}|}}+\sqrt{2\kappa}E_{J}^{\Psi}(m^{\star}) \tag{14}\] \[+\sqrt{6}\min\Bigl{\{}\left\|\frac{1}{\widehat{e}(X,Y)}-\frac{1}{e^{\star}(X,Y)}\right\|_{4}\ \left\|\mathbb{E}\bigl{[}\bigl{(}\eta^{\star}-\widehat{\eta}\bigr{)}(X,W)\ \bigr{|}\ X,Y\bigr{]}\right\|_{4},\] \[\left\|\mathbb{E}\bigl{[}\frac{1}{\widehat{e}(X,Y)}-\frac{1}{e^{\star}(X,Y)}\ \big{|}\ X,W\bigr{]}\right\|_{4}\ \left\|\bigl{(}\eta^{\star}-\widehat{\eta}\bigr{)}(X,W)\right\|_{4}\Bigr{\}}.\]
The proof of this result is in Section S.6.3 of the supplement. Note that \(\sigma^{2}\) is finite when \(\widehat{\eta}\) and \(1/\widehat{e}\) are bounded. Theorem 4 demonstrates that the FW-Learner performs nearly as well as the oracle learner, with a slack of the order of the mixed bias of the estimated nuisance functions used to construct the pseudo-outcome. Unlike the MAR case, the nuisance functions under the shadow variable assumption are not just regression functions and hence, the rate of estimation of these nuisance components is not obvious. In what follows, we provide a brief discussion of estimating these nuisance components. Focusing on the outcome bridge function which solves equation (12), this equation is a so-called Fredholm integral equation of the first kind, which is well known to be ill-posed (Kress et al. (1989)). Informally, ill-posedness essentially measures the extent to which the conditional expectation defining the kernel of the integral equation \(Q\mapsto\mathbb{E}_{Q}\left[\eta\left(X_{i},W_{i}\right)\big{|}\ X_{i}=x,Y_{i}=y\right]\) smooths out \(\eta\). Let \(L_{2}(X)\) denote the class of functions \(\{f:\mathbb{E}_{X}[f^{2}(X)]<\infty\}\), and define the operator \(T:L_{2}(X,W)\to L_{2}(X,Y)\) as the conditional expectation operator given by
\[[T\eta](x,y):=\mathbb{E}\left[\eta\left(X_{i},W_{i}\right)\big{|}\ X_{i}=x,Y_{i }=y\right].\]
Let \(\Psi_{J}:=\operatorname{clsp}\left\{\psi_{J1},\ldots,\psi_{JJ}\right\}\subset L _{2}(X,W)\) denote a sieve spanning the space of functions of variables \(X,W\). One may then define a corresponding sieve \(L_{2}\) measure of ill-posedness coefficient as in Blundell et al. (2007) as \(\tau_{\eta}:=\sup_{\eta\in\Psi_{J}:\eta\neq 0}\|\eta\|_{L_{2}(X,W)}/\|T\eta\|_{L_{ 2}(X,Y)}\).
**Definition 1** (Measure of ill-posedness).: _Following Blundell et al. (2007), the integral equation (12) with \((X_{i},W_{i})\) of dimension \((d_{x}+d_{w})\) is said to be
1. _mildly ill-posed if_ \(\tau_{\eta}=O\left(J^{\varsigma_{\eta}/(d_{x}+d_{w})}\right)\) _for some_ \(\varsigma_{\eta}>0\)_;_
2. _severely ill-posed if_ \(\tau_{\eta}=O\left(\exp\left(\frac{1}{2}J^{\varsigma_{\eta}/(d_{x}+d_{w})} \right)\right)\) _for some_ \(\varsigma_{\eta}>0\)_._
Under the condition that integral equation (12) is mildly ill-posed and that \(\eta^{\star}\) is \(\alpha_{\eta}\)-Holder smooth, Chen and Christensen (2018) established that the optimal rate for estimating \(\eta^{\star}\) under the sup norm is \((n/\log n)^{-\alpha_{\eta}/(2(\alpha_{\eta}+\varsigma_{\eta})+d_{x}+d_{w})}\); see Lemma 5 in the supplement for details. Likewise, the integral equation (11) is also a Fredholm integral equation of the first kind with its kernel given by the conditional expectation operator \([T^{\prime}e](x,w):=\mathbb{E}\left[e(X_{i},Y_{i})\mid X_{i}=x,W_{i}=w\right]\) for any function \(e\in L_{2}(X,Y)\), where \(T^{\prime}\) is the adjoint operator of \(T\). Let \(\Psi_{J}^{\prime}:=\operatorname{clsp}\left\{\psi_{J1}^{\prime},\ldots,\psi_{JJ}^{\prime}\right\}\subset L_{2}(X,Y)\) denote a (different) sieve spanning the space of functions of the variables \(X,Y\). Its corresponding sieve \(L_{2}\) measure of ill-posedness may be defined as \(\tau_{e}=\sup_{e\in\Psi_{J}^{\prime}:e\neq 0}\|e\|_{L_{2}(X,Y)}/\|T^{\prime}e\|_{L_{2}(X,W)}.\) Thus in the mildly ill-posed case \(\tau_{e}=O\left(J^{\varsigma_{e}/(d_{x}+1)}\right)\) for some \(\varsigma_{e}>0\), the optimal rate with respect to the sup norm for estimating \(e^{\star}\) is \((n/\log n)^{-\alpha_{e}/(2(\alpha_{e}+\varsigma_{e})+d_{x}+1)}\) when \(e^{\star}\) is \(\alpha_{e}\)-smooth and bounded.
Together with (14), this leads to the following characterization of the error of the FW-Learner \(\widehat{m}_{J}(X)\) if \(E_{J}^{\Psi}(m^{\star})\lesssim J^{-\alpha_{m}/d_{x}}\). Without loss of generality, suppose that
\[\min\Bigl{\{} \Bigl{\|}\frac{1}{\widehat{e}(X,Y)}-\frac{1}{e^{\star}(X,Y)} \Bigr{\|}_{4}\ \Bigl{\|}\mathbb{E}\bigl{[}(\eta^{\star}-\widehat{\eta})(X,W)\bigm{|}X,Y \bigr{]}\Bigr{\|}_{4}, \tag{15}\] \[\Bigl{\|}\mathbb{E}\bigl{[}\frac{1}{\widehat{e}(X,Y)}-\frac{1}{e ^{\star}(X,Y)}\bigm{|}X,W\bigr{]}\Bigr{\|}_{4}\ \Bigl{\|}\bigl{(}\eta^{\star}-\widehat{\eta} \bigr{)}(X,W)\Bigr{\|}_{4}\Bigr{\}}\] \[=\Bigl{\|}\mathbb{E}\bigl{[}\frac{1}{\widehat{e}(X,Y)}-\frac{1}{e ^{\star}(X,Y)}\bigm{|}X,W\bigr{]}\Bigr{\|}_{4}\ \Bigl{\|}\bigl{(}\eta^{\star}-\widehat{\eta} \bigr{)}(X,W)\Bigr{\|}_{4},\]
and suppose that \(\pi^{\star}\) is \(\alpha_{\pi}\)-Holder smooth, such that
\[\Bigl{\|}\mathbb{E}\bigl{[}\frac{1}{\widehat{e}(X,Y)}-\frac{1}{e ^{\star}(X,Y)}\bigm{|}X,W\bigr{]}\Bigr{\|}_{4}\] \[=\Bigl{\|}\mathbb{E}\bigl{[}\frac{1}{\widehat{e}(X,Y)}\bigm{|}X,W \bigr{]}-\frac{1}{\pi^{\star}(X,W)}\Bigr{\|}_{4}\]
is of the order of \(n^{-\alpha_{\pi}/(2\alpha_{\pi}+d_{x}+d_{w})}\), the minimax rate of estimation of the regression function \(\pi^{\star}\).
**Corollary 3**.: _Under the conditions in Lemma 5 in the supplement, assuming that the linear operator \(T\) is mildly ill-posed with exponent \(\varsigma_{\eta}\), if \(m^{\star}\) satisfies \(E_{J}^{\Psi}(m^{\star})\lesssim J^{-\alpha_{m}/d_{x}}\), \(\pi^{\star}\) is \(\alpha_{\pi}\)-Holder smooth, \(\eta^{\star}\) is \(\alpha_{\eta}\)-Holder smooth, and equation_ (15) _holds, then the FW-Learner's estimation error
satisfies_
\[\left\|\widehat{m}_{J}(X)-m^{\star}(X)\right\|_{2}\lesssim\sqrt{\frac{\sigma^{2}J }{n}}+J^{-\alpha_{m}/d_{x}}+(n/\log n)^{-\alpha_{\eta}/(2(\alpha_{\eta}+\varsigma _{\eta})+d_{x}+d_{w})}n^{-\alpha_{\pi}/(2\alpha_{\pi}+d_{x}+d_{w})}. \tag{16}\]
**Remark 3.1** A few remarks on Corollary 3: (1) If the mixed bias term incurred for estimating nuisance functions is negligible relative to the first two terms in (16), then the order of the error of the FW-Learner matches that of the oracle with access to the missing data; (2) In settings where the operators \(T_{\eta},T_{e}\), say \(T_{\eta}\), are severely ill-posed, i.e. where \(\tau_{\eta}=O\left(\exp\left(\frac{1}{2}J^{\varsigma_{\eta}/(d_{x}+d_{w})}\right)\right)\) for some \(\varsigma_{\eta}>0\), Theorem 3.2 of Chen and Christensen (2018) established that the optimal rate of estimating \(\eta^{\star}\) with respect to the sup norm is of the order \((\log n)^{-\alpha_{\eta}/\varsigma_{\eta}}\), which would likely dominate the error \(\left\|\widehat{m}_{J}-m^{\star}\right\|_{2}\). In this case, the FW-Learner may not be able to attain the oracle rate; indeed, whether the oracle rate is at all attainable remains an open problem in the literature. \(\diamond\)
## 4 FW-Learner of the CATE
Estimating the conditional average treatment effect (CATE) plays an important role in health and social sciences where one might be interested in tailoring treatment decisions based on the person's characteristics, a task that requires learning whether and the extent to which the person may benefit from treatment; e.g. personalized treatment in precision medicine (Ashley, 2016).
Suppose that we have observed i.i.d data \(O_{i}=(X_{i},A_{i},Y_{i}),1\leq i\leq n\) with \(A_{i}\) representing the binary treatment assignment, \(Y_{i}\) being the observed response, and covariates \(X_{i}\). The CATE is formally defined as \(m^{\star}(x)=\mathbb{E}\left(Y^{1}-Y^{0}|X=x\right)\), where \(Y^{a}\) is the potential outcome or counterfactual outcome, had possibly contrary to fact, the person taken treatment \(a\). The well-known challenge of causal inference is that one can at most observe the potential outcome for the treatment the person took and therefore, the counterfactual regression defining the CATE is in general not identified outside of a randomized experiment with perfect compliance, without additional assumptions. The next section describes the identification and FW-Learner of the CATE under standard unconfoundedness conditions, while the following Section 4.2 presents analogous results for the proximal causal inference setting which does not make the unconfoundedness assumption. Throughout, we make the assumption of consistency, that \(Y=AY^{1}+(1-A)Y^{0}\); and positivity, that \(\mathbb{P}(A=a|X,U)>0\) almost surely for all \(a\), where \(U\) denotes unmeasured confounders, and therefore is empty under unconfoundedness.
### FW-Learner for CATE under Ignorability
In this section, we make the additional assumption of unconfoundedness, so that the treatment mechanism is ignorable.
**No unmeasured confounding assumption:** \((Y^{0},Y^{1})\perp A|X\). Under this condition, the CATE is nonparametrically identified by \(\tau^{\star}(x)=\mu^{\star}_{1}(x)-\mu^{\star}_{0}(x)\), where for \(a\in\{0,1\}\),
\[\mu^{\star}_{a}(x):=\mathbb{E}[Y|X=x,A=a];\]
Let \(\pi^{\star}(x):=\mathbb{P}(A=1|X=x)\). We will now define the Forster-Warmuth estimator for CATE. Split \(\{1,2,\ldots,n\}\) into two parts \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). Based on \((X_{i},A_{i},Y_{i}),i\in\mathcal{I}_{1}\), estimate \(\pi^{\star},\mu^{\star}_{0},\mu^{\star}_{1}\) with \(\widehat{\pi},\widehat{\mu}_{0},\widehat{\mu}_{1}\), respectively. For \(i\in\mathcal{I}_{2}\), define the pseudo-outcome
\[\widehat{I}_{1}(X_{i},A_{i},Y_{i})=\frac{A_{i}-\widehat{\pi}(X_{i})}{\widehat {\pi}(X_{i})(1-\widehat{\pi}(X_{i}))}(Y_{i}-\widehat{\mu}_{A_{i}}(X_{i}))+ \widehat{\mu}_{1}(X_{i})-\widehat{\mu}_{0}(X_{i}),\]
which is an estimator of the well-known (uncentered) efficient influence function of the marginal average treatment effect \(\mathbb{E}(Y^{1}-Y^{0})\), evaluated at preliminary estimates of the nuisance functions, and is in our general mixed-bias class of influence functions given by (5) with \(h_{0}(O_{h})=\mu^{\star}_{a}(X),q_{0}(O_{q})=1/\pi^{\star}(X),g_{1}(O)=-1\{A=a\},g_{2}(O)=1\,\{A=a\}Y,g_{3}(O)=1\) and \(g_{4}(O)=0\). Write
\[H_{I_{1}}(x)=\mathbb{E}\Big{[}\widehat{I}_{1}(X,A,Y)|X=x\Big{]}.\]
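A minimal sketch of computing this pseudo-outcome on the estimation split, given nuisance estimates fitted on \(\mathcal{I}_{1}\), is shown below; the function name and the propensity clipping are ours.

```python
import numpy as np

def cate_pseudo_outcome(y, a, x, pi_hat, mu0_hat, mu1_hat, eps=1e-3):
    """Pseudo-outcome I_1 for the CATE under unconfoundedness (a sketch)."""
    pi = np.clip(pi_hat(x), eps, 1.0 - eps)   # keep estimated propensities away from 0 and 1
    mu0, mu1 = mu0_hat(x), mu1_hat(x)
    mu_a = np.where(a == 1, mu1, mu0)         # regression at the observed treatment arm
    return (a - pi) / (pi * (1.0 - pi)) * (y - mu_a) + mu1 - mu0
```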
We first provide a characterization of the conditional bias of the pseudo-outcome in the following lemma.
**Lemma 3**.: _The conditional bias of the pseudo-outcome \(\widehat{I}_{1}(X_{i},A_{i},Y_{i})\) satisfies_
\[H_{I_{1}}(x)-\tau^{\star}(x) =\pi^{\star}(x)\Big{(}\frac{1}{\widehat{\pi}(x)}-\frac{1}{\pi^{ \star}(x)}\Big{)}\big{(}\widehat{\mu}_{1}(x)-\mu^{\star}_{1}(x)\big{)}\] \[\quad-(1-\pi^{\star}(x))\Big{(}\frac{1}{1-\widehat{\pi}(x)}- \frac{1}{1-\pi^{\star}(x)}\Big{)}\big{(}\widehat{\mu}_{0}(x)-\mu^{\star}_{0}( x)\big{)}.\]
This result directly follows from the mixed bias form (6), which recovers a well-known result in the literature, originally due to Robins and colleagues; also see Kennedy (2020). For convenience, the proof is reproduced in Section S.7.2 of the supplement. Let \(\widehat{\tau}_{J}(x)\) be the Forster-Warmuth estimator
computed from \(\{(\bar{\phi}_{J}(X_{i}),\widehat{I}_{1}(X_{i},A_{i},Y_{i})),i\in\mathcal{I}_{2}\}\).
We establish our first oracle result of the FW-Learner of the CATE.
**Theorem 5**.: _Under the assumptions given above, including unconfoundedness, suppose that \(\sigma^{2}\) is an upper bound for \(\mathbb{E}[\widehat{I}_{1}^{2}(X,A,Y)\mid X]\), then FW-Learner \(\widehat{\tau}_{J}(x)\) satisfies the error bound_
\[\big{\|}\widehat{\tau}_{J}(X)-\tau^{\star}(X)\big{\|}_{2}\leq \sqrt{\frac{2\sigma^{2}J}{|\mathcal{I}_{2}|}}+\sqrt{2}\big{\|}\sum_{j=J+1}^{ \infty}\theta_{j}^{\star}\phi_{j}(X)\big{\|}_{2}\\ +(1+\sqrt{2})\bigg{(}\Big{\|}\frac{\pi^{\star}(X)}{\widehat{\pi} (X)}-1\Big{\|}_{4}\Big{\|}\widehat{\mu}_{1}(X)-\mu_{1}^{\star}(X)\Big{\|}_{4}+ \Big{\|}\frac{1-\pi^{\star}(X)}{1-\widehat{\pi}(X)}-1\Big{\|}_{4}\Big{\|} \widehat{\mu}_{0}(X)-\mu_{0}^{\star}(X)\Big{\|}_{4}\bigg{)}.\]
See Section S.7.2 in the supplement for a formal proof of this result. Note that the condition that \(\sigma^{2}\) is bounded requires \(\widehat{\mu}_{0}\), \(\widehat{\mu}_{1},1/\widehat{\pi}\) and \(1/(1-\widehat{\pi})\) to be bounded.
**Corollary 4**.: _Let \(d\) denote the intrinsic dimension of \(X\). If_
1. _The propensity score_ \(\pi^{\star}(x)\) _is estimated at an_ \(n^{-2\alpha_{\pi}/(2\alpha_{\pi}+d)}\) _rate in the_ \(L_{4}\)_-norm,_
2. _The regression functions_ \(\mu_{0}^{\star}\) _and_ \(\mu_{1}^{\star}\) _are estimated at the rate of_ \(n^{-2\alpha_{\mu}/(2\alpha_{\mu}+d)}\) _in the_ \(L_{4}\)_-norm._
3. _The CATE_ \(\tau^{\star}\) _with respect to the fundamental sequence_ \(\Psi\) _satisfies_ \(E_{J}^{\Psi}(\tau^{\star})\leq CJ^{-\alpha_{\tau}/d}\) _for some constant_ \(C\)_,_
_Then, \(\widehat{\tau}_{J}(x)\) satisfies_
\[\Big{(}\mathbb{E}[(\widehat{\tau}_{J}(X)-\tau^{\star}(X))^{2}| \widehat{\pi},\widehat{\mu}]\Big{)}^{1/2}\lesssim\sqrt{\frac{\sigma^{2}J}{n} }+J^{-\alpha_{\tau}/d}+n^{-\frac{\alpha_{\pi}}{2\alpha_{\pi}+d}-\frac{\alpha _{\mu}}{2\alpha_{\mu}+d}}. \tag{17}\]
When the last term of (17) is smaller than the oracle rate \(n^{-\frac{\alpha_{\tau}}{2\alpha_{\tau}+d}}\), the oracle minimax rate can be attained by balancing the first two terms. Therefore, the FW-Learner is oracle efficient if \(\alpha_{\mu}\alpha_{\pi}\geq d^{2}/4-(\alpha_{\pi}+\frac{d}{2})(\alpha_{\mu}+\frac{d}{2})/(1+\frac{2\alpha_{\tau}}{d})\). In the special case when \(\alpha_{\mu}\) and \(\alpha_{\pi}\) are equal, if we let \(s=\alpha_{\mu}/d=\alpha_{\pi}/d\) and \(\gamma=\alpha_{\tau}/d\) denote the effective smoothness, then when \(s\geq\frac{\alpha_{\tau}/2}{\alpha_{\tau}+d}=\frac{\gamma/2}{\gamma+1}\), the last term in (17) (the bias term that comes from the pseudo-outcome) is smaller than the oracle minimax rate of estimation \(n^{-\alpha_{\tau}/(2\alpha_{\tau}+d)}\), in which case the FW-Learner is oracle efficient.
This method using split data has valid theoretical properties under minimal conditions and is similar to Algorithm 1 for missing outcome described in Appendix S.6, and cross-fitting can be applied
as discussed before in Section 2.2. We also provide an alternative methodology that builds upon the split data method. It uses the full data for both training and estimation, which is potentially more efficient by avoiding sample splitting. The procedure is similar to what we described in Algorithm 1 and is deferred to Algorithm 2 in the supplementary material.
Kennedy (2020) and Kennedy et al. (2022) studied the problem of estimating the CATE under ignorability quite extensively; the latter paper derived the minimax rate for CATE estimation when the distributional components are Holder-smooth, along with a new local polynomial estimator that is minimax optimal under some conditions. In comparison, our procedure is not necessarily minimax optimal in some of the regimes considered there, with the advantage that it is more general, with minimal constraints on the basis functions.
**Remark 4.1** Note that Theorem 5 and Corollary 4 continue to hold for a modified CATE which marginalizes over some confounders, and therefore conditions only on a subset of the measured confounders, say \(\mathbb{E}\left(Y^{1}-Y^{0}\mid V=v\right)\) where \(V\) is a subset of the covariates in \(X\), with the error bound of Corollary 4 modified so that the second term of the bound (17) is replaced with \(J^{-\alpha_{\tau_{v}}/d_{v}}\), where \(\alpha_{\tau_{v}}/d_{v}\) is the effective smoothness of the modified CATE. The application given in Section 6 illustrates our methods for such a marginalized CATE function, which is particularly well-motivated from a scientific perspective. \(\diamond\)
### FW-Learner for CATE under proximal causal inference
Proximal causal inference provides an alternative approach for identifying the CATE in the presence of unobserved confounding, provided that valid proxies of the latter are available (Miao et al., 2018; Tchetgen Tchetgen et al., 2020). Throughout, recall that \(U\) encodes (possibly multivariate) unmeasured confounders. The framework requires that observed proxy variables \(Z\) and \(W\) satisfy the following conditions.
**Assumption 1.**
* \(Y^{(a,z)}=Y^{(a)}\) almost surely, for all a and \(z\).
* \(W^{(a,z)}=W\) almost surely, for all a and \(z\).
* \(\left(Y^{(a)},W\right)\perp(A,Z)\mid(U,X)\), for \(a\in\{0,1\}\).
Note that Assumption 1 implies that \(Y\perp Z\mid A,U,X\) and \(W\perp(A,Z)\mid U,X\), as illustrated with the causal diagram in Figure 1, which describes a possible setting where these assumptions are satisfied (the gray variable \(U\) is unobserved); see Cui et al. (2023) for identifiability.
A key identification condition of proximal causal inference is that there exists an outcome confounding bridge function \(h^{\star}(w,a,x)\) that solves the following integral equation (Miao et al., 2018; Tchetgen Tchetgen et al., 2020)
\[\mathbb{E}[Y\mid Z,A,X]=\mathbb{E}\left[h^{\star}(W,A,X)\mid Z,A,X\right], \text{almost surely}. \tag{18}\]
Miao et al. (2023) then established sufficient conditions under which the CATE is nonparametrically identified by \(\mathbb{E}(h^{\star}(W,1,X)-h^{\star}(W,0,X)|X)\).
Alternatively, Cui et al. (2023) considered an alternative identification strategy based on the following condition. There exists a treatment confounding bridge function \(q^{\star}(z,a,x)\) that solves the following integral equation
\[\mathbb{E}\left[q^{\star}(Z,a,X)\mid W,A=a,X\right]=\frac{1}{\mathbb{P}(A=a \mid W,X)},\text{almost surely}. \tag{19}\]
Also see Deaner (2018) for a related condition.

Figure 1: A proximal DAG

Cui et al. (2023) then established sufficient conditions under which the CATE is nonparametrically identified by \(\mathbb{E}(Y(-1)^{1-A}q^{\star}(Z,A,X)|X)\). Letting \(O=(X,Z,W,A,Y)\), Cui et al. (2023) derived the locally semiparametric efficient influence function for the marginal ATE (i.e. \(\mathbb{E}[Y^{(1)}-Y^{(0)}]\)) in a nonparametric model where one only assumes an outcome bridge function exists, at the submodel where both the outcome and treatment confounding bridge functions exist and are uniquely identified, but are otherwise unrestricted:
\[\mathrm{IF}_{\psi_{0}}(O;h^{\star},q^{\star})=-1\{A=a\}q^{\star}(Z,A,X)h^{\star}(W,A,X)\] \[+\mathbb{1}\{A=a\}Yq^{\star}(Z,A,X)+h^{\star}(W,a,X)-\psi_{0},\]
which falls in the mixed-bias class of influence functions (5) with \(h_{0}(O_{h})=h^{\star}(W,A,X),q_{0}(O_{q})=q^{\star}(Z,A,X)\), \(g_{1}(O)=-1\{A=a\},g_{2}(O)=\mathbb{1}\{A=a\}Y,g_{3}(O)=1,g_{4}(O)=0\), and motivates the following FW-Learner of the CATE.
Proximal CATE FW-Learner estimator: Split the training data into two parts and train the nuisance functions \(\widehat{q},\widehat{h}\) on the first split and define \(\widehat{\tau}_{J}(x)\) to be the Forster-Warmuth estimator computed based on the data \(\big{\{}(\widehat{\phi}_{J}(X_{i}),\widehat{I}(X_{i},A_{i},Y_{i},Z_{i},W_{i}) ),i\in\mathcal{I}_{2}\big{\}}\), where the pseudo-outcome \(\widehat{I}\) is
\[\widehat{I}(O;\widehat{h},\widehat{q}):=\big{\{}A\widehat{q}(Z,1, X)-(1-A)\widehat{q}(Z,0,X)\big{\}}\{Y-\widehat{h}(W,A,X)\}\] \[+\widehat{h}(W,1,X)-\widehat{h}(W,0,X), \tag{20}\]
for any estimators \(\widehat{h},\widehat{q}\) of the nuisance functions \(h^{\star}\) and \(q^{\star}\).
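A sketch of evaluating the pseudo-outcome (20) on the estimation split, with \(\widehat{h}\) and \(\widehat{q}\) fitted on the first split and supplied as callables taking (proxy, treatment arm, covariates), might read as follows; the function name is ours.

```python
def proximal_cate_pseudo_outcome(y, a, x, z, w, q_hat, h_hat):
    """Pseudo-outcome (20) for the proximal CATE (a sketch)."""
    q1, q0 = q_hat(z, 1, x), q_hat(z, 0, x)   # treatment confounding bridge at both arms
    h1, h0 = h_hat(w, 1, x), h_hat(w, 0, x)   # outcome confounding bridge at both arms
    h_obs = a * h1 + (1 - a) * h0             # bridge evaluated at the observed arm A
    return (a * q1 - (1 - a) * q0) * (y - h_obs) + h1 - h0
```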
Write \(H_{I}(X)=\mathbb{E}[\widehat{I}(O;\widehat{h},\widehat{q})|X]\), where the expectation is taken conditional on the first split of the training data. We have the following result.
**Lemma 4**.: _The pseudo-outcome (20) has conditional bias:_

\[H_{I}(x)-\tau^{\star}(x)=\mathbb{E}\bigg{[}A(h^{\star}-\widehat{h})(W,1,x)\big{(}\widehat{q}(Z,1,x)-q^{\star}(Z,1,x)\big{)}\] \[-(1-A)(h^{\star}-\widehat{h})(W,0,x)\big{(}\widehat{q}(Z,0,x)-q^{\star}(Z,0,x)\big{)}\;\Big{|}\;X=x\bigg{]}.\]
This result directly follows from the mixed bias form (6) in the general class studied by Ghassami et al. (2022); its direct proof is deferred to Section S.7.3 of the supplement. Together with Corollary 1, it yields a bound for the error of the FW-Learner \(\widehat{\tau}_{J}\).
**Theorem 6**.: _Let \(\sigma^{2}\) be an upper bound on \(\mathbb{E}[\widehat{I}^{2}(X,Z,W,A,Y)\mid X]\), the FW-Learner \(\widehat{\tau}_{J}(x)\) satisfies:_
\[\big{\|}\widehat{\tau}_{J}(X)-\tau^{\star}(X)\big{\|}_{2}\leq\sqrt{\frac{2\sigma^{2}J}{|\mathcal{I}_{2}|}}+\sqrt{2}\Big{\|}\sum_{j=J+1}^{\infty}\theta_{j}^{\star}\phi_{j}(X)\Big{\|}_{2}\] \[+2(1+\sqrt{2})\min\Bigl{\{}\Big{\|}(\widehat{q}-q^{\star})(Z,1,X)\Big{\|}_{4}\Big{\|}\mathbb{E}\big{[}(\widehat{h}-h^{\star})(W,1,X)|Z,X\big{]}\Big{\|}_{4}+\Big{\|}(\widehat{q}-q^{\star})(Z,0,X)\Big{\|}_{4}\Big{\|}\mathbb{E}\big{[}(\widehat{h}-h^{\star})(W,0,X)|Z,X\big{]}\Big{\|}_{4},\] \[\Big{\|}\mathbb{E}\big{[}(\widehat{q}-q^{\star})(Z,1,X)\;\Big{|}\;W,X\big{]}\Big{\|}_{4}\Big{\|}(\widehat{h}-h^{\star})(W,1,X)\Big{\|}_{4}+\Big{\|}\mathbb{E}\big{[}(\widehat{q}-q^{\star})(Z,0,X)\;\Big{|}\;W,X\big{]}\Big{\|}_{4}\Big{\|}(\widehat{h}-h^{\star})(W,0,X)\Big{\|}_{4}\Bigr{\}}.\]
The proof is in Section S.7.3 of the supplement. Note that the condition that \(\sigma^{2}\) is bounded requires that \(\widehat{h}(W,a,X)\) and \(\widehat{q}(Z,a,X)\) are bounded for \(a\in\{0,1\}\). The rest of this section is concerned with estimation of the bridge functions \(h^{\star}\) and \(q^{\star}\).
Estimation of bridge functions \(h^{\star}\) and \(q^{\star}\):Focusing primarily on \(h^{\star}\), we note that integral equation (18) is a Fredholm integral equation of the first kind similar to integral equations of Section 3.2 on shadow variable FW-Learner, with corresponding kernel given by the conditional expectation operator \([T_{h}h](z,a,x)=\mathbb{E}[h(W_{i},A_{i},X_{i})\mid Z_{i}=z,A_{i}=a,X_{i}=x]\).
Thus, minimax estimation of \(h^{\star}\) follows from Chen and Christensen (2018) and Chen et al. (2021) attaining the rate \((n/\log n)^{-\alpha_{h}/(2(\alpha_{h}+\varsigma_{h})+d_{x}+d_{w}+1)}\) assuming \(T_{h}\) is mildly ill-posed with exponent \(\varsigma_{h}\); a corresponding adaptive minimax estimator that attains this rate is also given by the authors which does not require prior knowledge about \(\alpha_{h}\) and \(\varsigma_{h}\). See details given in Lemma 6 in the supplement. Analogous results also hold for \(q^{\star}\) which can be estimated at the minimax rate of \((n/\log n)^{-\alpha_{q}/(2(\alpha_{q}+\varsigma_{q})+d_{x}+d_{z})}\) in the mildly ill-posed case, as established in Lemma 7 of the supplement,
where \(\alpha_{q}\) and \(\varsigma_{q}\) are similarly defined. Without loss of generality, suppose that
\[\min\Bigl{\{}\Bigl{\|}(\widehat{q}-q^{\star})(Z,1,X)\Bigr{\|}_{4}\Bigl{\|}\mathbb{E}\bigl{[}(\widehat{h}-h^{\star})(W,1,X)|Z,X\bigr{]}\Bigr{\|}_{4}+\Bigl{\|}(\widehat{q}-q^{\star})(Z,0,X)\Bigr{\|}_{4}\Bigl{\|}\mathbb{E}\bigl{[}(\widehat{h}-h^{\star})(W,0,X)|Z,X\bigr{]}\Bigr{\|}_{4},\] \[\Bigl{\|}\mathbb{E}\bigl{[}(\widehat{q}-q^{\star})(Z,1,X)\ \Big{|}\ W,X\bigr{]}\Bigr{\|}_{4}\Bigl{\|}(\widehat{h}-h^{\star})(W,1,X)\Bigr{\|}_{4}+\Bigl{\|}\mathbb{E}\bigl{[}(\widehat{q}-q^{\star})(Z,0,X)\ \Big{|}\ W,X\bigr{]}\Bigr{\|}_{4}\Bigl{\|}(\widehat{h}-h^{\star})(W,0,X)\Bigr{\|}_{4}\Bigr{\}}\] \[=\Bigl{\|}(\widehat{q}-q^{\star})(Z,1,X)\Bigr{\|}_{4}\Bigl{\|}\mathbb{E}\bigl{[}(\widehat{h}-h^{\star})(W,1,X)|Z,X\bigr{]}\Bigr{\|}_{4}+\Bigl{\|}(\widehat{q}-q^{\star})(Z,0,X)\Bigr{\|}_{4}\Bigl{\|}\mathbb{E}\bigl{[}(\widehat{h}-h^{\star})(W,0,X)|Z,X\bigr{]}\Bigr{\|}_{4}.\]
Further suppose that \(\mu^{\star}(X,Z):=\mathbb{E}\bigl{[}h^{\star}(W,0,X)|Z,X\bigr{]}\) is \(\alpha_{\mu}\)-smooth, and \(\Bigl{\|}\mathbb{E}\bigl{[}(\widehat{h}-h^{\star})(W,0,X)|Z,X\bigr{]}\Bigr{\|} _{4}\) matches the minimax rate of estimation for \(\mu^{\star}(X,Z)\) with respect to the \(L_{4}\)-norm given by \(n^{-\alpha_{\mu}/(2\alpha_{\mu}+d_{x}+d_{z})}\). Accordingly, Theorem 6, together with Lemma 6 and 7, leads to the following corollary.
**Corollary 5**.: _Under the above conditions, together with the conditions of Lemma 6 and 7 in the supplement, and assuming that the integral equation with respect to the operator \(T_{q}\) is mildly ill-posed, we have that:_
\[\bigl{\|}\widehat{\tau}_{J}(X)-\tau^{\star}(X)\bigr{\|}_{2}\lesssim\sqrt{ \frac{\sigma^{2}J}{n}}+J^{-\alpha_{\tau}/d_{x}}+(n/\log n)^{-\alpha_{q}/(2( \alpha_{q}+\varsigma_{q})+d_{x}+d_{z})}n^{-\alpha_{\mu}/(2\alpha_{\mu}+d_{x}+d _{z})}.\]
A remark analogous to Remark 3.1 equally applies to Corollary 5. The result thus establishes conditions under which the proximal FW-Learner can estimate the CATE at the same rate as an oracle with access to the bridge functions. This result appears to be completely new to the fast-growing literature on proximal causal inference.
## 5 Simulations
In this section, we study the finite sample performance of the proposed estimator, focusing primarily on the estimation of the CATE via simulations. We consider a relatively simple data-generating mechanism which includes a covariate \(X\) uniformly distributed on \([-1,1]\), a Bernoulli distributed treatment with conditional mean equal to \(\pi^{\star}(x)=0.1+0.8\times\mathbb{1}\{x>0\}\), and \(\mu_{1}(x)=\mu_{0}(x)\) equal to the piece-wise polynomial function defined on page 10 of Gyorfi et al. (2002). Therefore we are simulating under the null CATE model. Multiple methods are compared in the simulation study. Specifically, the simulation includes all four methods described in Section 4 of Kennedy (2020):

1. a plug-in estimator that estimates the regression functions \(\mu_{0}^{\star}\) and \(\mu_{1}^{\star}\) and takes the difference (called the T-Learner by Kunzel et al. (2019), abbreviated as plugin below),
2. the X-Learner from Kunzel et al. (2019) (xl),
3. the DR-Learner using smoothing splines from Kennedy (2020) (drl), and
4. an oracle DR-Learner that uses the oracle (true) pseudo-outcome in the second-stage regression (oracle.drl).

We compare these previous methods to

5. the FW-Learner with basic spline basis (FW_bs), and
6. the least squares series estimator with basic spline basis (ls_bs),

where cross-validation is used to determine the number of basis functions for 5. and 6. Throughout, the nuisance functions \(\mu_{0}^{\star}\) and \(\mu_{1}^{\star}\) are estimated using smoothing splines, and the propensity score \(\pi^{\star}\) is estimated using logistic regression.
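For reference, a sketch of one draw from this data-generating mechanism is given below; the function `mu` stands in for the piecewise polynomial of Gyorfi et al. (2002), and the standard Gaussian outcome noise is an illustrative assumption, not a detail specified above.

```python
import numpy as np

def simulate_null_cate(n, mu, seed=None):
    """One simulated dataset under the null-CATE design described above (a sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=n)   # covariate X ~ Uniform[-1, 1]
    pi = 0.1 + 0.8 * (x > 0)             # propensity score
    a = rng.binomial(1, pi)              # Bernoulli treatment assignment
    y = mu(x) + rng.normal(size=n)       # mu_1 = mu_0, so the true CATE is zero
    return x, a, y
```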
The top part of Figure 2 gives the mean squared error (MSE) for the six CATE estimators at training sample size \(n=2000\), based on 500 simulations with MSE averaged over 500 independent test samples. The bottom part of Figure 2 gives the ratio of MSE of each competing estimator compared to the FW-Learner (the baseline method is FW_bs) across a range of convergence rates for the propensity score estimator \(\widehat{\pi}\). The propensity score estimator is constructed as \(\widehat{\pi}=\text{expit}\left\{\text{logit}(\pi)+\epsilon_{n}\right\}\), where \(\epsilon_{n}\sim N\left(n^{-\alpha},n^{-2\alpha}\right)\) with varying convergence rate controlled by the parameter \(\alpha\), so that \(\text{RMSE}(\widehat{\pi})\sim n^{-\alpha}\). The results demonstrate that, at least in the simulated setting, our FW-Learner attains the smallest mean squared error among all methods, approaching that of the oracle as the propensity score estimation error decreases (i.e., as the convergence rate increases). The performance of the FW-Learner and the least squares series estimator is visually challenging to distinguish in the figure; however closer numerical inspection confirms that the FW-Learner outperforms the least squares estimator.
To further illustrate the comparison between the proposed FW-Learner and the least squares estimator, we performed an additional simulation study focusing on these two estimators using two different sets of basis functions, in a setting similar to the previous simulation except that the covariate is now generated from a heavy-tailed distribution, namely an equal probability mixture of a uniform distribution on \([-1,1]\) and a standard Gaussian distribution. The results are reported in Figure 3, for both the FW-Learner (FW) and least squares (LS) estimators with basic splines (bs), natural splines (ns) and a polynomial basis (poly). We report the ratio of the MSE of all estimators against the FW-Learner with basic splines (FW_bs). The sample size for the left-hand plot is \(n=2000\), and \(n=400\) for the right-hand plot. The FW-Learner consistently dominates the least squares estimator for any given choice of basis functions in this more challenging setting. This additional simulation experiment demonstrates the robustness of the FW-Learner to heavy-tailed covariate distributions when compared to the least squares learner.
Figure 2: A comparison between different estimators at sample size \(n=2000\). Top: \(n\times\mathrm{MSE}\) of each estimator. Bottom: ratio of the MSE of each estimator to that of the proposed Forster–Warmuth estimator with basic splines (baseline). The MSE is averaged over 500 simulations.
## 6 Data Application: CATE of Right Heart Catheterization
We illustrate the proposed FW-Learner with an application to CATE estimation, both assuming unconfoundedness and, without making that assumption, using proximal causal inference. Specifically, we reanalyze the Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT) with the aim of evaluating the causal effect of right heart catheterization (RHC) during the initial care of critically ill patients in the intensive care unit (ICU) on survival time up to 30 days (Connors et al. (1996)). Tchetgen Tchetgen et al. (2020) and Cui et al. (2023) analyzed this
dataset to estimate the marginal average treatment effect of RHC, using the proximal causal inference framework, with an implementation of a locally efficient doubly robust estimator based on parametric estimators of the bridge functions. Data are available on 5735 individuals, 2184 treated and 3551 controls. In total, 3817 patients survived and 1918 died within 30 days. The outcome \(Y\) is the number of days between admission and death or censoring at day 30. We include all 71 baseline covariates to adjust for potential confounding. To implement the FW-Learner under unconfoundedness, the nuisance functions \(\pi^{\star}\), \(\mu_{0}^{\star}\) and \(\mu_{1}^{\star}\) are estimated using SuperLearner6, with a library that includes both RandomForest and generalized linear models (GLMs).

Figure 3: A comparison between FW and LS estimators with different bases for \(X\) with a heavy-tailed distribution; the baseline method is the FW-Learner with basic splines (FW_bs). Left: sample size \(n=2000\); Right: \(n=400\). The MSE is averaged over 500 simulations.
Footnote 6: SuperLearner is a stacking ensemble machine learning approach which uses cross-validation to estimate the performance of multiple machine learners and then creates an optimal weighted average of those models using test data. This approach has been formally established to be asymptotically as accurate as the best possible prediction algorithm that is tested. For details, please refer to Polley and van der Laan (2010).
Variance of the FW-Learner: In addition to producing an estimate of the CATE, one may wish to quantify uncertainty about this estimate. We describe a simple approach for computing a standard error for the CATE at a fixed value of \(x\) and a corresponding pointwise confidence interval. The asymptotic guarantee of the confidence intervals for the least squares estimator is established in Newey (1997) and Belloni et al. (2015) under some conditions. Because the FW-Learner is asymptotically equivalent to the least squares estimator, the same variance estimator as that of the least squares series estimator may be used to quantify uncertainty about the FW-Learner. Recall that the least squares estimator is given by \(\bar{\phi}(x)^{\top}\big{[}\sum_{i}\bar{\phi}(X_{i})\bar{\phi}(X_{i})^{\top}\big{]}^{-1}\big{\{}\sum_{i}\bar{\phi}(X_{i})\widehat{I}_{i}\big{\}}\); the latter has variance \(\bar{\phi}(x)^{\top}\big{[}\sum_{i}\bar{\phi}(X_{i})\bar{\phi}(X_{i})^{\top}\big{]}^{-1}\bar{\phi}(x)\times\sigma^{2}(\widehat{I})\), where \(\sigma^{2}(\widehat{I})\) is the variance of the pseudo-outcome \(\widehat{I}\); here we have implicitly assumed homoscedasticity, i.e. that the variance of \(\widehat{I}\) does not depend on \(X\). Hence,
\[\text{var}(\widehat{\tau}(x))\approx\bar{\phi}(x)^{\top}\big{[}\sum_{i}\bar{\phi}(X_{i})\bar{\phi}(X_{i})^{\top}\big{]}^{-1}\bar{\phi}(x)\times\sigma^{2}(\widehat{I}).\]
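As a concrete illustration of this variance formula, the following sketch (our own Python helper with hypothetical names; \(\sigma^{2}(\widehat{I})\) is estimated here by the second-stage residual variance) returns a plug-in standard error and pointwise 95% confidence interval at a query point \(x\).

```python
import numpy as np

def pointwise_ci(Phi, I_hat, phi_x, tau_hat_x, z=1.96):
    """Plug-in pointwise confidence interval for a series CATE estimate.

    Phi       : (n, J) matrix whose rows are phi_bar(X_i)
    I_hat     : (n,)   pseudo-outcomes used in the second-stage regression
    phi_x     : (J,)   basis vector phi_bar(x) at the query point
    tau_hat_x : scalar estimate of tau(x) at that point
    """
    n, J = Phi.shape
    gram = Phi.T @ Phi                              # sum_i phi(X_i) phi(X_i)^T
    beta = np.linalg.solve(gram, Phi.T @ I_hat)     # least-squares coefficients
    resid = I_hat - Phi @ beta
    sigma2 = resid @ resid / (n - J)                # homoscedastic variance of I_hat
    se = np.sqrt(phi_x @ np.linalg.solve(gram, phi_x) * sigma2)
    return tau_hat_x - z * se, tau_hat_x + z * se
```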
Similar to Tchetgen Tchetgen et al. (2020) and Cui et al. (2023), our implementation of the Proximal FW-Learner specified baseline covariates (age, sex, cat1 coma, cat2 coma, dnr1, surv2md1, aps1) for confounding adjustment; as well as treatment and outcome confounding proxies \(Z=(\text{paf1},\text{paco21})\) and \(W=(\text{ph1},\text{hema1})\). Confounding bridge functions were estimated nonparametrically using the adversarial reproducing kernel Hilbert spaces (RKHS) learning approach of Ghassami et al. (2022). The estimated CATE and corresponding pointwise 95 percent confidence intervals are reported in Figure 4 as a function of the single variable measuring the 2-month model survival prediction
at data 1 (surv2md1), for both approaches, each using both splines and polynomials. Cross-validation was used throughout to select the number of knots for splines and the degree of the polynomial bases, respectively. The results are somewhat consistent for both bases functions, and suggest at least under unconfoundedness conditions that high risk patients likely benefited most from RHC, while low risk patients may have been adversely impacted by RHC. In contrast, The Proximal FW-Learner produced a more attenuated CATE estimate, which however found that RHC was likely harmful for low risk patients. Interestingly, these analyses provide important nuances to results reported in the original analysis of Connors et al. (1996) and the more recent analysis of Tchetgen Tchetgen et al. (2020) which concluded that RHC was harmful on average on the basis of the ATE.
Figure 4: CATE estimation with 95% confidence interval produced by the FW-Learner using polynomial and spline basis. Left: under unconfoundedness; Right: in proximal causal inference setting.
## 7 Discussion
This paper has proposed a novel nonparametric series estimator of regression functions that requires minimal assumptions on covariates and bases functions. Our method builds on the Forster-Warmuth estimator, which incorporates weights based on the leverage score \(h_{n}(x)=x^{\top}(\sum_{i=1}^{n}X_{i}X_{i}^{\top}+xx^{\top})^{-1}x\), to obtain predictions that can be significantly more robust relative to standard least-squares, particularly in small to moderate samples. Importantly, the FW-Learner is shown to satisfy an oracle inequality with its excess risk bound having the same order as \(J\sigma^{2}/n\), requiring only the relatively mild assumption of bounded outcome second moment (\(\mathbb{E}[Y^{2}\mid x]\leq\sigma^{2}\)). Recent works (Mourtada (2019), Vaskevicius and Zhivotovskiy (2023)) investigate the potential for the risk of standard least-squares to become unbounded when leverage scores are uneven and correlated with the residual noise of the model. By adjusting the predictions at high-leverage points, which are most likely to lead to an unstable estimator, the Forster-Warmuth estimator mitigates the shortcomings of the least squares estimator and achieves oracle bounds even for unfavorable distributions when least squares estimation fails. In fact, the Forster-Warmuth algorithm leads to the only known exact oracle inequality without imposing any assumptions on the covariates. This is a key strength of the FW-Learner we fully leverage in the context of nonparametric series estimation to obviate imposing unnecessary conditions on the basis functions.
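To make the role of the leverage score concrete, the short sketch below (ours) computes \(h_{n}(x)=x^{\top}(\sum_{i=1}^{n}X_{i}X_{i}^{\top}+xx^{\top})^{-1}x\) exactly as displayed above; the printed comparison simply illustrates that extreme design points have leverage close to one, which is where the Forster-Warmuth adjustment departs most from ordinary least squares (the precise adjustment rule is the one defined earlier in the paper, and is not reproduced here).

```python
import numpy as np

def leverage(X, x):
    """Leverage score h_n(x) = x^T (sum_i X_i X_i^T + x x^T)^{-1} x
    for a design matrix X of shape (n, p) and a query vector x of shape (p,)."""
    A = X.T @ X + np.outer(x, x)
    return float(x @ np.linalg.solve(A, x))

# h_n(x) lies between 0 and 1; values near 1 flag directions of the design
# that are poorly supported by the observed data.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
print(leverage(X, X[0]))                # typical in-sample point: small leverage
print(leverage(X, 25.0 * np.ones(3)))   # extreme query point: leverage close to 1
```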
Another major contribution we make is to propose a general method for counterfactual nonparametric regression via series estimation in settings where the outcome may be missing. Specifically, we generalize the FW-Learner using a generic pseudo-outcome that serves as a substitute for the missing response, and we characterize the extent to which the accuracy of the pseudo-outcome can potentially impact the estimator's ability to match the oracle minimax rate of estimation on the MSE scale. We then provide a generic approach for constructing a pseudo-outcome with the "small bias" property for a large class of counterfactual regression problems, based on a doubly robust influence function of the functional obtained by marginalizing the counterfactual regression in view. This insight provides a constructive solution to the counterfactual regression problem and offers a unified solution to several open nonparametric regression problems in both the missing data and causal inference literatures. The versatility of the approach is demonstrated by considering estimation of a nonparametric regression when the outcome may be missing at random, or missing not at random by leveraging a shadow variable, as well as estimation of the CATE both under standard unconfoundedness conditions and when hidden confounding bias cannot be ruled out on the basis of measured covariates but proxies of unmeasured factors are available that can be leveraged within the proximal causal inference framework. While some of these settings, such as the CATE under unconfoundedness, have been studied extensively, others, such as the CATE under proximal causal inference, have only recently been developed.
Overall, this paper brings together aspects of traditional linear models, nonparametric models and the modern literature on semiparametric theory, with applications in different contexts. This marriage of classical and modern techniques is in a similar spirit to recent frameworks such as Orthogonal Learning (Foster and Syrgkanis, 2019); however, our assumptions and approach appear to be fundamentally different in that, at least for the specific examples considered herein, our assumptions are somewhat weaker yet lead to a form of oracle optimality. We nevertheless believe that both frameworks open the door to many exciting future directions to explore. A future line of investigation might be to extend the estimator using more accurate pseudo-outcomes of the unobserved response based on recent theory on higher order influence functions (Robins et al., 2008, 2017), along the lines of Kennedy et al. (2022), who construct minimax estimators of the CATE under unconfoundedness conditions and weaker smoothness conditions on the outcome and propensity score models, however requiring considerable restrictions on the covariate distribution. Another interesting direction is the potential application of our methods to more general missing data settings, such as monotone or nonmonotone coarsening at random settings (Robins et al., 1994; Laan and Robins, 2003; Tsiatis, 2006), and the corresponding coarsening not at random settings, e.g. Robins et al. (2000), Tchetgen Tchetgen et al. (2018), Malinsky et al. (2022). We hope the current manuscript provides an initial step towards solving this more challenging class of problems and generates both interest and further developments in these fundamental directions.
|
2310.00521 | Minimal special degenerations and duality | This paper includes the classification, in a simple Lie algebra, of the
singularities of Slodowy slices between special nilpotent orbits that are
adjacent in the partial order on nilpotent orbits. The irreducible components
of most singularities are (up to normalization) either a simple surface
singularity or the closure of a minimal special nilpotent orbit in a smaller
rank Lie algebra. Besides those cases, there are some exceptional cases that
arise as certain quotients of the closure of a minimal orbit in types $A_2$ and
$D_n$. We also consider the action on the slice of the fundamental group of the
smaller orbit. With this action, we observe that under Lusztig-Spaltenstein
duality, in most cases, a simple surface singularity is interchanged with the
closure of a minimal special orbit of Langlands dual type (or a cover of it
with action). This empirical observation generalizes an observation of Kraft
and Procesi in type $A_n$, where all nilpotent orbits are special. We also
resolve a conjecture of Lusztig that concerns the intersection cohomology of
slices between special nilpotent orbits. | Daniel Juteau, Paul Levy, Eric Sommers | 2023-09-30T23:09:06Z | http://arxiv.org/abs/2310.00521v1 | # Minimal special degenerations and duality
###### Abstract.
This paper includes the classification, in a simple Lie algebra \(\mathfrak{g}\), of the singularities of Slodowy slices between special nilpotent orbits that are adjacent in the partial order on nilpotent orbits. The irreducible components of most singularities are (up to normalization) either a simple surface singularity or the closure of a minimal special nilpotent orbit in a smaller rank Lie algebra. Besides those cases, there are some exceptional cases that arise as quotients of the closure of a minimal orbit: of type \(D_{n}\) by \(V_{4}\), of type \(A_{2}\) by \(\mathfrak{S}_{2}\), or of type \(D_{4}\) by \(\mathfrak{S}_{4}\). We also consider the action on the slice of the fundamental group of the smaller orbit. With this action, we observe that under Lusztig-Spaltenstein duality, in most cases, a simple surface singularity is interchanged with the closure of a minimal special orbit of Langlands dual type (or a cover of it with action). Lusztig's canonical quotient helps explain when this duality fails. This empirical observation generalizes an observation of Kraft and Procesi in type \(A_{n}\), where all nilpotent orbits are special. We also resolve a conjecture of Lusztig that concerns the intersection cohomology of slices between special nilpotent orbits.
## 1. Introduction
### Minimal degenerations
Let \(G\) be a simple algebraic group over \(\mathbb{C}\) and \(\mathfrak{g}\) its Lie algebra. Let \(\mathcal{N}_{o}:=\mathcal{N}(\mathfrak{g})/G\) be the set of nilpotent orbits in \(\mathfrak{g}\). The partial order on \(\mathcal{N}_{o}\) is defined so that \(\mathcal{O}^{\prime}{<}\mathcal{O}\) whenever \(\mathcal{O}^{\prime}\subsetneq\overline{\mathcal{O}}\) for \(\mathcal{O}^{\prime},\mathcal{O}\in\mathcal{N}_{o}\), where \(\overline{\mathcal{O}}\) is the closure of \(\mathcal{O}\). A pair \(\mathcal{O}^{\prime}{<}\mathcal{O}\) is called a _degeneration_. If \(\mathcal{O}\) and \(\mathcal{O}^{\prime}\) are adjacent in the partial order (that is, there is no orbit strictly between them), then the pair is called a _minimal degeneration_. There are two minimal degenerations at either extreme of the poset \(\mathcal{N}_{o}\): the regular and subregular nilpotent orbits give a minimal degeneration, as does the minimal nilpotent orbit and the zero orbit.
Given a nilpotent element \(e\in\mathcal{N}(\mathfrak{g})\), let \(\mathfrak{s}\subset\mathfrak{g}\) be an \(\mathfrak{sl}_{2}\)-triple \(\{e,h,f\}\) through \(e\). Then \(\mathcal{S}_{e}:=e+\mathfrak{g}^{f}\), where \(\mathfrak{g}^{f}\) is the centralizer of \(f\) in \(\mathfrak{g}\), is called a Slodowy slice. Associated to any degeneration \(\mathcal{O}^{\prime}{<}\mathcal{O}\) is a smooth equivalence class of singularities \(\operatorname{Sing}(\mathcal{O},\mathcal{O}^{\prime})\)[12], which can be represented by the intersection \(\mathcal{S}_{\mathcal{O},e}:=\mathcal{S}_{e}\cap\overline{\mathcal{O}}\), where \(e\in\mathcal{O}^{\prime}\). We call \(\mathcal{S}_{\mathcal{O},e}\) a _Slodowy slice singularity_.
The singularities \(\mathcal{S}_{\mathcal{O},e}\) of minimal degenerations are known in the classical types by [12] and [12] and in the exceptional types by [11] and [11], up to normalization for a few cases in \(E_{7}\) and \(E_{8}\). These results can be summarized as:
* the irreducible components of \(\mathcal{S}_{\mathcal{O},e}\) are pairwise isomorphic;
* if \(\dim(\mathcal{S}_{\mathcal{O},e})=2\), then the normalization of an irreducible component of \(\mathcal{S}_{\mathcal{O},e}\) is isomorphic to \(\mathbb{C}^{2}/\Gamma\) where \(\Gamma\subset\operatorname{SL}_{2}(\mathbb{C})\) is a finite subgroup, possibly trivial. Such a variety is called a _simple surface singularity_ when \(\Gamma\) is non-trivial.
* if \(\dim(\mathcal{S}_{\mathcal{O},e})\geq 4\), then an irreducible component of \(\mathcal{S}_{\mathcal{O},e}\) is isomorphic to the closure of a minimal nilpotent orbit in some simple Lie algebra, or else is one of four exceptional cases, denoted \(m^{\prime}\), \(\tau\), \(\chi\), or \(a_{2}/\mathfrak{S}_{2}\) in [11] and each appearing exactly one time.
### Action on slices
A simple surface singularity \(X=\mathbb{C}^{2}/\Gamma\) corresponds to the Dynkin diagram of a simply-laced Lie algebra (e.g., \(A_{n}\), \(D_{n}\), \(E_{n}\)) either by using the irreducible representations of \(\Gamma\) as done by McKay, or by looking at the exceptional fiber of the minimal resolution of \(X\), which is union of projective lines, whose arrangement yields the Dynkin diagram. Slodowy defined an action on \(X\) by using a normalizing subgroup \(\Gamma^{\prime}\) of \(\Gamma\) in \(\mathrm{SL}_{2}(\mathbb{C})\)[16, III.6]. Looking at the image of the action of \(\Gamma^{\prime}\) on the Dynkin diagram, he introduced the notation \(B_{n}\) (resp. \(C_{n}\), \(F_{4}\), \(G_{2}\)) to denote a simple surface singularity \(A_{2n-1}\) (resp. \(D_{n+1}\), \(E_{6}\), \(D_{4}\)) singularity with an "outer" action of \(\mathfrak{S}_{2}\) (resp. \(\mathfrak{S}_{2}\), \(\mathfrak{S}_{2}\), \(\mathfrak{S}_{3}\)). Here, "outer" refers to the fact that on the corresponding Lie algebra these come from outer automorphisms. It is also possible to do the same thing for the simple surface singularity \(A_{2n}\), where we used the notation \(A_{2n}^{+}\) in [14], when the outer action is included. Note, however, that this arises from a cyclic group of order four acting on \(X\).
The centralizer \(G^{e}\) of \(e\) in \(G\) has a reductive part \(C(\mathfrak{s})\), given by the centralizer of \(\mathfrak{s}\) in \(G\). Then \(C(\mathfrak{s})\) acts on \(\mathcal{S}_{\mathcal{O},e}\) and we are interested in the image of \(C(\mathfrak{s})\) in \(\mathrm{Aut}(\mathcal{S}_{\mathcal{O},e})\). Slodowy [16, IV.8] showed for the regular/subregular minimal degeneration, that \(\mathcal{S}_{\mathcal{O},e}\) with the action induced from \(C(\mathfrak{s})\) is exactly the simple surface singularity denoted by the type of \(\mathfrak{g}\). This explains his choice of notation.
Let \(a_{n},b_{n},\dots,g_{2}\) denote the closure of the minimal nilpotent orbit according to the type of \(\mathfrak{g}\). In [14], we introduced the notation \(a_{n}^{+}\), \(d_{n}^{+}\), \(e_{6}^{+}\), \(d_{4}^{++}\) to denote these varieties with the outer action of \(\mathfrak{S}_{2}\), \(\mathfrak{S}_{2}\), \(\mathfrak{S}_{2}\), \(\mathfrak{S}_{3}\), respectively, coming from the outer automorphisms of \(\mathfrak{g}\). In _op. cit._, using these two notions of action, we studied the action of \(C(\mathfrak{s})\) on \(\mathcal{S}_{\mathcal{O},e}\) for all minimal degenerations, where we found that \(C(\mathfrak{s})\) acts transitively on the irreducible components of \(\mathcal{S}_{\mathcal{O},e}\) and in some sense acts as non-trivially as possible on \(\mathcal{S}_{\mathcal{O},e}\) given the size of the component group \(A(e):=C(\mathfrak{s})/C^{\circ}(\mathfrak{s})\). In this paper one of our results is to repeat this calculation for the classical groups (see SS5).
### Minimal Special Degenerations
Lusztig defined the notion of special representations of the Weyl group \(W\) of \(G\)[17], which led him to define the special nilpotent orbits, denoted \(\mathcal{N}_{o}^{sp}\), via the Springer correspondence. The regular, subregular, and zero nilpotent orbits are always special, but the minimal nilpotent orbit is only special when \(\mathfrak{g}\) is simply-laced (types \(A_{n}\), \(D_{n}\), or \(E_{n}\)). In the other types, there is always a unique minimal (nonzero) special nilpotent orbit. We denote the closure of the minimal special nilpotent orbits (which are not minimal nilpotent) by \(b_{n}^{sp},c_{n}^{sp},f_{4}^{sp}\), and \(g_{2}^{sp}\), according to the type of \(\mathfrak{g}\).
In this paper, we classify the Slodowy slice singularities \(\mathcal{S}_{\mathcal{O},e}\) when \(\mathcal{O}\) and \(\mathcal{O}^{\prime}\) are adjacent special orbits, i.e., there is no special orbit strictly between them. We call these _minimal special degenerations_. Since \(\dim(\mathcal{S}_{\mathcal{O},e})=2\) implies the degeneration is already a minimal degeneration, we are left only to classify the cases where \(\dim(\mathcal{S}_{\mathcal{O},e})\geq 4\). Our main result on the classification of **minimal special degenerations** is summarized as:
* the irreducible components of \(\mathcal{S}_{\mathcal{O},e}\) are pairwise isomorphic;
* if \(\dim(\mathcal{S}_{\mathcal{O},e})=2\), then the normalization of an irreducible component of \(\mathcal{S}_{\mathcal{O},e}\) is isomorphic to \(\mathbb{C}^{2}/\Gamma\) where \(\Gamma\subset\mathrm{SL}_{2}(\mathbb{C})\) is a finite, non-trivial subgroup.
* if \(\dim(\mathcal{S}_{\mathcal{O},e})\geq 4\), then an irreducible component of \(\mathcal{S}_{\mathcal{O},e}\) is isomorphic to the closure of a minimal special nilpotent orbit in some simple Lie algebra, or else is isomorphic to one of the following quotients of the closure of a minimal (special) nilpotent orbit: \(a_{2}/\mathfrak{S}_{2}\), \(d_{n+1}/V_{4}\) or \(d_{4}/\mathfrak{S}_{4}\).
The singularities \(a_{2}/\mathfrak{S}_{2}\) and \(d_{4}/\mathfrak{S}_{4}\) arose in [14] and along with \(d_{n+1}/V_{4}\), they also appear in the physics literature [14].
In the case where \(\dim(\mathcal{S}_{\mathcal{O},e})\geq 4\), the singularities of \(\mathcal{S}_{\mathcal{O},e}\) are mostly controlled by the simple factors of \(\mathfrak{c}(\mathfrak{s})\) (see Corollary 4.12 and the remarks after it), just as occurs for most of the minimal degenerations of dimension four or more.
For dimension two, there is a single slice where one of its irreducible components is known not to be normal, namely the \(\mu\) singularity from [10], which occurs once in \(E_{8}\) (it is irreducible). We expect the other components of slices of dimension two all to be normal in the case of minimal special degenerations, unlike the case of minimal degenerations. The components of slices of dimension at least four are all known to be normal.
The irreducible minimal special degenerations in the classical types \(B\), \(C\), \(D\) are listed in Tables 1 and 2, in analogy with the classification of Kraft and Procesi for minimal degenerations [11, Table 1]. The minimal special degenerations of codimension two are already minimal degenerations and so are contained in [11], except for the action of \(A(e)\). The notation \([2B_{n}]^{+}\) means that the image of \(C(\mathfrak{s})\) acts by a Klein 4-group \(V_{4}\) on \(\mathcal{S}_{\mathcal{O},e}\), where one generator switches the two components of the exceptional fiber and a second generator preserves both components of type \(A_{2n-1}\), but acts by an outer automorphism on each one. The table assumes that \(G\) is the orthogonal group O(2n) for type \(D_{n}\), hence making use of the outer \(\mathfrak{S}_{2}\)-action of \(D_{n}\).
In type \(D_{n}\) without this outer action, we would get these same singularities but without some or all of the action on \(\mathcal{S}_{\mathcal{O},e}\). Specifically, \(D_{k}\) and \(d_{k}\) arise, without the \(\mathfrak{S}_{2}\)-action. The singularity \([2B_{k}]^{+}\) will become \(B_{k}\) for the minimal degenerations where \(\mathcal{O}\) is a very even orbit. We discuss this further in SS8.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Name of singularity & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) \\ Lie algebra & \(\mathfrak{sp}_{2}\) & \(\mathfrak{sp}_{2n}\) & \(\mathfrak{so}_{2n+1}\) & \(\mathfrak{sp}_{4n+2}\) & \(\mathfrak{so}_{4n}\) \\ & \(n\geq 2\) & \(n\geq 1\) & \(n\geq 1\) & \(n\geq 1\) \\ \(l\) rows removed & \(l\equiv\epsilon^{\prime}\) & any & \(l\not\equiv\epsilon^{\prime}\) & \(l\equiv\epsilon^{\prime}\) & \(l\equiv\epsilon^{\prime}\) \\ \(s\) columns removed & \(s\not\equiv\epsilon\) & \(s\equiv\epsilon\) & \(s\equiv\epsilon\) & \(s\not\equiv\epsilon\) & \(s\equiv\epsilon\) \\ \hline \(\lambda\) & \([2]\) & \([2n]\) & \([2n{+}1]\) & \([2n{+}1,2n{+}1]\) & \([2n,2n]\) \\ \(\mu\) & \([1,1]\) & \([2n{-}2,2]\) & \([2n{-}1,1,1]\) & \([2n,2n,2]\) & \([2n{-}1,2n{-}1,1,1]\) \\ \hline Singularity & \(C_{1}\) & \(C_{n}\) & \(B_{n}\) & \(B_{n}\) & \([2B_{n}]^{+}\) \\ \hline \end{tabular}
\end{table}
Table 1. Minimal special degenerations of codimension two
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Name of singularity & \(g_{sp}\) & \(h\) & \(f_{sp}^{1}\) & \(f_{sp}^{2}\) & \(h_{sp}\) \\ Lie algebra & \(\mathfrak{sp}_{2n}\) & \(\mathfrak{so}_{2n}\) & \(\mathfrak{so}_{2n+1}\) & \(\mathfrak{sp}_{4n+2}\) & \(\mathfrak{sp}_{4n}\) \\ & \(n\geq 2\) & \(n\geq 3\) & \(n\geq 2\) & \(n\geq 2\) & \(n\geq 2\) \\ \(l\) rows removed & \(l\equiv\epsilon^{\prime}\) & \(l\equiv\epsilon^{\prime}\) & \(l\not\equiv\epsilon^{\prime}\) & \(l\equiv\epsilon^{\prime}\) & \(l\not\equiv\epsilon^{\prime}\) \\ \(s\) columns removed & \(s\not\equiv\epsilon\) & \(s\equiv\epsilon\) & \(s\equiv\epsilon\) & \(s\not\equiv\epsilon\) & \(s\not\equiv\epsilon\) \\ \hline \(\lambda\) & \([2^{2},1^{2n-4}]\) & \([2^{2},1^{2n-4}]\) & \([3,1^{2n-2}]\) & \([3^{2},2^{2n-2}]\) & \([4,2^{2n-2}]\) \\ \(\mu\) & \([1^{2n}]\) & \([1^{2n}]\) & \([1^{2n+1}]\) & \([2^{2n+1}]\) & \([2^{2n}]\) \\ codimension & \(4n{-}2\) & \(4n{-}6\) & \(4n{-}2\) & \(4n{-}2\) & \(4n{-}2\) \\ Singularity & \(c_{n}^{sp}\) & \(d_{n}^{+}\) & \(b_{n}^{sp}\) & \(b_{n}^{sp}\) & \(d_{n+1}/V_{4}\) \\ \hline \end{tabular}
\end{table}
Table 2. Minimal Special Degenerations of codimension 4 or more
_Remark 1.1_.: The \(h\) singularity for \(n=2\) is \(d_{2}^{+}\), which coincides with the \(e\) singularity for \(n=1\). We use \(d_{2}^{+}\) in the graphs for the classical groups since the action of \(A(e)\) for the \(e\)-singularity with \(n=1\) is actually only by \(\mathfrak{S}_{2}\).
The proof that these tables give the classification of minimal special degenerations is given in SS3. In SS4, we establish that the singularities in classical types are as given in Table 2, and in SS4.4 we complete the story in the exceptional groups. In SS5.1 and SS5.7, we establish the \(A(e)\)-action both for minimal special degenerations and minimal degenerations. The graphs at the end of the paper give the results for the exceptional groups and several examples in the classical groups SS11.
### Duality
Using the Springer correspondence, Lusztig defined two maps, which are order-reversing involutions: \(d:\mathcal{N}_{o}^{sp}\to\mathcal{N}_{o}^{sp}\) and \(d_{LS}:\mathcal{N}_{o}^{sp}\to{}^{L}\mathcal{N}_{o}^{sp}\) (see [10]).
For \(G=GL_{n}\) all nilpotent orbits are special and Kraft and Procesi [11] computed the singularity type of \(\mathcal{S}_{\mathcal{O},e}\) for minimal degenerations (hence, minimal special degenerations). The singularity is either of type \(A_{k}\) or \(a_{k}\) for some \(k\). Kraft and Procesi observed that if the singularity of \((\mathcal{O},\mathcal{O}^{\prime})\) is of type \(A_{k}\) then the singularity of \((d(\mathcal{O}^{\prime}),d(\mathcal{O}))\) is of type \(a_{k}\). In the case of \(GL_{n}\), each orbit is given by a partition and the dualities \(d=d_{LS}\) are given by taking the transpose partition.
Our duality is a generalization of the Kraft-Procesi observation, but with some wrinkles. It says that typically an irreducible component of a simple surface singularity (with \(A(e)\)-action) is interchanged with the minimal special orbit of Langlands dual type (after taking the quotient of the \(A(e)\)-action). More explicitly, \(d_{LS}\) exchanges the following singularities.
\[A_{n} \leftrightarrow a_{n}\] \[B_{n} \leftrightarrow a_{2n-1}^{+}\text{ or }c_{n}^{sp}\] \[C_{n} \leftrightarrow d_{n+1}^{+}\text{ or }b_{n}^{sp}\] \[D_{n} \leftrightarrow d_{n}\] \[G_{2} \leftrightarrow d_{4}^{++}\text{ or }g_{2}^{sp}\] \[F_{4} \leftrightarrow e_{6}^{+}\text{ or }f_{4}^{sp}\] \[E_{n} \leftrightarrow e_{n}\]
The only interchange of dimension two with dimension two is when both slices have irreducible components of type \(A_{1}\). The fact that, for each dual pair of orbits, one of the two slices has dimension two was observed by Lusztig [14]. For the cases with two options on the right, notice that the first option arises as a cover of the second (see e.g. [12]). Indeed we expect this cover to occur intrinsically since in all these cases \(\mathcal{O}\) itself admits such a cover. We could also alternatively say that the second option is a quotient of the first by the \(A(e)\)-action.
There are three families of situations that do not obey this relationship.
1. Sometimes \[C_{n+1}\leftrightarrow c_{n}^{sp}\text{ or }a_{2n-1}^{+}\]
2. When \(d_{n+1}/V_{4}\) or \(d_{4}/\mathfrak{S}_{4}\) occurs in a dual pair of orbits, we always have \[C_{n} \leftrightarrow d_{n+1}/V_{4}\] \[G_{2} \leftrightarrow d_{4}/S_{4}\]
3. For the three exceptional special orbits in \(E_{7}\) and \(E_{8}\), \[A_{2}^{+} \leftrightarrow a_{2}^{+}\text{ or }a_{2}/S_{2}\] \[A_{4}^{+} \leftrightarrow a_{4}^{+}\]
In the first case, Lusztig's canonical quotient of \(A(e)\) plays a role. Namely, the kernel of the map from \(A(e)\) to the canonical quotient \(\bar{A}(e)\) acts by an outer action on \(\mathcal{S}_{\mathcal{O},e}\). We denote this property by adding a \(*\) to the singularity, as in \(C_{n+1}^{*}\). This phenomenon is described in SS6. In the second case, the canonical quotient also has an impact; see again SS6. The third case arises because the only representative of an order two element in \(A(e)\) is an order 4 element in \(C(\mathfrak{s})\) (see [10]).
### Full automorphism group of \(\mathfrak{g}\)
We also consider, building on work of Slodowy, the case where \(G=\operatorname{Aut}(\mathfrak{g})\). For \(A_{n},E_{6}\), and \(D_{4}\), we leave this for SS8. We find that in type \(A_{n}\), all singularities acquire the expected outer action and thus, for example, \(A_{k}^{+}\leftrightarrow a_{k}^{+}\) for the full automorphism group of \(\mathfrak{g}\).
To get more uniform statements for type \(D_{n}\), we use \(G=\operatorname{O}(2\text{n})\) at the beginning and then explain what changes when \(G=\operatorname{SO}(2\text{n})\) in SS5.6 and SS5.8.
### Three quartets of singularities in classical types \(B\), \(C\), \(D\)
The duality of the last section has a richer structure in types \(B\), \(C\), \(D\). The internal duality \(d\) for \(B_{n}\) and \(C_{n}\), together with \(d_{LS}\) and the composition \(f:=d\circ d_{LS}\), yield 4 related special orbits (see Figure 1). Applying these 3 maps to a minimal special degeneration, we find there are only three possible outputs for the four singularities that arise (see Figure 2).
There is also a story that involves \(D_{n}\). As mentioned above, we work with \(G=\operatorname{O}(2\text{n})\). Then there is a subset of the nilpotent orbits in type \(C_{n}\) that is a slight modification of the special orbits, by changing the parity condition in the definition. We call these the alternative special nilpotent orbits in type \(C\) and denote them by \(\mathcal{N}_{o}^{C,asp}\) in SS2.2. Its minimal element is the minimal orbit in type \(C_{n}\) of dimension \(2n\). There is a bijection between \(\mathcal{N}_{o}^{D,sp}\) and \(\mathcal{N}_{o}^{C,asp}\), also denoted \(f\), that preserves the partial order and codimensions (more precisely, it sends an orbit of dimension \(N\) to one of dimension \(N+2n\)). This bijection, together with \(d_{LS}\) and \(d=f\circ d_{LS}\), also gives rise to the same three quartets of singularities as in Figure 2. An example is given in Figure 3. This is also the first case where all three quartets arise.
Figure 1. Dualities
### Lusztig's Weyl group conjecture
In [12, SS0.4], Lusztig attached a Weyl group \(W^{\prime}\) to each minimal special degeneration. He then made a conjecture relating the exponents of \(W^{\prime}\) to what amounts to the \(C(\mathfrak{s})\)-invariant part of the intersection homology \(\operatorname{IH}^{*}(\mathcal{S}_{\mathcal{O},e})\) when \(\dim(\mathcal{S}_{\mathcal{O},e})\geq 4\). In SS9 we prove his conjecture, which is open in types \(B\), \(C\), \(D\), although we have to modify slightly the \(W^{\prime}\) that he attaches to those minimal degenerations \((\lambda,\mu)\) in type \(D_{n}\) where there is a single odd part in \(\mu\).
### Acknowledgments
This work, major parts of which were sketched in 2012, is a continuation of the papers [13], [14] that were jointly authored with Baohua Fu. We thank him for his vital contribution to the project from its inception.
## 2. Background material in the classical group case
### Notation on partitions
In the classical groups it will be helpful to have a description of the elements of \(\mathcal{N}_{o}\) and the map \(d\) in terms of partitions. We introduce that notation following the references [13], [12], [2].
Let \(\mathcal{P}(N)\) denote the set of partitions of \(N\). For \(\lambda\in\mathcal{P}(N)\), we write \(\lambda=[\lambda_{1},\dots,\lambda_{k}]\), where \(\lambda_{1}\geq\dots\geq\lambda_{k}>0\) and \(|\lambda|:=\sum\lambda_{j}\) is equal to \(N\). Define
\[m_{\lambda}(s)=\#\{j\ |\ \lambda_{j}=s\},\]
the multiplicity of the part \(s\) in \(\lambda\). We use \(m(s)\) if the partition is clear. Sometimes we write \([\dots,s^{m(s)},\dots]\) instead of
\[[\dots,\overbrace{s,s,\dots,s}^{m(s)},\dots]\]
for a part \(s\) in \(\lambda\). The set of nilpotent orbits \(\mathcal{N}_{o}\) in \(\mathfrak{g}=\mathfrak{sl}_{n}\) under the adjoint action of \(G=SL_{n}\) is in bijection with \(\mathcal{P}(n)\).
Figure 2. The three quartets of possible singularities in classical groups
For \(\epsilon\in\{0,1\}\), let \(V=V_{\epsilon}\) be a vector space, of dimension \(N\), with a nondegenerate bilinear form satisfying \(\langle v,v^{\prime}\rangle=(-1)^{\epsilon}\langle v^{\prime},v\rangle\) for \(v,v^{\prime}\in V\). Let \(\mathfrak{g}(V)\) be the Lie algebra associated to the form on \(V\), so that \(\mathfrak{g}(V)=\mathfrak{so}_{N}\) when \(\epsilon=0\) and \(\mathfrak{g}(V)=\mathfrak{sp}_{N}\) when \(\epsilon=1\) and \(N\) is even.
Let
\[\mathcal{P}_{\epsilon}(N):=\{\lambda\in\mathcal{P}(N)\ |\ m(s)\equiv 0\ \text{whenever}\ s\equiv\epsilon\},\]
where all congruences are modulo \(2\). Then the set of nilpotent orbits \(\mathcal{N}_{o}\) in \(\mathfrak{g}(V)\) under the group \(G=G(V)\) preserving the form is given by \(\mathcal{P}_{1}(2n)\) when \(\mathfrak{g}\) is of type \(C_{n}\); by \(\mathcal{P}_{0}(2n+1)\) when \(\mathfrak{g}\) is of type \(B_{n}\); and by \(\mathcal{P}_{0}(2n)\) when \(\mathfrak{g}\) is of type \(D_{n}\), except that those partitions with all even parts correspond to two orbits in \(\mathcal{N}_{o}\) (called the very even orbits, where there are two orbits interchanged by the orthogonal group). We will also refer to \(\mathcal{P}_{1}(2n)\) as \(\mathcal{P}_{C}(2n)\); to \(\mathcal{P}_{0}(2n+1)\) as \(\mathcal{P}_{B}(2n+1)\); and to \(\mathcal{P}_{0}(2n)\) as \(\mathcal{P}_{D}(2n)\). We sometimes call a partition \(\lambda\in\mathcal{P}_{\epsilon}(N)\) an \(\epsilon\)-partition. For \(\lambda\in\mathcal{P}(N)\) or \(\lambda\in\mathcal{P}_{\epsilon}(N)\), we denote by \(\mathcal{O}_{\lambda}\) the corresponding nilpotent orbit in \(\mathfrak{g}\).
Define the height of a part \(s\) in \(\lambda\) to be the number
\[h_{\lambda}(s):=\#\{\lambda_{j}\,|\,\lambda_{j}\geq s\}.\]
We write \(h(s)\) if the partition is clear. In terms of Young diagrams, the position \((s,h(s))\) is a corner of the diagram, writing each part \(\lambda_{i}\) as the boxes with upper right corner \((1,i),\dots(\lambda_{i},i)\). In other words, we have \(\lambda_{h(s)}=s\) and \(\lambda_{h(s)+1}<\lambda_{h(s)}\).
The dual or transpose partition of \(\lambda\), denoted \(\lambda^{*}\), is defined by
\[(\lambda^{*})_{i}=\#\{j\ |\ \lambda_{j}\geq i\}.\]
If we set \(j=h(s)\), then \(\lambda^{*}\) is the partition with part \(h(s)\) occurring \(\lambda_{j}-\lambda_{j+1}=s-\lambda_{j+1}\) times.
The set \(\mathcal{P}(N)\) is partially ordered by the dominance order on partitions, where \(\mu\preceq\lambda\) whenever \(\sum_{i=1}^{k}\mu_{i}\leq\sum_{i=1}^{k}\lambda_{i}\) for all \(k\). This induces a partial ordering on the sets \(\mathcal{P}_{C}(2n)\), \(\mathcal{P}_{B}(2n+1)\), and \(\mathcal{P}_{D}(2n)\) and these partial orderings coincide with the partial ordering on nilpotent orbits given by the closure ordering. We will refer to nilpotent orbits and partitions interchangeably in the classical groups (with the caveat mentioned earlier for the very even orbits in type \(D\)).
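These operations are elementary to implement; the following short Python sketch (ours, purely illustrative) encodes the transpose \(\lambda^{*}\) and the dominance order exactly as just defined, and is reused in the later sketches of this section.

```python
def transpose(lam):
    """Transpose (dual) partition: (lam*)_i = #{j : lam_j >= i}."""
    lam = sorted(lam, reverse=True)
    return [sum(1 for part in lam if part >= i) for i in range(1, lam[0] + 1)] if lam else []

def dominated_by(mu, lam):
    """True if mu <= lam in the dominance order: every partial sum of mu
    is at most the corresponding partial sum of lam."""
    k = max(len(lam), len(mu))
    lam = sorted(lam, reverse=True) + [0] * (k - len(lam))
    mu = sorted(mu, reverse=True) + [0] * (k - len(mu))
    s_lam = s_mu = 0
    for a, b in zip(lam, mu):
        s_lam += a
        s_mu += b
        if s_mu > s_lam:
            return False
    return True

# [3,2,1] is its own transpose, and [2,2,2] <= [3,2,1] in the dominance order.
print(transpose([3, 2, 1]), dominated_by([2, 2, 2], [3, 2, 1]))
```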
Let \(X=B\), \(C\), or \(D\). Let \(N\) be even (resp. odd) if \(X\) is of type \(C\) or \(D\) (resp. \(B\)). The \(X\)-collapse of \(\lambda\in\mathcal{P}(N)\) is the partition \(\lambda_{X}\in\mathcal{P}_{X}(N)\) satisfying \(\lambda_{X}\preceq\lambda\) and such that if \(\mu\in\mathcal{P}_{X}(N)\) and \(\mu\preceq\lambda\), then \(\mu\preceq\lambda_{X}\). The \(X\)-collapse always exists and is unique.
### Special partitions and the duality maps
The special nilpotent orbits were defined by Lusztig [10]. Denote by \(\mathcal{N}_{o}^{sp}\) the special nilpotent orbits in \(\mathfrak{g}=\mathfrak{g}(V)\). All nilpotent orbits are special in type \(A\). Here we describe the special nilpotent orbits in types \(B\), \(C\), and \(D\), as well as introduce a second subset of \(\mathcal{N}_{o}\), which behaves like special orbits. We define four sets of partitions, with \(\epsilon^{\prime}\in\{0,1\}\), as follows
\[\mathcal{P}_{\epsilon,\epsilon^{\prime}}(N):=\{\lambda\in\mathcal{P}_{\epsilon }(N)\ |\ h(s)\equiv\epsilon^{\prime}\ \text{whenever}\ s\equiv\epsilon\}. \tag{1}\]
Because of the \(s=0\) case, for \(N\) odd, the set is nonempty only when \((\epsilon,\epsilon^{\prime})=(0,1)\). For \(N\) even, the set is nonempty for \((\epsilon,\epsilon^{\prime})\in\{(0,0),(1,0),(1,1)\}\). Then the partitions for the special orbits in type \(B_{n},C_{n},D_{n}\) are given by \(\mathcal{P}_{B}^{sp}(2n+1):=\mathcal{P}_{0,1}(2n+1)\), \(\mathcal{P}_{C}^{sp}(2n):=\mathcal{P}_{1,0}(2n)\), and \(\mathcal{P}_{D}^{sp}(2n):=\mathcal{P}_{0,0}(2n)\). The fourth case leads to a second subset of \(\mathcal{P}_{C}(2n)\), which is \(\mathcal{P}_{C}^{asp}(2n):=\mathcal{P}_{1,1}(2n)\). We refer to these nilpotent orbits in type \(C\) as the alternative special nilpotent orbits.
Each of \(\mathcal{P}_{B}^{sp}(2n+1)\), \(\mathcal{P}_{C}^{sp}(2n)\), \(\mathcal{P}_{D}^{sp}(2n)\), and \(\mathcal{P}_{C}^{asp}(2n)\) inherits the partial order from the set of all partitions and this agrees with the one coming from inclusion of closures of the corresponding nilpotent orbits.
The sets \(\mathcal{P}_{B}^{sp}(2n\!+\!1)\) and \(\mathcal{P}_{C}^{sp}(2n)\) are in bijection (see [10], [11]), and also the sets \(\mathcal{P}_{D}^{sp}(2n)\) and \(\mathcal{P}_{C}^{asp}(2n)\) are in bijection, as we now describe.
Given \(\lambda=[\lambda_{1}\geq\cdots\geq\lambda_{k-1}\geq\lambda_{k}>0]\), let
\[\lambda^{-}=[\lambda_{1}\geq\cdots\geq\lambda_{k-1}\geq\lambda_{k}-1]\]
and
\[\lambda^{+}=[\lambda_{1}+1\geq\cdots\geq\lambda_{k-1}\geq\lambda_{k}].\]
Then the bijections are given as follows, using \(f\) for each map:
\[\begin{split}& f_{BC}:\mathcal{P}_{B}^{sp}(2n+1)\to\mathcal{P}_{C}^{ sp}(2n)\text{ given by }f(\lambda)=(\lambda^{-})_{C}\\ & f_{CB}:\mathcal{P}_{C}^{sp}(2n)\to\mathcal{P}_{B}^{sp}(2n+1) \text{ given by }f(\lambda)=(\lambda^{+})_{B}\\ & f_{DC}:\mathcal{P}_{D}^{sp}(2n)\to\mathcal{P}_{C}^{asp}(2n) \text{ given by }f(\lambda)=((\lambda^{+})^{-})_{C}\\ & f_{CD}:\mathcal{P}_{C}^{asp}(2n)\to\mathcal{P}_{D}^{sp}(2n) \text{ given by }f(\lambda)=\lambda_{D}\end{split} \tag{2}\]
Note that in general \(f\) maps \(\mathcal{P}_{\epsilon,\epsilon^{\prime}}\) to \(\mathcal{P}_{1-\epsilon,1-\epsilon^{\prime}}\).
Each of these maps respects the partial order. The first two maps are dimension preserving (and codimension preserving). The second two maps are codimension preserving (as we shall see). More precisely, the \(f_{DC}\) map sends an orbit of dimension \(N\) to one of dimension \(N+2n\). The shift is because the minimal orbit in \(C_{n}\) is the minimal element in \(\mathcal{P}_{C}^{asp}(2n)\).
Write \(d(\lambda)\) for \(\lambda^{*}\). It is known (or easy to show) that \(d\) determines bijections between the following sets:
\[\begin{split}\mathcal{P}_{B}^{sp}(2n\!+\!1)\stackrel{{ \mathrm{d}}}{{\longrightarrow}}\mathcal{P}_{B}^{sp}(2n\!+\!1)\\ \mathcal{P}_{C}^{sp}(2n)\stackrel{{\mathrm{d}}}{{ \longrightarrow}}\mathcal{P}_{C}^{sp}(2n)\\ \mathcal{P}_{D}^{sp}(2n)\stackrel{{\mathrm{d}}}{{ \longleftarrow}}\mathcal{P}_{C}^{asp}(2n)\end{split} \tag{3}\]
It is order-reversing since this holds for all partitions. We refer to \(d\) as the _internal duality_ or just _transpose_.
It is known that \(d\circ f=f\circ d\) and the duality \(d_{LS}\) of Lusztig-Spaltenstein is given by \(d_{LS}=d\circ f=f\circ d\). We have squares relating the three kinds of maps between the orbits (or their corresponding partitions) as shown in Figure 1.
### Explicit description of \(f\) maps
We now describe more specifically how the \(f\) maps work.
Let \(X\) be one of the types \(B\), \(C\), \(D\). Let \(\lambda\in\mathcal{P}(N)\) with \(N\) even for type \(C\) and \(D\) and odd for type \(B\). We want to find \(\lambda_{X}\). Set \(\epsilon=\epsilon_{X}\). List the parts \(s\) in \(\lambda\) with \(s\equiv\epsilon\) and \(m(s)\) odd as \(a_{1}>a_{2}>\cdots>a_{2n}\ \geq 0\). For \(X=C\), these are the odd parts, so there is an even number of them. But for \(X=B\) or \(X=D\), since \(\epsilon=0\), we will add a single part equal to \(0\), if necessary, so that there is an even number of even parts with odd multiplicity. Next,
between \(a:=a_{2i-1}\) and \(c:=a_{2i}\), list the parts \(s\equiv\epsilon\) as \(b_{1}>b_{2}>\dots>b_{j}\), which necessarily have \(m(b_{i})\) even. Ignoring the parts not congruent to \(\epsilon\), then \(\lambda\) will look locally like
\[a^{m(a)},b_{1}^{m(b_{1})},\dots,b_{j}^{m(b_{j})},c^{m(c)}.\]
Then under the collapse of \(\lambda\) to \(\lambda_{X}\), these values will change to
\[a^{m(a)-1},a-1,b_{1}+1,b_{1}^{m(b_{1})-2},b_{1}-1,\dots,b_{j}+1,b_{j}^{m(b_{j} )-2},b_{j}-1,c+1,c^{m(c)-1}\]
so that the multiplicities of the parts congruent to \(\epsilon\) are now even, as required to be in \(\mathcal{P}_{X}(N)\). The other parts of \(\lambda\) are unaffected under the collapse. As a result of this rule, there is a formula for the collapse for a part \(s\) based on its height \(h(s)\) and its multiplicity \(m(s)\).
**Lemma 2.1**.: _Let \(X\) be \(B,C,\) or \(D\) and \(\epsilon=\epsilon_{X}\). Let \(\lambda\in\mathcal{P}(N)\) with \(N\) as above. Assume \(m(s)\) is even if \(s\not\equiv\epsilon\). Let \(s\equiv\epsilon\) be a part in \(\lambda\). Then \([s^{m(s)}]\) in \(\lambda\) changes to the following in \(\lambda_{X}\):_
\[[s^{m(s)-1},s-1] \text{if }h(s)\equiv 1,m(s)\equiv 1\] \[[s+1,s^{m(s)-1}] \text{if }h(s)\equiv 0,m(s)\equiv 1\] \[[s+1,s^{m(s)-2},s-1] \text{if }h(s)\equiv 1,m(s)\equiv 0\] \[[s^{m(s)}] \text{if }h(s)\equiv 0,m(s)\equiv 0\]
Proof.: Since \(h(s)=\sum_{j\geq s}m(j)\), it is clear that \(h(s)\equiv\#\{m(j)\ |\ m(j)\text{ is odd and }j\geq s\}\). Since any part \(s\) with \(s\not\equiv\epsilon\) has \(m(s)\) even, it follows that
\[h(s)\equiv\#\{m(j)\ |\ m(j)\equiv 1,j\geq s,\text{ and }j\equiv\epsilon\}.\]
So the four conditions in the lemma specify whether the part \(s\) plays the role of some \(a_{2i-1}\), \(a_{2i}\), \(b_{k}\), or a part between some \(a_{2i}\) and \(a_{2i+1}\) and hence unaffected by the collapse.
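The collapse rule described above translates directly into code; the following Python sketch (ours, purely illustrative) is a literal transcription of it, with \(\epsilon=1\) for the \(C\)-collapse and \(\epsilon=0\) for the \(B\)- and \(D\)-collapses.

```python
from collections import Counter

def collapse(lam, eps):
    """X-collapse of lam: the largest partition below lam in the dominance order
    all of whose parts congruent to eps (mod 2) have even multiplicity."""
    mult = Counter(lam)
    # parts s congruent to eps with odd multiplicity, in decreasing order;
    # pad with a virtual part 0 if needed so they pair up as a_1 > ... > a_{2n}
    anchors = [s for s in sorted(mult, reverse=True) if s % 2 == eps and mult[s] % 2 == 1]
    if len(anchors) % 2 == 1:
        anchors.append(0)
        mult[0] += 1
    for i in range(0, len(anchors), 2):
        a, c = anchors[i], anchors[i + 1]
        mult[a] -= 1; mult[a - 1] += 1      # a^m -> a^(m-1), a-1
        mult[c] -= 1; mult[c + 1] += 1      # c^m -> c+1, c^(m-1)
        for b in [s for s in mult if s % 2 == eps and c < s < a and mult[s] > 0]:
            mult[b] -= 2; mult[b + 1] += 1; mult[b - 1] += 1   # b^m -> b+1, b^(m-2), b-1
    return [s for s in sorted(mult, reverse=True) if s > 0 for _ in range(mult[s])]

# Examples: the C-collapse of [3,2,1] is [2,2,2]; the B-collapse of [4,3] is [3,3,1].
print(collapse([3, 2, 1], 1), collapse([4, 3], 0))
```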
Let \(X\) be one of the types \(B,C,D\) or \(C^{\prime}\). Here \(C^{\prime}\) refers to the alternative special setting. Now we can say what happens under the \(f\) maps passing from \(X\) to type \(f(X)\).
**Lemma 2.2**.: _Let \(\lambda\in\mathcal{P}_{X}^{sp}(N)\). Let \(s\not\equiv\epsilon_{X}\) be a part in \(\lambda\). Then \(f(\lambda)\) is computed by replacing each occurrence of \([s^{m(s)}]\) by_
\[[s^{m(s)-1},s-1]\quad\text{if }h(s)\equiv\epsilon_{X}^{\prime},\,m(s)\equiv 1 \tag{4}\] \[[s+1,s^{m(s)-1}]\quad\text{if }h(s)\not\equiv\epsilon_{X}^{\prime},\,m(s)\equiv 1 \tag{5}\] \[[s+1,s^{m(s)-2},s-1]\quad\text{if }h(s)\equiv\epsilon_{X}^{\prime},\,m(s)\equiv 0 \tag{6}\] \[[s^{m(s)}]\quad\text{if }h(s)\not\equiv\epsilon_{X}^{\prime},\,m(s)\equiv 0 \tag{7}\]
Proof.: First, if \(s\not\equiv\epsilon_{X}\), then \(s\equiv\epsilon_{f(X)}\). For \(f_{BC}\) and \(f_{CD}\) the map is just the ordinary collapse (except for the smallest two parts in type \(B\)). In these cases, \(\epsilon_{X}^{\prime}=1\) and we are in the situation of the previous lemma when performing the collapse in type \(f(X)\). In type \(B\), there are a couple of cases to check that the effect of \(\lambda_{k}\) being replaced by \(\lambda_{k}-1\) is consistent with the above cases.
On the other hand, for \(f_{DC}\) and \(f_{CB}\) we have \(\epsilon_{X}^{\prime}=0\). For the \(f\) map, we first increase \(\lambda_{1}\) by \(1\) and then perform the collapse for type \(f(X)\). This \(1\), under the collapse, moves down to the first part \(x\) with \(x\not\equiv\epsilon_{X}\). By the assumption that the parts congruent to \(\epsilon_{X}\) have even multiplicity, we have that \(h(s)\) is odd. So the rule is correct for the part \(x\). Call this new partition, where \(x\) changes to \(x+1\), \(\lambda^{\prime}\). Then
\[h_{\lambda^{\prime}}(s)\not\equiv\#\{m_{\lambda^{\prime}}(j)\ |\ m_{\lambda^{ \prime}}(j)\equiv 1,j\geq s,\text{ and }j\equiv\epsilon_{f(X)}\}.\]
Since \(\epsilon^{\prime}_{X}=0\), the previous lemma gives the result again for the collapse of \(\lambda^{\prime}\), which is \(f(\lambda)\). For \(f_{DC}\), there are a couple of cases to check that the effect of \(\lambda_{k}\) being replaced by \(\lambda_{k}-1\) is consistent with the above cases.
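Combining the collapse sketched above with the operations \(\lambda^{+}\) and \(\lambda^{-}\) gives a direct, unoptimized implementation of the four \(f\) maps of (2) and of the Lusztig-Spaltenstein duality \(d_{LS}=d\circ f\); this is our own illustration, and it assumes the `collapse` and `transpose` helpers from the earlier sketches are in scope.

```python
def part_minus(lam):
    """lam^-: decrease the smallest nonzero part by one."""
    lam = sorted(lam, reverse=True)
    lam[-1] -= 1
    return [p for p in lam if p > 0]

def part_plus(lam):
    """lam^+: increase the largest part by one."""
    lam = sorted(lam, reverse=True)
    return [lam[0] + 1] + lam[1:]

def f_map(lam, source):
    """The maps of (2); source is 'B', 'C' (special), 'D', or 'Casp'."""
    if source == 'B':       # f_BC(lam) = (lam^-)_C
        return collapse(part_minus(lam), 1)
    if source == 'C':       # f_CB(lam) = (lam^+)_B
        return collapse(part_plus(lam), 0)
    if source == 'D':       # f_DC(lam) = ((lam^+)^-)_C
        return collapse(part_minus(part_plus(lam)), 1)
    if source == 'Casp':    # f_CD(lam) = lam_D
        return collapse(lam, 0)
    raise ValueError(source)

def d_LS(lam, source):
    """Lusztig-Spaltenstein duality d_LS = d o f (= f o d), with d the transpose."""
    return transpose(f_map(lam, source))

# In B_2 the subregular orbit [3,1,1] is dual to the minimal special orbit [2,2] of C_2.
print(d_LS([3, 1, 1], 'B'))
```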
### Special pieces result
Spaltenstein [10] showed that each non-special orbit \(\mathcal{O}^{\prime}\) belongs to the closure of a unique special orbit \(\mathcal{O}\), which is minimal among all special orbits whose closure contains \(\mathcal{O}^{\prime}\). That is, if a special orbit contains \(\mathcal{O}^{\prime}\) in its closure, then it contains \(\mathcal{O}\) in its closure.
We now describe the process for finding the partition \(\lambda\) for \(\mathcal{O}\) given the partition \(\nu\) for \(\mathcal{O}^{\prime}\). Let \(X\) be one of the four types \(B,C,D,C^{\prime}\). Let \(\nu\in\mathcal{P}_{X}(N)\) be non-special. Let \(S\) be the collection of parts \(s\) in \(\nu\) such that \(s\equiv\epsilon_{X}\) and \(h(s)\not\equiv\epsilon^{\prime}_{X}\). These are the parts that fail the condition for \(\nu\) to be in \(\mathcal{P}_{\epsilon,\epsilon^{\prime}}\) as required by (1). Note that \(s\equiv\epsilon_{X}\) means that \(m(s)\) is even, so \(m(s)\geq 2\). Let \(\lambda\) be obtained from \(\nu\) by replacing the subpartition \([s^{m(s)}]\) in \(\nu\) by \([s+1,s^{m(s)-2},s-1]\), for each \(s\in S\). It is clear that \(\lambda\in\mathcal{P}_{X}(N)\) and that it satisfies the special condition in (1), so lies in \(\mathcal{P}_{X}^{sp}(N)\). In fact, \(\lambda\) is the partition for \(\mathcal{O}\) by [10] for the cases of \(B,C,D\) (the case of \(C^{\prime}\) is similar).
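In code, passing from a non-special partition \(\nu\) to the partition \(\lambda\) of the special orbit whose closure defines its special piece is the following one-pass modification of multiplicities (our own sketch, using the conventions \(\epsilon_{X},\epsilon^{\prime}_{X}\) of (1); the helper names are ours).

```python
from collections import Counter

def special_closure(nu, eps, eps_prime):
    """Smallest special orbit whose closure contains O_nu: for every part s of nu
    with s = eps (mod 2) and h(s) != eps' (mod 2), replace [s^m] by [s+1, s^(m-2), s-1]."""
    nu = sorted(nu, reverse=True)
    mult = Counter(nu)
    height = {s: sum(1 for p in nu if p >= s) for s in mult}
    bad = [s for s in mult if s % 2 == eps and height[s] % 2 != eps_prime]
    for s in bad:
        mult[s] -= 2; mult[s + 1] += 1; mult[s - 1] += 1
    return [s for s in sorted(mult, reverse=True) if s > 0 for _ in range(mult[s])]

# Example in sp_6 (eps = 1, eps' = 0): [2,1,1,1,1] is not special, and the special
# orbit just above it is [2,2,1,1].
print(special_closure([2, 1, 1, 1, 1], 1, 0))
```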
### Removing rows and columns from a partition
In [10] and [10], Kraft and Procesi defined two operations that take a pair of partitions \((\lambda,\mu)\in\mathcal{P}(N)\) to another pair of partitions. Viewing the partitions as Young diagrams, the first operation is removing common initial rows of \(\lambda\) and \(\mu\) and the second operation is removing common initial columns.
#### 2.5.1. Type A
More precisely, we say \((\lambda,\mu)\) is _leading-row-equivalent (after removing \(r\) rows)_ to \((\lambda^{\prime},\mu^{\prime})\) if \(\lambda_{i}=\mu_{i}\) for \(i\leq r\), while \(\lambda_{i}=\lambda^{\prime}_{i-r}\) and \(\mu_{i}=\mu^{\prime}_{i-r}\) for \(i>r\). We say \((\lambda,\mu)\) is _column-equivalent (after removing \(s\) columns)_ to \((\lambda^{\prime},\mu^{\prime})\) if \(\lambda_{i}=\mu_{i}\) for \(i>\ell\) and \(\lambda_{i}=\lambda^{\prime}_{i}+s\) and \(\mu_{i}=\mu^{\prime}_{i}+s\) for \(i\leq\ell\), where \(\ell=\max\{i\ |\ \lambda_{i}>s\}\). In both cases, \(|\lambda^{\prime}|=|\mu^{\prime}|\), so \(\lambda^{\prime}\) and \(\mu^{\prime}\) are partitions of the same integer. We say \((\lambda,\mu)\) is _equivalent_ to \((\lambda^{\prime},\mu^{\prime})\) if they are related by a sequence of these two equivalences, and it follows in that case when \(\lambda\preceq\mu\) that
1. \(\lambda^{\prime}\preceq\mu^{\prime}\)
2. \(\operatorname{codim}_{\bar{\mathcal{O}}_{\lambda}}\mathcal{O}_{\mu}= \operatorname{codim}_{\bar{\mathcal{O}}_{\lambda^{\prime}}}\mathcal{O}_{\mu^ {\prime}}\)
3. The singularity of \(\bar{\mathcal{O}}_{\lambda}\) at \(\mathcal{O}_{\mu}\) is smoothly equivalent to the singularity of \(\bar{\mathcal{O}}_{\lambda^{\prime}}\) at \(\mathcal{O}_{\mu^{\prime}}\).
for the corresponding nilpotent orbits in \(\mathfrak{sl}_{n}\)[10].
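A small sketch of this reduction for type \(A\) pairs (our own helper; the transpose is repeated here so the block is self-contained): alternately cancel common leading rows of the pair and of its transposes (i.e., common columns) until no further cancellation is possible.

```python
def transpose(lam):
    """Transpose partition: (lam*)_i = #{j : lam_j >= i} (lam assumed decreasing)."""
    return [sum(1 for p in lam if p >= i) for i in range(1, lam[0] + 1)] if lam else []

def strip_common_leading_rows(lam, mu):
    r = 0
    while r < min(len(lam), len(mu)) and lam[r] == mu[r]:
        r += 1
    return lam[r:], mu[r:], r

def reduce_pair(lam, mu):
    """Cancel common leading rows and common columns of (lam, mu) until the
    pair is irreducible in the sense of Kraft and Procesi; returns the reduced pair."""
    lam, mu = sorted(lam, reverse=True), sorted(mu, reverse=True)
    changed = True
    while changed:
        lam, mu, r = strip_common_leading_rows(lam, mu)
        lt, mt, s = strip_common_leading_rows(transpose(lam), transpose(mu))
        lam, mu = transpose(lt), transpose(mt)
        changed = bool(r or s)
    return lam, mu

# The degeneration ([2,2],[2,1,1]) reduces to the irreducible pair ([2],[1,1]),
# i.e. a minimal degeneration of type A_1 in gl_2.
print(reduce_pair([2, 2], [2, 1, 1]))
```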
#### 2.5.2. Other classical types
For \(\epsilon\in\{0,1\}\) and \(\mathfrak{g}=\mathfrak{g}(V_{\epsilon})\) as in SS2.1, similar results to those above hold when we cancel \(r\) leading rows and \(s\) columns, with an additional condition. Let \(\lambda,\mu\in\mathcal{P}_{\epsilon}(N)\) and assume, when we cancel \(r\) leading rows, that
\[[\lambda_{1},\ldots,\lambda_{r}]\text{ is an $\epsilon$-partition}. \tag{8}\]
This condition always holds if we choose the maximal possible number of rows to cancel between \(\lambda\) and \(\mu\). If (8) holds, then \(\lambda^{\prime}\) and \(\mu^{\prime}\) are \(\tilde{\epsilon}\)-partitions, with \(\tilde{\epsilon}\equiv\epsilon+s\), where \(s\) is the number of columns canceled. Then the above three results hold when the nilpotent orbits are considered in \(\mathfrak{g}\)[10, SS13]. A pair of partitions \((\lambda,\mu)\) is _irreducible_ if no common rows or columns can be canceled.
Next, we say \((\lambda,\mu)\in\mathcal{P}_{\epsilon}(N)\) is _\(\epsilon\)-row-equivalent_ to \((\lambda^{\prime},\mu^{\prime})\in\mathcal{P}_{\epsilon}(m)\), if the latter is obtained from the former by canceling some leading and some trailing rows of the Young diagram. Namely, there exist \(r,r^{\prime}\in\mathbb{N}\) so that \(\lambda_{i}=\mu_{i}\) for \(i\leq r\) and \(i\geq r^{\prime}\), while \(\lambda_{i}=\lambda^{\prime}_{i-r}\) and \(\mu_{i}=\mu^{\prime}_{i-r}\) for \(r<i<r^{\prime}\). We pad the partitions by adding zeros so that both partitions have the same number of parts. If we set \(\nu=[\lambda_{1},\ldots,\lambda_{r},\lambda_{r^{\prime}},\lambda_{r^{\prime}+1},\ldots]\), then \(\nu\) is also an \(\epsilon\)-partition. We also say \((\lambda,\mu)\) is _locally of the form \((\lambda^{\prime},\mu^{\prime})\)._
Now suppose that \((\lambda,\mu)\) is \(\epsilon\)_-row-equivalent_ to \((\lambda^{\prime},\mu^{\prime})\). Let \(V=V_{\epsilon}\). Then, as in [10, SS13.4], there is an orthogonal decomposition \(V=V_{1}\oplus V_{2}\), with \(\dim V_{1}=|\lambda^{\prime}|=|\mu^{\prime}|\) and \(\dim V_{2}=|\nu|\) and the \(V_{i}\) carry a nondegenerate \(\epsilon\)-form by restriction from \(V\). Moreover, \(\lambda^{\prime},\mu^{\prime}\in\mathcal{P}_{\epsilon}(\dim V_{1})\) and \(\nu\in\mathcal{P}_{\epsilon}(\dim V_{2})\), so we can pick nilpotent elements \(x_{1},e_{1}\in\mathfrak{g}(V_{1})\) with partitions \(\lambda^{\prime},\mu^{\prime}\), respectively, and \(e_{2}\in\mathfrak{g}(V_{2})\) with partition \(\nu\). Then \(x=x_{1}+e_{2}\) has partition \(\lambda\) and \(e=e_{1}+e_{2}\) has partition \(\mu\). The arguments in SS13 in [10] give
**Proposition 2.3**.: _Choose an \(\mathfrak{sl}_{2}\)-triple for \(x_{1}\) in \(\mathfrak{g}(V_{1})\). Then the natural map of \(\mathfrak{g}(V_{1})\) to \(\mathfrak{g}(V_{1})+e_{2}\subset\mathfrak{g}\) gives an isomorphism of the slice \(\mathcal{S}_{\mathcal{O}_{\lambda^{\prime}},e_{1}}\) in \(\mathfrak{g}(V_{1})\) to the slice \(\mathcal{S}_{\mathcal{O}_{\lambda},e}\) in \(\mathfrak{g}\)._
The key ideas in the proof are that both slices have the same dimension (by the codimension result) and the fact that the closure of any nilpotent orbit in \(\mathfrak{gl}_{N}\) is normal.
We note that if \((\lambda^{\prime},\mu^{\prime})\) are obtained from \((\lambda,\mu)\) by removing \(r\) leading rows and \(s\) columns and if condition (8) holds, then \((\lambda,\mu)\) is \(\epsilon\)_-row-equivalent_ to :
\[(\lambda^{\prime\prime},\mu^{\prime\prime}):=([\lambda^{\prime}_{1}+s,\lambda^ {\prime}_{2}+s,\dots],[\mu^{\prime}_{1}+s,\mu^{\prime}_{2}+s,\dots]) \tag{9}\]
Finally, we call \((\lambda,\mu)\) and \((\lambda^{\prime\prime},\mu^{\prime\prime})\) locally equivalent, or say locally \((\lambda,\mu)\) is equal to \((\lambda^{\prime\prime},\mu^{\prime\prime})\).
In the next section SS3, we show that each pair of partitions corresponding to a minimal special degeneration in orthogonal and symplectic type is equivalent to a unique pair of partitions \((\lambda,\mu)\) of \(N\) for a unique smallest \(N\). These pairs are irreducible in the sense of Kraft and Procesi: the maximal possible number of common rows and columns has been removed from the original pair of partitions to obtain \((\lambda,\mu)\).
## 3. Combinatorial classification of minimal special degenerations in \(B\), \(C\), \(D\)
**Theorem 3.1**.: _Let \((\lambda,\mu)\in\mathcal{P}_{\epsilon,\epsilon^{\prime}}\) be partitions corresponding to a minimal special degeneration in the corresponding classical Lie algebra. Then \((\lambda,\mu)\) is equivalent to a unique entry in Table 1 or Table 2._
Proof.: If a minimal special degeneration \((\lambda,\mu)\) is not already minimal, then there exists a non-special orbit \(\mathcal{O}_{\nu}\) such that \(\mathcal{O}_{\mu}\leq\mathcal{O}_{\nu}\leq\mathcal{O}_{\lambda}\), and such that \((\nu,\mu)\) is a minimal degeneration. Hence, the latter would be one of the entries in the Kraft-Procesi list [10, SS3.4]. We need to show that \((\lambda,\mu)\) must be equivalent to one of the five cases in Table 2.
First, since \(\mathcal{O}_{\nu}\) is not special, there is a unique special orbit whose closure contains \(\mathcal{O}_{\nu}\) and which is contained in the closure of all other special orbits whose closure contains \(\mathcal{O}_{\nu}\) (see SS2.4). Consequently, \(\mathcal{O}_{\lambda}\) must be this orbit, as we are assuming the degeneration \((\lambda,\mu)\) is minimal among special degenerations.
Next, we will show that \((\nu,\mu)\) cannot be one of the cases in Table 1. Let \(X\) be the type of the Lie algebra and \(\epsilon=\epsilon_{X}\), \(\epsilon^{\prime}=\epsilon^{\prime}_{X}\).
If \((\nu,\mu)\) is type \(a\), then locally it is \(([s{+}2,s],[(s{+}1)^{2}])\) where \(s\not\equiv\epsilon\). Since \(s{+}1\equiv\epsilon\) and \(s{+}1\) appears exactly twice, the heights satisfy \(h_{\nu}(x)\equiv h_{\mu}(x)\) for all \(x\equiv\epsilon\). This means that \(\nu\) must be special since \(\mu\) is special. Therefore the type \(a\) minimal degeneration cannot occur between a larger orbit that is not special and a smaller orbit that is special.
If \((\nu,\mu)\) is type \(b\), then locally it is \(([s{+}2n,s],[s{+}2n{-}2,s{+}2])\) where \(s\not\equiv\epsilon\). Hence, all four of \(s{+}2n,s,s{+}2n{-}2\), and \(s{+}2\) are not congruent to \(\epsilon\). As in the previous case, \(h_{\nu}(x)\equiv h_{\mu}(x)\) for all \(x\equiv\epsilon\), and again this forces \(\nu\) to be special too, a contradiction.
If \((\nu,\mu)\) is of type \(c,d\) or \(e\), it will be possible for \(\nu\) to be non-special, but we will show that then the degeneration \((\lambda,\mu)\) is not minimal among degenerations between special orbits.
For type \(c\), the pair \((\nu,\mu)\) is locally \(([s{+}2n{+}1,s^{2}],[s{+}2n{-}1,(s{+}1)^{2}])\) where \(s\equiv\epsilon\). In this case \(\nu\) will be non-special, as noted in Table 1, exactly when the number of rows
removed is congruent to \(\epsilon^{\prime}\). This means \(h_{\nu}(s)=l+3\not\equiv\epsilon^{\prime}\) (and necessarily \(s\geq 1\)). If that is the case, then by §2.4, \(\lambda\) must locally be \([s\!+\!2n\!+\!1,s\!+\!1,s\!-\!1]\). But then \(\lambda\) degenerates to the partition \(\nu^{\prime}\) that is locally \([s\!+\!2n\!-\!1,s\!+\!3,s\!-\!1]\), which is also special and degenerates to \(\mu\). Hence the degeneration \((\lambda,\mu)\) is not a minimal special degeneration, which is what we wanted to show.
For type \(d\), the pair \((\nu,\mu)\) is locally
\[([s\!+\!2n\!+\!1,s\!+\!2n\!+\!1,s],[s\!+\!2n,s\!+\!2n,s\!+\!2])\]
where \(s\not\equiv\epsilon\). In this case \(\nu\) will be non-special exactly when \(h_{\nu}(s\!+\!2n\!+\!1)\not\equiv\epsilon^{\prime}\). If that is the case, then \(\lambda\) must locally be \([s\!+\!2n\!+\!2,s\!+\!2n,s]\) from §2.4. But \(\lambda\) also degenerates to the partition \(\nu^{\prime}\) that is locally \([s\!+\!2n\!+\!2,s\!+\!2n\!-\!2,s\!+\!2]\), which is also special and degenerates to \(\mu\). Hence the degeneration \((\lambda,\mu)\) is not a minimal special degeneration.
For type \(e\), the pair \((\nu,\mu)\) is locally
\[([s\!+\!2n,s\!+\!2n,s,s],[s\!+\!2n\!-\!1,s\!+\!2n\!-\!1,(s\!+\!1)^{2}])\]
where \(s\equiv\epsilon\). In this case \(\nu\) will be non-special exactly when \(h_{\nu}(s\!+\!2n)\not\equiv\epsilon^{\prime}\) (and \(s\geq 1\) is forced). Then \(\lambda\) must locally be \([s\!+\!2n\!+\!1,s\!+\!2n\!-\!1,s\!+\!1,s\!-\!1]\) by §2.4. But \(\lambda\) degenerates to the partition \(\nu^{\prime}\) that is locally \([s\!+\!2n\!-\!1,s\!+\!2n\!-\!1,s\!+\!3,s\!-\!1]\), whenever \(n\geq 2\). This orbit is special since \(\mu\) is. Moreover, \(\nu^{\prime}\) degenerates to \(\mu\), so \((\lambda,\mu)\) is not a minimal special degeneration.
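Claims of the form "\(\lambda\) degenerates to \(\nu^{\prime}\), which in turn degenerates to \(\mu\)" can be checked mechanically, since the closure order on nilpotent orbits in the classical types is the dominance order on partitions. The following is a small illustrative sketch (not from the original text); the sample values \(s=1\), \(n=2\) in the type \(c\) discussion above are chosen only for illustration.

```python
def dominates(lam, mu):
    """True if lam >= mu in the dominance order (partitions of the same number),
    i.e. the orbit of lam contains the orbit of mu in its closure."""
    assert sum(lam) == sum(mu)
    partial_l = partial_m = 0
    for i in range(max(len(lam), len(mu))):
        partial_l += lam[i] if i < len(lam) else 0
        partial_m += mu[i] if i < len(mu) else 0
        if partial_l < partial_m:
            return False
    return True

# Type c with s = 1, n = 2: lambda = [6, 2], nu' = [4, 4], mu = [4, 2, 2].
print(dominates([6, 2], [4, 4]), dominates([4, 4], [4, 2, 2]))   # True True
```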
This shows, for any minimal special degeneration \((\lambda,\mu)\) which is not already a minimal degeneration, that there exists an intermediate orbit \(\nu\) such that \((\nu,\mu)\) is a minimal degeneration of codimension at least 4, **unless** \((\nu,\mu)\) is of type \(e\) with \(n=1\). In Kraft-Procesi's classification [12, Table I], the minimal degenerations of codimension at least 4 are labeled \(f,g\) and \(h\), and are given by the minimal nilpotent orbit closures in types \(B\), \(C\) and \(D\), respectively.
Starting with type \(g\), where \(n\geq 2\), the pair \((\nu,\mu)\) is locally
\[([s\!+\!2,(s\!+\!1)^{2n-2},s],[(s\!+\!1)^{2n}])\]
with \(s\not\equiv\epsilon\). Then \(\nu\) is never special since \(\mu\), being special, forces \(h_{\nu}(s\!+\!1)=h_{\mu}(s\!+\!1)\!+\!1\) to fail the special condition. Then \(\lambda\) is forced, locally, to equal \([(s\!+\!2)^{2},(s\!+\!1)^{2n-4},s^{2}]\) by §2.4. Because \(\mu\) is special, the number of rows \(l\) removed is congruent to \(\epsilon^{\prime}\). After removing \(s\) columns, we see that \((\lambda,\mu)\) has the type of \(g_{sp}\), and the latter is indeed a minimal special degeneration, containing only the (non-special) orbit between \(\lambda\) and \(\mu\).
For type \(f\), the pair \((\nu,\mu)\) is locally
\[([(s\!+\!2)^{2},(s\!+\!1)^{2n-3},s^{2}],[(s\!+\!1)^{2n+1}])\]
with \(s\equiv\epsilon\) and \(n\geq 2\). This is never special since \(h_{\nu}(s+2)\) and \(h_{\nu}(s)\) have different parities, so exactly one of them fails the special condition. In the former case, \(\lambda\) is locally equal to \([s\!+\!3,(s\!+\!1)^{2n-2},s^{2}]\) and the degeneration is given by \(f_{sp}^{1}\) in the Table 2. That this is a minimal such degeneration follows since \(\nu\) is the only (non-special) orbit between \(\lambda\) and \(\mu\). In the latter case, which forces \(s\geq 1\), then \(\lambda\) is locally equal to \([(s\!+\!2)^{2},(s\!+\!1)^{2n-2},s\!-\!1]\) and this is the minimal special degeneration \(f_{sp}^{2}\). Again, \(\nu\) is the only (non-special) orbit between \(\lambda\) and \(\mu\).
Finally, assume the pair \((\nu,\mu)\) is locally
\[([(s\!+\!2)^{2},(s\!+\!1)^{2n-4},s^{2}],[(s\!+\!1)^{2n}])\]
with \(s\equiv\epsilon\) for \(n\geq 2\). This is type \(e\) if \(n=2\) (for \(n=1\) in the table for \(e\)) and type \(h\) for \(n\geq 3\). Observe that the special condition \(h_{\nu}(s+2)\equiv\epsilon^{\prime}\) is satisfied if and only if \(h_{\nu}(s)\equiv\epsilon^{\prime}\), since these heights differ by the even number \(2n-4\). If both conditions are met, then \(\nu\) will be special since \(\mu\) is special and this is handled by the minimal degeneration cases.
Otherwise, both the pairs \((s{+}2,s{+}2)\) and \((s,s)\) in \(\nu\) cause \(\nu\) to fail to be special, which implies \(s\geq 1\). Then \(\lambda\) takes the form locally \([s{+}3,(s{+}1)^{2n-2},s{-}1]\) by §2.4. This is the form of \(h_{sp}\) in Table 2 after removing \(s{-}1\) columns. This is a minimal special degeneration containing 3 (non-special) orbits between \(\lambda\) and \(\mu\), one of which is locally
\[[(s{+}2)^{2},(s{+}1)^{2n-3},s{-}1].\]
(The Hasse diagram of these intermediate orbits is not reproduced here.) The four unlabeled edges in that diagram all have an \(A_{1}=C_{1}\) singularity (type \(a\)).
We have therefore shown that every minimal special degeneration is either minimal or takes the form in Table 2.
Next we will show that each degeneration in Table 2 has the given singularity type.
## 4. Determining the singularities in Table 2
For each type in Table 2, we need to show that the degeneration is as promised. The case of type \(h\) was done in [10]. We begin with the \(g_{sp}\) case.
### Type \(g_{sp}\) case
As discussed in §2, for the classical Lie algebras \(\mathfrak{so}_{2n+1}\), \(\mathfrak{sp}_{2n}\), and \(\mathfrak{so}_{2n}\), the nilpotent orbits under the groups \(\mathrm{O}(2n+1)\), \(\mathrm{Sp}(2n)\), and \(\mathrm{O}(2n)\) are parametrized by partitions in \(\mathcal{P}_{B}(2n+1)\), \(\mathcal{P}_{C}(2n)\), and \(\mathcal{P}_{D}(2n)\). This occurs via the Jordan-canonical form of the matrix in the ambient general linear Lie algebra.
Let \(e\in\mathfrak{g}\) be nilpotent. Fix an \(\mathfrak{sl}_{2}\)-subalgebra \(\mathfrak{s}\) through \(e\) and let \(\mathfrak{c}(\mathfrak{s})\) be the centralizer of \(\mathfrak{s}\) in \(\mathfrak{g}\), which is a maximal reductive subalgebra of the centralizer of \(e\) in \(\mathfrak{g}\). Let \(C(\mathfrak{s})\) be the centralizer in \(G\). Then \(C(\mathfrak{s})\) is a product of orthogonal and symplectic groups, with each part \(s\) of \(\lambda\) contributing a factor \(G^{s}\), which is isomorphic to \(\mathrm{O}(m(s))\) when \(s\not\equiv\epsilon\) and isomorphic to \(\mathrm{Sp}(m(s))\) when \(s\equiv\epsilon\). Denote by \(\mathfrak{g}^{s}\) the Lie algebra of \(G^{s}\). See [11] for this background material.
Let \(V\) denote the defining representation of \(\mathfrak{g}\) via the ambient general linear Lie algebra. If \(\lambda\) is the partition corresponding to \(e\), then under \(\mathfrak{s}\), the representation \(V\) decomposes as a direct sum
\[\bigoplus_{s}V(s{-}1)^{\oplus m(s)}\]
over the distinct parts \(s\) of \(\lambda\). Here \(V(m)\) is the irreducible \(\mathfrak{sl}_{2}\)-representation of highest weight \(m\).
Now let \(e_{0}\in\mathfrak{c}(\mathfrak{s})\) be nilpotent. Then \(e_{0}=\sum_{s}e_{0}^{(s)}\) for some nilpotent \(e_{0}^{(s)}\in\mathfrak{g}^{s}\). Choose an \(\mathfrak{sl}_{2}\)-subalgebra through \(e_{0}^{(s)}\) in \(\mathfrak{g}^{s}\) and let \(\mathfrak{s}_{0}\) be the diagonal \(\mathfrak{sl}_{2}\)-subalgebra for
\[e_{0}=\sum_{s}e_{0}^{(s)}.\]
Each \(e_{0}^{(s)}\) corresponds to a partition \(\mu^{(s)}\) of \(m(s)\), using the defining representation of \(\mathfrak{g}^{s}\).
Under the sum \(\mathfrak{s}\oplus\mathfrak{s}_{0}\), \(V\) decomposes as
\[\bigoplus_{s,j}V(s{-}1)\otimes V(\mu_{j}^{(s)}{-}1)\]
where \(s\) runs over the distinct parts of \(\lambda\) and \(j\) indexes the parts of \(\mu_{s}\).
Now consider the diagonal \(\mathfrak{sl}_{2}\)-subalgebra for \(e+e_{0}\) in \(\mathfrak{s}+\mathfrak{s}_{0}\). An application of the Clebsch-Gordan formula immediately gives
**Lemma 4.1**.: _The nilpotent element \(e+e_{0}\) in \(\mathfrak{g}\) has partition equal to the union of the partitions_
\[[s{+}\mu_{j}^{(s)}{-}1,\ \ s{+}\mu_{j}^{(s)}{-}3,\ldots,|s-\mu_{j}^{(s)}|{+}1]\]
_for each distinct part \(s\) in \(\lambda\) and each part \(\mu_{j}^{(s)}\) of \(\mu^{(s)}\)._
Suppose that \(e_{0}\in\mathfrak{c}(\mathfrak{s})\) is a nilpotent element such that each \(e_{0}^{(s)}\in\mathfrak{g}^{s}\) has partition of the form
\[\mu^{(s)}=[2^{a_{s}},1^{b_{s}}] \tag{10}\]
for some positive integers \(a_{s}\) and \(b_{s}\) with \(2a_{s}+b_{s}=m(s)\). Then the partition \(\nu\) of \(e+e_{0}\) equals the union of the partitions
\[[(s+1)^{a_{s}},s^{b_{s}},(s-1)^{a_{s}}]\]
for each part \(s\) in \(\lambda\). This follows immediately from the previous lemma since the part \(s\) contributes \([s{+}1,s{-}1]\) to \(\nu\) when \(\mu_{j}^{(s)}=2\) and it contributes \([s]\) when \(\mu_{j}^{(s)}=1\).
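The partition produced by Lemma 4.1 and its specialization above can be computed mechanically; the following short sketch (illustrative only, with the base nilpotent encoded as a dictionary of Jordan block sizes and multiplicities) reproduces the rule.

```python
def clebsch_gordan_parts(s, m):
    """Parts contributed by V(s-1) tensor V(m-1): s+m-1, s+m-3, ..., |s-m|+1."""
    return list(range(s + m - 1, abs(s - m), -2))

def partition_of_sum(base_mult, mu_by_part):
    """Partition of e + e_0 as in Lemma 4.1.

    base_mult:  {Jordan block size s of e: multiplicity m(s)}
    mu_by_part: {s: partition mu^(s) of m(s)}; a missing entry means e_0^(s) = 0.
    """
    parts = []
    for s, mult in base_mult.items():
        mu_s = mu_by_part.get(s, [1] * mult)
        assert sum(mu_s) == mult
        for mj in mu_s:
            parts.extend(clebsch_gordan_parts(s, mj))
    return sorted(parts, reverse=True)

# e with partition [4^5] and e_0 of type [3, 1, 1] on the multiplicity space:
print(partition_of_sum({4: 5}, {4: [3, 1, 1]}))   # [6, 4, 4, 4, 2]
```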
**Proposition 4.2**.: _Let \(e_{0}\in\mathfrak{c}(\mathfrak{s})\) be a nilpotent element satisfying (10). Let \(\mathcal{O}\) be the orbit through \(e+e_{0}\). Then the slice \(\mathcal{S}_{\mathcal{O},e}\) is isomorphic to_
\[\prod_{s}\overline{\mathcal{O}}_{\mu^{(s)}} \tag{11}\]
_where the product is over the distinct parts \(s\) of \(\lambda\). Here, \(\mathcal{O}_{\mu^{(s)}}\) is the orbit with partition \(\mu^{(s)}\) in \(\mathfrak{sp}(m(s))\) if \(s\equiv\epsilon\) and in \(\mathfrak{so}(m(s))\) if \(s\not\equiv\epsilon\)._
Proof.: The partition of \(e_{0}^{(s)}\) in \(\mathfrak{g}\) (rather than \(\mathfrak{g}^{s}\)) is equal to
\[[2^{s{\cdot}a_{s}},1^{N-2s{\cdot}a_{s}}]\]
where \(N\) is the dimension of \(V\). Setting \(a=\sum_{s}a_{s}\), the partition of \(e_{0}\) is equal to
\[[2^{a},1^{N-2a}],\]
which is of height \(2\) in \(\mathfrak{g}\). Then Corollary 4.9 in [10] implies that \(\mathcal{S}_{\mathcal{O},e}\) and \(\prod_{s}\overline{\mathcal{O}}_{\mu^{s}}\) have the same dimension. The latter is isomorphic to
\[f+\overline{C(\mathfrak{s})\cdot(e+e_{0})}=f+e+\overline{C(\mathfrak{s})\cdot e _{0}},\]
which is a subvariety of \(\mathcal{S}_{\mathcal{O},e}\). The result follows from [10, Cor 13.3] if we can show that \(\overline{\mathcal{O}}\) is normal at \(e\).
Since the only minimal degeneration of \([2^{a_{s}},1^{b_{s}}]\) in \(\mathfrak{g}^{s}\) is to \([2^{a_{s}-1},1^{b_{s}+2}]\) when \(a_{s}>1\), and this degeneration is of minimal type (that is, of type \(a\), \(f\), \(g\), or \(h\) in [10, Table 1]), the only minimal degenerations of \(\mathcal{O}\) that contain \(e\) are also of minimal type. The argument in [10, Thm 16.2] then shows that \(\overline{\mathcal{O}}\) is normal at \(e\).
We can now prove the \(g_{sp}\) case.
**Corollary 4.3**.: _Let \(\mathcal{O}=\mathcal{O}_{\lambda}\) and \(e\in\mathcal{O}_{\mu}\), where \((\lambda,\mu)\) are of type \(g_{sp}\) in Table 2. Then \(\mathcal{S}_{\mathcal{O},e}\) is isomorphic to \(\overline{\mathcal{O}}_{[2^{2},1^{2n-4}]}\), the closure of the minimal special orbit in \(\mathfrak{sp}_{2n}\)._
Proof.: If \(k\) is the number of columns removed, then \(s=k+1\) is the relevant part of \(\mu\) and \(m(s)=2n\). We take \(e_{0}\) to be exactly equal to \(e_{0}^{(s)}\in\mathfrak{g}^{s}\) with partition \(\mu^{s}=[2^{2},1^{2n-4}]\). Then the result follows from the proposition. Since \(k\not\equiv\epsilon\) for the \(g_{sp}\) case, we have \(s\equiv\epsilon\) and \(\mathfrak{g}^{s}=\mathfrak{sp}_{2n}\).
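Continuing the illustrative sketch given after Lemma 4.1, the \(g_{sp}\) configuration of the corollary can be checked in a small case; here \(s=3\) and \(m(s)=2n=6\) are sample values chosen only for illustration.

```python
# e has the part s = 3 with multiplicity 6, and e_0 has partition [2, 2, 1, 1] in g^s:
print(partition_of_sum({3: 6}, {3: [2, 2, 1, 1]}))   # [4, 4, 3, 3, 2, 2]
# i.e. e + e_0 has, locally, the partition [(s+1)^2, s^(2n-4), (s-1)^2].
```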
_Remark 4.4_.: The \(h\) case in the table, already done in [10], corresponds to the situation where \(s\not\equiv\epsilon\) and \(m(s)\) is even, as well as \(\mathcal{O}_{\lambda}\) being special (so the condition on \(l\) holds). Of course, the slice is still of type \(h\) even if the degeneration is not a minimal special one.
### Type \(f_{sp}^{1}\) and \(f_{sp}^{2}\)
These cases occur when \((\lambda,\mu)\) is \(\epsilon\)-row equivalent to \((\lambda^{(i)},[a^{N}])\) where \(N\) is odd and \(a\not\equiv\epsilon\) and
\[\lambda^{(1)} =[a{+}2,a^{N-3},(a{-}1)^{2}]\quad\text{ (type $f_{sp}^{1}$)}\] \[\lambda^{(2)} =[(a{+}1)^{2},a^{N-3},a{-}2]\quad\text{ (type $f_{sp}^{2}$)}\]
with \(a\geq 1\) when \(i=1\) and \(a\geq 2\) when \(i=2\). By Proposition 2.3, it is enough to assume that \(\mu=[a^{N}]\) and \(\lambda\) equals \(\lambda^{(1)}\) or \(\lambda^{(2)}\). Since we will need it for the next section, we consider the more general case where \(N\) is any integer with \(N\geq 3\).
**Proposition 4.5**.: _Let \(\mathfrak{g}=\mathfrak{so}_{aN}\) if \(a\) is odd and \(\mathfrak{sp}_{aN}\) if \(a\) is even. Let \(e\in\mathcal{O}_{\mu}\). For \(i\in\{1,2\}\), let \(\mathcal{O}=\mathcal{O}_{\lambda^{(i)}}\). Then there is an isomorphism_
\[\mathcal{S}_{\mathcal{O},e}\simeq\overline{\mathcal{O}}_{[3,1^{N-3}]}\]
_where the orbit closure is in \(\mathfrak{so}_{N}\)._
Proof.: If \(a=1\), the \(\lambda^{(1)}\) case is clear since there is equality between the slice and the orbit closure, so assume \(a\geq 2\). The situation is very similar to §11.2 in [11]. Let \(I_{N}\) be the \(N\times N\) identity matrix. Let us define the form on \(V\) explicitly to be the one defined by the \(a\times a\) block anti-diagonal matrix \(J\) with
\[J=\begin{pmatrix}0&0&\dots&0&0&I_{N}\\ 0&0&\dots&0&-I_{N}&0\\ 0&0&\dots&I_{N}&0&0\\ \dots&&&&\\ (-1)^{a-1}I_{N}&0&\dots&0&0&0\end{pmatrix}.\]
The bilinear form defined by \(J\) is nondegenerate and is symmetric if \(a\) is odd and symplectic if \(a\) is even. Since \(a\not\equiv\epsilon\), this is the correct form for defining \(\mathfrak{g}=\mathfrak{g}(V_{\epsilon})\).
The \(a\times a\)-block-matrices \(e\) and \(f\) given by
\[e=\begin{pmatrix}0&0&\dots&0&0\\ c_{1}I_{N}&0&\dots&0&0\\ 0&c_{2}I_{N}&\dots&0&0\\ &&\dots&&\\ 0&0&\dots&c_{a-1}I_{N}&0\end{pmatrix},\ \ f=\begin{pmatrix}0&0&\dots&0&0\\ I_{N}&0&\dots&0&0\\ 0&I_{N}&\dots&0&0\\ &&\dots&&\\ 0&0&\dots&I_{N}&0\end{pmatrix}^{T} \tag{12}\]
with \(c_{j}=j(a-j)\) lie in \(\mathfrak{g}\), and \(e\) and \(f\) are both nilpotent with partition \(\mu\). They complete to an \(\mathfrak{sl}_{2}\)-triple as in [11, §11.2]. The centralizer \(\mathfrak{g}^{f}\) is the set of block upper triangular
matrices of the form
\[X=\begin{pmatrix}Y_{1}&Y_{2}&Y_{3}&...&Y_{a-2}&Y_{a-1}&Y_{a}\\ 0&Y_{1}&Y_{2}&\dots&Y_{a-2}&Y_{a-1}\\ 0&0&Y_{1}&\dots&Y_{a-3}&Y_{a-2}\\ &&\dots&&\\ 0&0&0&\dots&Y_{2}&Y_{3}\\ 0&0&0&\dots&Y_{1}&Y_{2}\\ 0&0&0&\dots&0&Y_{1}\end{pmatrix}.\]
Then \(X\) lies in \(\mathfrak{g}\) if and only if \(Y_{i}=(-1)^{i}Y_{i}^{T}\). Let \(\Sigma_{N}\) denote the set of \(N\times N\) symmetric matrices and let
\[\phi:\mathfrak{so}_{N}\times\Sigma_{N}\times\mathfrak{so}_{N}\times\dots \rightarrow\mathfrak{g}^{f}\]
denote the map where \(\phi(Y_{1},Y_{2},\dots)\) is given by the matrix \(X\) above. The reductive centralizer \(\mathfrak{c}(\mathfrak{s})\simeq\mathfrak{so}_{N}\) is given by \(Y_{i}=0\) for \(i\geq 2\); similarly \(C(\mathfrak{s})\simeq\mathrm{O}(N)\). An element \(g\in C(\mathfrak{s})\) acts on \(e+X\in\mathcal{S}_{e}\) by sending \(Y_{i}\) to \(gY_{i}g^{T}\). The \(\mathbb{C}^{*}\)-action on \(\mathcal{S}_{e}\) is given by \(t.Y_{i}=t^{2i}Y_{i}\) for \(t\in\mathbb{C}^{*}\).
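The explicit matrices in (12) and the \(\mathfrak{sl}_{2}\) relations can be verified numerically in small cases. The following is a minimal numpy sketch (illustrative only) using the convention \(c_{j}=j(a-j)\), here with the sample values \(a=4\) and \(N=3\).

```python
import numpy as np

def sl2_pair(a, N):
    """The block matrices e, f of (12): an a x a array of N x N blocks, c_j = j(a - j)."""
    I = np.eye(N)
    e = np.zeros((a * N, a * N))
    f = np.zeros((a * N, a * N))
    for j in range(1, a):
        e[j * N:(j + 1) * N, (j - 1) * N:j * N] = j * (a - j) * I   # sub-diagonal c_j I_N
        f[(j - 1) * N:j * N, j * N:(j + 1) * N] = I                 # super-diagonal I_N
    return e, f

a, N = 4, 3
e, f = sl2_pair(a, N)
h = e @ f - f @ e
assert np.allclose(h @ e - e @ h, 2 * e)    # [h, e] = 2e
assert np.allclose(h @ f - f @ h, -2 * f)   # [h, f] = -2f
# e has Jordan type [a^N]: the rank of e^k is N(a - k)
assert all(np.linalg.matrix_rank(np.linalg.matrix_power(e, k)) == N * (a - k)
           for k in range(a + 1))
```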
Let \(e_{0}\in\mathfrak{c}(\mathfrak{s})\) be a nilpotent element with partition \([3,1^{N-3}]\). Pick an \(\mathfrak{sl}_{2}\)-triple \(\mathfrak{s}_{0}\) through \(e_{0}\) in \(\mathfrak{c}(\mathfrak{s})\) and assume that the semisimple element \(h_{0}\) is a diagonal matrix. By Lemma 4.1 or computing \(h+h_{0}\) directly, we see that \(e+e_{0}\) has partition \(\nu:=[a{+}2,a^{N-2},a{-}2]\) since \(a\geq 2\).
Let \(N=3\). By [10] we have \(\mathcal{S}_{\mathcal{O}_{\lambda^{(i)}},e}\simeq\overline{\mathcal{O}}_{[3]}\) since for \(i{=}1\) the degeneration is type \(c\) and for \(i{=}2\) it is type \(d\) in Table 1. Set \(A=e_{0}\), then \(A^{2}\in\Sigma_{N}\). Then \(\phi(0,A^{2},0,\dots)\) is an eigenvector for both \(\mathrm{ad}(h)\) and \(\mathrm{ad}(h_{0})\), with eigenvalue \(-2\) and \(4\), respectively. Since the absolute values of the eigenvalues of \(\mathrm{ad}(h_{0})\) on \(\mathfrak{z}(e)\) are at most \(4\) and the eigenvalue \(4\) only occurs once in \(\Sigma_{N}\), there are no other exceptional pairs in the sense of [10, §4]. It follows that \(e+\phi(A,z_{i}A^{2},0,\dots,0)\in\mathcal{O}_{\lambda^{(i)}}\) for a unique \(z_{i}\in\mathbb{C}^{*}\). Since \(\mathcal{S}_{\mathcal{O}_{\lambda^{(i)}},e}\) has dimension two and is irreducible, this means it is exactly the set of elements \(e+\phi(A,z_{i}A^{2},0,\dots,0)\) where \(A\in\mathfrak{so}_{3}\) is nilpotent, giving the isomorphism to \(\overline{\mathcal{O}}_{[3]}\) explicitly.
Now consider the general \(N\) case. We can embed \(\mathfrak{so}_{3}\) into \(\mathfrak{c}(\mathfrak{s})\simeq\mathfrak{so}_{N}\) via the first \(3\) coordinates and similarly for the rest of the centralizer of \(e\) in the \(N=3\) case. Clearly, for \(A\in\mathcal{O}_{[3]}\), the element \(\phi(A,0,\dots,0)\in\mathfrak{c}(\mathfrak{s})\) lies in \(\mathcal{O}_{[3,1^{N-3}]}\), but also \(e+\phi(A,z_{i}A^{2},0,\dots,0)\in\mathcal{S}_{e}\) lies in \(\mathcal{O}_{\lambda^{(i)}}\), by observing the action of this element on the standard basis of \(V\). It follows, using the action of \(C(\mathfrak{s})\), that \(\mathcal{S}_{\mathcal{O}_{\lambda^{(i)}},e}\) contains \(e+A+z_{i}A^{2}\) for \(A\in\overline{\mathcal{O}}_{[3,1^{N-3}]}\).
Next, \(\dim\mathcal{O}_{\lambda^{(1)}}=\dim\mathcal{O}_{\lambda^{(2)}}\) since both orbits are minimal degenerations from \(\mathcal{O}_{\nu}\) of type \(a\), hence they are of codimension two in \(\overline{\mathcal{O}_{\nu}}\). The pair \((\lambda^{(1)},\mu)\) is equivalent to \(([3,1^{N-3}],[1^{N}])\) after canceling \(a{-}1\) columns, thus the codimension of \(\mathcal{O}_{\mu}\) in \(\overline{\mathcal{O}_{\lambda^{(i)}}}\) equals the dimension of \(\overline{\mathcal{O}_{[3,1^{N-3}]}}\) for both \(i=1\) and \(i=2\). The only minimal degeneration from \(\mathcal{O}_{\lambda^{(i)}}\) that contains \(\mathcal{O}_{\mu}\) is to the partition \([(a{+}1)^{2},a^{N-4},(a{-}1)^{2}]\), which is an \(A_{1}\) singularity for both \(i=1\) and \(i=2\). Hence, as in §4.1, \(\overline{\mathcal{O}_{\lambda^{(i)}}}\) is unibranch at \(e\). Thus \(\mathcal{S}_{\mathcal{O}_{\lambda^{(i)}},e}\simeq\overline{\mathcal{O}}_{[3,1^{N-3}]}\).
### Type \(h_{sp}\)
As in the previous subsection, we are reduced by Proposition 2.3 to the case where \(\lambda=[a{+}2,a^{N-2},a{-}2]\) and \(\mu=[a^{N}]\). We have the same description for \(e\in\mathcal{O}_{\mu}\), \(\mathfrak{s}\), etc. as above. In the previous subsection, \(\mathcal{S}_{\mathcal{O}_{\lambda^{(i)}},e}\) was the closure of a \(C(\mathfrak{s})\)-orbit. We will first show that \(\mathcal{S}_{\mathcal{O}_{\lambda},e}\) is the closure of a \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-orbit for \(N\geq 2\), where the \(\mathbb{C}^{*}\)-action is as above.
The arguments from [10, §11.2.2] apply. First, we use them to show that for \(M\in\mathcal{S}_{\mathcal{O}_{\lambda},e}\), the matrices \(Y_{3},Y_{4},\dots\) are equal to sums of products of \(Y_{1}\) and \(Y_{2}\). This follows since \(\operatorname{rank}(M^{i})\leq N(a-i)\) for \(i=1,\dots,a-2\) as in loc. cit. However, \(\operatorname{rank}(M^{a-1})\leq N+1\) unlike in loc. cit. and this implies that
\[\operatorname{rank}(Y_{2}-dY_{1}^{2})\leq 1 \tag{13}\]
for some \(d\in\mathbb{C}^{*}\). The condition \(M^{a+2}=0\) yields the equation, in the block lower left corner,
\[d_{1}Y_{3}+d_{2}(Y_{1}Y_{2}+Y_{2}Y_{1})+d_{3}Y_{1}^{3}=0 \tag{14}\]
where \(d_{1}=\frac{(a+2)!(a-1)!}{6}\) and \(d_{3}=\frac{(a-1)(a-2)}{5}\cdot d_{1}\). It follows that the \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-equivariant map from \(\mathcal{S}_{\mathcal{O}_{\lambda,e}}\) to the space \(e+X\) where \(Y_{i}=0\) for \(i\geq 3\) is an isomorphism.
Now for \(N\geq 3\), there are actually two equations involving a linear term in \(Y_{3}\): one from the lower left corner of \(M^{a+2}=0\) and one from the \(\operatorname{rank}(M^{a-2})=N\) condition:
\[c_{1}Y_{3}+c_{2}(Y_{1}Y_{2}+Y_{2}Y_{1})+c_{3}Y_{1}^{3}=0. \tag{15}\]
where \(c_{1}=\frac{a!(a-1)!^{2}(a-2)!^{2}(a-3)!}{24}\) and \(c_{3}=\frac{(a+1)(a+2)}{5}\cdot c_{1}\). The equations (14) and (15) are not multiples of each other since \(\frac{d_{3}}{d_{1}}\neq\frac{c_{3}}{c_{1}}\) for \(a>0\). It follows, by canceling the \(Y_{3}\) term, that
\[tY_{1}^{3}=Y_{1}Y_{2}+Y_{2}Y_{1} \tag{16}\]
for some nonzero \(t\).
Consider the \(N=2\) case. Conjugating in \(GL_{2}\) so that \(C(\mathfrak{s})\) becomes the diagonal torus in \(SL_{2}\), we can represent \(Y_{1}=\left(\begin{smallmatrix}x&0\\ 0&-x\end{smallmatrix}\right)\) and \(Y_{2}=\left(\begin{smallmatrix}y&z\\ w&y\end{smallmatrix}\right)\) for \(x,y,z,w\in\mathbb{C}\). Then (16) implies that either \(x\) is identically \(0\), i.e., \(Y_{1}\equiv 0\), or \(y=\frac{t}{2}x^{2}\), i.e., \(\operatorname{tr}(Y_{2}-tY_{1}^{2})=0\). By (13), \(\det(Y_{2}-dY_{1}^{2})=0\). If \(Y_{1}\equiv 0\), then \(\det(Y_{2})=y^{2}-zw=0\) and \(x=0\) are the conditions defining the slice. If \(t=d\), the condition is \(zw=0\), with \(x\) arbitrary. Since \(([a+2,a-2],[a^{2}])\) is a minimal degeneration of type \(b\), the slice is isomorphic to the \(A_{3}\)-simple surface singularity, so neither of these cases holds. Instead, \(d\neq t\) and the defining equations are \(y=\frac{t}{2}x^{2}\) and \(t^{2}x^{4}=4wz\), which is indeed an \(A_{3}\)-singularity. Moreover, the points with \(x\neq 0\) form a single \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-orbit, each such point has finite stabilizer, and so this orbit is dense in the slice. Let \(v=e+\phi(v_{1},v_{2},\dots)\) be such a point in the slice.
Next, as in the previous subsection, we bootstrap up to the general case by embedding the slice for the \(N=2\) case into the general slice by using the first two coordinates in each block. Since the coefficients in the equations given above continue to hold for \(\mathcal{S}_{\mathcal{O}_{\lambda},e}\) in \(\mathcal{S}_{e}\), independent of \(N\), we do indeed have an \(\operatorname{SO}_{2}\times\mathbb{C}^{*}\)-equivariant embedding of the \(N=2\) case. Note that \(v_{1}\in\mathfrak{so}_{N}\) is a multiple of a \(C(\mathfrak{s})\)-conjugate of \(h_{0}\) from the previous subsection. Its stabilizer in \(C(\mathfrak{s})\) is \(\operatorname{SO}_{2}\times\operatorname{SO}_{N-2}\subset C(\mathfrak{s})\). From the \(N=2\) case it follows that the connected stabilizer in \(C(\mathfrak{s})\times\mathbb{C}^{*}\) of \(v\) is \(\operatorname{SO}_{N-2}\). Hence, the \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-orbit through \(v\) has dimension \((N(N-1)/2+1)-(N-2)(N-3)/2=2N-2\). This is also the dimension of \(\mathcal{S}_{\mathcal{O}_{\lambda},e}\). Since \(\overline{\mathcal{O}}_{\lambda}\) is unibranch at \(e\), we conclude
**Proposition 4.6**.: _The slice \(\mathcal{S}_{\mathcal{O}_{\lambda},e}\) is isomorphic to the closure of a \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-orbit through \((A,B)\in\mathfrak{so}_{N}\times\Sigma_{N}\) where \(A=h_{0}\in\mathfrak{so}_{N}\), and \(B\in\Sigma_{N}\) satisfies \(\operatorname{rank}(B-dA^{2})=1\) and \(\operatorname{tr}(B-tA^{2})=0\) for some nonzero \(d,t\) with \(d\neq t\)._
Our goal now is to identify the subvariety of \(\mathfrak{so}_{N}\times\Sigma_{N}\) in the proposition with the quotient of the closure of the orbit \(\mathcal{O}_{[3,1^{N-2}]}\) in \(\mathfrak{so}_{N+1}\). First, by employing the \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-equivariant isomorphism of \(\mathfrak{so}_{N}\times\Sigma_{N}\) to itself sending \((A,B)\to(A,B-dA^{2})\), it follows that the slice is isomorphic to the closure of the \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-orbit through \((A,B)\in\mathfrak{so}_{N}\times\Sigma_{N}\) where \(B\) is now a matrix of rank \(1\) and \(\operatorname{tr}(B-tA^{2})=0\) for some nonzero \(t\).
Next, we recall material from [10, §2]. Let \(X\in\mathfrak{so}_{N+1}\) be written as
\[\begin{pmatrix}M&u\\ -u^{T}&0\end{pmatrix}\]
where \(M\in\mathfrak{so}_{N}\) and \(u\in\mathbb{C}^{N}\) is a column vector. Let \(\theta\) be the involution of \(\mathfrak{so}_{N+1}\) given by conjugation by \(\mathrm{diag}(1,1,\dots,1,-1)\in\mathrm{O}_{N+1}\). Identifying \(\mathfrak{so}_{N+1}\) with the set of pairs \((M,u)\), we see that \(\theta\) maps \((M,u)\mapsto(M,-u)\). Then the map \(\varphi:\mathfrak{so}_{N+1}\to\mathfrak{so}_{N}\times\Sigma_{N}\) sending \(X\) to
\((M,uu^{T})\) induces an \(\mathrm{O}_{N}\times\mathbb{C}^{*}\)-equivariant isomorphism of \(\mathfrak{so}_{N+1}/\langle\theta\rangle\) with \(\mathfrak{so}_{N}\times\Xi\) where \(\Xi\) is the cone of elements of \(\Sigma_{N}\) of rank at most \(1\).
We can now state
**Proposition 4.7**.: _The slice \(\mathcal{S}_{\mathcal{O}_{\lambda},e}\) is isomorphic to_
\[\overline{\mathcal{O}}_{[3,1^{N-2}]}/\langle\theta\rangle.\]
Proof.: By [11, Corollary 2.2], if \(Y=\overline{\mathcal{O}}_{[3,1^{N-2}]}\), then \(Y/\langle\theta\rangle\simeq\varphi(Y)\). As before we can use the \(N=2\) case. In that case, \(X=\begin{pmatrix}0&a&b\\ -a&0&c\\ -b&-c&0\end{pmatrix}\) and
\[\varphi(X)=\left(\left(\begin{smallmatrix}0&a\\ -a&0\end{smallmatrix}\right),\left(\begin{smallmatrix}b^{2}&bc\\ bc&c^{2}\end{smallmatrix}\right)\right).\]
The condition for \(X\) to be nilpotent is \(a^{2}+b^{2}+c^{2}=0\) and so the image is exactly the matrices \((A,B)\) where \(\det(B)=0\) and \(\operatorname{tr}(B+A^{2})=0\). In the general case, we embed \(\mathfrak{so}_{3}\) into the lower right corner. It follows from the discussion above and the proof of Proposition 4.6 that \(\varphi(Y)\) is isomorphic to the closure of the \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-orbit through \((A,B)\) with \(A\neq 0\), and hence isomorphic to the slice by Proposition 4.6.
Let the Klein four-group \(V_{4}\) act on \(\mathfrak{so}_{N+2}\) via the pair of commuting involutions \(\theta_{1},\theta_{2}\) given by conjugation by \(\mathrm{diag}(1,\ldots,1,-1)\) and \(\mathrm{diag}(1,\ldots,1,-1,1)\), respectively. Let \(\overline{\mathcal{O}}_{[2^{2},1^{N-2}]}\) be the minimal orbit in \(\mathfrak{so}_{N+2}\). Then by [11, Corollary 2.5], for example, it follows that
\[\overline{\mathcal{O}}_{[3,1^{N-2}]}\simeq\overline{\mathcal{O}}_{[2^{2},1^{ N-2}]}/\langle\theta_{1}\rangle.\]
**Corollary 4.8**.: _We have the isomorphism_
\[\mathcal{S}_{\mathcal{O}_{\lambda},e}\simeq\overline{\mathcal{O}}_{[2^{2},1^{ N-2}]}/\langle\theta_{1},\theta_{2}\rangle.\]
_and hence the minimal special degeneration \(h_{sp}\) in Table 2 is \(d_{n+1}/V_{4}\)._
Proof.: The \(h_{sp}\) degeneration is covered by the case when \(N\) is even with \(n=N/2\).
_Remark 4.9_.: a) The special case \(n=3\) was already observed in [10]. In that case we have \(\overline{\mathcal{O}_{\min}(\mathfrak{so}_{5})}\cong\mathbb{C}^{4}/\{\pm 1\}\) and so we obtain isomorphisms of \(\mathcal{S}_{\mathcal{O},e}\cap\overline{\mathcal{O}^{\prime}}\) with (i) \(\mathbb{C}^{4}/W(B_{2})\); (ii) \(\mathcal{N}(\mathfrak{so}_{4})/\mathfrak{S}_{2}=(\mathcal{N}(\mathfrak{sl}_{2} )\times\mathcal{N}(\mathfrak{sl}_{2}))/\langle\theta\rangle\) where \(\theta\) swaps the two copies of \(\mathfrak{sl}_{2}\).
b) The orbits which intersect non-trivially with \(\mathcal{S}_{\mathcal{O},e}\) are the nilpotent orbits lying between \(\mathcal{O}\) and \(\mathcal{O}^{\prime}\) in the partial order. If \(N\geq 4\) then there are five of these, arranged in a Hasse diagram that is not reproduced here, in which \(\mathfrak{so}_{N+1}^{(1)}\) and \(\mathfrak{so}_{N+1}^{(2)}\) denote the two fixed point subalgebras for the two proper parabolic subgroups of \(\mathfrak{S}_{2}\times\mathfrak{S}_{2}\). For \(N=3\) there is no orbit with partition \([3^{2},2^{N-4},1^{2}]\); equivalently, \(\mathfrak{so}_{3}\) contains no elements of \(\mathcal{O}_{\min}(\mathfrak{so}_{5})\).
c) For \(N\) odd, the singularity \(\overline{\mathcal{O}}_{[2^{2},1^{N-2}]}/\langle\theta_{1},\theta_{2}\rangle\) arises as a slice, but never for a minimal special degeneration. This is because the \(f_{sp}\) singularities arise in this case as the minimal special degenerations.
### Minimal special degenerations in the exceptional Lie algebras
There are three unexpected singularities that arise in the exceptional Lie algebras: (i) \(\mu\) (with normalization \(A_{3}\)); (ii) \(a_{2}/\mathfrak{S}_{2}\); (iii) \(d_{4}/\mathfrak{S}_{4}\), which are dealt with in [10], [10]. They appear once, once, and twice, respectively. We will show in this subsection that all remaining singularities associated to minimal special degenerations in exceptional types are unions of simple surface singularities or minimal special singularities.
The case of \(G_{2}\) is clear. Most of the minimal special degenerations are minimal degenerations and hence were dealt with in [10] or [10]. There are three (resp. three, eight, ten) minimal special degenerations which are not minimal degenerations in type \(F_{4}\) (resp. \(E_{6}\), \(E_{7}\), \(E_{8}\)). These cases, with two exceptions, are covered by the following proposition.
**Proposition 4.10**.: _Let \(\mathcal{O}^{\prime}\) be a special nilpotent orbit in an exceptional Lie algebra such that the reductive centralizer \(\mathfrak{c}(\mathfrak{s})\) contains a non-simply-laced simple component \(\mathfrak{c}_{0}=\mathrm{Lie}(C_{0})\)._
_(a) There is a unique special orbit \(\mathcal{O}>\mathcal{O}^{\prime}\) such that \(\mathrm{codim}_{\overline{\mathcal{O}}}\,\mathcal{O}^{\prime}\) is equal to the dimension of the minimal special nilpotent \(C_{0}\)-orbit \(\mathcal{O}_{0}\) in \(\mathfrak{c}_{0}\)._
_(b) If \(\mathcal{O}^{\prime}=\mathcal{O}_{2A_{2}}\) in type \(E_{8}\) then there are two such simple components \(\mathfrak{c}_{0}\), both of type \(G_{2}\), and \(\mathcal{S}_{\mathcal{O},e}\) is a union of two copies of \(\overline{\mathcal{O}_{0}}\). The two copies are interchanged by \(C(\mathfrak{s})\). Other than this case, there is exactly one such \(\mathfrak{c}_{0}\) and \(\mathcal{S}_{\mathcal{O},e}\simeq\overline{\mathcal{O}_{0}}\)._
Proof.: Statement (a) is a straightforward check using the tables of nilpotent orbits and Hasse diagrams in [11].
The singularities in (b) can be classified using the arguments in [10, §4.3]. Indeed, several of these are discussed there, see [10, §11, Table 13]. Let \(e_{0}\in\mathcal{O}_{0}\). We claim that, with the sole exception of \(\mathcal{O}^{\prime}=\mathcal{O}_{A_{2}+3A_{1}}\) in type \(E_{7}\), \(e+e_{0}\in\mathcal{O}\). By unibranchness and dimensions, it follows that \(\mathcal{S}_{\mathcal{O},e}=\overline{f+C_{0}\cdot e_{0}}\cong\overline{\mathcal{O}_{0}}\). By [10, Prop. 4.8], it suffices to verify the following condition: let \(\langle h_{0},e_{0},f_{0}\rangle=\mathfrak{s}_{0}\subset\mathfrak{c}_{0}\) be an \(\mathfrak{sl}_{2}\)-subalgebra; then all irreducible \(\mathfrak{s}_{0}\)-summands in \(\mathfrak{g}^{e}(i)\) have dimension \(\leq(i+1)\). This can be checked by inspecting the tables in [11]. If \(\mathfrak{c}_{0}\) is of type \(B\), then all non-trivial simple summands for the action on the centralizer of \(e\) are natural modules or spin modules; a short root element acts with Jordan blocks of size \(2\) on the spin module and of size \(\leq 3\) on the natural module, so we only need to check that no natural modules occur in \(\mathfrak{g}^{f}(1)\). When \(\mathfrak{c}_{0}\) is of type \(G_{2}\) (excluding \(\mathcal{O}^{\prime}=\mathcal{O}_{A_{2}+3A_{1}}\) in type \(E_{7}\)), all non-trivial summands are isomorphic to the minimal faithful representation for \(\mathfrak{c}_{0}\); \(e_{0}\) acts on the minimal faithful representation with Jordan blocks of size \(\leq 3\), so we only need to check that the minimal representation doesn't appear in \(\mathfrak{g}^{e}(1)\). Finally, \(\mathfrak{c}_{0}\) of type \(C\) occurs once, when \(\mathcal{O}^{\prime}=\mathcal{O}_{D_{4}}\) in type \(E_{7}\) and \(\mathfrak{c}_{0}=\mathfrak{sp}_{6}\); here one has to check that \(e_{0}\) has no Jordan blocks of size \(>7\) on \(V(\varpi_{2})\), hence on the alternating square of the natural module, which is straightforward.
This only leaves the case \(\mathcal{O}^{\prime}=\mathcal{O}_{A_{2}+3A_{1}}\) in \(E_{7}\). Here \(\mathfrak{c}_{0}=\mathfrak{g}^{e}\cap\mathfrak{g}^{h}\) is simple of type \(G_{2}\) and the positive graded parts of \(\mathfrak{g}^{f}\) are:
\[\mathfrak{g}^{f}(2)=V(2\varpi_{1})\oplus\mathbb{C}e,\quad\mathfrak{g}^{f}(4)= V_{\mathrm{min}},\]
where \(V_{\mathrm{min}}=V(\varpi_{1})\) is the minimal faithful representation for \(\mathfrak{c}_{0}\). Note that the action of \(\mathfrak{c}_{0}\) on \(V_{\mathrm{min}}\) induces an embedding in \(\mathfrak{so}_{7}\), and \(\mathfrak{sl}_{7}\) decomposes over \(\mathfrak{c}_{0}\subset\mathfrak{so}_{7}\) as \(\mathfrak{so}_{7}\oplus V(2\varpi_{1})\). (In the notation of §4.2, \(V(2\varpi_{1})=\Sigma_{7}\).) Furthermore, the matrix square operation on \(\mathfrak{gl}_{7}\) determines a quadratic map \(\mathfrak{so}_{7}\to V(2\varpi_{1})\) which restricts to \(\mathfrak{c}_{0}\) to give a \(C_{0}\)-equivariant map \(\psi:\mathfrak{c}_{0}\to V(2\varpi_{1})\subset\mathfrak{g}^{e}(2)\). In particular, if \(x=e_{\beta_{2}}+e_{3\beta_{1}+\beta_{2}}\) then \(\psi(x)\) is non-zero of weight \(3\beta_{1}+2\beta_{2}\). We checked using GAP that (with this notation) there exists an element of \(\mathcal{O}=\mathcal{O}_{D_{4}(a_{1})}\) of the form \(e+x+\psi(x)\), where \(x\) is in the subregular nilpotent orbit in \(\mathfrak{c}_{0}\).
It follows that \(\mathcal{S}_{\mathcal{O},e}=e+\overline{C_{0}\cdot(x+\psi(x))}\), hence is isomorphic to the closure of the \(C_{0}\)-orbit through \(x\), which completes our proof.
**Proposition 4.11**.: _All minimal special degenerations in exceptional Lie algebras are either: (1) minimal degenerations, (2) covered by the above proposition, or (3) isomorphic to \(d_{4}/\mathfrak{S}_{4}\), which occurs for the two cases of \(\mathcal{O}_{F_{4}(a_{3})}>\mathcal{O}_{A_{2}}\) in \(F_{4}\) and \(\mathcal{O}_{E_{8}(a_{7})}>\mathcal{O}_{D_{4}+A_{2}}\) in \(E_{8}\)._
Proof.: We checked that all the minimal special degenerations, which are not minimal degenerations, are covered by the proposition, except for the two cases listed. The exceptional special degeneration \(\mathcal{O}_{F_{4}(a_{3})}>\mathcal{O}_{A_{2}}\) in type \(F_{4}\) was dealt with in [11, Theorem 4.11]. Hence it remains to show that the Slodowy slice singularity from \(D_{4}+A_{2}\) to \(E_{8}(a_{7})\) in \(E_{8}\) is also isomorphic to \(d_{4}/\mathfrak{S}_{4}\). To do this, we first consider \(\mathcal{O}^{\prime}=\mathcal{O}_{D_{4}}\). Let \(\mathcal{S}_{D_{4}}\) be the Slodowy slice at \(f\in\mathcal{O}^{\prime}\). Repeating the calculation in the proof of Proposition 4.10, we see that the condition of [11, Prop. 4.8] holds for an element of \(\mathfrak{c}\) of type \(F_{4}(a_{3})\). It follows that \(\mathcal{S}_{D_{4}}\cap\overline{\mathcal{O}_{E_{8}(a_{7})}}=f+\overline{C_{ 0}\cdot e_{2}}\) where \(e_{2}\) belongs to the \(F_{4}(a_{3})\) orbit in \(\mathfrak{c}_{0}\). By the same calculation (or by direct observation), \(\mathcal{S}_{D_{4}}\cap\overline{\mathcal{O}_{D_{4}+A_{2}}}=f+\overline{C_{0} \cdot e_{1}}\) where \(e_{1}\) is in the \(A_{2}\) orbit in \(\mathfrak{c}_{0}\). Now we use the following fact (which follows from equality of dimensions): if \(\{e_{1},h_{1},f_{1}\}\) is an \(\mathfrak{sl}_{2}\)-triple in \(\mathfrak{c}_{0}\) such that \(\dim C_{0}\cdot e_{1}\) equals the codimension of \(\mathcal{O}^{\prime}\) in \(\overline{G\cdot(f+f_{1})}\), then the centralizer of \(e+e_{1}\) equals \(\mathfrak{g}^{e}\cap\mathfrak{g}^{e_{1}}\). Hence the Slodowy slice at \(f+f_{1}\) is contained in the Slodowy slice at \(f\). It follows that \(\mathcal{S}_{D_{4}+A_{2}}\cap\overline{\mathcal{O}_{E_{8}(a_{7})}}\) is isomorphic to the Slodowy slice singularity in \(F_{4}\) from \(\mathcal{O}_{A_{2}}\) to \(\mathcal{O}_{F_{4}(a_{3})}\), hence is isomorphic to \(d_{4}/\mathfrak{S}_{4}\).
The following is true in both the classical and exceptional types.
**Corollary 4.12**.: _Let \(\mathcal{O}^{\prime}=\mathcal{O}^{\prime}_{e}\) be special. The action of \(C(\mathfrak{s})\) on \(\mathfrak{c}(\mathfrak{s})\) induces an action of \(A(e)\) on the set of simple components of \(\mathfrak{c}(\mathfrak{s})\). Each \(A(e)\)-orbit of simple components \(\mathfrak{c}_{0}\) corresponds to a unique special nilpotent orbit \(\mathcal{O}\) in \(\mathfrak{g}\) such that \((\mathcal{O},\mathcal{O}^{\prime})\) is a minimal special degeneration. Moreover, \(\mathcal{S}_{\mathcal{O},e}\) contains a subvariety isomorphic to the minimal special nilpotent orbit closure in \(\mathfrak{c}_{0}\). All minimal special degenerations of codimension at least \(4\) arise in this way._
Proof.: We just showed this in the exceptional types when \(\mathfrak{c}_{0}\) is not simply-laced, but it also holds when \(\mathfrak{c}_{0}\) is simply-laced where it gives a minimal degeneration. It also holds in the cases of \(d_{4}/\mathfrak{S}_{4}\) and \(a_{2}/\mathfrak{S}_{2}\) from [11]. In the classical types, we showed that each simple factor of \(\mathfrak{c}(\mathfrak{s})\) leads to a unique minimal special degeneration. The \(A(e)\)-orbits on the simple factors of \(\mathfrak{c}(\mathfrak{s})\) are singletons except for the case where \(\mathfrak{c}(\mathfrak{s})\) contains a copy of \(\mathfrak{so}_{4}\). This corresponds to the case of \([2A_{1}]^{+}=d_{2}^{+}\).
## 5. \(A(e)\)-action on slices
In this section we compute the action of \(A(e)\) on the slice \(\mathcal{S}_{\mathcal{O},e}\) for both minimal degenerations and minimal special degenerations in the classical types, and determine when the action is outer. This was done in the exceptional groups in [11] for minimal degenerations. There is only a single case of a minimal special degeneration not covered by those results: the case of \(e\in\mathcal{O}_{2A_{2}}\) in type \(E_{8}\) from Proposition 4.10, which we now denote as \([2g_{2}^{sp}]^{+}\).
### Union of simple surface singularities
Recall that \(C(\mathfrak{s})\) acts on \(\mathcal{S}_{\mathcal{O},e}\). In the case of a simple surface singularity, as discussed in the introduction, we use Slodowy's notion of action, which amounts to the action on the projective lines in the exceptional fiber. Even when \(\mathcal{S}_{\mathcal{O},e}\) is not irreducible, we want to describe how \(C(\mathfrak{s})\) permutes the projective lines in the fiber, something we did in the exceptional groups. Since \(C^{\circ}(\mathfrak{s})\) acts trivially, we get a permutation action of \(A(e)\simeq C(\mathfrak{s})/C^{\circ}(\mathfrak{s})\) on the \(\mathbb{P}^{1}\)'s. We call this the outer action of \(A(e)\) on the slice.
To compute the action for \(\dim(\mathcal{S}_{\mathcal{O},e})=2\), we use [11, Lemma 5.8]. We do not assume that the orbits are special, so the set-up is a minimal degeneration \((\mathcal{O}_{\lambda},\mathcal{O}_{\mu})\) in the classical groups where \(\dim(\mathcal{S}_{\mathcal{O},e})=2\) for \(e\in\mathcal{O}_{\mu}\), and where \(\lambda,\mu\) are the appropriate partitions indexing the nilpotent orbits. Let \(\mathfrak{n}_{P}\) denote the nilradical of the Lie algebra of a parabolic subgroup \(P\) of \(G\) such that \(\mathcal{O}_{\lambda}\) is Richardson for \(\mathfrak{n}_{P}\). Then we have the proper, surjective map \(\pi:G\times^{P}\ \mathfrak{n}_{P}\to\overline{\mathcal{O}}_{\lambda}\), which is generically finite. Below, we will always choose \(\mathfrak{n}_{P}\) so that \(\pi\) is birational.
Next, assume that the reductive centralizer for an element in \(\mathcal{O}_{\lambda}\) is semisimple. Let \(\mathcal{O}_{1},\mathcal{O}_{2},\dots,\mathcal{O}_{t}\) be the maximal orbits in the complement of \(\mathcal{O}_{\lambda}\) in its closure. Assume that all \(\mathcal{O}_{i}\) are codimension two in \(\bar{\mathcal{O}}_{\lambda}\). Let \(e_{i}\in\mathcal{O}_{i}\). Let \(r_{i}\) equal the number of \(A(e_{i})\)-orbits on \(\pi^{-1}(e_{i})\). Then as in [11, Lemma 5.8], if \(G\) is connected, \(\sum_{i}r_{i}\) equals the rank of \(\mathfrak{g}\) minus the semisimple rank of the Levi subgroup of \(P\). The quantities \(r_{i}\) will be enough to determine the outer action.
Remarkably, in types \(B\) and \(C\), the actions are as large as possible, as they were in the exceptional types (at least given the size of \(A(e)\)).
**Proposition 5.1**.: _In the classical groups \(B,C,D\) (working in the full orthogonal group for \(D\)),_
1. _If_ \(\mathcal{S}_{\mathcal{O},e}\) _is a simple surface singularity of type_ \(D_{k+1}\) _or_ \(A_{2k-1}\)_, then the_ \(A(e)\)_-action upgrades these singularities to_ \(C_{k}\) _and_ \(B_{k}\)_, respectively._
2. _If_ \(\mathcal{S}_{\mathcal{O},e}\) _is a union of two branches of type_ \(A_{2k-1}\)_, the_ \(A(e)\)_-action is_ \([2B_{k}]^{+}\) _as described in_ §1.3_._
The proof will occupy the remainder of this section. For the moment let \(G=\mathrm{O}(V)\) or \(\mathrm{Sp}(V)\), so that, as noted in §4, a reductive subgroup of the centralizer \(G^{e}\) of \(e\) in \(G\) is \(C(\mathfrak{s})\), which is a product of orthogonal and symplectic groups.
Then the component group \(A(e):=G^{e}/(G^{e})^{\circ}\) of \(e\) with partition \(\mu\) is generated by the corners of the Young diagram corresponding to parts \(s\) with \(s\not\equiv\epsilon\). Each such part \(s\) determines a copy of an orthogonal group in \(C(\mathfrak{s})\) and we denote by \(x_{s}\) an element of determinant \(-1\) in each orthogonal group. Then \(A(e)\) is elementary abelian \(\mathbf{Z}_{2}^{r}\) where \(r\) is the number of distinct parts \(s\) with \(s\not\equiv\epsilon\).
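As an illustration (a sketch only, working in the full orthogonal or symplectic group as above), the generators \(x_{s}\) can be read off from the partition as follows.

```python
def a_e_generators(mu, lie_type):
    """Distinct parts s of mu with s not congruent to epsilon, each giving a generator x_s.

    For the symplectic group ('C') the orthogonal factors O(m(s)) come from even parts;
    for the orthogonal groups ('B' or 'D') they come from odd parts.
    """
    eps = 1 if lie_type == 'C' else 0
    return sorted({s for s in mu if s % 2 != eps}, reverse=True)

# mu = [6, 6, 4, 3, 3] in type C: generators x_6 and x_4, so A(e) is (Z/2)^2.
print(a_e_generators([6, 6, 4, 3, 3], 'C'))   # [6, 4]
```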
### Type \(b\) degeneration
This is the case of a simple surface singularity of type \(D_{k+1}\) and it arises whenever \((\lambda,\mu)\) is locally \((\lambda^{\prime},\mu^{\prime}):=([a+2k,a],[a+2k-2,a+2])\), by [12]. Here \(k\geq 2\). This is a valid pair of partitions when \(a\) is even if \(\mathfrak{g}\) is of type \(C\) and odd if \(\mathfrak{g}\) is of types \(B\) or \(D\). By Proposition 2.3, we can replace \((\lambda,\mu)\) by \((\lambda^{\prime},\mu^{\prime})\). We note that the centralizer of \(e_{1}\) in \(G(V_{1})\) is a subgroup of the centralizer of \(e\) in \(G\). This gives an embedding of the component group of \(e_{1}\) in \(G(V_{1})\), which is the Klein 4-group \(V_{4}\), into \(A(e)\), given by sending \(A(e_{1})\) to the subgroup of \(A(e)\) generated by \(x_{a\!+\!2k\!-\!2}\) and \(x_{a\!+\!2}\). The other parts contributing to \(A(e)\) act trivially on \(\mathfrak{g}(V_{1})\) and hence trivially on the slice.
#### 5.2.1. \(G\) is of type \(C\), \(a\) even
The weighted Dynkin diagram for \(\mathcal{O}_{\lambda}\) is
\[\overbrace{2\dots 2}^{k}\overbrace{0202\dots 02}^{a/2}\]
where the final node corresponds to the long simple root. Taking the associated parabolic subgroup \(P\), the map \(\pi\) above is birational.
If \(a=0\), we are in type \(C_{k}\) and \(\mathcal{O}_{\lambda}\) is regular. There is a unique minimal degeneration to \(\mathcal{O}_{\mu}\), the subregular orbit. Hence, using [11, Lemma 5.8], there are exactly \(k\) orbits for \(A(e)\) on the \(\mathbb{P}^{1}\)'s in the fiber, which implies the action on \(D_{k+1}\) must be \(C_{k}\). Indeed, the sole
\(A(e)\)-orbit of size two comes from the orbital variety corresponding to the long root. (We could use knowledge of the Springer fiber in this case too).
Next if \(a>0\), which means \(a\geq 2\) since \(a\) is even, there is the degeneration of \(\lambda\) to \(\mu\) but also to \(\mu^{\prime}=[a{+}2k,a{-}2,2]\). The latter minimal degeneration is equivalent to \(([a],[a{-}2,2])\), which is a simple surface singularity of type \(D_{\frac{a}{2}+1}\) with action of \(A(e_{\mu^{\prime}})\) having \(\frac{a}{2}\) orbits, by induction. Since the total number of component group orbits on the fiber is \(k+\frac{a}{2}\), that leaves \(k\) orbits corresponding to the degeneration to \(e=e_{\mu}\). This forces the action on \(D_{k+1}\) to be non-trivial, and it must be \(C_{k}\), as desired. Indeed, we can see this explicitly by instead using the parabolic \(P\) for the diagram
\[\overbrace{0202\ldots 02}^{a/2}\overbrace{2\ldots 2}^{k},\]
which is also birational to \(\mathcal{O}_{\lambda}\). Then the orbital varieties for \(\mathcal{O}_{\mu}\) correspond to the last \(k\) two's. The last node gives the \(A(e)\)-orbit with two elements.
Finally, the element \(x_{a+2k-2}x_{a+2}\) acts trivially on the fibers, since it belongs to the center of \(G\). So both \(x_{a+2k-2}\) and \(x_{a+2}\) will yield the outer action on the slice.
#### 5.2.2. \(G\) is of type \(D\), \(a\) odd
The weighted Dynkin diagram for \(\mathcal{O}_{\lambda}\) is
\[\overbrace{2\ldots 2}^{k-1}\overbrace{0202\ldots 02}^{(a-1)/2}2\]
where the two final nodes correspond to orthogonal simple roots and the first \(k-1\) nodes form a subsystem of type \(A_{k-1}\). Taking the associated parabolic subgroup \(P\), the map \(\pi\) above is birational. This is similar to the type \(C\) case. If we work in the full orthogonal group then \(A(e)\) permutes the two \(\mathbf{P}^{1}\)'s corresponding to the tails of the Dynkin diagram. Finally, the element \(x_{a+2k-2}x_{a+2}\) acts trivially on the fiber, since it belongs to the center of \(G\). So both \(x_{a+2k-2}\) and \(x_{a+2}\) will yield the outer action on the slice.
### Type \(c\) singularity
This is a simple surface singularity of type \(A_{2k-1}\) and it arises whenever \((\lambda,\mu)\) is equivalent to
\[([a{+}2k{+}1,a,a],[a{+}2k{-}1,a{+}1,a{+}1]).\]
Here, \(a\) is even for types \(B,D\) and odd for type \(C\). As in §5.2 using Proposition 2.3, we can first reduce to the case of \(([a{+}2k{+}1,a,a],[a{+}2k{-}1,a{+}1,a{+}1])\) where \(G\) is type \(B\) for \(a\) even and type \(C\) for \(a\) odd.
The \(A_{2k-1}\) simple surface singularity arises from the diagonal cyclic group \(\Gamma\) of order \(2k\) in \(\operatorname{SL}_{2}(\mathbb{C})\). The centralizer of \(\Gamma\) in \(\operatorname{SL}_{2}(\mathbb{C})\) is the diagonal one-dimensional torus, leading to an invariant of degree two for the action of \(\Gamma\) on \(\mathbb{C}^{2}\). Since the isomorphism to the slice is \(\mathbb{C}^{*}\)-equivariant, we see that the slice, upon projection to \(\mathfrak{c}(\mathfrak{s})\), must be isomorphic to the Lie algebra of the torus for \(\operatorname{O}(2)\) corresponding to the part \(a{+}1\) in \(\mu\). Since the outer automorphism of \(\mathbb{C}^{2}/\Gamma\) acts non-trivially on the diagonal torus, we see that \(x_{a+1}\) gives rise to the action, while \(x_{a{+}2k{-}1}\) acts trivially.
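The invariant-theoretic description of the \(A_{2k-1}\) singularity used here can be checked symbolically; the following sympy sketch (illustrative only, with the sample value \(k=3\)) records the standard generators and relation for \(\mathbb{C}^{2}/\Gamma\).

```python
import sympy as sp

k = 3
x, y = sp.symbols('x y')
u, v, w = x * y, x**(2 * k), y**(2 * k)        # generators of the Gamma-invariants
assert sp.simplify(v * w - u**(2 * k)) == 0     # the A_{2k-1} relation  v w = u^{2k}
# The swap x <-> y fixes u and exchanges v and w; it conjugates the diagonal torus
# diag(t, 1/t) to diag(1/t, t), i.e. it acts non-trivially on the torus, as used above.
```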
### Type \(d\) degeneration
This is again a simple surface singularity of type \(A_{2k-1}\) and it arises whenever \((\lambda,\mu)\) is equivalent to
\[([a{+}2k{+}1,a{+}2k{+}1,a],[a{+}2k,a{+}2k,a{+}2]).\]
This is a valid pair of partitions when \(a\) is even in type \(C\) and odd in types \(B\) or \(D\).
As in the previous case, it is enough to work it out for the case \(\lambda=[a{+}2k{+}1,a{+}2k{+}1,a]\) and \(\mu=[a{+}2k,a{+}2k,a{+}2]\), where \(G\) is of type \(C\) for \(a\) even and of type \(B\) when \(a\) is odd. As
before, we can detect the action by looking at the action of \(\mathfrak{c}(\mathfrak{s})\). Thus \(x_{a+2k}\) acts by outer action and \(x_{a+2}\) acts trivially.
### Type \(e\) degeneration
This is a union of simple surface singularities \(A_{2k-1}\cup A_{2k-1}\) and it arises whenever \((\lambda,\mu)\) is equivalent to
\[([a{+}2k,a{+}2k,a,a],[a{+}2k{-}1,a{+}2k{-}1,a{+}1,a{+}1]).\]
Here, \(a\) is odd in type \(C\) and even in types \(B\) or \(D\). As before, we are reduced to the case of \(\lambda=[a{+}2k,a{+}2k,a,a]\) and \(\mu=[a{+}2k{-}1,a{+}2k{-}1,a{+}1,a{+}1]\) in type \(D\) for \(a\) even and type \(C\) for \(a\) odd. Here \(C(\mathfrak{s})\simeq\operatorname{O}(2)\times\operatorname{O}(2)\).
The full automorphism group of the singularity is dihedral of order eight. We want to show \(A(e)\) embeds as the Klein 4-group generated by the reflections through the midpoints of edges of the square. This will follow if we show that there is at least one orbit of size 4 of \(A(e)\) on the fiber over \(e\). This will force there to be \(k-1\) orbits of size 4 on the \(4k-2\) projective lines and one orbit of size 2.
By the method of the previous two subsections, the element \(x_{a+2k-1}x_{a+1}\) must fix each irreducible component and act by outer automorphism on each one individually. This is because it acts by \(-1\) on the two-dimensional space \(\mathfrak{c}(\mathfrak{s})\). The actions of \(x_{a+2k-1}\) and \(x_{a+1}\) can be determined in each case separately. Both of them will interchange the two irreducible components.
#### 5.5.1. C case
The Dynkin diagram of \(\mathcal{O}_{\lambda}\) is
\[\overbrace{0202\dots 02}^{k}\overbrace{00020002\dots 0002}^{(a-1)/2}00.\]
Using the method of §5.2, if \(a=1\), we find there are \(k\) orbits for the unique minimal degeneration to \(\mathcal{O}_{\mu}\). At the same time, there are \(4k-2\) projective lines in the fiber over \(e\). Since \(A(e)\) is isomorphic to \(V_{4}\), the possible orbit sizes are 1, 2, and 4. The only way for this to work is for there to be \(k-1\) orbits of size 4 and one orbit of size 2. Therefore the action is as desired.
When \(a>1\), there is another minimal degeneration to \(\mathcal{O}_{\mu^{\prime}}\) with \(\mu^{\prime}=[(a{+}2k)^{2},(a{-}1)^{2},2]\). Then \((\lambda,\mu^{\prime})\) is equivalent to \(([a,a],[(a{-}1)^{2},2])\), which is a type \(d\) degeneration and has the form \(B_{\frac{a-1}{2}}\). This degeneration therefore accounts for \(\frac{a-1}{2}\) of the \(k+\frac{a-1}{2}\) orbits, leaving \(k\) for the studied minimal degeneration, and the result follows as in the \(a=1\) case.
#### 5.5.2. D case
If \(G\) is the full orthogonal group, there is a single orbit \(\mathcal{O}_{\lambda}\) with the given partition. Working in the special orthogonal group, there are two very even orbits with this partition, interchanged by the action of any element of \(\mathrm{O}(N)\) not in \(\operatorname{SO}(N)\). This is where the two irreducible components come from, as both orbits degenerate to \(\mu\), which contains an element fixed by this action. Hence, the result follows.
### \(G\) is special orthogonal
When \(G\) is special orthogonal, there are two situations where the component group action changes.
The first is the type \(b\) singularity when \(\mu\) has exactly two odd parts (e.g., \(\mu=[8,8,5,3]\) or \(\mu=[8,8,5,5]\)); in this case the component group is trivial. If there were more than two odd parts for this degeneration, there would have to be at least 3 distinct odd parts, which would guarantee the non-trivial action of \(A(e)\).
The second is the type \(e\) singularity when \(\mu\) again has only the odd parts that appear in the local version of \(\mu\) in Table 2. Otherwise, \(\mu\) would have at least two additional odd parts (possibly equal), which would ensure the same action by \(V_{4}\). Now if \(\mu\) has only those odd parts, say \([(a{+}2k{-}1)^{2},(a{+}1)^{2}]\), then since its other parts are even, the partition \(\lambda\) must be very even.
Then there are two orbits corresponding to \(\lambda\) and \(A(e)\simeq\mathfrak{S}_{2}\) acts by outer automorphism on each degeneration to \(\mu\), so both are of type \(B_{k}\).
### Dimension four or greater
In [10], we studied the image of \(C(\mathfrak{s})\) in \(\operatorname{Aut}(\mathfrak{c}(\mathfrak{s}))\) via the adjoint action in the exceptional groups, and then restricted the action to orbits of simple factors of \(\mathfrak{c}(\mathfrak{s})\). We observed using [12] (also computable using [1]) that \(C(\mathfrak{s})\) tends to act by outer automorphisms of simple factors of \(\mathfrak{c}(\mathfrak{s})\) that admit outer automorphisms. As in Corollary 4.12, the minimal (and minimal special degenerations) are controlled by \(\mathfrak{c}(\mathfrak{s})\) for most cases when \(\dim(\mathcal{S}_{\mathcal{O},e})\geq 4\). We then recorded this outer action on minimal singularities \(a_{n}\), \(d_{n}\), \(d_{4}\), and \(e_{6}\), when they arose.
A more intrinsic framework is to use the intersection homology \(IH^{*}(\mathcal{S}_{\mathcal{O},e})\) of \(\mathcal{S}_{\mathcal{O},e}\) under the induced action of \(A(e)\). Let \(p(X)=\sum_{i}\dim(IH^{2i}(X))\,q^{i}\). When \(\mathcal{S}_{\mathcal{O},e}\simeq\overline{\mathcal{O}_{\min.\operatorname{sp}}}\) for the minimal special orbit in the simple Lie algebra \(\mathfrak{c}_{0}\), we have
\[p(\mathcal{S}_{\mathcal{O},e})=q^{e_{1}-1}+q^{e_{2}-1}+\cdots+q^{e_{k}-1}\]
where \(e_{i}\) are the exponents of \(\mathfrak{c}_{0}\) (see [11]).
Let \(\mathfrak{c}_{0}\) be of type \(A_{k}\), \(D_{k}\), or \(E_{6}\) and \(\theta\) be an outer involution and denote by \(\mathfrak{c}_{0}^{\prime}:=\mathfrak{c}_{0}^{\langle\theta\rangle}\) the fixed subalgebra. Then \(\langle\theta\rangle\) acts trivially on the part of \(IH^{*}(\overline{\mathcal{O}}_{\min.\operatorname{sp}})\) corresponding to exponents of \(\mathfrak{c}_{0}^{\prime}\) and by the sign representation on the remaining part. In other words,
\[IH^{*}(\overline{\mathcal{O}}_{\min.\operatorname{sp}})^{\langle\theta\rangle }=IH^{*}(\overline{\mathcal{O}}_{\min.\operatorname{sp}}/\langle\theta\rangle).\]
In the case of \(\mathfrak{S}_{3}\) acting by outer automorphisms when \(\mathfrak{c}_{0}\) is of type \(D_{4}\), the \(\mathfrak{S}_{3}\)-invariants on \(IH^{*}(\overline{\mathcal{O}}_{\min.\operatorname{sp}})\) correspond to the exponents of \(G_{2}\) (namely, \(1\) and \(5\)) and \(\mathfrak{S}_{3}\) acts by the reflection representation on the two-dimensional space \(IH^{4}(\overline{\mathcal{O}}_{\min.\operatorname{sp}})\) for the two exponents of \(D_{4}\) equal to \(3\) and again
\[IH^{*}(\overline{\mathcal{O}}_{\min.\operatorname{sp}})^{K}=IH^{*}(\overline{ \mathcal{O}}_{\min.\operatorname{sp}}/K).\]
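The numerology in the last two paragraphs can be made concrete with a short sketch (illustrative only): the exponents of \(D_{4}\) are \(1,3,3,5\) and those of \(G_{2}\) are \(1,5\).

```python
from collections import Counter

def ih_poincare(exponents):
    """Coefficients of p = sum_i q^(e_i - 1), as a dictionary {power: multiplicity}."""
    return dict(Counter(e - 1 for e in exponents))

print(ih_poincare([1, 3, 3, 5]))   # D_4: {0: 1, 2: 2, 4: 1}, i.e. 1 + 2q^2 + q^4
print(ih_poincare([1, 5]))         # G_2: {0: 1, 4: 1},       i.e. 1 + q^4
```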
Since \(C^{\circ}(\mathfrak{s})\) acts trivially on \(IH^{*}(\mathcal{S}_{\mathcal{O},e})\), there is an action of \(A(e)\) on \(IH^{*}(\mathcal{S}_{\mathcal{O},e})\) and this gives an intrinsic way to see the outer action when the slice is isomorphic to the closure of a minimal special orbit, rather than appealing to the action on \(\mathfrak{c}_{0}\) itself, when \(\mathfrak{c}_{0}\) is the relevant factor of \(\mathfrak{c}(\mathfrak{s})\) as in Corollary 4.12.
### Type \(h\) singularity
This corresponds to the closure of the minimal nilpotent orbit in type \(D_{k}\). The local action of the reductive centralizer coincides with the orthogonal group \(\operatorname{O}(2k)\), which contains an outer involution of \(\mathfrak{so}_{2k}\), and so \(A(e)\) acts by outer action and the singularity coincides with \(d_{k}^{+}\).
In the case of \(G=\operatorname{SO}(2N)\), the component group \(A(e)\) will still act by outer involution in this way, except for those cases where the partition \(\mu\) contains exactly one odd part (of even multiplicity \(2k\)).
### Exceptional degenerations
#### 5.9.1. The case of \(d_{n+1}/V_{4}\)
From §4.3, \(V_{4}\) acts on \(d_{n+1}\) with \(\theta_{1}\) outer and \(\theta_{2}\) inner. Hence,
\[IH^{*}(d_{n+1}/V_{4})\simeq IH^{*}(d_{n+1}/\theta_{1})\simeq IH^{*}(b_{n}^{sp}).\]
Let \(\mathcal{S}_{\mathcal{O},e}\) be a slice of type \(h_{sp}\). Recall that there is a natural \(\operatorname{O}(2n)\)-action on \(d_{n+1}/V_{4}\), where \(\operatorname{O}(2n)\) arises from the fixed points of the \(V_{4}\)-action on \(\operatorname{O}(2n+2)\). Under the isomorphism to \(\mathcal{S}_{\mathcal{O},e}\), the \(\operatorname{O}(2n)\)-action becomes the action of \(C(\mathfrak{s})\) on \(\mathcal{S}_{\mathcal{O},e}\).
Since the action of \(\operatorname{O}(2n)\) on \(d_{n+1}\) is also inner, we find that \(\operatorname{O}(2n)\) acts trivially on \(IH^{*}(d_{n+1}/V_{4})\), and hence \(C(\mathfrak{s})\) acts trivially on \(IH^{*}(\mathcal{S}_{\mathcal{O},e})\).
On the other hand, it seems relevant that if we take the minimal degeneration to \(\mu\) in \(\mathcal{S}_{\mathcal{O},e}\), which is of type \(h\), then indeed \(A(e)\) acts by outer action on this \(d_{n}\).
#### 5.9.2. The case of \(d_{4}/\mathfrak{S}_{4}\)
From the proof of [10, Theorem 4.11], \(S_{4}\) acts on \(d_{4}\) by the semi-direct product of an inner \(V_{4}\) group and an outer \(S_{3}\) group. Hence,
\[IH^{*}(d_{4}/S_{4})\simeq IH^{*}(d_{4}/S_{3})\simeq IH^{*}(g_{2}^{sp}).\]
Let \(\mathcal{S}_{\mathcal{O},e}\) be one of the two slices of type \(d_{4}/S_{4}\). There is a natural action of \(SL_{3}\rtimes\mathfrak{S}_{2}\) on \(d_{4}/S_{4}\), which is the fixed points of \(S_{4}\) on the adjoint group of type \(D_{4}\). This action, as in the previous subsection, corresponds to \(C(\mathfrak{s})\) on \(\mathcal{S}_{\mathcal{O},e}\) under the equivariant isomorphism. The action of the \(\mathfrak{S}_{2}\) is inner, so we again find that \(A(e)\) acts trivially on \(IH^{*}(\mathcal{S}_{\mathcal{O},e})\).
The minimal degeneration in \(\mathcal{S}_{\mathcal{O},e}\) corresponds to an \(a_{2}\) singularity coming from \(\mathfrak{c}(\mathfrak{s})\) and we note again that the action on this singularity is outer, \(a_{2}^{+}\).
## 6. Action of the canonical quotient
Let \(\mathcal{S}_{\mathcal{O},e}\) be the slice for a minimal special degeneration. In this section we explain how the kernel \(H\) of the homomorphism from \(A(e)\) to Lusztig's canonical quotient \(\bar{A}(e)\) acts on the slice \(\mathcal{S}_{\mathcal{O},e}\). When \(H\) acts by outer action, the exchange of singularities under the duality is not as expected.
### Exceptional groups
**Proposition 6.1**.: _Assume \(G\) is connected of exceptional type and \(H\) is nontrivial for \(A(e)\). Then there exists a unique minimal special degeneration to \(e\) and the degeneration is \(C_{k}\) for \(k\geq 2\), \(\mu\), or \(d_{4}/\mathfrak{S}_{4}\)._
_In the \(C_{k}\) cases, \(H=A(e)=\mathfrak{S}_{2}\) acts by outer automorphism on \(\mathcal{S}_{\mathcal{O},e}\)._
_In the one case \((D_{7}(a_{1}),E_{8}(b_{6}))\), where the singularity is of type \(\mu\) (which is \(C_{2}\) upon normalization), \(H\) acts trivially on \(\mathcal{S}_{\mathcal{O},e}\) and the induced action of \(\bar{A}(e)\) is by outer automorphism._
_In the two cases where \(\mathcal{S}_{\mathcal{O},e}\) is \(d_{4}/\mathfrak{S}_{4}\), the action of \(H\) is trivial on \(IH^{*}(\mathcal{S}_{\mathcal{O},e})\), however the action of \(H\) on the minimal degeneration to \(e\) is outer._
Proof.: The cases in the exceptional groups where \(H\) is nontrivial for \(A(e)\) can be read off from [11]. They are \(A_{2}\) and \(F_{4}(a_{2})\) in type \(F_{4}\); \(A_{3}+A_{2}\) and \(E_{7}(a_{4})\) in type \(E_{7}\); and \(A_{3}+A_{2}\), \(D_{4}+A_{2}\), \(E_{7}(a_{4})\), \(D_{5}+A_{2}\), \(E_{8}(b_{6})\), \(D_{7}(a_{1})\), and \(E_{8}(b_{4})\) in type \(E_{8}\). In all these cases, there is a unique \(\mathcal{O}\) such that \((\mathcal{O},\mathcal{O}^{\prime}_{e})\) is a minimal (special) degeneration.
If \(e\) does not belong to the \(E_{8}(b_{6})\) orbit and is not of type \(A_{2}\) in \(F_{4}\) or \(D_{4}+A_{2}\) in \(E_{8}\), then \(A(e)\simeq\mathfrak{S}_{2}\) and \(H=A(e)\), so that \(\bar{A}(e)\) is trivial. Since we already know from [10] that \(A(e)\) is acting by outer action, we see that \(H\) does too.
If \(e\) is not of type \(A_{2}\) in \(F_{4}\) or \(D_{4}+A_{2}\) in \(E_{8}\), then there exists a unique \(\mathcal{O}\) such that \(\mathcal{O}\) is a minimal (special) degeneration to \(e\) and \(A(e)\) acts non-trivially on \(\mathcal{S}_{\mathcal{O},e}\). The singularity of \(\mathcal{S}_{\mathcal{O},e}\) is a simple surface singularity \(D_{k+1}\), yielding a \(C_{k}\) singularity. It follows that \(H\) itself acts non-trivially on \(IH^{*}(\mathcal{S}_{\mathcal{O},e})\).
For the \(E_{8}(b_{6})\) orbit, we have \(A(e)\simeq\mathfrak{S}_{3}\) and \(H\) is the cyclic group of order \(3\). The slice of \(D_{7}(a_{1})\) at \(e\) is of type \(\mu\), which is not normal, but has normalization of type \(A_{3}=D_{3}\), and we previously computed that \(A(e)\) acts by outer action upon the normalization, so that the normalization is \(C_{2}\). Since the elements of \(H\) have order \(3\), they cannot give an outer action, and so the outer action descends to \(\bar{A}(e)\).
If \(e\) is of type \(A_{2}\) in \(F_{4}\) or \(D_{4}+A_{2}\) in \(E_{8}\), then \(\mathcal{S}_{\mathcal{O},e}\) is isomorphic to \(d_{4}/S_{4}\). This occurs for \((F_{4}(a_{3}),A_{2})\) and \((E_{8}(a_{7}),D_{4}+A_{2})\). By §5.7, the action of \(A(e)\) on \(IH^{*}(\mathcal{S}_{\mathcal{O},e})\) is trivial.
On the other hand, the action of \(A(e)\) is non-trivial on \(IH^{*}(\mathcal{S}_{\mathcal{O}^{\prime\prime},e})\), where \(\mathcal{O}^{\prime\prime}\) is the minimal non-special orbit between \(\mathcal{O}\) and \(\mathcal{O}_{e}\).
### Classical types
Let \(X\) be of type \(B,C,D\), or \(C^{\prime}\). Let \(\epsilon\) and \(\epsilon^{\prime}\) be defined for the given type. For a partition \(\mu\), define
\[R:=\{s\ |\ s\not\equiv\epsilon,m_{\mu}(s)\neq 0,h_{\mu}(s)\not\equiv\epsilon^{ \prime}\}.\]
For \(s\in R\), define \(s^{\prime}\) to satisfy \(s^{\prime}\not\equiv\epsilon\), \(m(s^{\prime})\neq 0\), and maximal for this property with \(s^{\prime}<s\). Set \(s^{\prime}=0\) if no such \(s^{\prime}\) exists and set \(x_{0}=1\) in \(A(e)\). Define \(H\) to be the subgroup of \(A(e)\) generated by the following elements of \(A(e)\):
\[H:=\langle x_{s}x_{s^{\prime}}\ |\ s\in R\rangle. \tag{17}\]
By [10], the quotient of \(A(e)\) by \(H\) gives Lusztig's canonical quotient in types \(B\), \(C\). In type \(D\) we get an extra factor of \(\mathbf{Z}_{2}\), as opposed to working in the special orthogonal group. In type \(C^{\prime}\), we get something new and we take this as the definition of the canonical quotient (we can give a definition that is similar to the characterization in _op. cit._). Let \(r=\#R\). Then the canonical quotient \(\bar{A}(e)\) is elementary abelian with \(r\) generators in types \(C,D,C^{\prime}\) and \(r-1\) in type \(B\).
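The subgroup \(H\) is defined purely combinatorially from the partition \(\mu\), so it can be computed mechanically. The following sketch is not taken from the paper and is only illustrative; the parities \(\epsilon\), \(\epsilon^{\prime}\) and the height function \(h_{\mu}\) depend on the type and on the conventions fixed earlier, so they are supplied as inputs rather than hard-coded.

```python
# Illustrative sketch: compute the index set R and the generators x_s x_{s'} of H
# for a partition mu.  The parities eps, eps_p and the height function `height`
# follow the conventions fixed earlier in the paper and are passed in by the caller.

def multiplicity(mu, s):
    """m_mu(s): number of parts of mu equal to s."""
    return mu.count(s)

def generators_of_H(mu, height, eps, eps_p):
    """Return pairs (s, s') indexing the generators x_s x_{s'} of H (s' = 0 stands for x_0 = 1)."""
    parts = sorted(set(mu), reverse=True)
    # R = { s : s not congruent to eps (mod 2), m_mu(s) != 0, h_mu(s) not congruent to eps' }
    R = [s for s in parts
         if s % 2 != eps % 2
         and multiplicity(mu, s) != 0
         and height(s) % 2 != eps_p % 2]
    gens = []
    for s in R:
        # s' is the largest part of mu with s' < s and s' not congruent to eps; else s' = 0.
        smaller = [t for t in parts if t < s and t % 2 != eps % 2]
        gens.append((s, max(smaller) if smaller else 0))
    return gens
```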
Let \(G\) be a classical group.
**Proposition 6.2**.: _If the type of the minimal special degeneration is type \(C_{n}\) for \(n\geq 2\) and \(l\equiv\epsilon^{\prime}\) in Table 1, then \(H\) acts non-trivially on the slice. Otherwise, \(H\) acts trivially on \(\mathcal{S}_{\mathcal{O},e}\)._
_When the slice \(\mathcal{S}_{\mathcal{O},e}\simeq d_{n}/V_{4}\), \(H\) acts by outer automorphism on the \(\mathcal{S}_{\mathcal{O}^{\prime\prime},e}\), where \(\mathcal{O}^{\prime\prime}\) is the minimal non-special orbit between \(\mathcal{O}\) and \(\mathcal{O}_{e}\)._
Proof.: In Table 1, the element \(e\in\mathcal{O}_{\mu}\). For the type \(B_{n}\) singularities:
* Type \(c\). The elements acting non-trivially on the slice involve \(x_{a+1}\). But the part \(a{+}2n{-}1\) has height \(l+1\) and the part \(a+1\) has height \(l+3\), both of which are congruent to \(\epsilon^{\prime}\). Hence, none of the elements generating \(H\) involve \(x_{a+1}\) and \(H\) acts trivially.
* Type \(d\). The elements acting non-trivially on the slice involve \(x_{a+2n}\). But the part \(a{+}2n\) has height \(l+2\), which is congruent to \(\epsilon^{\prime}\). For \(s\) minimal for \(s>a{+}2n\) and \(s\not\equiv\epsilon\), we must have \(h(s)\) even since the parts between \(s\) and \(a{+}2n\) are congruent to \(\epsilon\) and so come with even multiplicity. Hence none of the elements generating \(H\) involve \(x_{a+2n}\) and \(H\) acts trivially.
* Type \(e\). The elements acting non-trivially on the slice involve \(x_{a+2n-1}\) or \(x_{a+1}\). Both of these parts have height congruent to \(\epsilon^{\prime}\), so as in the type \(d\), none of the elements generating \(H\) involve either part and \(H\) acts trivially.
Next, we treat the case of type \(b\). Here, \(H\) acts non-trivially if either \(x_{a+2n-2}\) or \(x_{a+2}\) are involved in a generator of \(H\), but not both. The height of \(a{+}2n{-}2\) is \(l+1\) and \(a+2\) is \(l+2\). If \(l\equiv\epsilon^{\prime}\), then \(x_{a+2n-2}x_{a+2}\) is in \(H\), but no other generator involves \(a{+}2n{-}2\) since \(s\) minimal for \(s>a{+}2n{-}2\) and \(s\not\equiv\epsilon\) must have \(s\equiv\epsilon^{\prime}\). So \(H\) acts trivially. But if \(l\not\equiv\epsilon^{\prime}\), then some element of the form \(x_{a+2}x_{s^{\prime}}\) is in \(H\) and \(H\) does not act trivially on the slice. This happens exactly when the second diagram in Table 2 occurs, for the upper right \(C_{n+1}\) singularity. Hence it is denoted \(C_{n+1}^{*}\).
For the exceptional type \(g_{sp}\) there is no outer \(A(e)\)-action. However, \(H\) acts non-trivially on \(\mathcal{S}_{\mathcal{O}^{\prime\prime},e}\). The proof is similar to the above cases, as is the proof that \(H\) acts trivially for type \(h\).
## 7. Combinatorial statement of duality, classical groups
Let \((\lambda,\mu)\) be a minimal special degeneration in types \(B,C,D\) or \(C^{\prime}\). Then all three of \((f(\lambda),f(\mu))\), \((d(\mu),d(\lambda))\), and \((d_{LS}(\mu),d_{LS}(\lambda))\) are minimal special degenerations. We now prove that the four types of singularities are given by Figure 2.
There is a bit more going on in the first quartet, where the partition pattern of type \(c\) is interchanged under internal duality with that of type \(f^{1}_{sp}\), and that of type \(d\) is interchanged under internal duality with type \(f^{2}_{sp}\). We write \(d(X)\) for the type obtained from type \(X\) under internal duality.
**Lemma 7.1**.: _The vertical arrows in Figure 2 are correct._
_In particular, if \(l\equiv\epsilon^{\prime}_{X}\), then the singularity of type \(b\) is interchanged with the minimal special \(g_{sp}\)._
_If \(l\not\equiv\epsilon^{\prime}_{X}\), then the singularity of type \(b\) is interchanged with the minimal special \(h\)._
_Each of the two types of \(B_{n}\) singularities switches with a corresponding type of \(b^{sp}_{n}\) singularity: type \(c\) with \(f^{1}_{sp}\) and type \(d\) with \(f^{2}_{sp}\). The type \(e\) singularity switches with type \(h_{sp}\) and type \(a\) goes to type \(a\)._
Proof.: If \((\lambda,\mu)\) becomes \((\lambda^{\prime},\mu^{\prime})\) after removing \(l\) rows and \(s\) columns, then clearly \((d(\mu),d(\lambda))\) becomes \((d(\mu^{\prime}),d(\lambda^{\prime}))\) after removing \(s\) rows and \(l\) columns. So it is sufficient to work with the irreducible forms in Tables 1 and 2 to understand how a pair of partitions behaves under \(d\).
A quick check shows that, under the transpose operation on partitions, the partition in the table of type \(c\) is interchanged with the one of type \(f^{1}_{sp}\); the type \(d\) partition is interchanged with the one of type \(f^{2}_{sp}\); the one of type \(e\) is interchanged with the one of type \(h_{sp}\); the one of type \(b\) is interchanged with the partition found in either \(g_{sp}\) or \(h\); and type \(a\) is self-dual.
The behavior of \(\epsilon\) and \(\epsilon^{\prime}\) under \(d\) is as follows:
\[(\epsilon,\epsilon^{\prime})_{X}+(\epsilon^{\prime},\epsilon)_{d(X)}\equiv(1,1).\]
As a result, for the interchange of the first three partition types described above, the switching of \(l\) and \(s\) upon going from \((\lambda,\mu)\) to \((d(\mu),d(\lambda))\) agrees with the restriction on \(l\) and \(s\) in the dual type \(d(X)\) given in the Tables. However, for type \(b\), when \(l\equiv\epsilon^{\prime}_{X}\), the interchange is with type \(g_{sp}\) and when \(l\not\equiv\epsilon^{\prime}_{X}\), the interchange is with type \(h\). The self-dual type \(a\) is clear.
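Since the proof reduces to transposing the irreducible local forms in Tables 1 and 2, this step is easy to experiment with by machine. The following small helper is illustrative only; it computes the transpose of a partition, the basic operation used in the proof above, while the duality \(d\) itself may also involve the appropriate \(B\)/\(C\)/\(D\)-collapse, which is not reproduced here.

```python
# Illustrative helper: the transpose (conjugate) of a partition.

def transpose(partition):
    """Transpose of a partition given as a weakly decreasing list of positive integers."""
    if not partition:
        return []
    return [sum(1 for p in partition if p >= i) for i in range(1, partition[0] + 1)]

lam = [5, 5, 3, 1]
print(transpose(lam))                      # [4, 3, 3, 2, 2]
print(transpose(transpose(lam)) == lam)    # True: the transpose is an involution
```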
Next we want to write down the rules for the horizontal arrows in Figure 2, where \(f\) maps type \(X\) to type \(f(X)\). First, we start with the case of type \(b\), the \(C_{n}\) singularity from [10]. In that case, we can write \((\lambda,\mu)\) locally as \(([(2n{+}s)^{t},s^{u}],[(2n{+}s)^{t-1},2n{-}2{+}s,s{+}2,s^{u-1}])\) for positive integers \(t\) and \(u\), where \(s\not\equiv\epsilon_{X}\).
**Lemma 7.2**.: _Assume the degeneration is of type \(b\) for a given \(n\). When \(l\equiv\epsilon^{\prime}_{X}\), under the \(f\) map, the degeneration \((\lambda,\mu)\) is carried to a singularity of type_
\[\begin{cases}e&\text{if }t\geq 2,\ u\geq 2,\\ d&\text{if }t\geq 2,\ u=1,\\ c&\text{if }t=1,\ u\geq 2,\\ C_{n+1}&\text{if }t=u=1.\end{cases}\]
_When \(l\not\equiv\epsilon^{\prime}_{X}\), then type \(b\) is exchanged with type \(b\) with \(n\) replaced by \(n{-}1\), that is \(C_{n-1}\)._
Proof.: Since \(l\) rows are removed, \(h_{\lambda}(2n+s)=l+1\) and \(h_{\lambda}(s)=l+1+m_{\lambda}(s)\). Also, \(h_{\mu}(2n{-}2+s)=l+1\) and \(h_{\mu}(s+2)=l+2\) and \(m_{\mu}(2n{-}2+s)=m_{\mu}(s{+}2)=1\) if \(n>2\); and \(h_{\mu}(2n{-}2+s)=h_{\mu}(s{+}2)=l{+}2\) and \(m_{\mu}(s{+}2)=2\) if \(n=2\). All these parts are not congruent to \(\epsilon_{X}\) since \(s\not\equiv\epsilon\).
If \(l\not\equiv\epsilon^{\prime}_{X}\), then \(h_{\lambda}(2n{+}s)\equiv\epsilon^{\prime}_{X}\) and \(h_{\lambda}(s)+m_{\lambda}(s)\equiv\epsilon^{\prime}_{X}\). In particular, in Lemma 2.2, \(2n{+}s\) obeys line 1 or 3, and \(s\) obeys lines 2 or 3. Hence the partition \([2n{+}s,s]\) in \(\lambda\) gets replaced by \([2n{+}s{-}1,s{+}1]\) in \(f(\lambda)\) regardless of the parities of \(m_{\lambda}(2n{+}s)\) and \(m_{\lambda}(s)\). Moreover, \([2n{-}2{+}s,s{+}2]\) in \(\mu\) goes to \([2n{-}3{+}s,s{+}3]\) since \(s{+}2\) obeys line 2 if \(n>2\) and line 3 if \(n=2\). Hence we end up with \(f(\lambda),f(\mu)\) locally equal to
\[([2n{-}1{+}s,s{+}1],[2n{-}3{+}s,s{+}3]),\]
which is of type \(C_{n-1}\) with \(s+1\) rows removed. Note that \(s+1\not\equiv\epsilon_{f(X)}\) since \(\epsilon_{X}\) and \(\epsilon_{f(X)}\) have different parities.
Now suppose \(l\equiv\epsilon^{\prime}\). There are four cases to consider, depending on \(t=m_{\lambda}(2n{+}s)\) and \(u=m_{\lambda}(s)\). Since \(h_{\lambda}(2n{+}s)\not\equiv\epsilon^{\prime}_{X}\), if \(m(2n{+}s)\geq 2\) the last two values of \(2n{+}s\) in \(\lambda\) are unchanged in \(f(\lambda)\), since lines two or four apply in Lemma 2.2. But if \(m(2n{+}s)=1\), then line two applies and \(2n{+}s\) becomes \(2n{+}s{+}1\). Next, \(h_{\lambda}(s)+m_{\lambda}(s)\not\equiv\epsilon^{\prime}_{X}\), so we are in the setting of lines one and four of the lemma. Thus the first two values of \(s\) are unchanged if \(m(s)\geq 2\), and the sole value of \(s\) changes to \(s{-}1\) if \(m(s)=1\). Thus \(f(\lambda)\) is locally \([2n{+}s,2n{+}s,s,s]\), \([2n{+}s,2n{+}s,s{-}1]\), \([2n{+}s{+}1,s,s]\), or \([2n{+}s{+}1,s{-}1]\) when \((t,u)\) is \((\geq 2,\geq 2)\), \((\geq 2,1)\), \((1,\geq 2)\), \((1,1)\), respectively, after removing \(l{-}1\), \(l{-}1\), \(l\) or \(l\) rows, respectively.
In the four cases, \(\mu\) looks locally like
\[[2n{+}s,2n{+}s{-}2,s{+}2,s],[2n{+}s,2n{+}s{-}2,s{+}2],[2n{+}s{-}2,s{+}2,s],\ \text{and}\ [2n{+}s{-}2,s{+}2],\]
respectively. Using that \(h_{\mu}(2n{+}s{-}2)\not\equiv\epsilon^{\prime}_{X}\) and \(m_{\mu}(2n{+}s{-}2)=1\) and \(h_{\mu}(s)+m_{\mu}(s)\equiv\epsilon^{\prime}\), we get, using Lemma 2.2, that \(f(\mu)\) is locally \([2n{+}s{-}1,2n{+}s{-}1,s{+}1,s{+}1]\), \([2n{+}s{-}1,2n{+}s{-}1,s{+}1]\), \([2n{+}s{-}1,s{+}1]\), or \([2n{+}s{-}1,s{+}1]\), after removing \(l{-}1\), \(l{-}1\), \(l\) or \(l\) rows, respectively.
Hence, after removing \(s\), \(s{-}1\), \(s\), or \(s{-}1\) rows, respectively, we find that \((f(\lambda),f(\mu))\) is of type \(e\), \(d\), or \(c\) for the same value of \(n\), or of type \(b\) with \(C_{n+1}\), respectively.
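As a purely formal illustration of the local forms appearing above (suppressing the rows and columns removed on either side), take \(n=2\), \(s=1\) and \(t=u=1\) in the case \(l\equiv\epsilon^{\prime}_{X}\). Then

\[(\lambda,\mu)\ \text{is locally}\ ([5,1],[3,3]),\qquad(f(\lambda),f(\mu))\ \text{is locally}\ ([6,0],[4,2]),\]

and \(([6,0],[4,2])\) has the local shape \(([2m{+}s^{\prime},s^{\prime}],[2m{-}2{+}s^{\prime},s^{\prime}{+}2])\) with \(m=3\) and \(s^{\prime}=0\), i.e. it is again of type \(b\), now with a \(C_{3}=C_{n+1}\) singularity, as in the last case of Lemma 7.2.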
**Proposition 7.3**.: _In each quartet, the four singularities behave as shown in Figure 2._
Proof.: The same ideas as in the previous lemma, or the fact that \(f\circ f=\mathrm{id}\), show that the type \(c\), \(d\), \(e\) partitions for rank \(n\) are mapped to the \(C_{n}\) singularity.
Now given any of the singularities in Table 1 or 2, either the singularity is codimension two or the degeneration obtained using \(d\) is. Either this singularity is type \(C_{n}\) or applying \(f\) to the degeneration gives a type \(C_{n}\) singularity with \(l\equiv\epsilon^{\prime}_{X}\). Putting this singularity in the upper left corner of the square of singularities, the two lemmas show that the other three corners are as shown in Figure 2, after we note that if \(C_{n}\) is carried to \(C_{n+1}\) under \(f\), the value of \(l\) stays the same and satisfies \(l\not\equiv\epsilon^{\prime}_{f(X)}\). This means that applying \(d\) leads to the \(h\) singularity according to Lemma 7.1. In effect, the three quartets (or four if we include the two different ways to obtain \(B_{n}\) and \(b_{n}^{sp}\)) are controlled by the four possibilities for \(t\) and \(u\) in the \(l\equiv\epsilon^{\prime}_{X}\) case in Lemma 7.2.
_Remark 7.4_.: We can also write the specific conditions that describe the image under \(f\) when the singularity starts as \(c_{n}^{sp}\). Writing \(\mu\) locally as \([(s{+}2)^{t},(s{+}1)^{n},s^{u}]\) where \(t=m(s{+}2)\) and \(u=m(s)\), then \(c_{n}^{sp}\) maps to \(h,f_{sp}^{1},f_{sp}^{2}\), or \(h_{sp}\) when \((t,u)\) is \((\geq 1,\geq 1)\), \((0,\geq 1)\), \((\geq 1,0)\), \((0,0)\), respectively. Here, \(s=0\) is considered a part in type \(C\). These conditions are exactly the conditions from the four cases in Lemma 7.2 for \(C_{n}\), under \(d\) when \(l\equiv\epsilon^{\prime}_{X}\).
## 8. Outer automorphisms of \(\mathfrak{g}\)
### Outer automorphisms for \(A_{n}\) and \(E_{6}\)
Besides considering the automorphism group for \(D_{n}\), we can do the same for \(A_{n}\) and \(E_{6}\) (and the full automorphism group for \(D_{4}\)). The ideas follow [11, §7.5], where the case of the regular and subregular orbits was handled. Let \(CA(\mathfrak{s})\), called the _outer reductive centralizer_ in _loc. cit._, denote the centralizer of \(\mathfrak{s}\) in \(\operatorname{Aut}(\mathfrak{g})\). There is a surjective map \(\pi:\operatorname{Aut}(\mathfrak{g})\to\operatorname{Aut}(\Delta)\) where \(\Delta\) is the Dynkin diagram of \(\mathfrak{g}\). Let \(e\in\mathcal{N}_{o}\) and \(\mathfrak{s}\) be an \(\mathfrak{sl}_{2}\)-triple \(\{e,h,f\}\) for \(e\). Assume that the weighted Dynkin diagram for \(e\) is invariant under \(\operatorname{Aut}(\Delta)\). Then for \(\sigma\in\operatorname{Aut}(\mathfrak{g})\), \(\sigma(h)\) is conjugate to \(h\), that is, \(\sigma(h)=g.h\) for some \(g\in G\). This assumption holds as long as \(e\) is not very even in \(D_{2n}\) and is not \([5,1^{3}]\) or \([3,1^{5}]\) in \(D_{4}\). Then the proof of Lemma 2 in _loc. cit._ still applies.
**Lemma 8.1**.: _For all \(\mathfrak{sl}_{2}\)-triples \(\mathfrak{s}\) as above, the map \(\pi\) restricted to \(CA(\mathfrak{s})\) is surjective onto \(\operatorname{Aut}(\Delta)\). In particular, \(CA(\mathfrak{s})/C(\mathfrak{s})\simeq\operatorname{Aut}(\Delta)\)._
As before, notice that \(CA(\mathfrak{s})\) acts on \(\mathfrak{c}(\mathfrak{s})\) and we are interested in this action. Let \(\mathfrak{c}_{0}\subset\mathfrak{c}(\mathfrak{s})\) be a simple factor or the central toral subalgebra of \(\mathfrak{c}(\mathfrak{s})\). The rest of Lemma 2 in _loc. cit._ generalizes as follows:
**Lemma 8.2**.: _Suppose that \(\mathfrak{g}\) has type \(A_{n},D_{2n+1},\) or \(E_{6}\). Then there is an element \(\phi\in CA(\mathfrak{s})\) that stabilizes \(\mathfrak{c}_{0}\) and acts by \(-1\) on some maximal toral subalgebra of \(\mathfrak{c}_{0}\). In particular, if \(\mathfrak{c}_{0}\) is simple of type \(A_{k},D_{2k+1},\) or \(E_{6}\), then the image of \(CA(\mathfrak{s})\) in \(\operatorname{Aut}(\mathfrak{c}_{0})\) is an outer automorphism of order two._
Proof.: Fix a toral subalgebra \(\mathfrak{h}\) of \(\mathfrak{g}\). Let \(\sigma\in\operatorname{Aut}(\mathfrak{g})\) be an automorphism of \(\mathfrak{g}\) that acts by \(-1\) on \(\mathfrak{h}\). Choose \(\mathfrak{m}\) to be a standard Levi subalgebra of \(\mathfrak{g}\) relative to \(\mathfrak{h}\) so that \(e\) is distinguished in \(\mathfrak{m}\)[10]. Pick \(\mathfrak{s}\) so that \(\mathfrak{s}\subset\mathfrak{m}\). Then \(\sigma(\mathfrak{m})=\mathfrak{m}\) and since \(e\in\mathfrak{m}\) is distinguished (and so satisfies the assumption above), there exists \(g\in G\) (specifically in the subgroup \(M\) with Lie algebra \(\mathfrak{m}\)) such that \(\operatorname{Int}(\mathrm{g})\circ\sigma\) is the identity on \(\mathfrak{s}\). That is, \(\phi:=\operatorname{Int}(\mathrm{g})\circ\sigma\in\operatorname{CA}(\mathfrak{ s})\).
Next, by [10] the center \(\mathfrak{t}\) of \(\mathfrak{m}\), which lies in \(\mathfrak{h}\), is a maximal toral subalgebra of \(\mathfrak{c}(\mathfrak{s})\). Since \(M\) acts trivially on \(\mathfrak{t}\), it follows that \(\phi\) acts by \(-1\) on \(\mathfrak{t}\). In particular, \(\phi(\mathfrak{c}_{0})=\mathfrak{c}_{0}\) and \(\phi\) acts by \(-1\) on its maximal toral subalgebra \(\mathfrak{c}_{0}\cap\mathfrak{t}\). Since \(-1\) is not in the Weyl groups of \(A_{k},D_{2k+1},\) or \(E_{6}\), the induced automorphism of \(\mathfrak{c}_{0}\) must be outer (since \(-w_{0}\) is then clearly outer).
_Remark 8.3_.: We want to apply this when \(-1\) is not in the Weyl group of \(\mathfrak{g}\), but it also applies when \(-1\) is in the Weyl group and \(\mathfrak{c}(\mathfrak{s})\) has simple factors of type \(a_{k}\), where it forces \(C(\mathfrak{s})\) to contain an element of order two. So in \(F_{4},E_{7}\) and \(E_{8}\), this explains why the action is always upgraded to \(a_{k}^{+}\). There are no type \(D_{k}\) simple factors of \(\mathfrak{c}(\mathfrak{s})\) for \(k>4\) in the exceptional groups, except for the sole case of a \(d_{6}\) factor for the minimal orbit in \(E_{7}\). It remains (along with its dual) the only case where a natural outer action exists on the slice, but the induced action of \(CA(\mathfrak{s})\) does not realize it.
**Corollary 8.4**.: _Let \(G=\operatorname{Aut}(\mathfrak{g})\)._
_In type \(A_{n}\), any singularity for a minimal degeneration which is not of type \(A_{1}\) acquires an outer action (that is, becomes \(A_{k}^{+}\) or \(a_{k}^{+}\), when \(k\geq 2\))._
_In type \(E_{6}\), the singularities with no outer action using \(C(\mathfrak{s})\), all acquire the natural outer action._
Proof.: For the minimal orbit types, we can use the previous lemma since the simple factors of \(\mathfrak{c}(\mathfrak{s})\) are all of type \(A\) or \(E_{6}\).
For the simple surface singularities, we can use the action on the center of \(\mathfrak{c}(\mathfrak{s})\) as in §5 (the subregular case already follows from Slodowy), since they are simple surface singularities of type \(A_{k}\).
Finally, the case of \([2a_{2}]^{+}\) becomes \([2a_{2}^{+}]^{+}\) since the outer automorphism preserves each simple factor of \(\mathfrak{c}(\mathfrak{s})\).
In §11 we include the diagram for the \(D_{4}\) case with the \(\mathfrak{S}_{3}\)-action.
## 9. A conjecture of Lusztig
In [10, §0.4], Lusztig associated to each minimal special degeneration in \(\mathfrak{g}\) a certain Weyl group \(W^{\prime}\). We describe it in the classical groups.
Let \(h:=\dim(\mathcal{S}_{\mathcal{O},e})/2+1\) for the slice \(\mathcal{S}_{\mathcal{O},e}\) associated to the degeneration. In types \(B\),\(C\),\(D\), the slice \(\mathcal{S}_{\mathcal{O},e}\) is either dimension two or the dimension of the minimal special orbit for a Lie algebra \(\mathfrak{c}_{0}\) of type \(B\), \(C\), \(D\). The dimension of the latter is \(2h^{\prime}-2\) where \(h^{\prime}\) is the Coxeter number of \(\mathfrak{c}_{0}\). Since \(h^{\prime}\) is even for types \(B_{k}\), \(C_{k}\), \(D_{k}\), respectively equal to \(2k\), \(2k\), \(2k-2\), the number \(h\) is even when \(\mathfrak{g}\) has type \(B,C,D\).
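For instance, when \(\dim(\mathcal{S}_{\mathcal{O},e})=2h^{\prime}-2\), the definition gives

\[h=\frac{\dim(\mathcal{S}_{\mathcal{O},e})}{2}+1=\frac{2h^{\prime}-2}{2}+1=h^{\prime},\]

so in that case \(h\) equals the Coxeter number \(h^{\prime}\) of \(\mathfrak{c}_{0}\), while \(\dim(\mathcal{S}_{\mathcal{O},e})=2\) gives \(h=2\).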
Here is the assignment of \(W^{\prime}\) to the degeneration, which varies in type \(D\) from Lusztig's assignment. For \(\mathfrak{g}\) of type \(A\), the Weyl group \(W^{\prime}\) associated to the degeneration is the Weyl group of type \(A_{h-1}\); for \(\mathfrak{g}\) of types \(B/C\), it is of type \(B_{h/2}\); and for \(\mathfrak{g}\) of type \(D\), it is of type \(D_{h/2+1}\) when \(\mathcal{S}_{\mathcal{O},e}\) is of type \(d_{k}\) and \(G=\operatorname{SO}(2N)\), and it is of type \(B_{h/2}\) otherwise.
Lusztig worked with the associated special representations of \(W\), the Weyl group of \(\mathfrak{g}\). Our definition of \(W^{\prime}\) is slightly different in type \(D\). It is necessary to include more special representations than were given in [10] for the conjecture to hold for the case of associating \(W^{\prime}=W(D_{h/2+1})\), as the following lemma shows. In _op. cit._, only the case of \(s=0\) and \(\nu\) with parts all equal to \(1\) was considered for associating \(W^{\prime}=W(D_{h/2+1})\).
**Lemma 9.1**.: _Let \((\lambda,\mu)\) be a minimal special degeneration of type \(d_{k}\) in \(\mathfrak{g}\) for \(G=\operatorname{SO}(2N)\). By §5.8, this occurs precisely when \(\mu\) has a single odd part equal to \(2s+1\) with even multiplicity \(2j\)._
_Then the Springer representations of \(W(D_{N})\) attached to \(\mathcal{O}_{\lambda}\) and \(\mathcal{O}_{\mu}\) with the trivial local systems on \(\mathcal{O}_{\lambda}\) and \(\mathcal{O}_{\mu}\) are given respectively by the bipartitions_
\[([\nu,s{+}1,s^{j},\nu^{\prime}],[\nu,(s{+}1)^{j-1},\nu^{\prime}])\]
_and_
\[([\nu,s^{j},\nu^{\prime}],[\nu,(s{+}1)^{j},\nu^{\prime}]),\]
_where \(\nu\) is a partition with smallest part at least \(s+1\) and \(\nu^{\prime}\) is a partition with largest part at most \(s\), and such that \(2|\nu|+2|\nu^{\prime}|+j(2s+1)=N\)._
_Moreover, any such bipartition corresponds to a minimal special degeneration with one odd part in \(\mu\) (necessarily of even multiplicity)._
Proof.: We carried out the algorithm in [11].
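For example, simply substituting into the formulas of the lemma: take \(s=1\), \(j=1\), \(\nu=[2]\) and \(\nu^{\prime}=[1]\), so that \(2|\nu|+2|\nu^{\prime}|+j(2s+1)=4+2+3=9=N\). The two bipartitions then read

\[\big([2,2,1,1],[2,1]\big)\qquad\text{and}\qquad\big([2,1,1],[2,2,1]\big).\]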
Let \(p^{\prime}(\mathcal{S}_{\mathcal{O},e})=\sum_{i}\dim\big(IH^{2i}(\mathcal{S}_{\mathcal{O},e})^{A(e)}\big)\,q^{i}\). In classical types, [10, Conjecture 1.4] can be interpreted as saying:
**Theorem 9.2**.: _Let \(G\) be classical and assume \(h>1\). Then_
\[p^{\prime}(\mathcal{S}_{\mathcal{O},e})=q^{e_{1}-1}+q^{e_{2}-1}+\cdots+q^{e_{k}-1}\]
_where \(e_{1},e_{2},\ldots,e_{k}\) are the exponents of \(W^{\prime}\)._
Proof.: Since \(h>1\), we have \(\dim(\mathcal{S}_{\mathcal{O},e})\geq 4\), and so by Table 2 the singularity of \(\mathcal{S}_{\mathcal{O},e}\) in types \(B\) and \(C\) is either \(c_{h/2}^{sp}\), \(b_{h/2}^{sp}\), \(d_{h/2+1}^{+}\), or \(d_{h/2+1}/V_{4}\). Each of these satisfies
\[p^{\prime}(\mathcal{S}_{\mathcal{O},e})=1+q^{3}+\cdots+q^{h-1},\]
by §5.7, which are the exponents of \(B_{h/2}\).
This also holds for \(\mathrm{SO}(2N)\), except for the case where the singularity is \(d_{h/2+1}\), in which case \(p^{\prime}(\mathcal{S}_{\mathcal{O},e})=1+q^{3}+\cdots+q^{h-1}+q^{h/2}\), since the exponents will be those of \(D_{h/2+1}\).
In the exceptional groups a similar interpretation exists (except that a variation is needed for the \(3\) exceptional orbits in \(E_{7}\) and \(E_{8}\)). That is to say, the simple factors of \(\mathfrak{c}(\mathfrak{s})\) under the action of \(A(e)\) explain why \(p^{\prime}(\mathcal{S}_{\mathcal{O},e})\) sees the exponents that Lusztig observes in [10, §4]. We also point out a typo in _loc. cit._: the label of \(567_{46}\)--\(1400_{37}\) should be \(B_{5}\).
## 10. Duality
We can now gather up our results to state the duality result for a minimal special degeneration \((\mathcal{O},\mathcal{O}^{\prime})\) in \(\mathfrak{g}\) and its dual minimal special degeneration \((d_{LS}(\mathcal{O}^{\prime}),d_{LS}(\mathcal{O}))\) in \({}^{L}\mathfrak{g}\), the Langlands dual Lie algebra of \(\mathfrak{g}\).
Let \(X\) be the normalization of an irreducible component of the slice \(\mathcal{S}_{\mathcal{O},e}\) for \((\mathcal{O},\mathcal{O}^{\prime})\), where \(e\in\mathcal{O}^{\prime}\). Let \(Y\) be an irreducible component of the slice \(\mathcal{S}\) for \((d_{LS}(\mathcal{O}^{\prime}),d_{LS}(\mathcal{O}))\). Let \(e^{\prime}\in d_{LS}(\mathcal{O})\).
By [10], we can assume that \(\dim(\mathcal{S}_{\mathcal{O},e})=2\). This also follows from Proposition 7.3 and inspection of the graphs in §11. Hence, \(X\) is a simple surface singularity. Denote by \(\mathrm{Out}(X)\) its group of outer automorphisms, which are the graph automorphisms of the ADE diagram corresponding to \(X\) as in §5. From [11] and §5, we know that \(A(e)\) acts transitively on the irreducible components of \(\mathcal{S}_{\mathcal{O},e}\) and of \(\mathcal{S}\). Let \(J(e)\subset A(e)\) be the stabilizer of \(X\). Let \(K(e)\) be the image of \(J(e)\) in \(\mathrm{Out}(X)\).
On the dual side, let \(\mathrm{Out}(Y)\) be the outer automorphisms of the minimal symplectic leaf in \(Y\), as discussed in §5.7. Let \(J(e^{\prime})\subset A(e^{\prime})\) be the stabilizer of \(Y\) and let \(K(e^{\prime})\) be the image of \(J(e^{\prime})\) in \(\mathrm{Out}(Y)\).
The pair of minimal degenerations falls into one of three mutually exclusive cases:
1. The map from \(K(e)\) to its image in \(\bar{A}(e)\) is bijective and the map from \(K(e^{\prime})\) to its image in \(\bar{A}(e^{\prime})\) is bijective.
2. The map from \(K(e)\) to its image in \(\bar{A}(e)\) is not bijective.
3. The map from \(K(e^{\prime})\) to its image in \(\bar{A}(e^{\prime})\) is not bijective.
That these are mutually exclusive follows from §6.
**Theorem 10.1**.: _Let \(G\) be of adjoint type or \(G=\mathrm{Aut}(\mathfrak{g})\) or \(G=\mathrm{O}(8)\). We have the following duality of singularities under the Lusztig-Spaltenstein involution:_
* _In case (1):_ 1. _If_ \((X,K(e))\) _corresponds to a simple Lie algebra_ \(\mathfrak{m}\)_, in the sense of Slodowy, then_ \(Y/K(e^{\prime})\) _is isomorphic to the closure of the minimal special nilpotent orbit in the fixed subalgebra_ \(({}^{L}\mathfrak{m})^{K(e^{\prime})}\)_, where_ \({}^{L}\mathfrak{m}\) _is a simple component of the reductive centralizer of_ \(e^{\prime}\) _in_ \({}^{L}\mathfrak{g}\)_._ 2. _If the pair_ \((X,K(e))\) _is of type_ \(A_{k}^{+}\)_, then_ \(Y/K(e^{\prime})\) _is isomorphic to_ \(a_{k}/\mathfrak{S}_{2}\)_._
* _In case (2): The pair_ \((X,K(e))\) _is of type_ \(C_{n+1}\)_, and_ \(Y/K(e^{\prime})\) _is isomorphic to_ \(c_{n}^{sp}\)_._
* _In case (3): The pair_ \((X,K(e))\) _is of type_ \(C_{n}\) _or_ \(G_{2}\)_, and_ \(Y\) _is isomorphic to_ \(d_{n+1}/V_{4}\) _or_ \(d_{4}/\mathfrak{S}_{4}\)_, respectively._
Proof.: This amounts to gathering up our results. For the classical groups, the duality statements follow from §7. For the exceptional groups, it is by inspection of the graphs in §11. When \(G=\operatorname{Aut}(\mathfrak{g})\), we make use of §8.
_Remark 10.2_.: We noticed in case (2) that the simple surface singularity \(B_{n}\) is a two-fold cover of the simple surface singularity \(C_{n+1}\). In case (3) we observe that \(b_{n}^{sp}\) is a two-fold cover of \(d_{n+1}/V_{4}\) (as in §4.3) and that \(g_{2}^{sp}\) is a four-fold cover of \(d_{4}/\mathfrak{S}_{4}\). Accessing these covers would allow cases (2) and (3) to behave like the more well-behaved duality in case (1).
As we were finishing this paper, the preprint [1] appeared. We expect there is some overlap with our results, but we have not had a chance yet to understand the connection.
## 11. Graphs
We include here the Hasse diagrams of the minimal special degenerations for the exceptional Lie algebras, except for the straightforward \(G_{2}\), as well as several examples in the classical types. We write \((Y)\) when we only know that the normalization of the Slodowy slice singularity is isomorphic to \(Y\). See [10, §6.2] for a discussion of the component group action and branching in the exceptional types for the more complicated cases in the graphs. We write \(C_{n+1}^{*}\) to indicate when the kernel of the map to Lusztig's canonical quotient acts by outer action on \(\mathcal{S}_{\mathcal{O},e}\) as in §6. We use the notation \(A_{1}\) in the exceptional groups, but in the classical groups we use \(B_{1}\) or \(C_{1}\) to be consistent with Table 1; these are all the same singularity.
\(D_{5}\) Minimal Special Degenerations \(C_{5}\) Alternative Minimal Special Degenerations
Figure 5. \(F_{4}\) and inner \(E_{6}\)
# Towards Quantum Software Requirements Engineering

Tao Yue, Shaukat Ali, Paolo Arcaini

arXiv:2309.13358v1 (2023-09-23), http://arxiv.org/abs/2309.13358v1
###### Abstract
Quantum software engineering (QSE) is receiving increasing attention, as evidenced by increasing publications on topics, e.g., quantum software modeling, testing, and debugging. However, in the literature, quantum software requirements engineering (QSRE) is still a software engineering area that is relatively less investigated. To this end, in this paper, we provide an initial set of thoughts about how requirements engineering for quantum software might differ from that for classical software after making an effort to map classical requirements classifications (e.g., functional and extra-functional requirements) into the context of quantum software. Moreover, we provide discussions on various aspects of QSRE that deserve attention from the quantum software engineering community.
quantum software engineering, requirements engineering, requirements
## I Introduction
Quantum software engineering (QSE) [1, 2], as classical software engineering, is expected to focus on various phases of quantum software development, including requirements engineering, design and modeling, testing, and debugging. Various studies have been conducted in the literature regarding most of these phases. However, as reported in [3, 1], the requirements engineering phase remains relatively untouched. Only a few preliminary works exist on requirements engineering [4, 5].
Requirements engineering, as in the classical context, if not conducted properly, will lead to building incorrect quantum software and cause high costs in fixing it once problems are discovered in later phases of quantum software development. Thus, this paper focuses on quantum software requirements engineering (QSRE). In particular, we highlight the key aspects of QSRE that differentiate it from the classical domain. To illustrate the differences, we also present a motivating example of financial risk management. Moreover, we shed light on how typical requirements engineering will be impacted by the quantum context and suggest key follow-up activities.
## II Motivating Example
We will use the motivating example of credit risk analysis with quantum algorithms from Qiskit [6]. Detailed information about the algorithm is published in [7, 8]. The proposed quantum algorithm is more efficient than its equivalent classical implementations, such as using Monte Carlo simulations on classical computers. We calculate two key risk measures, i.e., _Value at Risk_ (VaR) and _Conditional Value at Risk_ (CVaR). Key requirements of the risk estimation, including the calculation of these two risk measures (i.e., functional requirements), are shown in Figure 1. In addition, we present extra-functional requirements specific to quantum computing, e.g., estimating the number of gates and (ancilla) qubits. Moreover, we show hardware constraints such as the limited number of qubits and limited depth of circuits.
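For readers who want a concrete picture of the two risk measures, the following is a minimal classical Monte Carlo baseline. It is not the quantum amplitude-estimation algorithm of [7, 8]; the loss model and all numbers are invented purely for illustration.

```python
# Illustrative classical Monte Carlo baseline (not the quantum algorithm):
# estimate VaR and CVaR of a toy loss distribution at a 95% confidence level.
import numpy as np

rng = np.random.default_rng(seed=0)
n_samples = 100_000
confidence = 0.95

# Toy portfolio: losses drawn from a lognormal distribution (stand-in for credit losses).
losses = rng.lognormal(mean=0.0, sigma=1.0, size=n_samples)

# VaR: smallest loss level not exceeded with probability `confidence`.
var = np.quantile(losses, confidence)
# CVaR: expected loss conditional on exceeding VaR.
cvar = losses[losses >= var].mean()

print(f"VaR(95%)  = {var:.3f}")
print(f"CVaR(95%) = {cvar:.3f}")
# Monte Carlo error decreases like 1/sqrt(n_samples); the amplitude-estimation
# approach discussed in the text targets a quadratic speed-up over this rate.
```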
Figure 2 (a) presents a use case diagram including the actor _Credit Analyst_, responsible for managing risk in finance, as illustrated with the use case _Manage risk in finance with quantum_. This use case includes the use cases _Determine VaR_ and _Determine CVaR_. Also, for calculating VaR or CVaR, a credit analyst needs to define the confidence level, captured with the use case _Define the confidence level_. In Figure 2 (b), we use the use case diagram notation to illustrate the main functionalities of a quantum expert applying the Amplitude Estimation [7, 8] algorithm for calculating VaR and CVaR.
## III Quantum Software Requirements Engineering
### _Stakeholders_
The ISO/IEC/IEEE 15288 standard defines stakeholders as: "_Individual or organization having a right, share, claim, or interest in a system or in its possession of characteristics that meet their needs and expectations_" [9]. Identifying stakeholders and their requirements is a crucial activity in requirements engineering. When building quantum software systems, stakeholders are the same as in the classical context. For example, in our example, stakeholders related to the development of the quantum risk management system include credit analysts (domain experts), borrowers (customers), banks, and software developers, all having different concerns on various aspects, including functionality, ease of use, price, and performance.
### _Requirements classifications_
Requirements are commonly classified into _functional_ and _extra-functional_ (Section III-B1). A further classification specific to QSRE is whether requirements are related to the quantum or the classical part (Section III-B2) of the system.
#### Iii-B1 Functional requirements and extra-functional requirements
_Functional requirements_ are related to the functionality that a quantum software system is expected to provide. For instance, the functional requirements of our example are indicated with <<Functional Requirement>>, such as _Determine Value at Risk (VaR) with the 95% of confidence level_ (see Figure 1). Identifying functional requirements for quantum software shall be the same as for the classical one.
SEBoK defines _non-functional requirements_ (also commonly named _extra-functional_) as "_Quality attributes or characteristics that are desired in a system, that define how a system is supposed to be_" [10]. These attributes vary from one system to another. For instance, safety requirements (i.e., one type of extra-functional requirements) will only apply to a safety-critical system. All the relevant extra-functional requirements from classical software systems generally apply to quantum software systems. However, there are additional requirements. For instance, Figure 1 shows three extra-functional requirements: _Estimation accuracy shall be a quadratic speed-up over classical methods (e.g., Monte Carlo)_, which is further decomposed into two further extra-functional requirements on estimating the required numbers of gates and (ancilla) qubits. These two requirements relate to the hardware constraints: _Limited number of qubits_ and _Limited depth of quantum circuits_. Identifying and realizing these extra-functional requirements requires knowledge of quantum computing. The good news is that such requirements are common across various quantum software applications, implying that they can be reused and that common solutions can be proposed to address them. We would also like to point out that Saraiva et al. [5] have already identified five such common extra-functional requirements. Moreover, such extra-functional requirements might need to be elicited step-wise, as their elicitation depends on identifying other requirements. Ideally, when available in the future, an actionable requirements elicitation process could clearly guide users through all required activities.
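As a rough illustration of why such estimates can be made already at the requirements level, the sketch below compares back-of-the-envelope resource counts. It assumes, purely for illustration, the textbook scaling in which amplitude estimation with \(m\) evaluation qubits has error of order \(\pi/2^{m}\), whereas Monte Carlo with \(N\) samples has error of order \(1/\sqrt{N}\); it ignores the additional qubits and gates needed for state preparation and arithmetic.

```python
# Back-of-the-envelope resource comparison (illustrative assumptions only):
# amplitude estimation error ~ pi / 2**m with m evaluation qubits,
# Monte Carlo standard error ~ 1 / sqrt(N) with N samples.
import math

def evaluation_qubits_for_accuracy(epsilon):
    """Rough m such that pi / 2**m <= epsilon."""
    return math.ceil(math.log2(math.pi / epsilon))

def mc_samples_for_accuracy(epsilon):
    """Rough N such that 1 / sqrt(N) <= epsilon."""
    return math.ceil(1.0 / epsilon ** 2)

for eps in (1e-2, 1e-3):
    print(f"accuracy {eps}: ~{evaluation_qubits_for_accuracy(eps)} evaluation qubits "
          f"vs ~{mc_samples_for_accuracy(eps)} Monte Carlo samples")
```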
#### Iii-B2 Quantum requirements vs. classical requirements
It is crucial to distinguish requirements that should be addressed with classical computers and those to be addressed with quantum computers. Moreover, there should be high-level requirements that are hybrid. For instance, Figure 1 defines three stereotypes <<Reg>>, <<Reg>>, and <<hReq>> to distinguish requirements that need to be addressed in the classical, quantum, or hybrid manner, respectively. Doing so is essential, as mentioned by Weder et al. [10, 11]; indeed, the first phase of their proposed quantum software lifecycle is about performing quantum-classical splitting. Requirements engineering, especially requirements analysis, and optimization, is typically performed in this phase to decide which parts of a targeted problem need to be solved on a quantum computer and which parts go to classical hardware. Consequently, requirements specification and modeling solutions should provide mechanisms to support the problem separation into classical and quantum parts. We explain this idea by applying three stereotypes to use case modeling (see Figure 2).
### _Specific extra-functional concerns_
#### Iii-C1 Portability
Near future quantum computers will be built with different hardware technologies; thus, portability will remain a key requirement to be captured. For example, a quantum software system for our example (i.e., credit risk analysis) may need to be deployed to different quantum computers. Moreover, in the near future, various quantum computing resources and classical computing will be pooled so that more computations can be performed jointly. Thus, converting a problem into a set of requirements, where requirements shall be addressed with different types of quantum computers and classical computers, is needed.
#### Iii-C2 Performance
Performance requirements over classical implementation are essential to justify the need for quantum computing. For example, our example requires that the estimation accuracy be a quadratic speed-up over classical methods (see Figure 1). Such requirements may consider other requirements, e.g., the estimation accuracy depends on the number of gates and the number of (ancilla) qubits that need to be estimated at the requirements level to check whether or not the expected quadratic speed-up on the estimation accuracy can be achieved. These are two additional requirements. Such requirements are common across quantum software, as, currently and in the near future, the capabilities of quantum computers are limited. Thus, deciding early on whether available resources can achieve the expected performance requirements and, if yes, with which margin is important.

Fig. 1: Finance application for credit risk analysis – key requirements, in the SysML requirements diagram notation. Stereotypes <<Reg>>, <<Reg>>, and <<hReq>> are applied to distinguish quantum requirements, classical requirements and the hybrid of both, respectively. Stereotypes <<Functional Requirement>> and <<Extra-functional Requirement>> distinguish functional and extra-functional requirements.

Fig. 2: (a) Application for credit risk analysis – key use cases. (b) Key functionalities of realizing _Determine VaR_ and _Determine CVaR_ in (a).
#### Iii-B3 Reliability
Currently, hardware errors affect the reliability of their computations and consequently constrain how quantum computers should be used and how quantum software should be designed. For instance, performing a reset after running several calculations for a period of time might be needed; this means that a quantum algorithm might not be run for a long time [12]. Thus, when identifying requirements of quantum software, it is essential to identify reliability requirements and associated constraints, especially considering the impact of hardware errors on the reliability of quantum software systems. Decisions such as introducing Quantum Error Correction [13] (which requires additional quantum resources) or other fault tolerance mechanisms might be needed early in the quantum software development lifecycle.
#### Iii-B4 Scalability
Current quantum computers support a limited number of qubits, i.e., resources are scarce and expensive. Therefore, scalability requirements are carefully considered while designing quantum software. For instance, as discussed in [8], in the context of quantum risk analysis (our motivating example), based on the results of the authors' investigation, more qubits are needed to model more realistic scenarios, thereby achieving practically meaningful advantages over Monte Carlo simulations, which represent state of the art in risk management. Moreover, scalability requirements (e.g., on the number of parameters and constraints expected to be handled in the risk analysis) should be carefully defined such that they can be satisfied with a limited depth of the quantum circuit to mitigate the impact of decoherence, with limited use of two-qubit gates (e.g., CNOT gates) to reduce the effect of crosstalk, and so on, which can be ensured with more powerful quantum computers, dedicated error mitigation mechanisms, and even carefully-designed quantum algorithms.
#### Iii-B5 Maintainability
Like classical software, quantum software will require maintainability. Given that, as expected, quantum hardware will continue to evolve, existing quantum software needs to be updated (in some cases) to deal with the hardware changes. For example, with the decreased hardware error rates provided by the latest technological advancements, error handing mechanisms in quantum software systems must be updated to improve performance and reduce the cost of additional error correction. Thus, quantum software systems shall identify and capture such maintainability requirements.
#### Iii-B6 Reusability
Like classical software, the reusability of quantum software is essential to be easily reused across different systems. Thus, such requirements shall be captured during requirements engineering. However, some specific requirements related to quantum software shall be explicitly captured. For instance, quantum software is often built as hybrid software. Therefore, having tight coupling between the two parts would reduce the reusability of quantum software. Instead, the high cohesion of the quantum software part is expected to enable more reusability.
## IV Discussions and Suggestions
Requirements elicitation elicits software requirements, i.e., quantum software in our context. Given that, in this phase, we investigate _what_ problem a quantum software should solve rather than _how_ the software should be implemented to address this problem, the requirements engineering for quantum software shall remain similar to the classical one. For instance, identifying stakeholders and defining system boundaries remain the same. However, one difference might be in checking whether it is needed to solve a problem that has been solved in the classical world with quantum, especially considering the known limitations of quantum computing. For example, in our running example (see Figure 1), we need to consider requirements specific to the quantum domain, such as the required number of qubits (i.e., a hardware constraint). Regarding stakeholders, there remain similarities between the classical and quantum requirements elicitation. For instance, a possible stakeholder in our example is the credit analyst, which would remain the same as in the classical domain. Existing methods, such as interviews and prototyping for requirements elicitation, are also expected to be largely similar.
Functional and non-functional requirements are typically specified during requirements specification at various formalization degrees, ranging from informal natural language specifications to fully formal specifications. Examples include semi-formal notations such as use cases and entity-relations diagrams or formal notations such as Hoare logic [14]. Requirements specifications for quantum software will be changed to accommodate concepts related to quantum software. For example, when using use case diagrams, as shown in our example, it is helpful to distinguish use cases from the classical world, the quantum world, and the mix of the two. Moreover, when specifying requirements with modeling notations (e.g., SysML requirements diagram), they need to be extended to capture novel concepts from quantum software. Finally, formal methods are also relevant to investigate for specifying quantum software requirements as surveyed in [15]. Nonetheless, such methods are also quite early in their stage of development [15].
Requirements _verification_ of quantum software has received less attention; when considering formal methods, only preliminary tools and methods are available as discussed in [15]. Moreover, the survey discusses the need for new methods for formal verification for complex quantum software. Requirements validation via automated testing is getting popular in the software engineering community, with several new works being published (e.g., [16, 17, 18, 19, 20, 21]). Nonetheless, as discussed in [22], many testing challenges remain unaddressed. Finally, the classical verification and validation methods, e.g., inspection and walk-through, apply to some extent to quantum software requirements.
Based on our investigation, we recommend the following:
(1) Carefully consider separating parts of the problem that should be addressed in the classical world and those on quantum computers; (2) Identify and specify requirements related to various constraints, especially those about quantum hardware. Realizing these requirements depends on available and realistic quantum computing resources and explicitly specifying such requirements support requirements analysis on the feasibility of the realization; (3) Identify existing quantum algorithms that could be incorporated. Selecting which quantum algorithms to use is a decision that might need to be made at the early stage, as the availability and capability of such quantum algorithms have an impact on the quantum part of the realization of certain extra-functional requirements; (4) Based on the identified and specified requirements, requirements analysis might be needed to identify key factors (e.g., selection of quantum algorithms, determining quantum hardware resources, assessing the feasibility of satisfying extra-functional requirements) that have a significant impact on the development of quantum software, and potential trade-offs among these factors. Doing so is expected to effectively support decision-making on selecting quantum hardware resources; (5) Identify requirements whose realization strongly depends on constantly emerging quantum algorithms and advanced quantum computers. Doing so is necessary because as soon as more advanced quantum algorithms or quantum computers are available, such requirements could be realized (if not possible before) or realized better. Also, decisions made regarding the satisfaction of certain requirements (e.g., the required number of gates) and rationales behind these decisions are highly recommended to be recorded.
## V Conclusions and Future Work
Requirements engineering (RE) for quantum software has gotten less attention than other phases, such as quantum software testing. Thus, we present some ideas on how RE for quantum software will differ from the classical counterpart. For instance, what will be the key differences for extra-functional requirements? Finally, we discussed how various steps in RE, such as requirements elicitation, specification, verification, and validation, will be impacted, including developing requirements specification/modeling, analyses, verification, and validation methods, with tool support, for supporting quantum software development at the RE phase.
# Epitaxial growth and characterization of (001) [NiFe/M]$_{20}$ (M = Cu, CuPt and Pt) superlattices

Movaffaq Kateb, Jon Tomas Gudmundsson, Snorri Ingvarsson

arXiv:2302.14745v1 (2023-02-28), http://arxiv.org/abs/2302.14745v1
###### Abstract
We present optimization of [(15 Å) Ni\({}_{80}\)Fe\({}_{20}\)/(5 Å) M]\({}_{20}\) single crystal multilayers on (001) MgO, with M being Cu, Cu\({}_{50}\)Pt\({}_{50}\) and Pt. These superlattices were characterized by high-resolution X-ray reflectivity (XRR) and diffraction (XRD) as well as polar mapping of important crystal planes. It is shown that a cube-on-cube epitaxial relationship can be obtained when depositing at a substrate temperature of 100 \({}^{\circ}\)C regardless of the lattice mismatch (5% and 14% for Cu and Pt, respectively). At lower substrate temperatures poly-crystalline multilayers were obtained, while at higher substrate temperatures {111} planes appear at \(\sim\)10\({}^{\circ}\) off normal to the film plane. It is also shown that as the epitaxial strain increases, the easy magnetization axis rotates towards the direction that previously was assumed to be harder, i.e. from [110] to [100], and eventually a further increase in the strain makes the magnetic hysteresis loops isotropic in the film plane. Higher epitaxial strain is also accompanied by increased coercivity values. Thus, the effect of epitaxial strain on the magnetocrystalline anisotropy is much larger than what was observed previously in similar, but polycrystalline, samples with uniaxial anisotropy (Kateb _et al._ 2021).
Keywords: NiFe; Superlattice; Magnetic Anisotropy; Microstructure; Substrate Temperature
PACS: 75.30.Gw, 75.50.Bb, 73.50.Jt, 81.15.Cd
## I Introduction
Since the discovery of the giant magneto-resistance (GMR) effect by Fert [1] and Grünberg [2] in the late 1980s, magnetic multilayers have been widely studied. In many cases they present unique features that cannot be achieved in the bulk state, namely inter-layer exchange coupling [3], magnetic damping due to the interface [4; 5] rather than alloying [6], and perpendicular magnetic anisotropy [7].
The GMR discovery, without a doubt, was an outcome of the advances in preparation methods such as molecular beam epitaxy (MBE), which enabled deposition of multilayer films with nanoscale thicknesses [8]. Thus, a great deal of effort has been devoted to enhancing the preparation methods over the years using both simulations [9; 10; 11; 12] and experiments (cf. Ref. [13] and references therein). Permalloy (Py) multilayers with non-magnetic (NM) Pt [13; 14; 15] or Cu [16; 17; 18; 19; 12] as spacers have been studied extensively in recent years. Various deposition methods have been utilized for preparing magnetic multilayers such as MBE [16], pulsed laser deposition (PLD) [20], ion beam deposition [21; 12], dc magnetron sputtering (dcMS) [14; 17; 3; 18], and more recently, high power impulse magnetron sputtering (HiPIMS) [13].
Permalloy (Py) is a unique material with regard to studying magnetic anisotropy, which has been shown to strongly depend on the preparation method [22]. For instance, uniaxial anisotropy can be induced in polycrystalline Py by several means [23]. However, it has been thought that the cubic symmetry of single crystal Py encourages magneto-crystalline anisotropy, while uniaxial anisotropy cannot be achieved. We have recently shown that using HiPIMS deposition one can decrease the Ni\({}_{3}\)Fe (L1\({}_{2}\)) order, but maintain the single crystal form, to achieve uniaxial anisotropy. We attributed this to the high instantaneous deposition rate during the HiPIMS pulse [24], which limits ordering compared to dcMS, which yields cubic (biaxial) anisotropy. Regarding Py multilayers, there has recently been a lot of focus on magneto-dynamic properties, while the effects of interface strain on magnetic anisotropy have not received much attention. Rook _et al._[16] prepared polycrystalline Py/Cu multilayers by MBE and reported a weak anisotropy in them, i.e. hysteresis loops along both the hard and easy axes with complete saturation at higher fields. They compared the coercivity values (\(H_{\rm c}\)) and saturation fields of their samples to \(H_{\rm c}\) and the anisotropy field (\(H_{\rm k}\)) of sputter deposited multilayers showing uniaxial anisotropy and concluded that the latter gives more than twice as hard magnetic properties. They also reported an increase in \(H_{\rm c}\) with Py thickness and attributed this to the interface strain that relaxes with increased thickness. Correa _et al._[14] prepared nanocrystalline Py/Pt multilayers on rigid and flexible substrates and in both cases obtained weak anisotropy but two orders of magnitude larger \(H_{\rm c}\). Unfortunately, they did not mention any change in magnetic anisotropy
upon straining the flexible substrate.
Recently we showed that by applying increased power to the dcMS process, and in particular by using HiPIMS deposition, the interface sharpness in polycrystalline [Py/Pt]\({}_{20}\) multilayers can be improved, due to increased ionization of the sputtered species [13]. Briefly, in dcMS deposition the film forming material is composed mostly of neutral atoms [25], while in HiPIMS deposition a significant fraction of the film forming material consists of ions [26; 27]. In fact, we have shown using molecular dynamics simulations that higher ionization of the film-forming material leads to smoother film surfaces and sharper interfaces [28; 29]. We also showed that by changing the non-magnetic spacer material one can increase the interface strain, which is accompanied by higher \(H_{\mathrm{c}}\), \(H_{\mathrm{k}}\) and a limited deterioration of uniaxial anisotropy [13].
Another aspect of preparation is that deposition chambers for multilayers mostly benefit from an oblique deposition geometry, which encourages uniaxial anisotropy in Py. The origin of uniaxial anisotropy induced by oblique deposition has been proposed to be self-shadowing, but this has not been systematically verified. We demonstrated uniaxial anisotropy even in atomically smooth films with normal texture, which indicates a lack of self-shadowing [30; 31]. We also showed that oblique deposition is more decisive in defining the anisotropy direction than the application of an _in-situ_ magnetic field for inducing uniaxial magnetic anisotropy. Also, for polycrystalline Py films, oblique deposition by HiPIMS presents a lower coercivity and anisotropy field than when dcMS deposition is applied [32; 33; 34]. While none of the above-mentioned results verify self-shadowing, they are consistent with our interpretation of the order, i.e. oblique deposition induces more disorder than an _in-situ_ magnetic field and HiPIMS produces more disorder than dcMS. Note that the level of order in polycrystals cannot be easily observed by X-ray diffraction. In this regard we proposed a method for mapping the resistivity tensor that is very sensitive to the level of order in Py [23; 35]. We reported much higher coercivity and deterioration of uniaxial anisotropy in (111) Py/Pt multilayers obtained by HiPIMS deposition of the Py layers [13]. We attributed the latter effect to the interface sharpness and higher epitaxial strain when HiPIMS is utilized for Py deposition.
Here, we study the properties of Py superlattices deposited by dcMS with Pt, Cu and CuPt as non-magnetic spacers. Pt and Cu were chosen as spacers because they have lattice parameters of 3.9 and 3.5 Å, respectively, and therefore provide varying strain to the Py film, which has a lattice constant of 3.54 Å. In this regard, calibration of the substrate temperature during deposition with respect to the desired thickness is of prime importance [36]. It is worth mentioning that dcMS deposition is expected to give more ordered single crystal (001) Py layers in which crystalline anisotropy is dominant [22]. This enables us to understand to what extent interface strain affects the magnetocrystalline anisotropy of Py, an effect that, as we will show, is much larger than the changes in uniaxial anisotropy observed in our latest study [13]. Section II discusses the deposition method and process parameters for the fabrication of the superlattices and the characterization methods applied. In Section III the effects of substrate temperature on the properties of the Py/Cu system are studied, followed by an exploration of the influence of varying the lattice parameter of the non-magnetic layer on the structural and magnetic properties of the superlattice. The findings are summarized in Section IV.
## II Experimental apparatus and methods
The substrates were one side polished single crystal (001) MgO (Crystal GmbH) with surface roughness \(<\)5 A and of dimensions 10 mm\(\times\)10 mm\(\times\)0.5 mm. The MgO substrates were used as received without any cleaning but were baked for an hour at 600 \({}^{\circ}\)C in vacuum for dehydration, cooled down for about an hour, and then maintained at the desired temperature \(\pm\)0.4 \({}^{\circ}\)C during the deposition. The superlattices were deposited in a custom built UHV magnetron sputter chamber with a base pressure below \(5\times 10^{-7}\) Pa. The chamber is designed to support 5 magnetron assemblies and targets, which are all located 22 cm away from substrate holder with a 35\({}^{\circ}\) angle with respect to substrate normal. The shutters were controlled by a LabVIEW program (National Instruments). The deposition was made with argon of 99.999 % purity as the working gas using a Ni\({}_{80}\)Fe\({}_{20}\) at.% and Cu targets both of 75 mm diameter and a Pt target of 50 mm in diameter.
The Py depositions were performed at 150 W dc power (MDX 500 power supply from Advanced Energy) at an argon working gas pressure of 0.25 Pa, which gives a deposition rate of 1.5 Å/s. Both pure Cu and Pt buffer layers were deposited at a dc power of 20 W. For the deposition of the CuPt alloy we calibrated the Cu\({}_{50}\)Pt\({}_{50}\) at.% composition at dc powers of 10 and 13 W for Cu and Pt, respectively. This selection of powers provides a similar deposition rate of 0.45 Å/s in all cases. In order to ensure that the film thickness is as uniform as possible, we rotate the sample at \(\sim\)12.8 rpm. These deposition processes were repeated to fabricate superlattices consisting of 20 repetitions of 15 Å Py and 5 Å Pt, Cu or Cu\({}_{50}\)Pt\({}_{50}\) at.% (CuPt).
X-ray diffraction (XRD) measurements were carried out using an X'pert PRO PANalytical diffractometer (Cu K\({}_{\alpha 1}\) and K\({}_{\alpha 2}\) lines, wavelengths 0.15406 and 0.15444 nm, respectively) mounted with a hybrid monochromator/mirror on the incident side and a 0.27\({}^{\circ}\) collimator on the diffracted side. We would like to remark that the K\({}_{\alpha 1}\)/K\({}_{\alpha 2}\) separation at 2\(\theta\) = 55\({}^{\circ}\) is only 0.2\({}^{\circ}\), and much less at smaller angles, i.e. where our multilayer peaks are located. This is an order of magnitude smaller than the full width at half maximum (FWHM) of our multilayer and satellite peaks. A line focus was used with a beam width of approximately 1 mm. The film thickness, mass density, and surface roughness were determined by low-angle X-ray reflectivity (XRR) measurements with an angular resolution of 0.005\({}^{\circ}\), obtained by fitting the XRR data using the commercial X'pert reflectivity program, which is based on the Parratt formalism [37] for reflectivity.
The magnetic hysteresis was recorded using a homemade high-sensitivity magneto-optic Kerr effect (MOKE) looper. We use a linearly polarized He-Ne laser of wavelength 632.8 nm as a light source, with Glan-Thompson polarizers to further polarize and to analyze the light after Kerr rotation upon reflection off the sample surface. The Glan-Thompson polarizers linearly polarize the light with a high extinction ratio. They are cross polarized near extinction, i.e. their polarization states are nearly perpendicular, and any change in polarization caused by the Kerr rotation at the sample's surface is detected as a change in the power of light passing through the analyzer. The coercivity was read directly from the easy axis loops. The anisotropy field is obtained by extrapolating the linear low-field trace along the hard axis direction to the saturation magnetization level, a method commonly used when dealing with effective easy axis anisotropy.
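As a concrete illustration of the hard-axis extrapolation described above, the following minimal Python sketch fits the low-field linear part of a hard-axis M(H) trace and extrapolates it to the saturation level; the function name, the fit-window fraction and the array inputs are illustrative assumptions, not the analysis code used for this work.

```python
import numpy as np

def anisotropy_field(H, M, M_s, fit_fraction=0.3):
    """Estimate H_k by extrapolating the linear low-field hard-axis trace to M_s.

    H, M         : field and magnetization arrays from a hard-axis MOKE loop
    M_s          : saturation magnetization level (same units as M)
    fit_fraction : fraction of the field range treated as 'low field' (assumed value)
    """
    low_field = np.abs(H) < fit_fraction * np.max(np.abs(H))
    slope, intercept = np.polyfit(H[low_field], M[low_field], 1)
    return (M_s - intercept) / slope  # field at which the linear trace reaches M_s
```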
## III Results and discussion
### Effect of substrate temperature on structural and magnetic properties
Figure 1 shows the XRR results from Py/Cu superlattices deposited at different substrate temperatures. The \(\Lambda\) and \(\delta\) indicated in the figure are inversely proportional to the superlattice period and the total thickness, respectively. It can be clearly seen that the fringes decay faster for the Py/Cu superlattices deposited at substrate temperatures of 21 \({}^{\circ}\)C and 200 \({}^{\circ}\)C than when deposited at 100 \({}^{\circ}\)C. This indicates a lower surface roughness obtained in the Py/Cu superlattice deposited at 100 \({}^{\circ}\)C. When deposited at room temperature, the large lattice mismatch between MgO and Py/Cu does not allow depositing a high quality superlattice. For a substrate temperature of 200 \({}^{\circ}\)C, however, it is difficult to grow a continuous Cu layer with such a low thickness (5 Å). This is due to the dewetting phenomenon, which causes the minimum Cu thickness required to maintain continuity to be 12 Å. Earlier, it has been shown that for substrate temperatures up to 100 \({}^{\circ}\)C, Py/(1 Å) Cu showed limited intermixing upon annealing [17]. The optimum substrate temperature for deposition obtained here is very close to the 156 \({}^{\circ}\)C that has earlier been reported for the deposition of (001) Fe/MgO [38] and (001) Fe\({}_{84}\)Cu\({}_{16}\)/MgO [39] superlattices. We would like to remark that in our previous study we deposited a 5 nm Ta underlayer to reduce the substrate surface roughness [13]. However, growing Ta on MgO is non-trivial due to the large lattice mismatch (22%). Besides, a Ta underlayer encourages a polycrystalline \(\langle 111\rangle\) texture normal to the substrate surface, which does not serve our purpose here.
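Although the relation is not written out in the text, the statement that \(\Lambda\) and \(\delta\) are inversely proportional to the superlattice period and the total thickness follows from the standard small-angle reflectivity estimate (a textbook relation, not taken from this work): with \(\lambda_{\rm X}\) the X-ray wavelength, the bilayer period and the total film thickness are approximately

\[d_{\Lambda}\simeq\frac{\lambda_{\rm X}}{2\,\Delta(\sin\theta)_{\Lambda}},\qquad d_{\rm tot}\simeq\frac{\lambda_{\rm X}}{2\,\Delta(\sin\theta)_{\delta}},\]

where \(\Delta(\sin\theta)_{\Lambda}\) and \(\Delta(\sin\theta)_{\delta}\) are the angular spacings of the superlattice satellites and of the Kiessig fringes, respectively.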
Figure 2 shows the result of a symmetric (\(\theta-2\theta\)) XRD scan normal to the film for Py/Cu superlattices deposited at different substrate temperatures. It can be seen that no Cu or Py peaks were detected in the superlattice deposited at room temperature. Thus, epitaxial growth of Py and Cu was suppressed by the low substrate temperature. Furthermore, we studied the room temperature deposited Py/Cu using grazing incidence XRD, which indicated a polycrystalline structure (not shown here). For substrate temperatures of 100 - 200 \({}^{\circ}\)C there are clear (002) Py/Cu peaks indicating an epitaxial relationship in the (001) Py \(\parallel\) (001) Cu \(\parallel\) (001) MgO stack. However, there is no sign of satellite peaks due to the \(\Lambda\) (Py/Cu) period. We explain this further when comparing the Py/Cu, Py/Pt, and Py/CuPt superlattices in Section III.2.
Figure 3 shows the pole figures from the \(\{200\}\) and \(\{111\}\) planes for Py/Cu superlattices deposited at different substrate temperatures. For the Py/Cu superlattice deposited at 21 \({}^{\circ}\)C, there is only a peak in the middle of the \(\{111\}\) pole figure that indicates a weak \(\langle 111\rangle\) contribution normal to the film plane. For a superlattice deposited with substrate temperature of 100 \({}^{\circ}\)C the \(\{200\}\) pole figure indicates an intense spot at \(\psi=0\) that is corresponding to (002) Py/Cu planes parallel to the substrate. There is also a weaker four-fold spot at \(\psi=90^{\circ}\) and \(\phi=0,90,180\) and 270\({}^{\circ}\) from the \(\{200\}\) planes parallel to the substrate edges. In the \(\{111\}\) pole figure only four-fold points appear at \(\psi=54.7^{\circ}\) and with 45\({}^{\circ}\) shifts in \(\phi\) with respect to substrate edges. These are the characteristics of the _so-called_ cube on cube epitaxy achieved at 100 \({}^{\circ}\)C. For deposition with substrate temperature of 200 \({}^{\circ}\)C, however, there is a weak \(\{111\}\) ring at \(\psi=7.5^{\circ}\). Note that these \(\{111\}\) planes were not detected by normal XRD because the \(\{111\}\) Py/Cu peak appears at
Figure 1: Comparison of the XRR pattern from [Py/Cu]\({}_{20}\) superlattices deposited on (001) MgO at different substrate temperatures. The \(\Lambda\) and \(\delta\) are inversely proportional to the Py/Cu period and total thickness, respectively.
which is masked by the strong (002) MgO peak normal to the film plane.
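The \(\psi=54.7^{\circ}\) position of the \(\{111\}\) poles quoted above follows directly from cubic geometry; for completeness (a standard crystallographic relation, not part of the original text), the angle between the film normal [001] and a \(\langle 111\rangle\) direction is

\[\cos\psi=\frac{0\cdot 1+0\cdot 1+1\cdot 1}{\sqrt{0^{2}+0^{2}+1^{2}}\,\sqrt{1^{2}+1^{2}+1^{2}}}=\frac{1}{\sqrt{3}}\;\;\Rightarrow\;\;\psi\approx 54.7^{\circ}.\]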
Figure 4 compares the MOKE response of Py/Cu superlattices deposited at different substrate temperatures. For a superlattice deposited at room temperature, uniaxial anisotropy along the [100] direction is evident. This is expected since the oblique deposition in a co-deposition chamber tends to induce uniaxial anisotropy in Py films [30; 31; 32; 22]. However, the oblique deposition cannot overcome the magnetocrystalline anisotropy due to symmetry in ordered single crystal Py [22]. Thus, the low substrate temperature must account for the limited order in the Py layer and the presence of uniaxial anisotropy.
For deposition at higher substrate temperatures, however, biaxial anisotropy was obtained with the easy axes along the [110] directions in plane. It is worth mentioning that the bulk crystal symmetry gives the easy axis along the [111] direction, which is forced into the film plane along the [110] direction due to shape anisotropy [22]. In the Py/Cu superlattice grown at 100 \({}^{\circ}\)C (Figure 4 (b)), \(\langle 1\bar{1}0\rangle\) is clearly an easy direction, with a very low \(H_{\rm c}\) of 0.7 Oe and double-hysteresis loops along the \(\langle 110\rangle\) direction that saturate at 1.2 Oe. For the Py/Cu superlattice deposited at 200\({}^{\circ}\)C (Figure 4 (c)), it seems the double-hysteresis loops overlap and the other easy axis gives a step that in total presents an increased coercivity. With increasing substrate temperature not only do the coercivities vary but also the shapes of the hysteresis curves are different. When the substrate temperature during deposition is 21\({}^{\circ}\)C the magnetization, shown in Figure 4 (a), is much like what we obtain for polycrystalline single layer films. When the substrate temperature is higher, as shown in Figures 4 (b) and (c), however, the anisotropy has rotated by 45 degrees and the hysteresis loops have changed. The intermediate steps in the hysteresis curves are caused by antiferromagnetic alignment of the magnetic layers, which minimizes the exchange and dipolar magnetic interactions. In some cases this results in perfectly zero magnetic remanence, while in other cases the cancellation is not perfect. The non-magnetic Cu spacer layer is only 5 Å in our case, just at the onset of the first antiferromagnetic exchange coupling peak observed by Parkin [40]. Double hysteresis curves have been observed in the Py/Cu system [18]. Note that Ni is miscible in Cu and during annealing a mixing of Ni
Figure 3: Comparison of the {200} and {111} pole figures from [Py/Cu]\({}_{20}\) superlattices deposited on (001) MgO at substrate temperatures of 21, 100, and 200\({}^{\circ}\)C. The background is removed for better illustration.
Figure 2: Comparison of the XRD pattern from [Py/Cu]\({}_{20}\) superlattices deposited on (001) MgO at different substrate temperatures. The intense peak belongs to the (002) planes of MgO and the other peak is due to the (002) planes of the Py/Cu multilayer.
and Cu is possible. Such intermixing causes a decrease in magnetic homogeneity and a reduction in the GMR [17; 18].
### Effect of strain on structural and magnetic properties
In order to explore the influence of strain on the magnetic properties, we deposited NM layers of Pt and Cu\({}_{50}\)Pt\({}_{50}\) at.% alloy in addition to the Cu discussed in Section III.1. Pt has a lattice constant of 3.9 Å, which is larger than that of Py (3.54 Å). Therefore, by going from Cu to Cu\({}_{50}\)Pt\({}_{50}\) and then to Pt the strain is gradually increased. Figure 5 shows XRR results from the different superlattices deposited on (001) MgO at 100 \({}^{\circ}\)C. Note that the \(\Lambda\) peak is suppressed in the Py/Cu superlattice. One may think this arises from a diffuse Py/Cu interface that leads to a smooth density variation at the interface. This is not the case here; the intensity of the \(\Lambda\) peaks decreases due to the similar densities of Py and Cu. The latter effect has been shown to reduce the resolution of the XRR measurement in Si/SiO\({}_{2}\) by a few orders of magnitude [41].
The layer thicknesses and mass densities, as well as the surface and interface roughness, obtained by fitting the XRR results for deposition at a substrate temperature of 100\({}^{\circ}\)C are summarized in Table 1. The period \(\Lambda\) is in all cases about 19 Å, with \(t_{\rm Py}\sim 16\) Å and \(t_{\rm NM}\sim 3\) Å. The mass density of the Py layers is highest (8.74 g/cm\({}^{3}\)) in the Py/Cu stack and lowest (7.45 g/cm\({}^{3}\)) in the Py/CuPt stack.
Figure 6 shows the XRD results from the Py/NM superlattices deposited on (001) MgO at a substrate temperature of 100 \({}^{\circ}\)C. Since the Py/Pt and Py/CuPt superlattices exhibit multiple (002) peaks, the pole figures were obtained for the main peak (indicated
\begin{table}
\begin{tabular}{c|c c c|c c|c c} \hline \hline Sample & \multicolumn{3}{c|}{\(t\) (Å)} & \multicolumn{2}{c|}{Ra (Å)} & \multicolumn{2}{c}{\(\rho\) (g/cm\({}^{3}\))} \\ & Py & NM & \(\Lambda\) & Py & NM & Py & NM \\ \hline Py/Cu & 15.8 & 3.46 & 19.3 & 7.62 & 6.25 & 8.74 & 9.8 \\ \hline Py/Pt & 15.9 & 2.97 & 18.9 & 5.92 & 3.24 & 8.55 & 27.2 \\ \hline Py/CuPt & 15.9 & 3.43 & 19.3 & 2.25 & 4.94 & 7.45 & 26.8 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The Py and NM layer thicknesses (\(t\)), roughness (Ra) and density (\(\rho\)) extracted by fitting the XRR results of different superlattices deposited on (001) MgO at 100\({}^{\circ}\)C substrate temperature.
Figure 5: XRR measurements from the various superlattices, [Py/Cu]\({}_{20}\), [Py/Pt]\({}_{20}\), and [Py/CuPt]\({}_{20}\), deposited on (001) MgO at substrate temperature of 100 \({}^{\circ}\)C.
Figure 6: Comparison of the XRD results from the various superlattices, [Py/Cu]\({}_{20}\), [Py/Pt]\({}_{20}\) and [Py/CuPt]\({}_{20}\), deposited at a substrate temperature of 100 \({}^{\circ}\)C. All the peaks are due to the (002) plane and the vertical dashed line indicates the (002) peak position for the bulk state.
by 0 in figure 6). It can be seen that all the pole figures are very similar. All these pole figures indicate a cube on cube epitaxial relationship.
Figure 8 depicts the MOKE response from Py/Pt and Py/CuPt superlattices prepared at 100 \({}^{\circ}\)C. For the Py/Pt superlattice we did not detect any clear easy direction in the film plane, the film appeared almost isotropic in the film plane with \(H_{\text{c}}\) of 60 - 75 Oe. The hysteresis in the [100] and [110] directions are displayed in figure 8(a). Aside from \(H_{\text{c}}\) of 3 Oe, the Py/CuPt superlattice presents biaxial anisotropy similar to Py/Cu, cf. figure 4(b) and (c). However, a Py/Cu superlattice exhibits an easy axis along the [110] directions, while an easy axis appears along the [100] orientations for a Py/CuPt superlattice. Note that the [100] directions are harder than both the [110] and [111] directions. However, forcing easy axes along the [100] direction in the single crystal Py on (001) MgO has been reported previously [42].
For the polycrystalline but highly (111)-textured Py/M multilayers we observed a limited change in coercivity and in the hard-axis loop opening with the interface strain set by the choice of M [13]. Here, for Py/Pt, the coercivity increases by an order of magnitude and the cubic anisotropy is almost destroyed.
## IV Summary
In summary, it is shown that Py superlattices can be successfully deposited on (001) MgO within a narrow substrate temperature window around 100 \({}^{\circ}\)C. For a small lattice mismatch of 5%, the superlattice exhibits easy axes along the [110] directions, similar to single crystal Py. It is also shown that a moderate lattice mismatch (7%) rotates the easy axes towards the [100] orientation and increases the coercivity. The higher lattice mismatch of 14% presents nearly isotropic behaviour and a very high coercivity, simultaneously. Thus, the results indicate that the changes in magnetocrystalline anisotropy due to epitaxial strain are much larger than the changes we observed earlier in the case of uniaxial anisotropy.
###### Acknowledgements.
The authors would like to acknowledge helpful comments and experimental help from Dr. Fridrik Magnus and Einar B. Thorsteinsson. This work was partially supported by the Icelandic Research Fund Grant Nos. 228951, 196141, 130029 and 120002023.
|
2309.16220 | Unmasking the Chameleons: A Benchmark for Out-of-Distribution Detection
in Medical Tabular Data | Despite their success, Machine Learning (ML) models do not generalize
effectively to data not originating from the training distribution. To reliably
employ ML models in real-world healthcare systems and avoid inaccurate
predictions on out-of-distribution (OOD) data, it is crucial to detect OOD
samples. Numerous OOD detection approaches have been suggested in other fields
- especially in computer vision - but it remains unclear whether the challenge
is resolved when dealing with medical tabular data. To answer this pressing
need, we propose an extensive reproducible benchmark to compare different
methods across a suite of tests including both near and far OODs. Our benchmark
leverages the latest versions of eICU and MIMIC-IV, two public datasets
encompassing tens of thousands of ICU patients in several hospitals. We
consider a wide array of density-based methods and SOTA post-hoc detectors
across diverse predictive architectures, including MLP, ResNet, and
Transformer. Our findings show that i) the problem appears to be solved for
far-OODs, but remains open for near-OODs; ii) post-hoc methods alone perform
poorly, but improve substantially when coupled with distance-based mechanisms;
iii) the transformer architecture is far less overconfident compared to MLP and
ResNet. | Mohammad Azizmalayeri, Ameen Abu-Hanna, Giovanni Ciná | 2023-09-28T07:52:01Z | http://arxiv.org/abs/2309.16220v1 | # Unmasking the Chameleons: A Benchmark for Out-of-Distribution Detection in Medical Tabular Data
###### Abstract
Despite their success, Machine Learning (ML) models do not generalize effectively to data not originating from the training distribution. To reliably employ ML models in real-world healthcare systems and avoid inaccurate predictions on out-of-distribution (OOD) data, it is crucial to detect OOD samples. Numerous OOD detection approaches have been suggested in other fields - especially in computer vision - but it remains unclear whether the challenge is resolved when dealing with medical tabular data. To answer this pressing need, we propose an extensive reproducible benchmark to compare different methods across a suite of tests including both near and far OODs. Our benchmark leverages the latest versions of eICU and MIMIC-IV, two public datasets encompassing tens of thousands of ICU patients in several hospitals. We consider a wide array of density-based methods and SOTA post-hoc detectors across diverse predictive architectures, including MLP, ResNet, and Transformer. Our findings show that i) the problem appears to be solved for far-OODs, but remains open for near-OODs; ii) post-hoc methods alone perform poorly, but improve substantially when coupled with distance-based mechanisms; iii) the transformer architecture is far less overconfident compared to MLP and ResNet.
Out-of-Distribution Detection, Medical Tabular Data
## 1 Introduction
The utilization of ML models in health-related applications is rapidly increasing (Greener et al., 2022; Varoquaux and Cheplygina, 2022). However, a significant limitation lies in their performance evaluation, which is primarily based on optimizing the algorithms for data from the training distribution. This means that they may fail under distribution shift: a model trained on the data from a hospital may not generalize to other hospitals (Rios and Abu-Hanna, 2021; de Hond et al., 2023). Since such ML models are meant to be deployed in real-world healthcare scenarios, ensuring their reliability becomes of utmost importance. One way to prevent models from providing unreliable suggestions is to detect OOD samples in real time, prior to generating predictions. This is known as OOD detection, where a model trained on an in-distribution (ID) set is employed for distinguishing OOD samples from ID data.
In this paper, we investigate this problem for tabular medical data. The problem has already been investigated mainly in the field of computer vision (Yang et al., 2021; Zimmerer et al., 2022), but the results may not extend to tabular medical data. For example, while computer vision has focused on
Figure 1: The practical dilemma of OOD data: there are no guarantees on how a model will perform on OOD data, hence real-time OOD detection becomes imperative.
improving post-hoc OOD detectors (Yang et al., 2022), it has been demonstrated that these kinds of methods perform even worse than a random classifier on medical tabular data due to the phenomenon of overconfidence (Ulmer et al., 2020; Ulmer and Cina, 2021; Zadorozhny et al., 2022).
Existing OOD detection methods can be categorized into three main groups: i) post-hoc methods, which are detectors that can be applied on top of any trained classifier, such as Maximum Softmax Probability (MSP) (Hendrycks and Gimpel, 2017), ii) density-based methods, which are trained to estimate the marginal distribution of the training set in order to detect samples that fall out of the training distribution, such as auto-encoders (AEs) (Kingma and Welling, 2014), and finally iii) methods that require retraining of the prediction model which are mainly designed specifically to be applied on images, such as OpenGAN (Kong and Ramanan, 2021).
The focus of this work is medical tabular data, thus we only consider the first two OOD categories since they can be applied to any medical tabular dataset without limitations. Furthermore, we examine three important architectures for the classifier on which post-hoc methods are implemented, namely MLP, ResNet, and FT-Transformer (Gorishniy et al., 2021)--to assess their impact on OOD detection.
A crucial aspect of comparing these methods involves evaluating their performance across diverse scenarios. Therefore, we incorporate the latest publicly available datasets including eICU and MIMIC-IV (Pollard et al., 2018; Johnson et al., 2023) in our experiments. We also employ different approaches for classifying the data into ID and OOD sets. This lets us consider OOD sets near to and far from the IDs in the experiments, which are referred to as near-OOD and far-OOD (Fort et al., 2021). For example, to consider near-OOD instances, the ID and OOD sets can be constructed by dividing a dataset based on a distinguishing feature such as gender (male vs. female), and for a far-OOD experiment, we can use a dataset from a different data generating process, such as a distinct care protocol (de Hond et al., 2023), or generate artificial OOD samples.
We use this setup to conduct broad experiments that can illustrate the differences between various types of OOD detection methods in order to provide the community with an extensive reproducible benchmark in the medical tabular data domain1. Our results are provided in section 5, and the main findings are as follows: i) the OOD detection problem appears to be almost resolved on far-OOD samples, but it is still an open issue for near-OODs, ii) the distance-based post-hoc OOD detectors are better than other post-hoc methods, iii) unlike the claims in previous work, post-hoc methods can be competitive with the density-based ones, iv) the transformer architecture mitigates the problem of over-confidence that other models suffer from.
Footnote 1: The codes are available at [https://github.com/maximalsayeri/TabMedOOD](https://github.com/maximalsayeri/TabMedOOD).
## 2 Related Work
OOD detection has received a lot of attention, with several kinds of approaches being proposed for this purpose. This has resulted in interest in comparing these methods fairly within the same setting. Some benchmarks compare methods using standard image datasets (Han et al., 2022; Yang et al., 2022; Zhang et al., 2023). However, it is still necessary to have benchmarks that are specific to other domains and data modalities. For example, the MOOD challenge (Zimmerer et al., 2022) investigated OOD detection within the context of medical imaging, with various new methods showing outstanding results (Marinont and Tarroni, 2021; Tan et al., 2022).
In the realm of medical tabular datasets, notable endeavors have been undertaken. A pipeline is presented in Nicora et al. (2022) to train a model on ICU data and detect test samples for which the model might exhibit poor performance. BEDS-Bench (Avati et al., 2021) has provided a benchmark on the generalization ability of ML models over electronic health records, highlighting performance drops under distribution shift. Moreover, it has been found that access to OOD data does not improve the test performance on ICU data (Spathis and Hyland, 2022). The work by Ulmer et al. (2020), proposed one of the first comparisons between basic OOD detectors in the space of medical tabular data, pointing to the fact that the problem of OOD detection was wide open. Also, some guidelines have been provided on how to evaluate an OOD detector in practice within the context of medical data (Zadorozhny et al., 2022) such as methods for selecting the OOD set during evaluation. Compared to this related work, we benchmark the most recent SOTA OOD detection approaches and SOTA architectures, leading to novel insight e.g. on the over-confidence of transformers and the combination of post-hoc and distance-based methods.
## 3 Problem Definition and Evaluation Protocol
In the following, we describe the problem definition, the metrics used to measure performance, and how we select the ID and OOD sets.
### OOD Detection
In training any ML model, we require a training set \(\mathcal{D}_{in}=\{(x_{i},y_{i})\}_{i=1}^{n}\), where each instance \(x_{i}\) has a label \(y_{i}\). Our goal is to have a model \(f:\mathcal{X}\rightarrow\mathcal{R}\) and a binary classifier \(G_{\lambda}:\mathcal{X}\rightarrow\{0,1\}\) such that for a test input \(x\sim\mathcal{X}\):
\[G_{\lambda}(x;f)=\begin{cases}\text{OOD}&f(x)\geq\lambda\\ \text{ID}&f(x)<\lambda\end{cases}\quad. \tag{1}\]
The score according to \(f\) is sometimes called the 'novelty score'. Samples whose novelty score is higher than the threshold \(\lambda\) are classified as OOD. Hence, the final goal would be to train model \(f\) such that it assigns lower scores to \(x\sim\mathcal{D}_{in}\) and higher scores to \(x\sim\mathcal{D}_{out}\), where \(\mathcal{D}_{out}\) is the dataset that includes the OOD samples.
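The decision rule in Eq. (1) is a simple threshold on the novelty score; a minimal sketch (illustrative function and variable names, not taken from the released code) could look as follows:

```python
import numpy as np

def ood_classify(novelty_scores, lam):
    """Binary decision of Eq. (1): samples whose score f(x) reaches lambda are flagged as OOD."""
    return np.where(novelty_scores >= lam, "OOD", "ID")

# One common way to pick lambda: the 95th percentile of scores on held-out ID data,
# so that roughly 95% of ID samples are accepted (an assumption, not necessarily the paper's choice).
# lam = np.quantile(id_validation_scores, 0.95)
```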
### Metrics
To assess the separability of the ID and OOD scores given by \(f\), we measure the area under the receiver operating characteristic (AUROC) as a well-known threshold-independent classification criterion, and FPR@95, which measures the FPR at a \(\lambda\) corresponding to the TPR being equal to 95%.
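Both metrics can be computed directly from the two sets of novelty scores. The sketch below treats OOD as the positive class, which is one common convention (the section does not spell out the labeling), and the helper name is illustrative:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_and_fpr95(id_scores, ood_scores):
    """AUROC and FPR@95 from novelty scores (higher score = more OOD)."""
    labels = np.r_[np.zeros(len(id_scores)), np.ones(len(ood_scores))]
    scores = np.r_[id_scores, ood_scores]
    auroc = roc_auc_score(labels, scores)
    lam = np.quantile(ood_scores, 0.05)                    # threshold at which 95% of OOD samples are flagged
    fpr95 = float(np.mean(np.asarray(id_scores) >= lam))   # fraction of ID samples wrongly flagged as OOD
    return auroc, fpr95
```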
### Near, Far, and Synthesized OOD
The distance between \(\mathcal{D}_{in}\) and \(\mathcal{D}_{out}\) plays an important role in the performance of OOD detection methods. It is relatively simpler to detect samples far from the training distribution than samples close to it; we refer to the former samples as far-OOD and to the latter as near-OOD. To consider this in the comparisons, we define different sets of ID and OOD as follows.
**Near and far OODs:** Assuming that we have a dataset \(\mathcal{D}\), we can define the ID and OOD sets in two ways: First, we can separate \(\mathcal{D}\) based on a specific feature such as gender (male vs. female) or age (e.g. elderly vs. young). This reflects what may happen when employing an ML model in practice. As an example, if one develops a model on a population of mostly young people, it might not be fully reliable when applied to the elderly. The second way would be to use \(\mathcal{D}\) as ID and a totally different dataset as OOD. Since identifying OODs in the second scenario appears to be easier, we will refer to the first scenario as near-OOD and the second as far-OOD following the convention in computer vision (Winkens et al., 2020; Fort et al., 2021).
**Synthesized-OOD:** Following the data corruption suggested in Ulmer et al. (2020), we can simulate the OOD samples by scaling a single feature from ID set by a factor of 10, 100, or 1000. For each factor, we will repeat the experiments 100 times with different features, and average the results to isolate the influence of scaling in the results, minimizing the impact of the chosen feature. By increasing the scaling factor, it looks like we are gradually transforming near-OOD samples into far-OOD ones.
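The corruption itself is a one-line operation on the feature matrix; a minimal sketch of how such synthesized-OOD sets could be generated is shown below (the random feature selection and the function name are assumptions about the procedure, not the authors' exact implementation):

```python
import numpy as np

def synthesize_ood(X_id, factor, n_repeats=100, seed=0):
    """Scale one feature of the ID matrix by `factor` (10, 100 or 1000), repeated
    n_repeats times with different features so results do not hinge on one column."""
    rng = np.random.default_rng(seed)
    corrupted_sets = []
    for _ in range(n_repeats):
        j = rng.integers(X_id.shape[1])   # feature to corrupt in this repetition
        X_ood = X_id.copy()
        X_ood[:, j] = X_ood[:, j] * factor
        corrupted_sets.append(X_ood)
    return corrupted_sets
```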
## 4 Experiment Setup
### Supported OOD Detectors
Two main OOD detection method categories, described in Table 1, are included in our experiments. The first category is density estimators, which learn the marginal distribution of the ID data and label OOD samples by comparison to such distribution. We include 7 density-based models covering different types of density estimators. These methods are used in prior works and have reached outstanding results (Ulmer et al., 2020; Zadorozhny et al., 2022).
The second group is the post-hoc detectors, which can be integrated into any pre-trained classifier without requiring any additional fine-tuning or training. They mostly generate a novelty score for the input based on the classifier output or intermediate representations. We evaluate 17 different post-hoc detectors, including both commonly used detectors and top-performing ones in their respective fields.
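To make the post-hoc idea concrete, two of the simplest detectors listed in Table 1, MSP and the energy-based score, can be written in a few lines operating only on the classifier logits; this is a generic sketch, not the benchmark's actual implementation:

```python
import numpy as np
from scipy.special import logsumexp

def msp_score(logits):
    """MSP baseline: novelty score = 1 - maximum softmax probability."""
    z = logits - logits.max(axis=1, keepdims=True)           # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return 1.0 - probs.max(axis=1)

def energy_score(logits, T=1.0):
    """EBO-style score: the free energy -T*logsumexp(logits/T) is higher for OOD inputs."""
    return -T * logsumexp(logits / T, axis=1)
```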
### Datasets
We use eICU (Pollard et al., 2018) and MIMIC-IV (Johnson et al., 2023) in our experiments as two public and popular datasets encompassing tens of thousands of ICU patients in several hospitals. Following the descriptions in section 3.3, each of these datasets is considered as far-OOD set for the other one. For the near-OOD case, we divide the datasets based on the time-independent variables, as this choice better emulates the potential distribution shifts in ID data
that can be encountered in practice than the time-dependent ones. Among the time-independent variables available for each dataset, we have selected age (older than 70 as ID), gender (females as ID), and ethnicity (Caucasian or African American as ID) in eICU, and Age (older than 70 as ID), gender (females as ID), admission type (surgical in the same day of admission as ID), and first care unit (CVICU as ID) in MIMIC-IV dataset. In each case, the remaining part of the dataset would be OOD. The results for the age (older than 70), ethnicity (Caucasian), and first care unit (CVICU) are reported in the main text, and others in Appendix C.
### Pre-processing
We pre-processed the eICU data using the pipeline provided in Sheikhalishahi et al. (2020). Subsequently, the data is filtered to keep only patients with a length of stay of at least 48 hours and an age greater than 18 years old. Additionally, data with unknown discharge status were removed. Furthermore, patients with NaN values in the features used in our models are also removed. This pre-process resulted in a total 54826 unique patients for this dataset.
For the MIMIC-IV, we used the pipeline provided in Gupta et al. (2022) with slight modifications e.g., we added a mapping from the feature IDs to feature names to have each feature name in the final pre-processed data. The data is then filtered similarly to the eICU dataset. This resulted in 18180 unique patients for this dataset.
These datasets contain different types of clinical variables, but they are not recorded for all the patients. To avoid NaN values as much as possible, we are interested in using only the more important variables that are recorded for more patients. Based on these criteria and considering the important clinical variables suggested in Ulmer et al. (2020), we have selected a combination of time-dependent and time-independent variables for each dataset, which are reported in Appendix A.
It should be noted that when datasets are evaluated against each other, only the variables found in both datasets are taken into account. Moreover, for the time-dependent variables, we aggregated the time series by means of 6 different statistics including mean, standard deviation, minimum, maximum, skewness, and number of observations calculated over windows consisting of the full time-series and its first and last 10%, 25%, and 50%.
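A minimal sketch of this aggregation step, mapping one clinical time series to a fixed-length feature vector, is given below (the function names and the handling of very short series are assumptions):

```python
import numpy as np
from scipy.stats import skew

def window_stats(x):
    """The six summary statistics used per window."""
    x = np.asarray(x, dtype=float)
    return [x.mean(), x.std(), x.min(), x.max(), skew(x), len(x)]

def aggregate_series(x):
    """Statistics over the full series and its first/last 10%, 25% and 50% windows."""
    x = np.asarray(x, dtype=float)
    feats = window_stats(x)
    for frac in (0.10, 0.25, 0.50):
        k = max(1, int(round(frac * len(x))))
        feats += window_stats(x[:k]) + window_stats(x[-k:])
    return np.array(feats)   # 6 statistics x 7 windows = 42 features per variable
```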
\begin{table}
\begin{tabular}{l l} \hline \hline
**Method** & **Short Description** \\ \hline AE, VAE (Kingma and Welling, 2014) & Encodes input in a latent representation and reconstructs it. \\ HI-VAE (Nazabal et al., 2020) & Modifies VAE to consider heterogeneous data. \\ Flow (Papamakarios et al., 2017) & Estimates ID data by transformations into a normal distribution. \\ PPCA (Tipping and Bishop, 1999) & Reduces data dimensionality based on singular value decomposition. \\ LOF (De Vries et al., 2010) & Compares local density of input to the density of its closest neighbors. \\ DUE (van Amersfoort et al., 2021) & Uses a deep neural Gaussian process for modeling the ID data. \\ \hline MDS (Lee et al., 2018) & Uses Mahalanobis distances to the class-conditional normal distributions. \\ RMDS (Ren et al., 2021) & Modifies MDS for detecting near-OODs. \\ KNN (Sun et al., 2022) & Measures distance of input to the \(k_{\text{th}}\) nearest neighbor in the ID data. \\ VIM (Wang et al., 2022) & Uses logits and features norm simultaneously. \\ SBE (Zhang et al., 2023b) & Measures the distance of input to the class-conditional representations. \\ KLM (Hendrycks et al., 2022) & Uses KL distance of softmax output from its range over the ID data. \\ OpenMax (Bendale and Boult, 2016) & Fits a Weibull distribution on the logits instead of softmax. \\ MSP (Hendrycks and Gimpel, 2017) & Uses maximum softmax probability as a simple but effective baseline. \\ MLS (Hendrycks et al., 2022) & Uses the maximum logit score instead of MSP. \\ TempScale (Guo et al., 2017) & Calibrates the temperature parameter in the softmax. \\ ODIN (Liang et al., 2018) & Perturbs the input adversarially before using TempScaling. \\ EBO (Liu et al., 2020) & Uses an energy function instead of softmax. \\ GRAM (Sastry and Oore, 2020) & Measures deviation of the Gram matrix from its range over the ID data. \\ GradNorm (Huang et al., 2021) & Uses norm of the backpropagated gradients. \\ ReAct (Sun et al., 2021) & Rectifies the model activations at an upper limit. \\ DICE (Sun and Li, 2022) & Suggests sparsification of the last linear layer. \\ ASH (Djurisic et al., 2023) & Extends the DICE idea to the intermediate feature layers. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Density-based models and post-hoc detectors evaluated in this work, with a short description.
### Task and Prediction Models
To perform post-hoc OOD detection, we would need a prediction model performing a supervised classification task. The main task that is used in prior work is mortality prediction (Sheikhalishahi et al., 2020; Ulmer et al., 2020; Meijerink et al., 2020; Zadorozhny et al., 2022). In mortality prediction, we only use the first 48 hours of data from the intensive care unit collected from patients to predict the in-hospital mortality. It is noteworthy that the mortality rate in the pre-processed data is 12.57% in the MIMIC-IV and 6.77% in the eICU dataset.
To perform this task, we consider three widely used architectures: MLP, ResNet, and FT-Transformer (Gorishniy et al., 2021). MLP passes data through the fully-connected layers and non-linear activation functions with dropout to improve the generalization. ResNet adds batchnorm and residual connections to MLP. We also consider FT-Transformer, constituted of transformer blocks that utilize the attention mechanism (Vaswani et al., 2017).
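For reference, a minimal PyTorch sketch of the simplest of these architectures, an MLP with dropout, is shown below; the hidden width, depth and dropout rate are illustrative assumptions rather than the hyper-parameters used in the benchmark:

```python
import torch.nn as nn

class MLP(nn.Module):
    """Fully-connected mortality classifier with ReLU activations and dropout."""
    def __init__(self, n_features, n_classes=2, hidden=256, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, n_classes),   # logits later consumed by the post-hoc detectors
        )

    def forward(self, x):
        return self.net(x)
```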
## 5 Results
In this section, we describe the results for each of the OOD settings based on the AUROC criterion. Results for FPR@95 are presented in Appendix D. Moreover, each experiment is repeated 5 times, and the results are averaged to reduce the impact of randomness in selecting train/test data and training itself. Additionally, we have measured the mortality prediction performance of the prediction models in Appendix B, indicating that they are trained well.
### Far-OOD
Results for the far-OOD setting are displayed in Table 2. According to this table, there are methods that can effectively detect OOD data on each dataset.
Among the density-based methods, Flow attains the best result on eICU, while others exhibit superior performance on MIMIC-IV. Additionally, DUE can detect OODs on MIMIC-IV, but it falls short of being competitive on the eICU dataset. Except for these two approaches and HI-VAE, other density-based methods including AE, VAE, PPCA, and LOF demonstrate strong performance on both datasets.
Within the post-hoc methodologies, MDS exhibits better results compared to others regardless of the choice of the prediction model. Moreover, MDS applied on ResNet is competitive with the density-based approaches, even marginally outperforming them based on the average results across both datasets. After MDS, there is not a single winner. However, approaches like KNN, VIM, and SHE which somehow compute the distance to the training set as the novelty score, outperform the rest.
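Since it is the distance-based character of MDS that these results single out, a generic sketch of a Mahalanobis-type score computed on penultimate-layer features is included here (a rendering of the idea behind Lee et al. (2018), not the exact benchmark code):

```python
import numpy as np

def fit_gaussians(features, labels, eps=1e-6):
    """Class-conditional means and a shared (regularized) precision matrix from ID features."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False) + eps * np.eye(features.shape[1])
    return means, np.linalg.inv(cov)

def mahalanobis_score(x, means, precision):
    """Novelty score: minimum Mahalanobis distance to any class-conditional Gaussian."""
    return min(float((x - m) @ precision @ (x - m)) for m in means.values())
```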
Regarding the prediction model, the top-performing methods on ResNet demonstrate superior results compared to both MLP and FT-Transformer. However, a problem with ResNet and MLP is that they have over-confidence issues with some approaches on MIMIC-IV. This means that they have more confidence in the OODs, resulting in a performance even worse than a random detector. This is mainly being observed with the detectors that do not rely on distance-based novelty scores such as MSP and GradNorm. Note that the eICU dataset contains data from a more extensive array of hospitals, which can increase diversity in the data and reduce the over-confidence on this dataset.
FT-Transformer seems to solve the over-confidence problem observed in MLP and ResNet, as all detectors perform better than random on both datasets. The attention mechanism in the transformer blocks enables this model to consider relationships between input elements, leading to a better understanding of ID data.
### Near-OOD
Results for the near-OOD setting are presented in Table 2. In the eICU dataset, the diversity among the ID data and the proximity of OODs to ID have collectively yielded an almost random performance for all the approaches. Still, it indicates that methods like MDS and Flow are marginally better than others.
In MIMIC-IV, which contains less diversity, the age variable still results in an almost random performance across all approaches, but FCU reflects some differences between detectors. Among the density-based approaches, AE and VAE are the best choices, followed by PPCA, LOF, and HI-VAE. The post-hoc methods mostly demonstrate similar performance within the same architecture category. Moreover, they can be competitive with density-based methods when applied on the FT-Transformer.
### Synthesized-OOD
Results for the synthesized-OOD setting are displayed in Table 3. It is expected that increasing the
\begin{table}
\begin{tabular}{c|c|c c c|c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Method} & \multicolumn{3}{c|}{eICU} & \multicolumn{3}{c}{MIMIC-IV} \\ & & Far-OOD & Near-OOD(Eth) & Near-OOD(Age) & Far-OOD & Near-OOD(FCU) & Near-OOD(Age) \\ \hline \hline & AE & 96.5\(\pm\)0.2 & 55.1\(\pm\)0.8 & 50.0\(\pm\)0.4 & 99.8\(\pm\)0.0 & 79.4\(\pm\)0.6 & 56.8\(\pm\)0.4 \\ & VAE & 95.8\(\pm\)0.2 & 55.4\(\pm\)0.7 & 50.1\(\pm\)0.4 & **99.8\(\pm\)0.0** & **79.7\(\pm\)0.6** & **57.0\(\pm\)0.4** \\ & HI-VAE & 56.7\(\pm\)1.6 & 44.3\(\pm\)1.2 & 44.4\(\pm\)2.1 & 68.8\(\pm\)8.8 & 79.1\(\pm\)13.4 & 56.8\(\pm\)10.0 \\ Density & Flow & **100.0\(\pm\)0.0** & **61.1\(\pm\)0.9** & 49.7\(\pm\)0.4 & 87.4\(\pm\)7.5 & 31.8\(\pm\)3.9 & 51.6\(\pm\)0.5 \\ Based & PPCA & 96.7\(\pm\)0.2 & 59.1\(\pm\)0.6 & **51.3\(\pm\)0.5** & 99.8\(\pm\)0.0 & 75.1\(\pm\)0.6 & 56.8\(\pm\)0.7 \\ & LOF & 96.5\(\pm\)0.1 & 56.0\(\pm\)0.8 & 49.5\(\pm\)0.5 & 99.2\(\pm\)0.2 & 73.5\(\pm\)0.5 & 55.3\(\pm\)0.8 \\ & DUE & 73.4\(\pm\)0.5 & 53.7\(\pm\)0.9 & 49.5\(\pm\)0.4 & 98.2\(\pm\)0.2 & 59.8\(\pm\)1.7 & 51.6\(\pm\)0.6 \\ \hline \hline & MDS & **84.3\(\pm\)1.4** & **56.8\(\pm\)0.9** & **51.7\(\pm\)0.7** & **98.9\(\pm\)0.6** & 68.0\(\pm\)3.9 & **53.8\(\pm\)0.9** \\ & RMDS & 59.2\(\pm\)2.1 & 50.0\(\pm\)1.7 & 49.1\(\pm\)0.5 & 79.9\(\pm\)12.1 & 60.9\(\pm\)1.1 & 50.5\(\pm\)0.4 \\ & KNN & 79.4\(\pm\)1.2 & 53.4\(\pm\)2.4 & 48.5\(\pm\)0.8 & 65.4\(\pm\)8.3 & **73.3\(\pm\)0.9** & 54.0\(\pm\)0.5 \\ & VIM & 69.9\(\pm\)1.4 & 51.2\(\pm\)1.7 & 46.7\(\pm\)0.8 & 97.8\(\pm\)2.4 & 71.1\(\pm\)1.2 & 54.0\(\pm\)0.9 \\ & SHE & 61.5\(\pm\)0.8 & 50.8\(\pm\)1.9 & 50.1\(\pm\)0.1 & 93.9\(\pm\)5.7 & 56.7\(\pm\)0.7 & 49.6\(\pm\)0.2 \\ & KLM & 68.8\(\pm\)1.2 & 52.7\(\pm\)1.8 & 51.7\(\pm\)0.2 & 79.5\(\pm\)5.8 & 46.1\(\pm\)2.0 & 49.3\(\pm\)0.6 \\ & OpenMax & 52.1\(\pm\)1.3 & 47.4\(\pm\)1.5 & 46.3\(\pm\)1.3 & 62.7\(\pm\)1.03 & 66.9\(\pm\)0.9 & 52.6\(\pm\)0.5 \\ \hline \multirow{2}{*}{MLP} & MSP & 51.2\(\pm\)1.2 & 47.1\(\pm\)1.8 & 46.3\(\pm\)1.3 & 13.4\(\pm\)7.7 & 66.0\(\pm\)0.6 & 52.4\(\pm\)0.4 \\ & MLS & 51.3\(\pm\)1.4 & 46.8\(\pm\)1.7 & 46.2\(\pm\)1.3 & 14.2\(\pm\)7.6 & 65.9\(\pm\)0.8 & 52.3\(\pm\)0.4 \\ & TempScale & 51.2\(\pm\)1.2 & 46.8\(\pm\)1.8 & 46.3\(\pm\)1.3 & 13.3\(\pm\)7.7 & 66.0\(\pm\)0.8 & 52.4\(\pm\)0.3 \\ & ODIN & 51.3\(\pm\)1.2 & 46.8\(\pm\)1.8 & 46.3\(\pm\)1.3 & 13.5\(\pm\)7.7 & 66.0\(\pm\)0.8 & 52.5\(\pm\)0.3 \\ & EBO & 51.4\(\pm\)1.4 & 46.8\(\pm\)1.7 & 46.2\(\pm\)1.3 & 14.4\(\pm\)7.7 & 65.9\(\pm\)0.8 & 52.5\(\pm\)0.3 \\ & GRAM & 46.7\(\pm\)1.4 & 46.6\(\pm\)1.5 & 48.4\(\pm\)0.6 & 16.7\(\pm\)11.7 & 53.6\(\pm\)1.4 & 49.9\(\pm\)0.1 \\ & GradNorm & 50.2\(\pm\)1.3 & 46.8\(\pm\)1.8 & 46.4\(\pm\)1.3 & 12.7\(\pm\)7.9 & 65.8\(\pm\)0.8 & 52.5\(\pm\)0.4 \\ & ReAct & 52.5\(\pm\)1.2 & 46.7\(\pm\)1.5 & 46.6\(\pm\)1.4 & 74.3\(\pm\)1.6 & 65.1\(\pm\)0.9 & 52.5\(\pm\)0.4 \\ & DIGE & 50.6\(\pm\)1.3 & 46.9\(\pm\)1.8 & 46.6\(\pm\)1.3 & 14.7\(\pm\)8.4 & 65.8\(\pm\)0.9 & 52.7\(\pm\)0.8 \\ & ASH & 50.6\(\pm\)1.3 & 46.8\(\pm\)1.5 & 46.6\(\pm\)1.7 & 14.0\(\pm\)7.3 & 65.7\(\pm\)0.9 & 52.1\(\pm\)0.4 \\ \hline \hline & MDS & **96.9\(\pm\)0.3** & **58.4\(\pm\)0.6** & 51.6\(\pm\)0.7 & **99.7\(\pm\)0.1** & 74.0\(\pm\)2.4 & 55.4\(\pm\)0.3 \\ & RMDS & 45.8\(\pm\)2.9 & 50.1\(\pm\)2.5 & 49.9\(\pm\)0.3 & 79.5\(\pm\)13.1 & 62.6\(\pm\)1.4 & 50.8\(\pm\)0.4 \\ & KNN & 91.2\(\pm\)0.9 & 59.2\(\pm\)1.6 & 50.1\(\pm\)0.1 & 93.0\(\pm\)1.9 & 57.7\(\pm\)2.3 & 54.3\(\pm\)0.5 \\ & VIM & 93.5\(\pm\)1.0 & 56.9\(\pm\)1.5 & 48.2\(\pm\)1.1 & 99.5\(\pm\)0.2 & **75.0\(\pm\)1.8** & **56.2\(\pm\)0.4** \\ & SHE & 
65.7\(\pm\)1.6 & 50.5\(\pm\)1.3 & 51.7\(\pm\)0.8 & 99.7\(\pm\)0.1 & 73.0\(\pm\)1.6 & 52.8\(\pm\)0.6 \\ & KLM & 55.5\(\pm\)2.6 & 47.5\(\pm\)1.5 & 51.5\(\pm\)0.4 & 78.1\(\pm\)6.7 & 55.2\(\pm\)1.4 & 47.7\(\pm\)0.2 \\ & OpenMax & 64.8\(\pm\)5.4 & 50.7\(\pm\)1.3 & 47.0\(\pm\)1.1 & 65.2\(\pm\)2.4 & 67.2\(\pm\)1.4 & 54.3\(\pm\)0.4 \\ & MSP & 65.4\(\pm\)5.0 & 51.3\(\pm\)2.4 & 47.1\(\pm\)0.8 & 19.5\(\pm\)10.3 & 65.0\(\pm\)4.3 & 53.9\(\pm\)0.7 \\ \hline \multirow{2}{*}{ResNet} & MLS & 64.4\(\pm\)6.0 & 51.9\(\pm\)2.2 & 46.8\(\pm\)1.4 & 38.0\(\pm\)26.5 & 66.5\(\pm\
\begin{table}
\begin{tabular}{c|c|c c c|c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Method} & \multicolumn{3}{c|}{eICU} & \multicolumn{3}{c}{MIMIC-IV} \\ & & \(\mathcal{F}\)=10 & \(\mathcal{F}\)=100 & \(\mathcal{F}\)=1000 & \(\mathcal{F}\)=10 & \(\mathcal{F}\)=100 & \(\mathcal{F}\)=1000 \\ \hline \hline & AE & 80.5\(\pm\)1.3 & 88.4\(\pm\)0.9 & 90.0\(\pm\)0.7 & 76.4\(\pm\)1.6 & 83.9\(\pm\)2.1 & 86.6\(\pm\)2.1 \\ & VAE & 80.0\(\pm\)1.3 & 88.3\(\pm\)0.9 & 89.9\(\pm\)0.7 & 76.4\(\pm\)1.6 & 83.8\(\pm\)2.1 & 86.6\(\pm\)2.1 \\ & HI-VAE & 50.0\(\pm\)0.1 & 50.0\(\pm\)0.1 & 50.1\(\pm\)0.2 & 50.5\(\pm\)0.8 & 53.1\(\pm\)1.4 & 52.0\(\pm\)2.0 \\ Density & Flow & 70.2\(\pm\)2.5 & 82.1\(\pm\)2.5 & 87.7\(\pm\)1.4 & 53.8\(\pm\)1.8 & 65.0\(\pm\)3.0 & 75.7\(\pm\)2.5 \\ Based & PPCA & 80.7\(\pm\)1.3 & 88.3\(\pm\)0.9 & 89.7\(\pm\)0.8 & 76.9\(\pm\)1.5 & 84.0\(\pm\)2.1 & 86.6\(\pm\)2.0 \\ & LOF & **84.4\(\pm\)1.3** & **89.4\(\pm\)0.8** & **90.5\(\pm\)0.7** & **78.4\(\pm\)1.5** & **84.7\(\pm\)2.1** & **86.9\(\pm\)1.9** \\ & DUE & 63.9\(\pm\)1.6 & 80.5\(\pm\)1.3 & 88.7\(\pm\)0.8 & 60.3\(\pm\)2.2 & 76.0\(\pm\)1.9 & 83.0\(\pm\)2.0 \\ \hline \hline & MDS & **68.5\(\pm\)2.4** & **82.8\(\pm\)2.0** & 89.2\(\pm\)2.0 & **69.3\(\pm\)2.4** & **80.8\(\pm\)1.6** & **84.7\(\pm\)1.8** \\ & RMDS & 60.8\(\pm\)1.0 & 75.2\(\pm\)2.0 & 85.8\(\pm\)2.0 & 52.7\(\pm\)4.1 & 64.5\(\pm\)6.9 & 76.4\(\pm\)2.8 \\ & KNN & 60.3\(\pm\)1.5 & 68.0\(\pm\)2.0 & 70.9\(\pm\)2.3 & 59.1\(\pm\)4.9 & 67.2\(\pm\)8.9 & 70.7\(\pm\)10.0 \\ & VIM & 48.5\(\pm\)2.0 & 47.2\(\pm\)3.1 & 46.0\(\pm\)4.9 & 56.8\(\pm\)7.9 & 63.5\(\pm\)13.7 & 66.6\(\pm\)15.1 \\ & SHE & 62.5\(\pm\)2.8 & 77.8\(\pm\)1.6 & **89.8\(\pm\)0.1** & 62.3\(\pm\)3.0 & 77.8\(\pm\)2.2 & 83.6\(\pm\)1.5 \\ & KLM & 60.2\(\pm\)1.4 & 69.6\(\pm\)1.5 & 78.1\(\pm\)1.4 & 55.4\(\pm\)2.0 & 66.9\(\pm\)1.6 & 74.0\(\pm\)1.5 \\ & OpenMax & 52.6\(\pm\)2.0 & 65.2\(\pm\)2.5 & 77.4\(\pm\)2.5 & 48.4\(\pm\)5.3 & 54.0\(\pm\)5.1 & 69.4\(\pm\)5.6 \\ \hline \multirow{2}{*}{MLP} & MSP & 40.5\(\pm\)2.2 & 27.1\(\pm\)3.0 & 14.2\(\pm\)2.4 & 45.7\(\pm\)4.8 & 31.7\(\pm\)5.5 & 21.8\(\pm\)1.3 \\ & MLS & 40.4\(\pm\)2.4 & 27.0\(\pm\)3.4 & 13.8\(\pm\)2.8 & 45.6\(\pm\)4.1 & 32.1\(\pm\)4.0 & 24.9\(\pm\)3.7 \\ & TempScale & 40.4\(\pm\)2.2 & 27.0\(\pm\)3.1 & 14.0\(\pm\)2.4 & 45.7\(\pm\)4.8 & 31.6\(\pm\)5.5 & 21.7\(\pm\)1.3 \\ & ODIN & 40.5\(\pm\)2.2 & 27.0\(\pm\)3.1 & 14.0\(\pm\)2.4 & 45.7\(\pm\)4.8 & 31.7\(\pm\)5.5 & 21.8\(\pm\)1.3 \\ & EBO & 40.4\(\pm\)2.4 & 27.0\(\pm\)3.4 & 13.8\(\pm\)2.8 & 45.5\(\pm\)3.9 & 32.0\(\pm\)4.0 & 24.9\(\pm\)3.8 \\ & GRAM & 38.7\(\pm\)2.0 & 25.2\(\pm\)2.6 & 12.7\(\pm\)2.1 & 47.0\(\pm\)0.7 & 32.8\(\pm\)3.0 & 20.4\(\pm\)2.2 \\ & GradNorm & 40.2\(\pm\)2.1 & 26.4\(\pm\)2.9 & 13.2\(\pm\)2.4 & 42.6\(\pm\)2.4 & 26.1\(\pm\)1.1 & 18.8\(\pm\)1.4 \\ & ReAct & 45.0\(\pm\)1.6 & 37.3\(\pm\)2.7 & 31.3\(\pm\)2.5 & 49.0\(\pm\)5.7 & 46.6\(\pm\)12.2 & 46.7\(\pm\)1.4 \\ & DICE & 40.9\(\pm\)2.7 & 28.9\(\pm\)3.5 & 16.7\(\pm\)2.9 & 44.8\(\pm\)3.6 & 31.4\(\pm\)2.7 & 22.0\(\pm\)1.1 \\ & ASH & 40.4\(\pm\)2.4 & 27.3\(\pm\)3.1 & 14.3\(\pm\)2.6 & 45.3\(\pm\)4.1 & 32.1\(\pm\)4.3 & 25.1\(\pm\)3.7 \\ \hline \hline & MDS & **74.4\(\pm\)2.0** & **88.9\(\pm\)1.7** & 91.3\(\pm\)1.4 & **72.4\(\pm\)1.1** & **81.8\(\pm\)0.8** & **85.4\(\pm\)1.0** \\ & RMDS & 51.7\(\pm\)0.6 & 59.8\(\pm\)2.5 & 78.7\(\pm\)2.4 & 46.2\(\pm\)1.5 & 57.6\(\pm\)2.7 & 73.9\(\pm\)1.2 \\ & KNN & 69.5\(\pm\)2.1 & 84.8\(\pm\)2.2 & 87.9\(\pm\)2.2 & 67.5\(\pm\)1.5 & 79.1\(\pm\)1.2 & 83.7\(\pm\)0.8 \\ & VIM & 72.2\(\pm\)2.5 & 87.8\(\pm\)1.8 & 91.5\(\pm\)1.4 & 68.8\(\pm\)1.3 & 
80.2\(\pm\)1.2 & 84.5\(\pm\)1.1 \\ & SHE & 68.2\(\pm\)0.7 & 86.6\(\pm\)0.5 & **91.5\(\pm\)0.2** & 67.2\(\pm\)1.4 & 80.0\(\pm\)0.9 & 84.5\(\pm\)0.7 \\ & KLM & 56.6\(\pm\)0.8 & 69.4\(\pm\)1.7 & 80.5\(\pm\)1.9 & 53.2\(\pm\)0.9 & 65.4\(\pm\)0.5 & 75.3\(\pm\)0.8 \\ & OpenMax & 57.5\(\pm\)0.7 & 66.4\(\pm\)4.3 & 77.4\(\pm\)2.9 & 55.2\(\pm\)1.0 & 61.6\(\pm\)1.4 & 61.7\(\pm\)11.4 \\ & MSP & 49.4\(\pm\)2.2 & 35.1\(\pm\)2.1 & 16.8\(\pm\)1.0 & 52.2\(\pm\)1.1 & 38.0\(\pm\)0.7 & 23.1\(\pm\)0.9 \\ ResNet & MLS & 49.0\(\pm\)1.7 & 34.7\(\pm\)2.0 & 18.5\(\pm\)2.
scaling factor (\(\mathcal{F}\)) in this setting facilitates the detection of OODs. However, over-confidence hampers OOD detection, causing certain methods to exhibit a totally opposite behavior with MLP and ResNet architectures, on both datasets. In this case, even the diversity in eICU does not prevent over-confidence. Similar to the far-OOD scenario, FT-Transformer seems to solve this issue, as increasing \(\mathcal{F}\) results in an improved performance for this architecture.
To compare different approaches, density-based methods like AE, VAE, PPCA, and LOF demonstrate better performance than post-hoc ones when \(\mathcal{F}=10\). However, this gap in performance is reduced with increasing \(\mathcal{F}\) to the extent that methods like MDS, VIM, and SHE applied on ResNet are outperforming density-based ones on the eICU dataset with \(\mathcal{F}=1000\).
### Over-confidence
The results above showed over-confidence for the MLP and ResNet architectures, whereas the FT-Transformer appeared to be a solution to this problem. For a more visual exploration of this issue, we employ a classification toy example. In this example, each of the predictive architectures is trained on a multi-class classification task with 2D samples. Next, the entropy of the model's softmax output is plotted for a wide array of inputs. Plots are shown in Fig. 2, with a lighter color indicating more confidence (i.e., low entropy in the softmax output). As depicted in this figure, the confidence of MLP and ResNet increases as inputs move farther away from the ID data. This observation confirms the presence of over-confidence in both the MLP and ResNet models. Conversely, in the FT-Transformer, confidence increases along certain directions and decreases in others. This suggests that the transformer can mitigate over-confidence in some directions, but does not solve it entirely.
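The toy experiment behind Fig. 2 is easy to reproduce. The following sketch (ours, not the benchmark code) trains a small scikit-learn MLP on 2-D blobs as a stand-in for the MLP/ResNet/FT-Transformer predictors and plots the softmax entropy over a wide grid; light (low-entropy) regions far from the training data indicate over-confidence.

```python
# Minimal sketch (not the benchmark code): train a small classifier on 2-D toy data
# and plot the entropy of its softmax output over a wide grid, as in Fig. 2.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.neural_network import MLPClassifier

X, y = make_blobs(n_samples=600, centers=4, cluster_std=1.0, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X, y)

# Evaluate softmax entropy on a grid extending far beyond the in-distribution data.
xs = np.linspace(X[:, 0].min() - 20, X[:, 0].max() + 20, 300)
ys = np.linspace(X[:, 1].min() - 20, X[:, 1].max() + 20, 300)
xx, yy = np.meshgrid(xs, ys)
probs = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1).reshape(xx.shape)

plt.contourf(xx, yy, entropy, levels=30)   # light regions = low entropy = high confidence
plt.scatter(X[:, 0], X[:, 1], c=y, s=5)
plt.colorbar(label="softmax entropy")
plt.show()
```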
## 6 Discussion
According to our results, when OODs are far from the ID data, there exist methods that can effectively perform OOD detection. However, OODs close to the ID data, such as samples scaled by \(\mathcal{F}=10\) or those in the near-OOD setting, are still challenging to spot. While the poor results in the near-OOD scenarios might be due to a large overlap with the ID data, when multiplying features with a scaling factor one can ensure that there is a clear difference between such OOD and ID data. This points to the fact that there is still ample room for improvement in detecting near OODs.
Delving into the OOD detectors, density-based methods, particularly AE, VAE, PPCA, and Flow, exhibit consistently good performance across different settings and datasets. Post-hoc methods perform poorly in some cases, but improve substantially and become competitive with the density-based ones when combined with distance-based mechanisms. For example, MDS applied on ResNet can even marginally outperform density-based methods in some cases, such as the experiment with a scaling factor of \(\mathcal{F}=1000\) on the eICU dataset. This nuances previous claims that post-hoc methods generally exhibit poor performance (Ulmer et al., 2020; Zadorozhny et al., 2022).
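For reference, the Mahalanobis-distance score (MDS) used as a distance-based post-hoc detector can be sketched in a few lines of NumPy: fit class means and a shared covariance on ID features (e.g., penultimate-layer activations of a trained backbone, assumed to be extracted elsewhere) and score a test point by its minimum squared Mahalanobis distance to any class mean. The random features below are placeholders.

```python
# Minimal sketch of a Mahalanobis-distance (MDS) OOD score on pre-extracted features.
import numpy as np

def fit_mds(features, labels):
    """Class means and a shared (tied) precision matrix estimated on ID training features."""
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    centered = features - means[np.searchsorted(classes, labels)]
    cov = np.cov(centered, rowvar=False)
    return means, np.linalg.pinv(cov)

def mds_score(x, means, prec):
    """Higher score = more OOD (minimum squared Mahalanobis distance over classes)."""
    diffs = means - x                               # (num_classes, dim)
    d2 = np.einsum("cd,de,ce->c", diffs, prec, diffs)
    return d2.min()

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(500, 16))            # placeholder ID features
train_labels = rng.integers(0, 3, size=500)
means, prec = fit_mds(train_feats, train_labels)
print(mds_score(rng.normal(size=16), means, prec))        # ID-like point: small score
print(mds_score(100 * rng.normal(size=16), means, prec))  # scaled, OOD-like point: large score
```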
Figure 2: Depiction of confidence scores for different architectures. High confidence (low entropy) is represented in light orange. MLP and ResNet exhibit regions of confidence extending far away from the ID data. FT-Transformer mitigates this over-confidence but does not solve it.
Comparing the different prediction models, ResNet combined with distance-based detectors demonstrates better results than MLP and FT-Transformer. On the other hand, MLP and ResNet suffer from over-confidence, causing certain detectors to perform even worse than a random classifier. This aligns with what was highlighted in prior studies (Hein et al., 2019; Ulmer and Cina, 2021). Our numerical results suggest that FT-Transformer could be a solution to this problem; however, our simple example with toy data shows that transformers do not completely eliminate over-confidence.
This benchmark is built on two intensive care datasets, which are highly granular. Hence, caution should be exercised when transporting these findings to alternative healthcare tabular datasets with different characteristics. To facilitate the extension of this benchmark, we provided a modular implementation allowing for the addition of new datasets and methods to the experiments.
|
2309.06033 | Energy-Aware Federated Learning with Distributed User Sampling and
Multichannel ALOHA | Distributed learning on edge devices has attracted increased attention with
the advent of federated learning (FL). Notably, edge devices often have limited
battery and heterogeneous energy availability, while multiple rounds are
required in FL for convergence, intensifying the need for energy efficiency.
Energy depletion may hinder the training process and the efficient utilization
of the trained model. To solve these problems, this letter considers the
integration of energy harvesting (EH) devices into a FL network with
multi-channel ALOHA, while proposing a method to ensure both low energy outage
probability and successful execution of future tasks. Numerical results
demonstrate the effectiveness of this method, particularly in critical setups
where the average energy income fails to cover the iteration cost. The method
outperforms a norm based solution in terms of convergence time and battery
level. | Rafael Valente da Silva, Onel L. Alcaraz López, Richard Demo Souza | 2023-09-12T08:05:39Z | http://arxiv.org/abs/2309.06033v1 | # Energy-Aware Federated Learning with Distributed User Sampling and Multichannel ALOHA
###### Abstract
Distributed learning on edge devices has attracted increased attention with the advent of federated learning (FL). Notably, edge devices often have limited battery and heterogeneous energy availability, while multiple rounds are required in FL for convergence, intensifying the need for energy efficiency. Energy depletion may hinder the training process and the efficient utilization of the trained model. To solve these problems, this letter considers the integration of energy harvesting (EH) devices into a FL network with multi-channel ALOHA, while proposing a method to ensure both low energy outage probability and successful execution of future tasks. Numerical results demonstrate the effectiveness of this method, particularly in critical setups where the average energy income fails to cover the iteration cost. The method outperforms a norm based solution in terms of convergence time and battery level.
Energy Harvesting, Federated Learning, Multi-channel ALOHA, User Sampling.
## I Introduction
Federated learning (FL) has emerged as a prominent research topic within the wireless communication community, gaining significant attention in recent years [1]. In FL, edge devices collaboratively train a global model by only sharing local model updates, which provides a higher protection against the exposure of sensitive data, such as surveillance camera images, geolocation data, and health information. However, such collaborative training requires multiple communication rounds, raising spectral and energy efficiency concerns [1]. The latter is particularly important for edge devices, given their inherent energy limitations.
The sixth generation (6G) of wireless systems targets 10-100 times more energy efficiency than 5G, which is critical for supporting massive Internet of Things (IoT) networks [2]. Such demanding vision requires a meticulous design of the communication system, where medium access control (MAC) mechanisms play a major role. Grant-free random access protocols, such as slotted ALOHA (SA) with multiple channels, are suitable candidates for massive IoT applications, since control signaling is much reduced. Moreover, energy availability must be considered to support self-sustainable networks, in which _energy neutrality_[3], balancing availability and expenditure of energy resources, is essential.
Existing literature on FL indirectly addresses spectral and energy efficiency by optimizing the convergence time, leveraging informative updates from users [4, 5] or the relationship between local and global models [6], reducing the required number of iterations. These approaches often overlook the initial battery levels of different devices, which can result in energy depletion during the training process and hinder the overall progress. Even if the training process is not impeded, the remaining energy may be insufficient for the execution of future tasks and the utilization of the trained model.
This letter considers the use of EH devices, which eliminate the need for frequent battery replacement [7], while also allowing energy neutrality. Prior works in [8, 9] considered some sort of energy income for FL networks. In [8], a wireless-powered FL system is considered and the tradeoff between model convergence and the transmission power of the access point is derived. The authors in [9] consider EH devices with multiple base stations (BS) and propose a user selection algorithm to minimize the training loss. However, [8, 9] overlook the residual energy in the devices at the end of the training process and the energy imbalances among users, which are considered in this letter. Moreover, they do not consider a random access protocol and massive IoT settings. We present a novel energy-aware user sampling technique for a FL network under a multichannel SA protocol. The proposed method enables users to make informed decisions regarding their participation in an iteration, controlling the computation cost. Numerical results corroborate the effectiveness of our method. In critical energy income setups, lower error and higher energy availability can be achieved compared to [4], which solely considers the informativeness of updates. We can achieve an error 46.72% smaller, while maintaining 37% more energy in a network of 100 devices, and the performance gap increases with the number of deployed devices.
## II System Model
Consider a wireless network comprising \(K\) users, indexed as \(k\in\mathcal{K}=\{1,2,\ldots,K\}\), a BS, and \(M\) orthogonal channels. Each user has a dataset \(\mathcal{D}_{k}=\{\mathbf{x}_{k},\mathbf{y}_{k}\}\) associated with its respective local model. Here, \(\mathbf{x}_{k}\) is the unlabeled sample vector, with size \(L\times 1\), and \(\mathbf{y}_{k}\) is the ground truth vector for supervised learning. The common goal of every device is to minimize a global loss function \(F(\mathbf{w})\) as
\[\min_{\mathbf{w}}\frac{1}{K}\sum_{k=1}^{K}f_{k}(\mathbf{w}), \tag{1}\]
where \(f_{k}(\mathbf{w})=\ell(\mathbf{w},\mathbf{x}_{k},\mathbf{y}_{k})\) is the local loss function for the \(k\)-th user and \(\mathbf{w}\) is the global model. In FL, the problem in (1) is tackled by distributively minimizing \(f_{k}(\mathbf{w})\) over iterations, which yields a local model update \(\mathbf{g}_{k}(t)=\nabla f_{k}(\mathbf{w}(t))\) for the stochastic gradient descendent method. To ensure collaborative learning, each user transmits \(\mathbf{g}_{k}(t)\) to the BS, which employs an aggregation function to update the global model. Here, we consider FedAvg [10], thus, the global model is updated as
\[\mathbf{w}(t+1)=\mathbf{w}(t)-\mu\sum_{k\in\mathcal{K}}d_{k}\mathbf{g}_{k}(t), \tag{2}\]
where \(\mu>0\) is the learning rate and \(d_{k}=|\mathcal{D}_{k}|/\sum_{k^{\prime}=1}^{K}|\mathcal{D}_{k^{\prime}}|\). Then, the BS broadcasts \(\mathbf{w}(t+1)\) for all users.
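As a concrete illustration, the aggregation step in (2) amounts to the short sketch below; the data-size weights \(d_k\) and the received local updates \(\mathbf{g}_k\) are assumed to be available at the BS, and the learning rate value is illustrative.

```python
# Minimal sketch of the FedAvg-style global update in (2).
import numpy as np

def fedavg_update(w, grads, data_sizes, lr=0.01):
    """w(t+1) = w(t) - lr * sum_k d_k * g_k(t), with d_k = |D_k| / sum_k' |D_k'|."""
    d = np.asarray(data_sizes, dtype=float)
    d /= d.sum()
    return w - lr * sum(dk * gk for dk, gk in zip(d, grads))

w = np.zeros(10)
grads = [np.random.randn(10) for _ in range(5)]          # received local updates
w = fedavg_update(w, grads, data_sizes=[100, 80, 120, 90, 110])
```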
From (2), we can observe that the size of the learning step is directly affected by the norm of the local update \(||\mathbf{g}_{k}(t)||\), which quantifies how informative the update is. In [4], the authors present a method to adaptly decide the transmission probability of users based on the local update norm given by
\[p_{\text{tx},k}(t)=\max(\min(e\ln||\mathbf{g}_{k}(t)||-\lambda(t),1),0). \tag{3}\]
In this context, \(\lambda(t)\) serves as a feedback signal that ensures an efficient utilization of the \(M\) orthogonal channels in a multichannel SA setup 1. The value of \(\lambda(t)\) is determined by
Footnote 1: As discussed in [11], transmission errors (or collisions) may compromise the FL performance. However, following [4], the considered network maximizes the utilization of the available resources.
\[\lambda(t)=\lambda(t-1)+\mu_{1}(\hat{K}-M), \tag{4}\]
where \(\mu_{1}\) is a step size and \(\hat{K}\leq K\) is the number of transmissions that occurred at the previous iteration.
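A minimal sketch of the transmission rule (3) and the feedback update (4) follows; the constant \(e\) in (3) is read here as Euler's number, and all other symbols follow the notation above.

```python
# Minimal sketch of the norm-based transmission probability (3)
# and the multichannel-load feedback update (4).
import numpy as np

def tx_probability(grad_norm, lam):
    # p_tx = max(min(e * ln||g|| - lambda, 1), 0), following (3)
    return float(np.clip(np.e * np.log(grad_norm) - lam, 0.0, 1.0))

def update_feedback(lam, num_tx, num_channels, mu1=0.1):
    # lambda(t) = lambda(t-1) + mu1 * (K_hat - M), following (4)
    return lam + mu1 * (num_tx - num_channels)

lam = 0.0
p = tx_probability(grad_norm=2.5, lam=lam)
lam = update_feedback(lam, num_tx=14, num_channels=10)
```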
Note that this method does not consider the potentially limited energy availability at the devices. For instance, an EH user could repeatedly transmit and drain its battery in the process, rendering the execution of future tasks impossible. To mitigate this, we introduce a sleep probability and consider a strategy depicted in Fig. 1 and based on the following steps.
1. **Energy Harvesting:** At the start of an iteration, each device harvests \(\zeta_{k}(t)\) Joules of energy and stores in the battery if its capacity allows, being \(\zeta_{k}(t)\) a random variable with a predefined distribution.
2. **Engagement:** Each user decides whether to engage in the iteration with a sleep probability \[p_{s,k}(t)=1-\alpha\frac{B_{k}(t)}{B_{\max}},\] (5) where \(\alpha\) is a constant, \(B_{k}(t)\) is the current battery level, and \(B_{\max}\) is the battery capacity, which is the same for all devices. We propose this sleep probability to equalize the battery charge of all devices over time. The awaken users receive the global model \(\mathbf{w}(t)\) from the BS and compute their local model updates \(\mathbf{g}_{k}(t)\).
3. **Informative Multi-Channel SA:** Users transmit \(\mathbf{g}_{k}(t)\) with a probability given by (3). Transmissions occur through a randomly chosen channel among \(M\) channels. A transmission is only successful if there is no collision.
4. **Global Model Updates**: Following (2) the BS aggregates the local updates and broadcasts \(\mathbf{w}(t+1)\) and \(\lambda(t+1)\), which are assumed to be collision-free.
Following this procedure, the battery evolution model is
\[B_{k}(t) =B_{k}(t-1)+\min(\zeta_{k}(t),B_{\max}-B_{k}(t-1))\] \[-\delta_{\text{e},k}(t)(E_{k}^{\text{cmp}}+E_{k}^{\text{rx}})- \delta_{\text{tx},k}(t)E_{k}^{\text{tx}}, \tag{6}\]
where \(\delta_{\text{e},k}(t)\) and \(\delta_{\text{tx},k}(t)\) are indicator functions representing user engagement and transmission, respectively. They are equal to \(1\) when the corresponding event occurs and \(0\) otherwise. Additionally, \(E_{k}^{\text{cmp}}\), \(E_{k}^{\text{rx}}\), and \(E_{k}^{\text{tx}}\) are the computation, reception, and transmission energy costs, respectively, whose models are presented in Section III. Moreover, it is crucial to choose a precise value for \(\alpha\) in step 2) to ensure the proper functioning of the network, which is discussed in Section IV.
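One iteration of the engagement rule (5) and the battery update (6) can be sketched as below; a simple battery-feasibility check, which (6) leaves implicit, is added so that a device never spends energy it does not have, and all numerical values are placeholders.

```python
# Minimal sketch of one iteration of the engagement rule (5) and battery update (6).
import numpy as np

rng = np.random.default_rng(0)

def step_battery(B, zeta, p_tx, alpha, B_max, E_cmp, E_rx, E_tx):
    B = B + min(zeta, B_max - B)                  # harvest, capped by battery capacity
    p_sleep = 1.0 - alpha * B / B_max             # engagement rule (5)
    engaged = rng.random() > p_sleep
    transmitted = False
    if engaged and B >= E_cmp + E_rx:
        B -= E_cmp + E_rx                         # receive global model + compute local update
        if rng.random() < p_tx and B >= E_tx:
            B -= E_tx                             # transmit local update
            transmitted = True
    return B, engaged, transmitted

B, engaged, tx = step_battery(B=0.05, zeta=0.01, p_tx=0.1, alpha=0.8,
                              B_max=0.1, E_cmp=1e-3, E_rx=1e-3, E_tx=2e-3)
```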
## III Energy Consumption Models
### _Local-Computation Model_
The computation complexity of a machine learning algorithm can be measured by the number of required floating point operations (FLOPs). Let \(W\) denote the number of FLOPs per data sample for a given model. The total number of FLOPs for the \(k\)-th user to perform one local update is
\[G_{k}=W|\mathcal{D}_{k}|. \tag{7}\]
Let \(f_{\text{clk},k}\) be the processor clock frequency (in cycles/s) of the \(k\)-th user and \(C_{k}\) be the number of FLOPs it processes within one cycle. Then, the time required for one local update is
\[t_{k}=\frac{G_{k}}{C_{k}f_{\text{clk},k}},\quad\forall k\in\mathcal{K}. \tag{8}\]
Moreover, for a CMOS circuit, the central processing unit (CPU) power is often modeled by its most predominant part: the dynamic power [12], which is proportional to the square of the supply voltage and to the operating clock frequency. For a low voltage supply, as in our case, the frequency scales approximately linearly with the voltage [12]. Therefore, the CPU power consumption can be written as [8]
\[p_{k}^{\text{cmp}}=\psi_{k}f_{\text{clk},k}^{3}\quad\forall k\in\mathcal{K}, \tag{9}\]
where \(\psi\) is the effective capacitance and depends on the chip architecture. Based on (8) and (9), the energy consumption of the computation phase for the \(k\)-th user is given by
\[E_{k}^{\text{cmp}}=t_{k}p_{k}^{\text{cmp}}=\psi_{k}\frac{G_{k}}{C_{k}}f_{\text {clk},k}^{2}. \tag{10}\]
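For illustration, (7) and (10) reduce to a one-line energy function; the parameter values below are the ones reported later in Section V, and the number of local samples is an assumed placeholder.

```python
# Minimal sketch of the local-computation energy model (7)-(10).
def computation_energy(num_samples, flops_per_sample, flops_per_cycle, f_clk, psi):
    G = flops_per_sample * num_samples             # total FLOPs per local update, (7)
    return psi * (G / flops_per_cycle) * f_clk**2  # E_cmp = psi * (G / C) * f_clk^2, (10)

E_cmp = computation_energy(num_samples=32, flops_per_sample=4 * 10,
                           flops_per_cycle=20, f_clk=0.25e9, psi=1e-20)
```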
Fig. 1: Users begin the iteration by harvesting energy. Then, a user may engage by computing its local model update \(\mathbf{g}_{k}(t)\). A user can either transmit or withhold its update. Transmissions occur through one of \(M\) channels using SA. If more than one user access the same channel, there is a collision.
### _Transceiver Model_
The energy consumed by the edge devices' transceivers is
\[E_{k}^{\text{comms}}=E_{k}^{\text{tx}}+E_{k}^{\text{rx}}+E_{k}^{\text{sleep}}, \tag{11}\]
where \(E_{k}^{\text{tx}}\) (\(E_{k}^{\text{rx}}\)) is the energy required to transmit (receive) a local (global) update while \(E_{k}^{\text{sleep}}\) is the consumed energy during the inactive time. Since \(E_{k}^{\text{sleep}}\) is much smaller than \(E_{k}^{\text{tx}}\) and \(E_{k}^{\text{rx}}\), we neglect its impact in the following.
Considering the transmission of local updates with a radiated power \(P_k^{\text{tx}}\), the power consumed by the edge transceivers can be modeled as [13]
\[P_{k}^{\text{total}}=\frac{P_{k}^{\text{tx}}}{\eta}+P_{\text{circ}}, \tag{12}\]
where \(\eta\) is the drain efficiency of the power amplifier (PA), and \(P_{\text{circ}}\) is a fixed power consumption that comprises all other transceiver circuits except the PA. Then, the energy required to transmit a local update is
\[E_{k}^{\text{tx}}=\frac{P_{k}^{\text{total}}}{R_{b}^{\text{tx}}}N_{k}, \tag{13}\]
where \(N_{k}\) is the local update size in bits, and \(R_{b}^{\text{tx}}\) is the bit rate in the uplink. Meanwhile, the energy consumed when receiving the global updates is modeled by
\[E_{k}^{\text{rx}}=\frac{P_{k}^{\text{rx}}}{R_{b}^{\text{rx}}}N, \tag{14}\]
where \(N\) is the global update size in bits, \(R_{b}^{\text{rx}}\) is the bit rate in the downlink, and \(P_{k}^{\text{rx}}\) is the receive power consumption, which includes \(P_{\text{circ}}\). Thus, \(P_{k}^{\text{rx}}\) is slightly greater than \(P_{\text{circ}}\), but usually smaller than \(P_{k}^{\text{total}}\).
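Similarly, (12)-(14) scale the payload size by a power-over-rate ratio. The sketch below uses the BLE-like parameters of Section V, reads the reported radiated power of 3.3 dB as 3.3 dBm (about 2.14 mW), and assumes a 320-bit local update (10 float32 entries) purely for illustration.

```python
# Minimal sketch of the transceiver energy model (12)-(14).
def tx_energy(p_tx_radiated, eta, p_circ, bits, rate):
    p_total = p_tx_radiated / eta + p_circ        # PA plus fixed circuitry, (12)
    return p_total / rate * bits                  # energy to transmit the local update, (13)

def rx_energy(p_rx, bits, rate):
    return p_rx / rate * bits                     # energy to receive the global update, (14)

E_tx = tx_energy(p_tx_radiated=2.14e-3, eta=0.33, p_circ=1.33e-3,
                 bits=10 * 32, rate=1e6)          # 3.3 dBm and a 320-bit update are assumptions
E_rx = rx_energy(p_rx=1.9e-3, bits=10 * 32, rate=1e6)
```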
## IV Sleep Probability Tuning
To ensure that a device saves enough energy for future tasks while still participating in the model training, we propose a precise selection of parameter \(\alpha\) based on the EH process and the desired battery level at the end of the training. Notice that the expected battery level with respect to \(k\) and assuming equal costs for all devices can be obtained from (6) as
\[\mathbb{E}[B_{k}(t)] =\mathbb{E}[B_{k}(t-1)]+\mathbb{E}[\min(\zeta_{k}(t),B_{\max}-B_{k}(t-1))]\] \[-\mathbb{E}[\delta_{\text{e},k}(t)](E^{\text{cmp}}+E^{\text{rx}})-\mathbb{E}[\delta_{\text{tx},k}(t)]E^{\text{tx}}\] \[=\mathbb{E}[B_{k}(t-1)]+\mathbb{E}[\min(\zeta_{k}(t),B_{\max}-B_{k}(t-1))]\] \[-\alpha\frac{\mathbb{E}[B_{k}(t)]}{B_{\max}}(E^{\text{cmp}}+E^{\text{rx}})-p_{\text{tx},k}(t)E^{\text{tx}}, \tag{15}\]
where \(\mathbb{E}[\delta_{\text{e},k}(t)]=1-p_{\text{s},k}(t)\) and \(\mathbb{E}[\delta_{\text{tx},k}(t)]=p_{\text{tx},k}(t)\). We also consider the expectation of the battery level in \(p_{\text{s},k}\), since we aim to stabilize the average battery level to a fixed threshold \(\xi>0\) over time. Therefore, as \(t\) tends to infinity, \(\mathbb{E}[B_{k}(t)]\) converges to \(\xi\). Using this in (15) leads to
\[\alpha=\left(E_{h}-p_{\text{tx},k}(t)E^{\text{tx}}\right)\frac{B_{\max}}{\xi( E^{\text{cmp}}+E^{\text{rx}})}, \tag{16}\]
where \(E_{h}=\mathbb{E}[\min(\zeta_{k}(t),B_{\max}-B_{k}(t-1))]\) is the average harvested energy. Note that the proposed solution requires knowledge of \(\zeta_{k}(t)\) and \(B_{k}(t-1)\) distributions. Although it is reasonable to assume that a device has such knowledge, mathematical tractability of the battery level is challenging. Since the required battery knowledge pertains to a previous time than the energy income, the distributions of these two variables are independent. This allows us to rearrange the expectations and state the average harvested energy as
\[E_{h} =\mathbb{E}[\min(\zeta_{k}(t),B_{\max}-B_{k}(t-1))]\] \[=\mathbb{E}_{\zeta}[\mathbb{E}_{B}[\min(\zeta_{k}(t),B_{\max}-B_{k}(t-1))]]\] \[\overset{(a)}{\geq}\mathbb{E}_{\zeta}[\min(\zeta_{k}(t),B_{\max}-\mathbb{E}[B_{k}(t-1)])]\] \[\overset{(b)}{=}\mathbb{E}[\min(\zeta_{k}(t),B_{\max}-\xi)]\] \[=\mathbb{E}[\zeta_{k}(t)\mid\zeta_{k}(t)\leq B_{\max}-\xi]\text{Pr}\{\zeta_{k}(t)\leq B_{\max}-\xi\}\] \[+(B_{\max}-\xi)\text{Pr}\{\zeta_{k}(t)>B_{\max}-\xi\}. \tag{17}\]
Since the minimum function is convex, we employed Jensen's inequality in step (a) and from step (b) onward we consider \(t\rightarrow\infty\), thus \(\mathbb{E}[B_{k}(t-1)]=\xi\).
Since \(p_{\text{tx},k}(t)\) is not known a priori, and to allow deviations of the energy stored in the battery about \(\xi\), we use \(\mathbb{E}[p_{\text{tx},k}(t)]\) in (16) instead of \(p_{\text{tx},k}(t)\). According to (4), out of the \(K\) users, \(M\) updates per iteration are transmitted on average to the BS, thus, \(\mathbb{E}[p_{\text{tx},k}(t)]=M/K\). Then, with (17) and (16) we have
\[\alpha\geq\left(\mathbb{E}_{k}[\min(\zeta_{k}(t),B_{\max}-\xi)]-\frac{M}{K}E^{ \text{tx}}\right)\frac{B_{\max}}{\xi(E^{\text{cmp}}+E^{\text{rx}})}. \tag{18}\]
At the beginning of the training process, the BS broadcasts the value of \(\alpha\) solved by assuming equality in (18).
### _Mean EH Knowledge_
We also consider a simpler variation of the method where we exploit only the average EH information, i.e., we use \(E_{h}=\mathbb{E}[\zeta_{k}(t)]\) and \(\mathbb{E}[p_{\text{tx},k}(t)]=M/K\) in (16), thus
\[\alpha=\left(\mathbb{E}[\zeta_{k}(t)]-\frac{M}{K}E^{\text{tx}}\right)\frac{B_{ \max}}{\xi(E^{\text{cmp}}+E^{\text{rx}})}. \tag{19}\]
The energy mean knowledge (EMK) approach in (19) disregards the impact of the maximum battery capacity, different from the energy distribution knowledge (EDK) in (18).
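Numerically, the two choices of \(\alpha\) can be sketched as follows: the EDK rule (18) estimates \(\mathbb{E}[\min(\zeta_{k}(t),B_{\max}-\xi)]\) from samples of the income distribution (a stand-in exponential income is used below purely for illustration), while the EMK rule (19) only needs the mean income. All numerical values are illustrative.

```python
# Minimal sketch of choosing alpha via (18) (EDK, from income samples) and (19) (EMK).
import numpy as np

def alpha_edk(income_samples, xi, B_max, E_cmp, E_rx, E_tx, M, K):
    """income_samples: per-iteration harvested energy, in the same units as the E_* costs."""
    Eh = np.minimum(income_samples, B_max - xi).mean()              # E[min(zeta, B_max - xi)]
    return (Eh - (M / K) * E_tx) * B_max / (xi * (E_cmp + E_rx))    # equality in (18)

def alpha_emk(mean_income, xi, B_max, E_cmp, E_rx, E_tx, M, K):
    return (mean_income - (M / K) * E_tx) * B_max / (xi * (E_cmp + E_rx))   # (19)

rng = np.random.default_rng(0)
samples = rng.exponential(scale=5e-4, size=100_000)   # stand-in income distribution (illustrative)
a_edk = alpha_edk(samples, xi=0.04, B_max=0.1, E_cmp=1e-3, E_rx=1e-3, E_tx=2e-3, M=10, K=100)
a_emk = alpha_emk(samples.mean(), xi=0.04, B_max=0.1, E_cmp=1e-3, E_rx=1e-3, E_tx=2e-3, M=10, K=100)
```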
## V Simulation Results
We analyze the performance of the proposed method compared to the Largest Updates' Norms (LUN) baseline, where users transmit the updates with the largest norms according to [4]. Additionally, to illustrate the necessity of the adaptive control presented in (3) and (4), we include a baseline method that assigns a uniform transmission probability \(p_{\text{tx},k}=M/K\) to all users (to distinguish, we use the acronym AC for adaptive control). We assume a linear regression problem with the following loss function: \(f_{k}(\mathbf{w})=0.5|\mathbf{x}_{k}^{\text{T}}\mathbf{w}(t)-y_{k}|^{2}\)[4], where \(\mathbf{x}_{k}\sim\mathcal{N}(\mathbf{v}_{k},\mathbf{I})\), \(y_{k}=\mathbf{x}_{k}^{\text{T}}\mathbf{w}\), and \(\mathbf{w}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). Note that \(\mathbf{w}(t)\) are the training weights, while \(\mathbf{w}\) corresponds to the true weights. Also, parameter \(v_{k}\sim\mathcal{N}(0,\beta_{k})\) is utilized to generate a non-IID dataset, with \(\beta_{k}=\mathbf{I}\) indicating the non-IID degree.
Similar to [9], the energy income at each user is modeled by a compound Poisson stochastic process, i.e., the interarrival
time is modeled by an exponential distribution with rate \(r\) and the amount of energy harvested in each arrival is modeled by a Poisson process with parameter \(m/r\), thus, \(\mathbb{E}_{t}[\zeta_{k}(t)]=m\). This model is defined by discrete units of energy. We scale one unit of energy to the total cost of an iteration in J, i.e., \(E_{k}^{\text{comms}}+E_{k}^{\text{cmp}}\). Unless stated otherwise, we set \(r=0.02\) and \(m=0.2\) units of energy. Note that \(1/r\) is the mean interarrival time of the exponential distribution, corresponding to an energy arrival every 50 iterations on average, similar to [14]. Moreover, we set \(K=100\), \(M=10\), \(L=10\), \(\mu=0.01\), and \(\mu_{1}=0.1\) as in [4], while \(P_{k}^{\text{tx}}=3.3\) dB, \(P_{k}^{\text{rx}}=1.9\) mW, \(\eta=0.33\), \(P_{\text{circ}}=1.33\) mW, which correspond to a BLE transceiver [15]. Moreover, \(R_{b}^{\text{tx}}=R_{b}^{\text{rx}}=1\) Mbps, \(W=4L\), \(f_{\text{clk},k}=0.25\) GHz, \(C_{k}=20\) [16], and the effective capacitance is \(\psi_{k}=10^{-20}\) [17], while the initial battery level of the devices is given by a uniform distribution \(U(0,B_{\text{max}})\), where \(B_{\text{max}}=10^{-1}\) J.
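A sketch of sampling this income process per iteration is given below; it discretizes the exponential interarrival times into iteration slots, so it is only an approximation of the continuous-time description, and the unit of energy is left abstract.

```python
# Minimal sketch of the per-iteration energy-income process described above:
# exponential interarrival times (rate r), Poisson(m/r) units of energy per arrival.
import numpy as np

def simulate_income(num_iters, r=0.02, m=0.2, seed=0):
    rng = np.random.default_rng(seed)
    income = np.zeros(num_iters)
    t = rng.exponential(1.0 / r)               # time of the first arrival, in iterations
    while t < num_iters:
        income[int(t)] += rng.poisson(m / r)   # units of energy harvested at this arrival
        t += rng.exponential(1.0 / r)
    return income

zeta = simulate_income(1000)
print(zeta.mean())    # roughly m = 0.2 units of energy per iteration
```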
First, we set the desired threshold to \(\xi=0.4B_{\text{max}}\) and analyze the average stored energy over iterations in Fig. 2a, which converges to the threshold when we exploit full knowledge of the energy income distribution (EDK; EDK-AC) or just its mean (EMK; EMK-AC). For the LUN approach, the average stored energy stabilizes near zero, as most users run out of energy. The network naturally reaches a stable state since all users, including those that run out of energy, continue to harvest energy. However, only users with sufficient energy actively participate in the training. Fig. 2b shows that relying solely on the energy income source, without energy management, directly affects the learning process. Indeed, the LUN approach starts the training well, but soon devices die and are unable to resume learning until enough energy is harvested. Meanwhile, with the proposed energy management, devices can participate more frequently, resulting in a smaller error for EDK-AC and EMK-AC. Also, the error without the adaptive control is much higher, since it does not consider the norm of the local updates, a persistent trend throughout the simulations.
Next, we investigate the effect of the mean of the energy income process on the energy availability when \(\xi=0.4B_{\text{max}}\). Fig. 3a displays the results for \(t=1000\), revealing that the EDK, EDK-AC, EMK, and EMK-AC curves stay fairly close to the threshold. The variation is due to the inequality in (17), which, similar to the EMK approach, cannot fully incorporate the battery capacity considerations within this operational region. As we increase \(m\), the EDK and EDK-AC curves depart from the EMK and EMK-AC curves, since the battery capacity limitation becomes more relevant. Besides, an energy surplus occurs within the network with respect to the threshold, since only \(M\) devices transmit on average. In Fig. 3b, we plot the corresponding average error. For a small \(m\), the threshold is too demanding, resulting in similar errors for all AC approaches. However, as the energy income increases, the proposed method with adaptive control outperforms LUN. As the energy levels continue to rise, the differences between the AC methods and the LUN approach diminish.
In Fig. 4a we set \(m=0.2\), \(\xi=0.4B_{\text{max}}\), and \(t=1000\), for a varying number of devices. The average battery level remains relatively unaffected, which is not true for the average error in Fig. 4b. Here, more users are able to engage in the learning process when using the proposed approaches. In contrast, the LUN method shows limited improvement with the number of users, since it lacks energy awareness, unlike the methods that consider the average network energy. Thus, many users continue to consume energy by performing computations without transmitting, leading to rapid battery depletion. Moreover, since users in methods without AC have the same transmission probability, i.e., the methods disregard the informativeness of updates, the same performance improvements exhibited by the methods with AC cannot be observed.
Finally, we examine the impact of the energy threshold.
Fig. 2: (a) Normalized average battery level and (b) average error, i.e., \(\sum_{k}||\mathbf{w}_{k}(t)-\mathbf{w}||/K\), as a function of the number of iterations for \(\xi=0.4B_{\text{max}}\), \(m=0.2\), and \(K=100\).
In Fig. 5a it can be observed that the average battery level follows a nearly linear trend for EDK and EDK-AC, with slight variations due to (17). When the threshold is set to lower or higher values, where the constraint is either insignificant or more dominant, the battery level precisely aligns with the threshold when using EDK and EDK-AC. However, with EMK and EMK-AC the battery cannot stabilize at the expected level for higher thresholds. As for the error, in Fig. 5b, it becomes apparent that an optimal threshold exists, when considering the AC methods. If the threshold is too low, some devices deplete their energy and the error increases, while if the threshold is very demanding, the error rises since devices are often saving energy, reaching a point where LUN outperforms the proposed methods. It is worth mentioning that in the exceptional case where all users must maintain full battery, no training occurs as (energy-consuming) transmissions are not allowed.
## VI Conclusion
We proposed an energy-aware method for FL networks under the principle of energy neutrality. Our approach mitigates battery depletion and achieves convergence to a sustainable energy level, enabling the execution of future tasks. The method requires distribution knowledge of the energy income, but relying only on average information was shown to be sufficient. In critical energy income regions and reasonable energy thresholds, our method outperforms the typical norm-based strategy, in terms of convergence time and battery level. In future works, we aim to include physical layer modeling and assess the impact of non-orthogonal multiple access techniques in the power domain and rate allocation procedures.
|
2309.09300 | AutoAM: An End-To-End Neural Model for Automatic and Universal Argument
Mining | Argument mining is to analyze argument structure and extract important
argument information from unstructured text. An argument mining system can help
people automatically gain causal and logical information behind the text. As
argumentative corpus gradually increases, like more people begin to argue and
debate on social media, argument mining from them is becoming increasingly
critical. However, argument mining is still a big challenge in natural language
tasks due to its difficulty, and relative techniques are not mature. For
example, research on non-tree argument mining needs to be done more. Most works
just focus on extracting tree structure argument information. Moreover, current
methods cannot accurately describe and capture argument relations and do not
predict their types. In this paper, we propose a novel neural model called
AutoAM to solve these problems. We first introduce the argument component
attention mechanism in our model. It can capture the relevant information
between argument components, so our model can better perform argument mining.
Our model is a universal end-to-end framework, which can analyze argument
structure without constraints like tree structure and complete three subtasks
of argument mining in one model. The experiment results show that our model
outperforms the existing works on several metrics in two public datasets. | Lang Cao | 2023-09-17T15:26:21Z | http://arxiv.org/abs/2309.09300v1 | # AutoAM: An End-To-End Neural Model for Automatic and Universal Argument Mining
###### Abstract
Argument mining is to analyze argument structure and extract important argument information from unstructured text. An argument mining system can help people automatically gain causal and logical information behind the text. As argumentative corpus gradually increases, like more people begin to argue and debate on social media, argument mining from them is becoming increasingly critical. However, argument mining is still a big challenge in natural language tasks due to its difficulty, and relative techniques are not mature. For example, research on non-tree argument mining needs to be done more. Most works just focus on extracting tree structure argument information. Moreover, current methods cannot accurately describe and capture argument relations and do not predict their types. In this paper, we propose a novel neural model called AutoAM to solve these problems. We first introduce the argument component attention mechanism in our model. It can capture the relevant information between argument components, so our model can better perform argument mining. Our model is a universal end-to-end framework, which can analyze argument structure without constraints like tree structure and complete three subtasks of argument mining in one model. The experiment results show that our model outperforms the existing works on several metrics in two public datasets.
Keywords:Argument Mining Information Extraction Natural Language Processing
## 1 Introduction
Argument mining (AM) is a technique for analyzing argument structure and extracting important argument information from unstructured text, which has gained popularity in recent years [12]. An argument mining system can help people automatically gain the causal and logical information behind the text. Argument mining techniques benefit many fields, such as the legal domain [31], public opinion analysis [19], finance, etc. Argument mining is beneficial to human society, but there is still much room for development. Argument mining consists of several tasks and has a variety of different paradigms [12]. In this paper, we focus on the most common argument structure of monologue: an argumentative text from one side, not an argument between two sides. The microscopic structure of
argumentation is the primary emphasis of the monologue argument structure, which primarily draws out the internal relations of reasoning.
In this setting, an argumentative paragraph can be viewed as an argument graph. An argument graph can efficiently describe and reflect the logical information and reasoning paths behind the text. An example of an AM result after extraction is shown in Figure 1. The two important elements in an argument graph are the argument component (AC) and the argument relation (AR). ACs are nodes in this graph, and ARs are edges. The goal of an AM system is to construct this argument graph from unstructured text automatically. The AM process we use consists of the following steps:
1. Argument Component Identification (ACI): Given an argumentative paragraph, AM systems will detect ACs from it and separate this text.
2. Argument Component Type Classification (ACTC): AM systems will determine the types of these ACs.
3. Argument Relation Identification (ARI): AM systems will identify the existence of a relationship between any ACs.
4. Argument Relation Type Classification (ARTC): AM systems will determine the type of ARs, which are the existing relations between ACs.
Subtask 1) is a token classification task, which can also be viewed as a named entity recognition task, and a large amount of research work exists on it. Most of the previous argument mining works [25][10][3] assume that subtask 1), argument component identification, has been completed, i.e., the argument components have already been identified and can be obtained from the argumentative text. Therefore, the emphasis of argument mining research is placed on the other subtasks. Following previous works, we also make this assumption in this paper. On this basis, we design an end-to-end model to complete the ACTC, ARI, and ARTC subtasks simultaneously.
Figure 1: An example of argument mining result after extraction in the CDCP dataset [19]. It forms an argument graph. In this graph, every node AC represents an argument component. Fact, Value, and Policy are three types of ACs. Every edge AR denotes an argument relation, and Reason is one type of AR.
ARI and ARTC are the hardest parts of the whole argument mining process. An AR is represented by two ACs, which makes it difficult to represent an AR precisely and capture this relation. Most AC pairs have no relationship at all, which leads to a serious sample imbalance problem. Moreover, ARI and ARTC build on the outputs of ACI and ACTC, so their performance is affected by errors in the earlier tasks. For these reasons, many previous works give up and ignore the classification of ARs. Besides, much research imposes structural constraints to perform argument mining. In most cases, they assume the argument information forms a tree structure and exploit the characteristics of the tree to extract information. Tree-structured argument information is common in argumentative essays. However, argument information with no constraints is more common in the real world, like the huge amount of corpora on social media. This information is just like the general argument graphs mentioned before and needs to be extracted with good quality.
In this paper, we solve the above problems with a novel model called **AutoAM** (the abbreviation of **Auto**matic and Universal **A**rgument **M**ining Model). This is an efficient and accurate model that completes the entire argument mining process. It does not rely on a domain-specific corpus and does not need special syntactic constraints, etc., to construct argument graphs from argumentative text. To improve the performance of non-tree structured argument mining, we first introduce the argument component attention mechanism (**ArguAtten**) in this model, which can better capture the relevant information of argument components in an argumentative paragraph and benefits the overall performance of argument mining. We use a distance vector to add the key distance feature to the representation of ARs. A stratified learning rate is also a critical strategy in the model to balance multi-task learning. To the best of our knowledge, we are the first to propose an end-to-end universal AM model without structure constraints to complete argument mining. Meanwhile, we combine our novel components with successful prior practices to achieve the state of the art on two public datasets.
In summary, our contributions are as follows:
* We propose a novel model **AutoAM** for argument mining which can efficiently solve argument mining in all kinds of the argumentative corpus.
* We introduce **ArguAtten** (argument component attention mechanism) to better capture the relation between argument components and improve overall argument mining performance.
* We conduct extensive experiments on two public datasets and demonstrate that our method substantially outperforms the existing works. The experiment results show that the model proposed in this paper achieves the best results to date in several metrics. Especially, there is a great improvement over the previous studies in the tasks of ARI (argument relation identification) and ARTC (argument relation type classification).
## 2 Related Work
Since argument mining was first proposed [16], much research has been conducted on it. At first, people used rule-based or some traditional machine learning
methods. With the help of deep learning, people begin to get good performance on several tasks and start to focus on non-tree structured argument mining. We discuss related work following the development of AM.
### Early Argument Mining
The assumption that the argument structure could be seen as a tree or forest structure was made in the majority of earlier work, which made it simpler to tackle the problem because various tree-based methods with structural restrictions could be used. In the early stage of the development of argument mining, people usually use rule-based structural constraints and traditional machine learning methods to conduct argumentative mining. In 2007, Moens et al. [16] conducted the first argument mining research on legal texts in the legal field, while Kwon et al. [11] also conducted relevant research on commentary texts in another field. However, the former only identified the content of the argument and did not classify the argument components. Although the latter one further completed the classification of argument components, it still did not extract the relationship between argument components, and could not explore the argument structure in the text. It only completed part of the process of argument mining.
### Tree Structured Argument Mining with Machine Learning
According to the argumentation paradigm theory of Van Eemeren et al. [6], Palau and Moens [15] modeled the argument information in legal texts as a tree structure and used hand-made Context-Free Grammars (CFG) to parse and identify the tree-structured argument structure. This method is less general and requires different context-free grammars to be formulated for different structural constraints of argument. The tree-structured Persuasive Essay (PE) dataset created by Stab and Gurevych [27][28] has been applied in many argument mining studies and practices. On this dataset, Persing and Ng [23] and Stab and Gurevych [28] used the Integer Linear Programming (ILP) framework to jointly predict the types of argument components and argument relations, with several structural constraints defined to ensure a tree structure. The arg-microtext (MT) dataset created by Peldszus [21] is another tree-structured dataset. In studies using this dataset, decoding techniques based on tree structure are frequently used, such as Minimum Spanning Tree (MST) [22] and ILP [1].
### Neural Network Model in Argument Mining
With the popularity of deep learning, neural network models have been applied to various natural language processing tasks. Among deep learning methods based on neural networks, Eger et al. [7] studied argument mining as a sequence labeling problem that relies on multiple neural network parsers. Potash et al. [25] used a sequence-to-sequence pointer network [30] in the field of argument mining
and identified the different types of argument components and the presence of argument relations using the output of the encoder and decoder, respectively. Kuribayashi et al. [10] developed a span representation-based argumentation structure parsing model that employed ELMo [24] to derive representations for ACs.
### Recent Non-Tree Structured Argument Mining
Recently, more works have focused on the argument mining of non-tree structures. The US Consumer Debt Collection Practices (CDCP) dataset [18][19] greatly promotes the development of non-tree structured argument mining, as the argument structures contained in this dataset are non-tree structures. On this dataset, Niculae et al. [18] carry out a structured learning method based on a factor graph. This method can also handle tree-structured datasets and can be used on the PE dataset, but the factor graph needs a specific design according to the type of argument structure. Galassi et al. [8] used a residual network on the CDCP dataset. Morio et al. [17] developed an argument mining model that uses a task-specific parameterized module to encode arguments; this model also includes a bi-affine attention module [5] to capture argument relations. Recently, Bao et al. [2] tried to solve both tree-structured and non-tree-structured argument mining by introducing a transition-based dependency parsing method [4][9]. This work gained relatively good performance on the CDCP dataset but did not complete the ARTC task in one model and did not show experimental results for ARTC.
However, these methods either do not cover the argument mining process with a good performance or impose a variety of argument constraints. There is no end-to-end model for automatic and universal argument mining before. Thus, we solve all the problems above in this paper.
## 3 Methodology
As shown in Figure 2, we propose a new model called AutoAM. This model adopts a joint learning approach: it uses one model to simultaneously learn the three argument mining subtasks ACTC, ARI, and ARTC. For argument component extraction, the main task is to classify the argument component type, since the argument component identification task has already been completed by default on both the PE and the CDCP datasets. For argument relation extraction, the model regards ARI and ARTC as one task: it classifies the relationship between argument components with a single classifier and then derives the predictions for the two tasks by post-processing the predicted labels.
### Task Formulation
The input data contains two parts: a) a set of \(n\) argumentative texts \(T=\{T_{1},T_{2},...,T_{n}\}\); b) for the \(i\)th argumentative text, \(m\) argument component spans \(S=\{S_{1},S_{2},...,S_{m}\}\), where every span marks the start and end scope
of each AC \(S_{i}=(start_{i},end_{i})\). Our aim is to train an argument mining model and use it to get output data: a) types of \(m\) ACs provided in the input data \(ACs=\{AC_{1},AC_{2},...,AC_{m}\}\), b) \(k\) existing ARs \(ARs=\{AR_{1},AR_{2},...,AR_{k}\}\) and their types, where \(AR_{i}=(AC_{a}\to AC_{b})\).
### Argument Component Extraction
By default, the argument component identification task has been completed. The input of the whole model is an argumentative text and a list of positional spans corresponding to each argument component \(S_{i}=(start_{i},end_{i})\).
We input argumentative text \(T\) into pre-trained language models (PLMs) to get contextualized representations \(H\in\mathbb{R}^{m\times d_{b}}\), where \(d_{b}\) is the dimension of the last hidden state from PLMs. Therefore, we represent argumentative text as \(H=(h_{1},h_{2},...,h_{m})\), where \(h_{i}\) denotes the \(i\)th token contextualized representation.
We separate argument components from the paragraph using the argument component spans \(S\). In the PE dataset, the argument components do not appear contiguously. We use mean pooling to get the representation of each argument component. Specifically, the \(i\)th argument component can be represented as:
\[AC_{i}=\frac{1}{end_{i}-start_{i}+1}\sum_{j=start_{i}}^{end_{i}}h_{j}, \tag{1}\]
where \(AC_{i}\in\mathbb{R}^{d_{b}}\). Therefore, all argument components in the argumentative text can be represented as \(ACs=(AC_{1},AC_{2},...,AC_{n})\). For each argument component, we input it into the AC Type Classifier \(MLP_{a}\) in order. This classifier contains a multi-layer perceptron followed by a Softmax layer.
Figure 2: The framework of our proposed model called AutoAM.
The probability of every type of argument component can be obtained by:
\[p(y_{i}|AC_{i})=Softmax(MLP_{a}(AC_{i})), \tag{2}\]
where \(y_{i}\) represents the predicted label of the \(i\)th argument component. We get the final predicted label of this argument component as:
\[\hat{y}_{i}=Argmax(p(y_{i}|AC_{i})). \tag{3}\]
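A minimal PyTorch sketch of this argument component extraction step, (1)-(3), is shown below; the PLM token states are replaced by random tensors, and the hidden width, the number of AC types, and the example spans are illustrative choices rather than the paper's exact configuration.

```python
# Minimal PyTorch sketch of AC extraction: span mean-pooling (1) + MLP_a with Softmax (2)-(3).
import torch
import torch.nn as nn

d_b, num_ac_types = 768, 5                      # e.g. 5 AC types in CDCP (illustrative)
mlp_a = nn.Sequential(nn.Linear(d_b, 512), nn.ReLU(), nn.Linear(512, num_ac_types))

H = torch.randn(120, d_b)                       # stand-in for PLM token states h_1..h_m
spans = [(0, 14), (15, 40), (41, 87)]           # AC spans (start_i, end_i), inclusive

ac_reps = torch.stack([H[s:e + 1].mean(dim=0) for s, e in spans])    # eq. (1)
probs = torch.softmax(mlp_a(ac_reps), dim=-1)                        # eq. (2)
ac_types = probs.argmax(dim=-1)                                      # eq. (3)
```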
### Argument Relation Extraction
This model views ARI and ARTC as the same task and distinguishes them by post-processing the predictions. We classify every ordered argument component pair (\(AC_{i}\to AC_{j}\)); the pairs (\(AC_{i}\to AC_{j}\)) and (\(AC_{j}\to AC_{i}\)) are treated as different. We add a label 'none' here, which represents that there is no relation \(AC_{i}\to AC_{j}\).
In the argument relation extraction part, we enumerate all pairs. We utilize the output of the ACTC step, combine two argument components, and input them into the AR Type Classifier to get the predicted output.
First, the model uses ArguAtten (Argument Component Attention mechanism) to enhance the semantic representation of argument components. The self-attention mechanism was first proposed in the Transformer [29]. The core of this mechanism is the ability to capture how each element in a sequence relates to the other elements, i.e., how much attention each of the other elements pays to that element. When the self-attention mechanism is applied to natural language processing tasks, it can often capture the interrelationships of all lexical elements in a sentence and better strengthen the contextual semantic representation. In the task of argument mining, the argument components in an argumentative text exhibit the same characteristic. The basic task of argument mining is to construct an argument graph containing nodes and edges, where nodes are argument components and edges are argument relations. Before the argument relation extraction task, the self-attention mechanism over argument components can be used to capture the mutual attention between argument components, which means the model can better consider and capture the argument information of the full text. This mechanism is conducive to argument relation extraction and the construction of the argument graph. We define ArguAtten as:
\[ArguAtten(Q,K,V)=Softmax(\frac{QK^{T}}{\sqrt{d_{k}}})\times V, \tag{4}\]
where \(Q\), \(K\), \(V\) are got by multiplying ACs with \(W_{Q}\), \(W_{K}\), \(W_{V}\). They are three parameter matrices \(W_{Q},W_{K},W_{V}\in\mathbb{R}^{d_{k}\times d_{k}}\), and \(d_{k}\) is the dimension of attention layer. Besides, we also use ResNet and layer normalization (LN) after the attention layer to avoid gradient explosion:
\[ResNetOut=LN(ACs+ArguAtten(ACs)). \tag{5}\]
Through the self-attention of argument components, we obtain a better contextualized representation of argument components and then begin to construct argument pairs to perform argument relation extraction.
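Read this way, ArguAtten in (4)-(5) is a single-head scaled dot-product self-attention over AC representations followed by a residual connection and layer normalization; a PyTorch sketch under these assumptions follows, with an illustrative dimension.

```python
# Minimal PyTorch sketch of ArguAtten: self-attention over ACs (4) + residual & LayerNorm (5).
import torch
import torch.nn as nn

class ArguAtten(nn.Module):
    def __init__(self, d_k):
        super().__init__()
        self.W_q = nn.Linear(d_k, d_k, bias=False)
        self.W_k = nn.Linear(d_k, d_k, bias=False)
        self.W_v = nn.Linear(d_k, d_k, bias=False)
        self.ln = nn.LayerNorm(d_k)

    def forward(self, acs):                      # acs: (num_ACs, d_k)
        q, k, v = self.W_q(acs), self.W_k(acs), self.W_v(acs)
        attn = torch.softmax(q @ k.T / (acs.size(-1) ** 0.5), dim=-1)   # eq. (4)
        return self.ln(acs + attn @ v)           # residual connection + LayerNorm, eq. (5)

acs = torch.randn(4, 768)                        # stand-in AC representations
acs = ArguAtten(768)(acs)
```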
We consider that the relative distance between two argument components has a decisive influence on the type of argument relation between them. By observing the dataset, we find that there is usually no argument relation between two argument components that are relatively far apart. This observation can significantly help the model classify the argument relation types. Therefore, we incorporate this feature into the representation of argument relations. First, the distance vector is introduced, with the specific definition shown as:
\[V_{dist}=(i-j)\times W_{dist}, \tag{6}\]
where \((i-j)\) represents a relative distance, which can be positive or negative. \(W_{dist}\in\mathbb{R}^{1\times d_{dist}}\) is a distance transformation matrix that transforms a distance scalar into a distance vector, and \(d_{dist}\) is the length of the distance vector.
For each argument relation, it comes from the source argument component (Src AC), the target argument component (Trg AC), and the distance vector (Dist Vec). We concatenate them to get the representation of an argument relation as:
\[AR_{i,j}=[AC_{i},AC_{j},V_{dist}], \tag{7}\]
where \(AR_{i,j}\in\mathbb{R}^{d_{b}\times 2+d_{dist}}\), \(d_{dist}\) is the length of distance vector.
Therefore, the argument relations in an argumentative text can be represented as \(ARs=(AR_{1,2},AR_{1,3},...,AR_{n,n-1})\), containing \(n\times(n-1)\) potential argument relations in total. We do not consider self-relations like \(AR=(AC_{i}\to AC_{i})\).
For each potential argument relation, we separately and sequentially input it into the AR Type Classifier \(MLP_{b}\). The classifier is a Multi-Layer Perceptron (MLP) with one 512-dimensional hidden layer, whose last layer is followed by a Softmax layer to obtain the probability of each possible argument relation type, as shown in:
\[p(y_{i,j}|AR_{i,j})=Softmax(MLP_{b}(AR_{i,j})), \tag{8}\]
where \(y_{i,j}\) denotes the predicted label of the argument relation from the \(i\)th argument component to the \(j\)th argument component. The final predicted labels are:
\[\hat{y}_{i,j}=Argmax(p(y_{i,j}|AR_{i,j})). \tag{9}\]
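A minimal PyTorch sketch of this pair-wise AR classification, (6)-(9), is given below; the distance-vector dimension, the label set, and the 'none' index are illustrative assumptions.

```python
# Minimal PyTorch sketch of AR extraction: distance vector (6), pair representation (7),
# and MLP_b with Softmax (8)-(9) over all ordered AC pairs.
import torch
import torch.nn as nn

d_b, d_dist, num_ar_labels = 768, 64, 3          # labels: none / reason / evidence (assumed)
W_dist = nn.Linear(1, d_dist, bias=False)        # distance transformation, eq. (6)
mlp_b = nn.Sequential(nn.Linear(2 * d_b + d_dist, 512), nn.ReLU(),
                      nn.Linear(512, num_ar_labels))

acs = torch.randn(4, d_b)                        # AC representations after ArguAtten
pairs, reps = [], []
for i in range(len(acs)):
    for j in range(len(acs)):
        if i == j:
            continue                             # no self-relations
        v_dist = W_dist(torch.tensor([[float(i - j)]])).squeeze(0)
        reps.append(torch.cat([acs[i], acs[j], v_dist]))              # eq. (7)
        pairs.append((i, j))
probs = torch.softmax(mlp_b(torch.stack(reps)), dim=-1)               # eq. (8)
pred = probs.argmax(dim=-1)                                           # eq. (9)
```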
To get the predicted labels of ARI and ARTC, we post-process the predictions of the model. The existence of an argument relation in the ARI task is defined as:
\[\hat{y}_{ARI}=\begin{cases}0&\text{if }\hat{y}_{AR}=0\\ 1&\text{if }\hat{y}_{AR}\neq 0\end{cases} \tag{10}\]
where \(\hat{y}_{AR}\) is the predicted label from the model output.
To obtain the type of an existing argument relation in the ARTC task, we set the probability of 'none' to zero and select the remaining label with the highest probability, represented as:
\[\hat{y}_{ARTC}=Argmax(p(y_{AR}|AR_{i,j})),\quad y^{none}=0, \tag{11}\]
where \(y^{none}\) is the model output of the label 'none'.
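In code, the post-processing in (10)-(11) is a two-line operation on the predicted distribution, as in the sketch below (the label order [none, reason, evidence] is an assumption).

```python
# Minimal sketch of post-processing the AR predictions into ARI (10) and ARTC (11) labels.
import torch

probs = torch.tensor([[0.7, 0.2, 0.1],          # per-pair probabilities over
                      [0.1, 0.6, 0.3]])         # [none, reason, evidence] (assumed order)
pred = probs.argmax(dim=-1)

ari = (pred != 0).long()                        # eq. (10): 1 if any relation exists
masked = probs.clone()
masked[:, 0] = 0.0                              # eq. (11): suppress 'none' ...
artc = masked.argmax(dim=-1)                    # ... and take the most likely relation type
```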
### Loss Function Design
This model jointly learns the argument component extraction and the argument relation extraction. By combining these two tasks, the training objective and loss function of the final model is obtained as:
\[L(\theta)=\sum_{i}\log(p(y_{i}|AC_{i}))+\sum_{i,j}\log(p(y_{i,j}|AR_{i,j}))+\frac{ \lambda}{2}||\theta||^{2}, \tag{12}\]
where \(\theta\) represents all the parameters in the model, and \(\lambda\) is the coefficient of L2 regularization. According to this loss function, the parameters of the model are updated iteratively until the model reaches its best performance, completing the training.
## 4 Experiments
### Datasets
We evaluate our proposed model on two public datasets: Persuasive Essays (PE) [28] and Consumer Debt Collection Practices (CDCP) [18].
The PE dataset only has tree structure argument information. It has three types of ACs: _Major-Claim_, _Claim_, and _Premise_, and two types of AR: _support_ and _attack_.
The CDCP dataset has general structure argument information, not limited to a tree structure. It is different from the PE dataset and is more difficult. The argument information in this dataset is more similar to the real world. There are five types of ACs (propositions): _Reference_, _Fact_, _Testimony_, _Value_, and _Policy_. Between these ACs, there are two types of ARs: _reason_ and _evidence_.
We both use the original train-test split of two datasets to conduct experiments.
### Setups
In the model training, roberta-base [13] was used for fine-tuning, and the AdamW optimizer [14] was used to optimize the parameters of the model. We apply a stratified learning rate to obtain both a better contextual semantic representation from BERT and a better downstream task effect. The stratified learning rate is important in this task because the multi-task learning is complex and has three subtasks; ARI and ARTC need a relatively larger learning rate to learn the data well. The initial learning rate of the BERT layer is set to 2e-5, and the learning rates of the AC extraction module and the AR extraction module are set to 2e-4 and 2e-3, respectively. After the BERT output, the Dropout rate [26] is set to 0.2. The maximum sequence length of a single piece of data is 512, and we cut off ACs and ARs in over-length text. The batch size in each training step is set to 16 on the CDCP dataset and 2 on the PE dataset, because there are more ACs in one argumentative text in the PE dataset than in the CDCP dataset.
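The stratified learning rate can be realized with optimizer parameter groups; the sketch below uses a stub module with the three parameter groups described above and the learning rates reported in this section.

```python
# Minimal PyTorch sketch of the stratified learning rate via AdamW parameter groups.
import torch
import torch.nn as nn

class AutoAMStub(nn.Module):                     # stand-in with the three parameter groups
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(768, 768)       # placeholder for the RoBERTa encoder
        self.ac_module = nn.Linear(768, 5)       # AC extraction module
        self.ar_module = nn.Linear(2 * 768 + 64, 3)   # AR extraction module

model = AutoAMStub()
optimizer = torch.optim.AdamW([
    {"params": model.encoder.parameters(),   "lr": 2e-5},
    {"params": model.ac_module.parameters(), "lr": 2e-4},
    {"params": model.ar_module.parameters(), "lr": 2e-3},
])
```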
In training, we set an early stopping strategy with a patience of 5 epochs, and set the minimum number of training epochs to 15 to wait for the model to become stable. We use \(MacroF1_{ARI}\) as the monitoring indicator in our early stopping strategy, because AR extraction is our main improvement direction and ARI sits between ACTC and ARTC, so we can better balance the performance of the three tasks in the multi-task learning scenario.
The code implementation of our model is mainly written using PyTorch [20] library, and the pre-trained model is loaded using Transformers [32] library. In addition, model training and testing were conducted on one NVIDIA GeForce RTX 3090.
### Compared Methods
We compare our model with several baselines to evaluate the performance:
* **Joint-ILP**[28] uses Integer Linear Programming (ILP) to extract ACs and ARs. We compare our model with it in the PE dataset.
* **St-SVM-full**[18] uses full factor graph and structured SVM to do argument mining. We compare our model with it in both the PE and the CDCP datasets.
* **Joint-PN**[25] employs a Pointer Network with an attention mechanism to extract argument information. We compare our model with it in the PE dataset.
* **Span-LSTM**[10] use LSTM-based span representation with ELMo to perform argument mining. We compare our model with it in the PE dataset.
* **Deep-Res-LG**[8] uses Residual Neural Network on AM tasks. We compare our model with it in the CDCP dataset.
* **TSP-PLBA**[17] introduces task-specific parameterization and bi-affine attention to AM tasks. We compare our model with it in the CDCP dataset.
* **BERT-Trans**[2] uses a transition-based dependency parsing method to solve AM problems. We compare our model with it on both the PE and the CDCP datasets. It is also the state of the art on both datasets.
### Performance Comparison
The evaluation results are summarized in Table 1 and Table 2. In both tables, '-' indicates that the original paper does not measure the performance of this
metric for its model. The best results are in bold, and the second-best results are in italics.
On the CDCP dataset, we can see our model achieves the best performance on all metrics in ACTC, ARI, and ARTC tasks. We are the first to complete all the tasks and get ideal results on the CDCP dataset. Our model outperforms the state of the art with an improvement of 2.1 in ACTC and 0.6 in ARI. The method BERT-Trans does not perform ARTC with other tasks at the same time, and it does not report results of ARTC, maybe due to unsatisfactory performance. In particular, compared with the previous work, we have greatly improved the task performance of ARTC and achieved ideal results.
On the PE dataset, our model also gets ideal performance, although we obtain the second-best scores on several metrics. The first reason is that the PE dataset is tree-structured, so many previous works impose structural constraints; their models thus incorporate more information, whereas our model treats the data as general argument graphs. Another reason is that the models BERT-Trans, Span-LSTM, and Joint-PN combine extra features to represent ACs, like paragraph types, BoW, position embeddings, etc. This information varies across corpora, and we want to build an end-to-end universal model; for example, there is no paragraph type information in the CDCP dataset. Therefore, we do not use such features in our model. Even though our model does not take these features into account, we achieve results similar to the state of the art.
\begin{table}
\begin{tabular}{l|c c c c c|c c c|c c c|c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{ACTC} & \multicolumn{4}{c|}{ARI} & \multicolumn{4}{c|}{ARTC} & \multirow{2}{*}{AVG} \\ & \multicolumn{1}{c}{Macro} & \multicolumn{1}{c}{Value} & \multicolumn{1}{c}{Policy} & \multicolumn{1}{c}{Testi} & \multicolumn{1}{c}{Fact} & \multicolumn{1}{c|}{Refer.} & \multicolumn{1}{c}{Macro} & \multicolumn{1}{c|}{Rel.} & \multicolumn{1}{c|}{Non-rel.} & \multicolumn{1}{c|}{Macro} & \multicolumn{1}{c|}{Reason} & \multicolumn{1}{c|}{Evidence} \\ \hline St-SVM-strict & 73.2 & 76.4 & 76.8 & 71.5 & 41.3 & 100.0 & - & 26.7 & - & - & - & - & - \\ Deep-Res-LG & 65.3 & 72.2 & 74.4 & 72.9 & 40.3 & 66.7 & - & 29.3 & - & 15.1 & 30.2 & 0.0 & - \\ TSP-PLBA & 78.9 & - & - & - & - & - & - & 34.0 & - & - & - & 18.7 & - \\ BERT-Trans & 82.5 & 83.2 & 86.3 & 84.9 & 58.3 & 100.0 & 67.8 & 37.3 & 98.3 & - & - & - & - \\ \hline
**AutoAM (Ours)** & **84.6** & **85.0** & **86.8** & **86.1** & **65.9** & **100.0** & **68.4** & **38.5** & **98.4** & **71.3** & **98.1** & **44.4** & **74.8** \\ \hline \hline \end{tabular}
\end{table}
Table 1: The results of comparison experiments on the CDCP dataset. All numbers in the table are f1 scores (%). The best scores are in bold. ‘-’ represents that the original paper does not report.
\begin{table}
\begin{tabular}{l|c c c c|c c c|c c c|c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{ACTC} & \multicolumn{4}{c|}{ARI} & \multicolumn{4}{c|}{ARTC} & \multirow{2}{*}{AVG} \\ & \multicolumn{1}{c}{Macro} & \multicolumn{1}{c}{MC} & \multicolumn{1}{c}{Claim} & \multicolumn{1}{c|}{Premise} & \multicolumn{1}{c}{Macro} & \multicolumn{1}{c}{Rel.} & \multicolumn{1}{c|}{Non-rel.} & \multicolumn{1}{c}{Macro} & \multicolumn{1}{c|}{Support} & \multicolumn{1}{c}{Attack} \\ \hline Joint-ILP & 82.6 & 89.1 & 68.2 & 90.3 & 75.1 & 58.5 & 91.8 & 68.0 & 94.7 & 41.3 & 75.2 \\ St-SVM-strict & 77.6 & 78.2 & 64.5 & 90.2 & - & 60.1 & - & - & - & - & - \\ Joint-PN & 84.9 & 89.4 & 73.2 & 92.1 & 76.7 & 60.8 & 92.5 & - & - & - & - \\ Span-LSTM & 85.7 & 91.6 & 73.3 & 92.1 & 80.7 & 68.8 & 93.7 & _79.0_ & _96.8_ & **61.1** & 81.8 \\ BERT-Trans & _88.4_ & **93.2** & _78.8_ & _93.1_ & **82.5** & **70.6** & _94.3_ & **81.0** & - & - & **83.4** \\ \hline
**AutoAM (Ours)** & **88.7** & _91.9_ & **80.3** & **93.9** & _81.6_ & _65.8_ & **98.5** & 75.4 & **97.6** & _53.2_ & _81.9_ \\ \hline \hline \end{tabular}
\end{table}
Table 2: The results of comparison experiments on the PE dataset. All numbers in the table are f1 scores (%). The best results are in bold. The second best results are in italics. ‘-’ represents that the original paper does not report.
### Ablation Study
The ablation study results are summarized in Table 3. We conduct the ablation study on the CDCP dataset to assess the impact of the key modules in our model. It can be observed that the stratified learning rate is the most critical component, which supports the view that multi-task learning is difficult in this model and that the AR extraction module needs a larger learning rate to perform well. ArguAtten improves the ACTC and ARTC performance by 1.7 and 13.6, respectively, while the ARI metric decreases slightly. Even though the change is small, we believe the reason is that the interrelationship between ACs has little impact on predicting whether an AR exists; ArguAtten mainly helps in predicting the type of ARs. From this table, we can also see that the distance matrix brings an important distance feature to the AR representation, with an overall improvement of 6.5.
## 5 Conclusion and Future Work
In this paper, we propose a novel method for argument mining and introduce the argument component attention mechanism for the first time. This is the first end-to-end argument mining model that can extract argument information without any structural constraints and obtain argument relations of good quality. In the model, ArguAtten better captures the correlations among argument components in an argumentative paragraph, which in turn helps the model explore argumentative relationships. Our experimental results show that our method achieves state-of-the-art performance. In the future, we will continue to explore better models for describing and capturing the elements and relationships in argument graphs.
|
2309.11229 | Trace Monomial Boolean Functions with Large High-Order Nonlinearities | Exhibiting an explicit Boolean function with a large high-order nonlinearity
is an important problem in cryptography, coding theory, and computational
complexity. We prove lower bounds on the second-order, third-order, and
higher-order nonlinearities of some trace monomial Boolean functions.
We prove lower bounds on the second-order nonlinearities of functions
$\mathrm{tr}_n(x^7)$ and $\mathrm{tr}_n(x^{2^r+3})$ where $n=2r$. Among all
trace monomials, our bounds match the best second-order nonlinearity lower
bounds by \cite{Car08} and \cite{YT20} for odd and even $n$ respectively. We
prove a lower bound on the third-order nonlinearity for functions
$\mathrm{tr}_n(x^{15})$, which is the best third-order nonlinearity lower
bound. For any $r$, we prove that the $r$-th order nonlinearity of
$\mathrm{tr}_n(x^{2^{r+1}-1})$ is at least
$2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}- O(2^{\frac{n}{2}})$. For $r \ll
\log_2 n$, this is the best lower bound among all explicit functions. | Jinjie Gao, Haibin Kan, Yuan Li, Jiahua Xu, Qichun Wang | 2023-09-20T11:40:19Z | http://arxiv.org/abs/2309.11229v1 | # Trace Monomial Boolean Functions with Large High-Order Nonlinearities
###### Abstract
Exhibiting an explicit Boolean function with a large high-order nonlinearity is an important problem in cryptography, coding theory, and computational complexity. We prove lower bounds on the second-order, third-order, and higher-order nonlinearities of some trace monomial Boolean functions.
We prove lower bounds on the second-order nonlinearities of functions \(\operatorname{tr}_{n}(x^{7})\) and \(\operatorname{tr}_{n}(x^{2^{r}+3})\) where \(n=2r\). Among all trace monomials, our bounds match the best second-order nonlinearity lower bounds by [1] and [20] for odd and even \(n\) respectively. We prove a lower bound on the third-order nonlinearity for functions \(\operatorname{tr}_{n}(x^{15})\), which is the best third-order nonlinearity lower bound. For any \(r\), we prove that the \(r\)-th order nonlinearity of \(\operatorname{tr}_{n}(x^{2^{r+1}-1})\) is at least \(2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}-O(2^{\frac{n}{2}})\).
For \(r\ll\log_{2}n\), this is the best lower bound among all explicit functions.
**Keywords**: high-order nonlinearity, trace monomial, lower bound, Boolean function, linear kernel
## 1 Introduction
Exhibiting an _explicit_ Boolean function with a large _high-order nonlinearity_ is an important task in areas including cryptography, coding theory, and computational complexity. In cryptography, a high nonlinearity is an important cryptographic criterion for Boolean functions used in symmetric-key cryptosystems to resist correlation attacks [1]. In coding theory, the largest \(r\)-th order nonlinearity among all \(n\)-variable Boolean functions is exactly the covering radius of Reed-Muller codes \(\operatorname{RM}(r,n)\); computing (high-order) nonlinearity is related to the problem of decoding Reed-Muller codes. In computational complexity, one _must_ prove large enough nonlinearity lower bound (for a function in NP) to prove that NP does not have circuits of quasi-polynomial size [12, 21]. In addition, this problem is related to pseudorandom generators, communication complexity, and circuit complexity; we send interested readers to the survey by Viola [20].
Known techniques for proving nonlinearity lower bounds include the Hilbert function [22, 23], the "squaring trick" [1, 18, 21, 22], XOR lemmas, and a recursive approach due to Carlet, combined with a result of Canteaut _et al._ that determines the nonlinearity of any _quadratic_ function by the dimension of its _linear kernel_. In this way, the problem of lower bounding the nonlinearity essentially reduces to the problem of estimating the number of roots of certain equations over finite fields. Along this line, nonlinearity lower bounds for trace monomial Boolean functions are proved in [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. We summarize the second-order nonlinearity lower bounds in Table 1.
\begin{table}
\begin{tabular}{c|c} \hline
**Function** & \(\mathrm{nl}_{2}\) **lower bound** \\ \hline \(\mathrm{tr}_{n}(\mu x^{2^{t}-1}+g(x))\), \(\mu\in\mathbb{F}_{2^{n}}^{*}\), & \(2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)(2^{t}-4)2^{\frac{n}{2}}+2^{n}}\) \\ \(t\leq n\)[14]\(\geq\) & \(2^{n-1}-2^{\frac{3n}{4}+\frac{t}{2}-1}-O(2^{\frac{n}{4}})\) \\ \hline \(\mathrm{tr}_{n}(x^{2^{r}+3}),n=2r+1\)[14] & \(2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)2^{\frac{n+5}{2}}+2^{n}}\) \\ & \(=\) \(2^{n-1}-2^{\frac{3n+1}{4}}-O(2^{\frac{n}{4}})\) \\ \hline \(\mathrm{tr}_{n}(x^{2^{r}+3}),n=2r-1\)[14] & \(\begin{cases}2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n+1}{2}}+2^{\frac{3n-1}{2}}+2^ {n}-2^{n}-2^{\frac{n+3}{2}}},&\text{if }3\nmid n\\ 2^{n-1}-\frac{1}{2}\sqrt{3\cdot 2^{\frac{3n-1}{2}}+2^{n}+3\cdot 2^{n+\frac{1}{2}}-2^ {\frac{n+3}{2}}},&\text{if }3\mid n\\ =\) & \(2^{n-1}-2^{\frac{3n-5}{4}+\frac{1}{2}\log{2}{3}}-O(2^{\frac{n}{4}})\) \\ \hline \(\mathrm{tr}_{n}(x^{2^{n}-2})\)[14] & \(2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)2^{\frac{n}{2}+2}+3\cdot 2^{n}}\) \\ & \(=\) \(2^{n-1}-2^{\frac{3n}{4}}-O(2^{\frac{n}{4}})\) \\ \hline \(\mathrm{tr}_{\frac{n}{2}}(xy^{2^{n}-2})\) \(x,y\in\mathbb{F}_{\frac{n}{2}}^{*}\), if n is even [14] & \(2^{n-1}-\frac{1}{2}\sqrt{2^{n}+(2^{n+2}+2^{\frac{3n}{4}+1}+2^{\frac{n}{2}+1})( 2^{\frac{n}{2}}-1)}\) \\ & \(=\) \(2^{n-1}-2^{\frac{3n}{4}}-O(2^{\frac{n}{2}})\) \\ \hline \(\mathrm{tr}_{n}(\mu x^{2^{i}+2^{j}+1}),\mu\in\mathbb{F}_{2^{n}}^{*}\)[15] & \(\left\{\begin{array}{ll}2^{n-1}-2^{\frac{3n+2i-4}{4}},&\text{if n is even}\\ 2^{n-1}-2^{\frac{3n+2i-5}{4}},&\text{if n is odd}\end{array}\right.\) \\ \hline \(\mathrm{tr}_{n}(\mu x^{2^{2i}+2^{i}+1}),\mu\in\mathbb{F}_{2^{n}}^{*}\), \(\mathrm{gcd}(n,i)=1\), \(n>4\)[15] & \(\left\{\begin{array}{ll}2^{n-1}-2^{\frac{3n}{4}},&\text{if n is even}\\ 2^{n-1}-2^{\frac{3n-1}{4}},&\text{if n is odd}\end{array}\right.\) \\ \hline \(\mathrm{tr}_{n}(\lambda x^{2^{2r}+2^{r}+1})\), \(n=6r\), \(\lambda\in\mathbb{F}_{2^{n}}^{*}\)[16] & \(=\) \(2^{n-1}-2^{\frac{3n}{4}+r-1}-O(2^{\frac{n}{4}})\) \\ \hline \(\mathrm{tr}_{r}(xy^{2^{i}+1})\) \(n=2r\), \(x,y\in\mathbb{F}_{2^{r}}\), \(1\leq i<r\), \(\mathrm{gcd}(2^{r}-1,2^{i}+1)=1\), \(\mathrm{gcd}(i,r)=j\)[16] & \(=\) \(2^{n-1}-2^{\frac{3n}{4}+\frac{j}{2}-1}-O(2^{\frac{n}{2}})\) \\ \hline \(\mathrm{tr}_{n}(\lambda x^{2^{2r}+2^{r}+1})\), \(n=5r\), \(\lambda\in\mathbb{F}_{2^{r}}^{*}\)[15] & \(2^{n-1}-2^{\frac{3n+3r-4}{4}}\) \\ \hline \end{tabular}
\end{table}
Table 1: Second-order nonlinearity lower bounds
\(\begin{array}{|c|c|c|}\hline\mathrm{tr}_{n}(\lambda x^{2^{2^{r}}+2^{r}+1}),&n=3r,&2^{n-1}-2^{\frac{3n+r-4}{4}}\\ \lambda\in\mathbb{F}_{2^{r}}^{*}\ [\mathrm{Sin11}]&\end{array}\)
\(\begin{array}{|c|c|c|}\hline\mathrm{tr}_{n}(\lambda x^{2^{2^{r}}+2^{r}+1}),&n=4r,&2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{7n}{4}}+2^{\frac{5n}{4}}-2^{n}}\\ \lambda\in\mathbb{F}_{2^{r}}^{*}\ [\mathrm{SW11}]&=&2^{n-1}-2^{\frac{7n}{8}-1}-O(2^{ \frac{3n}{8}})\\ \hline\mathrm{tr}_{n}(\lambda x^{2^{2^{r}}+2^{r}+1}),&n=6r,&2^{n-1}-\frac{1}{2} \sqrt{2^{\frac{5n}{3}}+2^{\frac{4n}{3}}-2^{\frac{5n}{6}}+2^{n}-2^{\frac{5n}{6 }}}\\ \lambda\in\mathbb{F}_{2^{n}}^{*}\ [\mathrm{Tan+20}]^{\mathrm{b}}&=&2^{n-1}-2^{ \frac{5n}{6}-1}-O(2^{\frac{n}{2}})\\ \hline\mathrm{tr}_{n}(x^{2^{r+1}+3}),n=2r\ [\mathrm{YT20}]&\begin{cases}2^{n-1}- \frac{1}{2}\sqrt{2^{\frac{3n}{2}}+1}+\frac{2}{4}\frac{5n}{4}+\frac{1}{2}-2^{n }-2^{\frac{3n}{4}+\frac{1}{2}},&\text{if r is odd}\\ 2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n}{2}}+1}+\frac{1}{3}\cdot 2^{\frac{5n}{4}+2}-2^{n }-\frac{1}{3}\cdot 2^{\frac{3n}{4}+2},&\text{if r is even}\\ =&2^{n-1}-2^{\frac{3n}{4}-\frac{1}{2}}-O(2^{\frac{n}{2}})\\ \hline\mathrm{tr}_{n}(x^{2^{r}+2^{\frac{r+1}{2}}+1}),n=2r,&2^{n-1}-\frac{1}{2} \sqrt{2^{\frac{3n}{2}}+1}+2^{\frac{5n}{4}+\frac{1}{2}}-2^{n}-2^{\frac{5n}{4}+ \frac{1}{2}}\\ \text{for odd r }[\mathrm{YT20}]&=&2^{n-1}-2^{\frac{3n}{4}-\frac{1}{2}}-O(2^{ \frac{n}{2}})\\ \hline\mathrm{tr}_{n}(x^{2^{2r}+2^{r+1}+1}),n=4r,&2^{n-1}-\frac{1}{2}\sqrt{2^{ \frac{3n}{2}}+1}+\frac{1}{3}\cdot 2^{\frac{5n}{4}+2}-2^{n}-\frac{1}{3}\cdot 2^{ \frac{3n}{4}+2}\\ \text{for even r }[\mathrm{YT20}]&=&2^{n-1}-2^{\frac{3n}{4}-\frac{1}{2}}-O(2^{ \frac{n}{2}})\\ \hline\mathrm{tr}_{n}(x^{2^{r+1}+2^{r}+1}),n=2r+2,&2^{n-1}-\frac{1}{2}\sqrt{2^{ \frac{3n}{2}}+1}+2^{\frac{5n}{4}+\frac{1}{2}}-2^{n}-2^{\frac{3n}{4}+\frac{1}{2}} \\ \text{for even r }[\mathrm{Liu21}]&=&2^{n-1}-2^{\frac{3n}{4}-\frac{1}{2}}-O(2^{ \frac{n}{2}})\\ \hline\end{cases}\)
\(\begin{array}{|c|c|}\hline\mathrm{tr}_{n}(x)\text{ is a univariate polynomial of degree }\leq 2^{t}-2\text{ over }\mathbb{F}_{2^{n}}.\\ \hline\lambda\in\{yz^{d}:y\in U,z\in\mathbb{F}_{2^{n}}^{*}\},&U=\{y\in\mathbb{F }_{2^{3r}}^{*}:\mathrm{tr}_{\mathbb{F}_{2^{3r}}/\mathbb{F}_{2^{r}}}(y)=0\}, \text{ where the function }\mathrm{tr}_{\mathbb{F}_{2^{3r}}/\mathbb{F}_{2^{r}}}(y)\text{ is a mapping from }\mathbb{F}_{2^{3r}}\text{ to } \mathbb{F}_{2^{r}}.\\ \hline\end{array}\)
Among all trace monomials, the best second-order nonlinearity lower bound was proved for functions \(\mathrm{tr}_{n}(x^{2^{r}+3})\), where \(n=2r-1\), by Carlet [1], when \(n\) is odd, and for functions \(\mathrm{tr}_{n}(x^{2^{r+1}+3})\), where \(n=2r\), by Yan and Tang [15], when \(n\) is even. Note that the best second-order nonlinearity lower bound overall, \(2^{n-1}-2^{\frac{2}{3}n}+2^{\frac{1}{3}n-1}\), was proved by Kolokotronis and Limniotis [16] for the Maiorana-McFarland cubic functions (which are _not_ trace monomials).
For the third-order nonlinearity, lower bounds have been proved for the inverse function \(\mathrm{tr}_{n}(x^{2^{n}-2})\), the Kasami functions \(\mathrm{tr}_{n}(\mu x^{57})\), and functions of the form \(\mathrm{tr}_{n}(\mu x^{2^{i}+2^{j}+2^{k}+1})\). Prior to our results, the best third-order nonlinearity lower bound was proved for functions \(\mathrm{tr}_{n}(\mu x^{2^{3i}+2^{2i}+2^{i}+1})\), where \(\mu\in\mathbb{F}_{2^{n}}^{*}\) and \(\gcd(i,n)=1\), by Singh [16]. Please see Table 2 for a summary.
Garg and Khalyavin [1] proved that the \(r\)-th order nonlinearity for the Kasami function \(f(x)=\mathrm{tr}_{n}(\lambda x^{k})\), where \(k=2^{2r}-2^{r}+1\), \(\lambda\in\mathbb{F}_{2^{n}}^{*}\), \(n\geq 2r\) and \(\gcd(n,r)=1\), is bounded by
\[\begin{cases}2^{n-r}-2^{\frac{n+2r-2}{2}},&\text{for even $n$}\\ 2^{n-r}-2^{\frac{n+2r-3}{2}},&\text{for odd $n$}\end{cases}.\]
Garg [1] proved that the \((\frac{n}{2}-1)\)-th order nonlinearity of \(\mathrm{tr}_{n}(\lambda x^{2^{\frac{n}{2}-1}})\) for \(\lambda\in\mathbb{F}_{2^{n}}^{*}\) is at least \(2^{\frac{n}{2}}\). Tiwari and Sharma [15] proved that the \((\frac{n}{2}-1)\)-th order nonlinearity of \(\mathrm{tr}_{n}(\lambda x^{d})\), where \(\lambda\in\mathbb{F}_{2^{n}}^{*}\) and \(d=3(2^{\frac{n}{2}}-1)+1\) for even \(n\), is at least \(2^{\frac{n}{2}+1}-2^{\frac{n}{2}+1}\); the \((\frac{n}{2}-2)\)-th order nonlinearity of \(\mathrm{tr}_{n}(\lambda x^{d})\), where \(d=2^{\frac{n}{2}}-2\), is at least \(2^{\frac{n}{2}+2}-2^{\frac{n}{4}+\frac{3}{2}}\). Saini and Garg [1] proved that the \(\frac{n}{4}\)-th order nonlinearity of functions \(\mathrm{tr}_{n}(\alpha_{1}x^{d_{1}}+\alpha_{2}x^{d_{2}})\) is at least \(2^{\frac{n}{4}}-2^{\frac{n}{4}-2}\), where \(\alpha_{1},\alpha_{2}\in\mathbb{F}_{2^{n}}\), \(d_{1}=\frac{1}{2}\cdot(2^{\frac{n}{2}}-1)+1\), \(d_{2}=\frac{1}{6}\cdot(2^{\frac{n}{2}}-1)+1\), and \(4\mid n\).
Proving a large high-order nonlinearity lower bound for an explicit function is an outstanding open problem in computational complexity. For example, the problem of whether there exists a function in NP with \(\log_{2}n\)-th order nonlinearity at least \(2^{n-1}(1-\frac{1}{\sqrt{n}})\) is open [14]. For the majority and mod functions, Razborov and Smolensky [13, 14, 15] proved that their \(r\)-th order nonlinearities are at least \(2^{n-1}(1-O(\frac{r}{\sqrt{n}}))\). For \(r\ll\log n\), Babai, Nisan and Szegedy [14] proved that the generalized inner product function has \(r\)-th order nonlinearity lower bounded by \(2^{n-1}(1-\exp(-\Omega(\frac{n}{r\cdot d^{r}})))\). Bourgain [1] proved a similar result for the \(\mathrm{mod}_{3}\) function; a mistake in his proof was corrected by Green, Roy and Straubing [12]. An improvement was achieved by Viola and Wigderson [14, 15], who exhibited a polynomial-time computable function with \(r\)-th order nonlinearity lower bounded by \(2^{n-1}(1-\exp(-\frac{\alpha\cdot n}{2^{r}}))\), where the constant \(\alpha<\frac{1}{4}\cdot\log_{2}e\). Gopalan, Lovett and Shpilka [1] proved that, if the mod-\(p\) degree of \(f\), for any prime \(p>2\), is \(d=o(\log n)\), then the \(r\)-th order nonlinearity of \(f\) is at least \(2^{n-1}(1-p^{-O(d)})\). Chattopadhyay _et al._[1] proved that the \(O(1)\)-th order nonlinearity of the XOR of \(k\) copies of the majority function is lower bounded by \(2^{kn-1}\left(1-\left(\frac{\mathrm{poly}(k,\log n)}{\sqrt{n}}\right)^{k}\right)\). Chen and Lyu [1] proved that there exists a function \(f\in\mathrm{E}^{\mathrm{NP}}\) which has
\begin{table}
\begin{tabular}{c|c} \hline \hline
**Function** & \(\mathrm{nl}_{3}\) **lower bound** \\ \hline \(\mathrm{tr}_{n}(x^{2^{n}-2})\)[1] & \(2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{2^{\frac{2n}{2}+3}+3\cdot 2^{n+1}-2^{ \frac{n}{2}+3}+16}+2^{n}}\) \\ & \(=\) & \(2^{n-1}-2^{\frac{7n}{8}-\frac{1}{4}}-O(2^{\frac{3n}{8}})\) \\ \hline \(\begin{array}{c}\mathrm{tr}_{n}(\mu x^{57}),\mu\in\mathbb{F}_{2^{n}}^{*}\\ n>10\ \mathrm{[GG10]}\end{array}\) & \(\left\{\begin{array}{ll}2^{n-3}-2^{\frac{n+4}{2}},&\text{if n is even}\\ 2^{n-3}-2^{\frac{n+3}{2}},&\text{if n is odd}\end{array}\right.\) \\ \hline \(\mathrm{tr}_{n}(\mu x^{2^{i}+2^{j}+2^{k}+1})\), \(i>j>k\geq 1\), \(n>2i\), \(\mu\in\mathbb{F}_{2^{n}}^{*}\)[14] & \(\left\{\begin{array}{ll}2^{n-3}-2^{\frac{n+2i-6}{2}},&\text{if n is even}\\ 2^{n-3}-2^{\frac{n+2i-7}{2}},&\text{if n is odd}\end{array}\right.\) \\ \hline \(\begin{array}{c}\mathrm{tr}_{n}(\mu x^{2^{3i}+2^{2i}+2^{i}+1}),\mu\in \mathbb{F}_{2^{n}}^{*}\\ \gcd(i,n)=1\), \(n>6\ \mathrm{[Sin14]}\end{array}\right.\) & \(\left\{\begin{array}{ll}2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{2^{\frac{3n}{2} +3}+2^{n+1}-2^{\frac{n}{2}+4}}+2^{n}},&\text{if n is even}\\ =2^{n-1}-2^{\frac{7n-2}{8}}-O(2^{\frac{3n}{8}})\\ 2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{2^{\frac{3n+5}{2}+2^{n+1}-2^{\frac{n+2} {2}}}}+2^{n}},&\text{if n is odd}\\ =2^{n-1}-2^{\frac{7n-3}{8}}-O(2^{\frac{3n}{8}})\end{array}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Third-order nonlinearity lower bounds
\(r\)-th order nonlinearity at least \(2^{n-1}(1-2^{-r})\) for \(r\leq o(\frac{n}{\log^{n}})^{\frac{1}{2}}\).
### Our results
In this work, we prove lower bounds on the high-order nonlinearities of certain trace monomial Boolean functions. We exhibit some trace monomial functions with large second-order, third-order or higher-order nonlinearities.
**Theorem 1**.: _Let \(f(x)=tr_{n}(x^{7})\). For even \(n\), we have_
\[\mathrm{nl}_{2}(f) \geq \begin{cases}2^{n-1}-\frac{1}{2}\sqrt{\frac{13}{3}\cdot 2^{\frac{3 }{2}n-1}+2^{n}-\frac{1}{3}\cdot 2^{\frac{n}{2}+3}},&3\nmid n\\ 2^{n-1}-\frac{1}{2}\sqrt{\frac{13}{3}\cdot 2^{\frac{3}{2}n-1}+2^{n+2}-\frac{1}{3} \cdot 2^{\frac{n}{2}+3}},&3\mid n\end{cases}\] \[= 2^{n-1}-2^{\frac{3n}{4}-\frac{3}{2}+\frac{1}{2}\log_{2}13-\frac {1}{2}\log_{2}3}-O(2^{\frac{n}{4}}).\]
_For odd \(n\), we have_
\[\mathrm{nl}_{2}(f) \geq \begin{cases}2^{n-1}-\frac{1}{2}\sqrt{3\cdot 2^{\frac{3n-1}{2}}+2^ {n}-2^{\frac{n+3}{2}}},&3\nmid n\\ 2^{n-1}-\frac{1}{2}\sqrt{3\cdot 2^{\frac{3n-1}{2}}+2^{n}+3\cdot 2^{n+\frac{1}{2}}-2^ {\frac{n+3}{2}}},&3\mid n\end{cases}\] \[= 2^{n-1}-2^{\frac{3n-5}{4}+\frac{1}{2}\log_{2}3}-O(2^{\frac{n}{4}}).\]
Theorem 1 gives a lower bound on the second-order nonlinearity of \(\mathrm{tr}_{n}(x^{7})\). Among all trace monomials, it matches the best lower bound when \(n\) is odd (i.e., the modified Welch function [1]).
**Theorem 2**.: _Let \(f(x)=\mathrm{tr}_{n}(x^{2^{r}+3})\), where \(n=2r\). Then we have_
\[\mathrm{nl}_{2}(f) \geq \begin{cases}2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n}{2}+1}+2^{\frac{5n}{4}+\frac{1}{2}}-2^{n}-2^{\frac{3n}{4}+\frac{1}{2}}},&2\nmid r\\ 2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n}{2}+1}+\frac{1}{3}\cdot 2^{\frac{5n}{4}+2}-2^{n}-\frac{1}{3}\cdot 2^{\frac{3n}{4}+2}},&2\mid r\end{cases}\] \[= 2^{n-1}-2^{\frac{3n}{4}-\frac{1}{2}}-O(2^{\frac{n}{2}}).\]
Theorem 2 gives a lower bound on the second-order nonlinearity of \(\mathrm{tr}_{n}(x^{2^{r}+3})\), where \(n=2r\). When \(n\) is even, it matches the largest lower bound on the second-order nonlinearity among all trace monomial Boolean functions; that is, it is the same as the bound for the functions \(\mathrm{tr}_{n}(x^{2^{r+1}+3})\), where \(n=2r\)[13]. Note that a larger lower bound is known for Maiorana-McFarland type functions: Kolokotronis and Limniotis proved that the second-order nonlinearity of a cubic Maiorana-McFarland type function \(g(x)y^{t}\), where \((x,y)\in\mathbb{F}_{2^{n}}\times\mathbb{F}_{2^{m}}\), \(g(x)\) is a quadratic perfect nonlinear function, and \(m\leq\frac{n}{2}\), is at least \(2^{n+m-1}-2^{n-1}-2^{\frac{n}{2}+m-1}+2^{\frac{n}{2}-1}\)[12].
We would like to point out that, this class of functions \(\mathrm{tr}_{n}(x^{2^{r}+3})\), where \(n=2r\), is studied for the first time in our work. A similar type of functions \(\mathrm{tr}_{n}(x^{2^{r+1}+3})\), where \(n=2r\), was studied in [1]; the lower bound proved in [13] is exactly the same as Theorem 2.
**Theorem 3**.: _Let \(f=\mathrm{tr}_{n}(x^{15})\). Then we have_
\[\mathrm{nl}_{3}(f)\geq\begin{cases}2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{ \frac{1}{3}\cdot 2^{\frac{3}{2}n+4}+\frac{7}{3}\cdot 2^{n+1}-\frac{1}{3}\cdot 2^{ \frac{n}{2}+5}}+2^{n}}\\ =2^{n-1}-2^{\frac{7n}{8}-\frac{1}{4}\log_{2}3}-O(2^{\frac{3n}{8}}),&2\mid n \\ \\ 2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{\frac{29}{8}\cdot 2^{\frac{3n+1}{2}}+2^ {n+1}-7\cdot 2^{\frac{n+5}{2}}}+2^{n}}\\ =2^{n-1}-2^{\frac{7n}{8}-\frac{13}{8}+\frac{1}{4}\log_{2}29}-O(2^{\frac{3n}{8}}),&2\nmid n\end{cases}\]
_for \(n\geq 6\)._
Theorem 3 gives a lower bound on the third-order nonlinearity of the functions in the \(\operatorname{tr}_{n}(x^{15})\) class; it is the largest lower bound on the third-order nonlinearity among all trace monomial Boolean functions.
**Theorem 4**.: _Let \(f=\operatorname{tr}_{n}(x^{2^{r+1}-1})\) and \(r\geq 2\)._
\[\operatorname{nl}_{r}(f)\geq 2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}-O(2^{ \frac{n}{2}}).\]
For \(r\ll\log_{2}n\), our lower bound in Theorem 4 is better than all previous results, for all explicit functions in P, not necessarily trace monomials.
Similarly, we prove the following lower bound on the \(r\)-th order nonlinearity for the inverse function, which is studied in [1]. We credit this to Carlet, who claims that the \(r\)-th order nonlinearity for the inverse function is asymptotically lower bounded by \(2^{n-1}-2^{(1-2^{-r})n}\).
**Theorem 5**.: _Let \(f_{\operatorname{inv}}=\operatorname{tr}_{n}(x^{2^{n}-2})\). For any \(r\geq 1\), we have \(\operatorname{nl}_{r}(f_{\operatorname{inv}})\geq 2^{n-1}-2^{(1-2^{-r})n-2^{-( r-1)}}-O(2^{\frac{n}{2}})\)._
**Techniques.** Our proof of the lower bounds follows from Carlet's methods [1]. That is, to lower bound the \(r\)-th order nonlinearity, we estimate the (first-order) nonlinearity of its \((r-1)\)-th order derivatives. Taking a (nontrivial) \((r-1)\)-th order derivative, our target function becomes a quadratic function. Then, we rely on a result by Canteaut _et al._[1] that relates the nonlinearity of a quadratic function with the dimension of its _linear kernel_. As such, the problem essentially reduces to estimating _the number of roots_ of certain equations over the finite field \(\mathbb{F}_{2^{n}}\).
As for Theorem 1, we use the following ingredients to estimate the number of roots of a certain equation (associated with the linear kernel): we factor the equation into irreducible ones; we apply the known results concerning the number of roots of \(q\)_-polynomials_, and the number of roots of _quadratic equations_ and _quartic equations_ (over finite fields); we use the Weil bound to estimate the weight of trace monomial functions. As for Theorem 2, our proof is similar to [2], and the lower bounds are exactly the same. (The target function, which has a simple form and good behavior, is somehow missed by previous works.)
As for the third-order nonlinearity lower bound, i.e., Theorem 3, our strategy is, again, to estimate the number of roots of a certain equation (associated with the linear kernel). We factor the equation into irreducible ones, and analyze the number of roots for each component separately. The proof relies on the known results about the number of roots of \(q\)-polynomials, and _quartic equations_ (over finite fields). A critical step is to estimate the algebraic degree of a (trace) equation over \(\mathbb{F}_{2^{n}}\). (With the algebraic degree known, we can apply the well-known fact that the number of roots is bounded by the degree.)
In Theorem 4, we study the \(r\)-th order nonlinearity of functions \(\operatorname{tr}_{n}(x^{2^{r+1}-1})\), a natural generalization of \(\operatorname{tr}_{n}(x^{7})\) and \(\operatorname{tr}_{n}(x^{15})\). We prove a lower bound on the (first-order) nonlinearity of all nontrivial \((r-1)\)-th order derivatives of the target function, and the \(r\)-th order nonlinearity lower bound follows from the methods articulated by [1]. The equation (associated with the linear kernel for the derivative) turns out to have a nice explicit form, whose degree is at most \(2^{2r}\). Thus, the nonlinearity bound follows from a result in [1] (that relates the dimension of the kernel with the nonlinearity for any quadratic function).
The proof of Theorem 5 closely follows from [1], who already claimed that the lower bound is asymptotically \(2^{n-1}-2^{(1-2^{-r})n}\). We credit the result to Carlet, who obviously can, but did not have the occasion to write down the details.
## 2 Preliminary
Let \(\mathbb{F}_{2}\) be the finite field of size \(2\). Let \(\mathcal{B}_{n}\) denote the set of all \(n\)-variable Boolean functions. Any \(n\)-variable Boolean function can be represented as a unique polynomial in \(\mathbb{F}_{2}[x_{1},x_{2},\ldots,x_{n}]/\{x_{i}^{2}+x_{i}\}_{1\leq i\leq n}\), that is,
\[f(x_{1},x_{2},\ldots,x_{n})=\sum_{S\subseteq[n]}c_{S}\prod_{i\in S}x_{i},\]
which is called _algebraic normal form_ (ANF). The _algebraic degree_ of \(f\), denoted by \(\deg(f)\), is the number of variables in the highest order term with nonzero coefficient.
The _Hamming weight_ of a vector \(x\in\mathbb{F}_{2}^{n}\), denoted by \(\operatorname{wt}(x)\), is the number of nonzero coordinates. The _weight_ of a Boolean function \(f\), denoted by \(\operatorname{wt}(f)\), is the cardinality of the set \(\{x\in\mathbb{F}_{2}^{n}:f(x)=1\}\). The _distance_ between two functions \(f\) and \(g\) is the cardinality of the set \(\{x\in\mathbb{F}_{2}^{n}:f(x)\neq g(x)\}\), denoted by \(\operatorname{d}(f,g)\).
Let \(\mathbb{F}_{2^{n}}\) be the finite field of size \(2^{n}\). The _absolute trace function_ from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2}\) can be defined as
\[\operatorname{tr}_{n}(x)=x+x^{2}+x^{2^{2}}+\ldots+x^{2^{n-1}},\]
where \(x\in\mathbb{F}_{2^{n}}\). Let \(K=\mathbb{F}_{2^{r}}\) be a subfield of \(L=\mathbb{F}_{2^{n}}\). More generally, the trace function defined with respect to the field extension \(L/K\) is
\[\operatorname{tr}_{L/K}(\alpha)=\alpha+\alpha^{2^{r}}+\alpha^{2^{2r}}+\ldots+\alpha^{2^{r(\frac{n}{r}-1)}},\]
where \(\alpha\in\mathbb{F}_{2^{n}}\). It is well known that (for instance, Theorem 2.23 in [10]) the trace function satisfies the following properties
* \(\operatorname{tr}_{L/K}(x+y)=\operatorname{tr}_{L/K}(x)+\operatorname{tr}_{L/ K}(y)\) for any \(x,y\in\mathbb{F}_{2^{n}}\).
* \(\operatorname{tr}_{L/K}(x^{2})=\operatorname{tr}_{L/K}(x)\) for any \(x\in\mathbb{F}_{2^{n}}\).
* For any \(\alpha\in\mathbb{F}_{2^{r}}\), there are exactly \(2^{n-r}\) elements \(\beta\) with \(\operatorname{tr}_{L/K}(\beta)=\alpha\).
Any \(n\)-variable Boolean function can be written as \(f(x)=\operatorname{tr}_{n}(g(x))\), where \(g(x)=\sum_{i=0}^{2^{n}-1}\beta_{i}x^{i}\) is a mapping from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2^{n}}\) for \(\beta_{i}\in\mathbb{F}_{2^{n}}\). A trace _monomial_ Boolean function is of the form \(\operatorname{tr}_{n}(\lambda x^{d})\) where \(\lambda\in\mathbb{F}_{2^{n}}^{*}\) and \(d\) is an integer. It is well known that the degree of the trace monomial function \(\operatorname{tr}_{n}(\lambda x^{d})\) is the Hamming weight of the binary representation of \(d\)[1].
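To make the notation concrete, the following minimal Python sketch tabulates the trace monomial \(\operatorname{tr}_{n}(x^{7})\) for \(n=5\); the reduction polynomial \(x^{5}+x^{2}+1\) is one fixed choice of irreducible polynomial, and the code is only an illustration, not the computation used in our proofs.

```python
N = 5
MOD = 0b100101                      # x^5 + x^2 + 1, irreducible over F_2

def gf_mul(a, b):
    """Multiplication in GF(2^N): carry-less product reduced modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= MOD
    return r

def gf_pow(a, e):
    """Square-and-multiply exponentiation in GF(2^N)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def tr(x):
    """Absolute trace: x + x^2 + x^4 + ... + x^(2^(N-1)), an element of {0, 1}."""
    s, t = 0, x
    for _ in range(N):
        s ^= t
        t = gf_mul(t, t)
    return s

truth_table = [tr(gf_pow(x, 7)) for x in range(1 << N)]   # truth table of tr_5(x^7)
print(sum(truth_table))            # the weight of tr_5(x^7)
```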
For \(1\leq r\leq n\), the \(r\)-th order nonlinearity of an \(n\)-variable Boolean function \(f\), denoted by \(\operatorname{nl}_{r}(f)\), is the minimum distance between \(f\) and functions with degree at most \(r\), i.e.,
\[\operatorname{nl}_{r}(f)=\min_{\deg(g)\leq r}\operatorname{d}(f,g).\]
We denote by \(\operatorname{nl}(f)\) the first-order nonlinearity of \(f\).
The _Walsh transform_ of \(f\in\mathcal{B}_{n}\) at \(\alpha\in\mathbb{F}_{2^{n}}\) is defined as
\[W_{f}(\alpha)=\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{f(x)+\operatorname{tr}_{n}( \alpha x)}.\]
The _Walsh spectrum_ of \(f\) is the multi-set consisting of the values \(W_{f}(\alpha)\) for all \(\alpha\in\mathbb{F}_{2^{n}}\). The nonlinearity of any Boolean function in \(n\) variable can be calculated as
\[\operatorname{nl}(f)=2^{n-1}-\frac{1}{2}\max_{\alpha\in\mathbb{F}_{2^{n}}}|W_ {f}(\alpha)|. \tag{1}\]
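The sketch below computes the Walsh spectrum and the first-order nonlinearity from a truth table via the fast Walsh-Hadamard transform. It identifies \(\mathbb{F}_{2^{n}}\) with bit strings and replaces \(\operatorname{tr}_{n}(\alpha x)\) by a coordinate inner product; since both index the same set of linear functions, the spectrum as a multiset, and hence the nonlinearity, is unchanged. The final bent-function example is only a toy check.

```python
def walsh_spectrum(tt):
    """Walsh spectrum of a Boolean function given as a 0/1 truth table."""
    w = [(-1) ** b for b in tt]          # signs (-1)^{f(x)}
    h = 1
    while h < len(w):                    # in-place fast Walsh-Hadamard transform
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def nonlinearity(tt):
    """nl(f) = 2^{n-1} - (1/2) * max_a |W_f(a)|."""
    return (len(tt) - max(abs(v) for v in walsh_spectrum(tt))) // 2

# toy example: the 4-variable bent function x0*x1 + x2*x3 has nonlinearity 6
tt = [((x & 1) & (x >> 1 & 1)) ^ ((x >> 2 & 1) & (x >> 3 & 1)) for x in range(16)]
print(nonlinearity(tt))   # 6
```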
We denote by \(D_{a}f\) the _derivative_ of the \(f\in\mathcal{B}_{n}\) with respect to \(a\in\mathbb{F}_{2^{n}}\), which is defined to be
\[D_{a}f(x)=f(x)+f(x+a).\]
The \(k\)_-th order derivative_ of \(f\), denoted by \(D_{a_{1}}D_{a_{2}}\ldots D_{a_{k}}f\), is obtained by applying such derivation successively to the function \(f\) with respect to \(a_{1},a_{2},\ldots,a_{k}\in\mathbb{F}_{2^{n}}\).
In [1], Carlet provided a method to lower bound the \(r\)-th order nonlinearity relying on the \((r-1)\)-th order nonlinearity of all its derivatives.
**Proposition 1**.: _[_1_]_ _Let \(f\) be any \(n\)-variable Boolean function and \(r\) a positive integer smaller than \(n\). We have_
\[\operatorname{nl}_{r}(f)\geq 2^{n-1}-\frac{1}{2}\sqrt{2^{2n}-2\sum_{a\in \mathbb{F}_{2^{n}}}\operatorname{nl}_{r-1}(D_{a}f)}.\]
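As a small numerical sanity check of Proposition 1 (and only as an illustration), the sketch below takes an arbitrary cubic function on \(\mathbb{F}_{2}^{5}\), computes its exact second-order nonlinearity by exhausting \(\operatorname{RM}(2,5)\), and compares it with the right-hand side of the proposition for \(r=2\); the test function is not one of the functions studied in this paper.

```python
from itertools import combinations

n = 5
N = 1 << n

def bit(v, i):
    return (v >> i) & 1

def tt_int(pred):
    """Truth table of a predicate on n-bit inputs, packed into an N-bit integer."""
    v = 0
    for x in range(N):
        if pred(x):
            v |= 1 << x
    return v

def nl1(tt):
    """First-order nonlinearity of a 0/1 truth table (list) via the fast WHT."""
    w = [(-1) ** b for b in tt]
    h = 1
    while h < N:
        for i in range(0, N, 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return (N - max(abs(v) for v in w)) // 2

# arbitrary cubic test function f = x0x1x2 + x1x3x4 + x0x4
f = tt_int(lambda x: (bit(x, 0) & bit(x, 1) & bit(x, 2))
           ^ (bit(x, 1) & bit(x, 3) & bit(x, 4)) ^ (bit(x, 0) & bit(x, 4)))

# all monomials of degree <= 2, encoded as packed truth tables
monos = [tt_int(lambda x: True)]
monos += [tt_int(lambda x, i=i: bit(x, i)) for i in range(n)]
monos += [tt_int(lambda x, i=i, j=j: bit(x, i) & bit(x, j))
          for i, j in combinations(range(n), 2)]

# exact nl_2(f) by exhausting all 2^16 functions of degree at most 2
nl2_exact = N
for mask in range(1 << len(monos)):
    g = 0
    for k, m in enumerate(monos):
        if (mask >> k) & 1:
            g ^= m
    nl2_exact = min(nl2_exact, bin(f ^ g).count("1"))

# right-hand side of Proposition 1 with r = 2
s = sum(nl1([bit(f, x) ^ bit(f, x ^ a) for x in range(N)]) for a in range(N))
bound = 2 ** (n - 1) - 0.5 * (2 ** (2 * n) - 2 * s) ** 0.5
print(nl2_exact, bound)    # nl2_exact should be at least bound
```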
The _quadratic functions_ are the set of the Boolean functions of algebraic degree at most 2. The _linear kernel_ is the central object for the calculation of the nonlinearity of quadratic functions.
**Definition 1**.: _[_10_]_ _Let \(q:\mathbb{F}_{2^{n}}\rightarrow\mathbb{F}_{2}\) be a quadratic function. The linear kernel of \(q\), denoted by \(\mathcal{E}_{q}\), can be defined as_
\[\mathcal{E}_{q}=\mathcal{E}_{0}\cup\mathcal{E}_{1}\]
_where_
\[\mathcal{E}_{0}=\{b\in\mathbb{F}_{2^{n}}\mid D_{b}q=q(x)+q(x+b)=0,\text{ for all }x\in\mathbb{F}_{2^{n}}\},\]
\[\mathcal{E}_{1}=\{b\in\mathbb{F}_{2^{n}}\mid D_{b}q=q(x)+q(x+b)=1,\text{ for all }x\in\mathbb{F}_{2^{n}}\}.\]
The _bilinear form_ associated with a quadratic function \(q\) is defined as
\[B(x,y)=q(0)+q(x)+q(y)+q(x+y).\]
The _linear kernel_\(\mathcal{E}_{q}\) of a quadratic function \(q\) is the _linear kernel_ of its associated bilinear form \(B(x,y)\) by definition, that is
\[\mathcal{E}_{q}=\{x\in\mathbb{F}_{2^{n}}\mid B(x,y)=0\text{ for any }y\in \mathbb{F}_{2^{n}}\}.\]
**Lemma 1**.: _[_10_]_ _Let \(q:\mathbb{F}_{2^{n}}\rightarrow\mathbb{F}_{2}\) be an \(n\)-variable Boolean function of degree at most 2. Then the Walsh spectrum of \(q\) depends on the dimension \(k\) of the linear kernel of \(q\). Moreover, for any \(\mu\in\mathbb{F}_{2^{n}}\), we have_
\begin{tabular}{c|c} \hline \(W_{q}(\mu)\) & The number of \(\mu\in\mathbb{F}_{2^{n}}\) \\ \hline
0 & \(2^{n}-2^{n-k}\) \\ \hline \(2^{\frac{n+k}{2}}\) & \(2^{n-k-1}+(-1)^{q(0)}2^{\frac{n-k-2}{2}}\) \\ \hline \(-2^{\frac{n+k}{2}}\) & \(2^{n-k-1}-(-1)^{q(0)}2^{\frac{n-k-2}{2}}\) \\ \hline \end{tabular}
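Definition 1 and Lemma 1 can be illustrated by brute force on a small quadratic function; the example below uses the bent function \(x_{0}x_{1}+x_{2}x_{3}+x_{0}\) on four variables (chosen arbitrarily), finds its linear kernel by testing which derivatives are constant, and checks that every nonzero Walsh value has absolute value \(2^{\frac{n+k}{2}}\). Here \(\mathbb{F}_{2^{4}}\) is identified with \(\mathbb{F}_{2}^{4}\), and \(\operatorname{tr}_{n}(\mu x)\) is replaced by a coordinate inner product, which does not change the spectrum as a multiset.

```python
n = 4
N = 1 << n

def bit(v, i):
    return (v >> i) & 1

def q(x):
    # q = x0*x1 + x2*x3 + x0, a quadratic (in fact bent) function
    return (bit(x, 0) & bit(x, 1)) ^ (bit(x, 2) & bit(x, 3)) ^ bit(x, 0)

# linear kernel: all b such that x -> q(x) + q(x + b) is constant
kernel = [b for b in range(N)
          if len({q(x) ^ q(x ^ b) for x in range(N)}) == 1]
k = len(kernel).bit_length() - 1          # |kernel| = 2^k
print("dim of linear kernel:", k)

# Walsh spectrum: every nonzero value should have absolute value 2^((n+k)/2)
spectrum = [sum((-1) ** (q(x) ^ (bin(mu & x).count("1") % 2)) for x in range(N))
            for mu in range(N)]
print(sorted(set(abs(v) for v in spectrum)))   # expected: 2^((n+k)/2), possibly with 0
```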
**Lemma 2**.: _[_10_]_ _Let \(V\) be a vector space over a field \(\mathbb{F}_{2^{n}}\) and \(Q:V\rightarrow\mathbb{F}_{2^{n}}\) be a quadratic form. Then the dimension of \(V\) and the dimension of the kernel of \(Q\) have the same parity._
That is, if \(f:\mathbb{F}_{2^{n}}\rightarrow\mathbb{F}_{2}\) is a quadratic function, then the parity of the dimension of its linear kernel is the same as the parity of \(n\).
A _q-polynomial_ over \(\mathbb{F}_{q^{n}}\) is the polynomial in the form
\[P(x)=\sum_{i=0}^{n-1}a_{i}x^{q^{i}},\]
where the coefficients \(a_{i}\in\mathbb{F}_{q^{n}}\). It is a _linearized polynomial_ which satisfies the following properties [11, page 108]:
\[P(b+c)=P(b)+P(c),\quad\text{for all }b,c\in\mathbb{F}_{q^{n}} \tag{2}\]
\[P(tb)=tP(b),\quad\text{for all }t\in\mathbb{F}_{q},\,\text{all }b\in\mathbb{F}_{q^{n}}. \tag{3}\]
Equation (2) follows from the fact that \((a+b)^{q^{i}}=a^{q^{i}}+b^{q^{i}}\) for \(a,b\in\mathbb{F}_{q^{n}}\) and \(i\geq 0\)[11, Theorem 1.46]; equation (3) follows from that \(t^{q^{i}}=t\) for \(t\in\mathbb{F}_{q}\) and any \(i\geq 0\). Hence, if \(\mathbb{F}_{q^{n}}\) is regarded as a vector space over \(\mathbb{F}_{q}\), then a \(q\)-polynomial is a linear map of this vector space.
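The linearity above can be checked numerically: the root set of a \(2\)-polynomial is closed under addition, hence is an \(\mathbb{F}_{2}\)-subspace, so its size is a power of \(2\) (this is the content of Lemma 3 in the next section). In the sketch below the coefficients are arbitrary, and \(x^{5}+x^{2}+1\) is one fixed choice of irreducible polynomial for \(\mathbb{F}_{2^{5}}\).

```python
N_BITS, MOD = 5, 0b100101          # GF(2^5) with reduction polynomial x^5 + x^2 + 1

def gf_mul(a, b):
    """Carry-less multiplication followed by reduction modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N_BITS):
            a ^= MOD
    return r

def P(x, coeffs=(1, 3, 7)):
    """2-polynomial c0*x + c1*x^2 + c2*x^4 with arbitrary illustrative coefficients."""
    y, out = x, 0
    for c in coeffs:
        out ^= gf_mul(c, y)
        y = gf_mul(y, y)
    return out

roots = [x for x in range(1 << N_BITS) if P(x) == 0]
assert all((a ^ b) in roots for a in roots for b in roots)   # closed under addition
print(len(roots))                                            # always a power of 2
```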
Second-order nonlinearity
In this section, we derive lower bounds on the second-order nonlinearity for two classes of trace monomial Boolean functions, of the form \(\operatorname{tr}_{n}(x^{7})\) and \(\operatorname{tr}_{n}(x^{2^{r}+3})\), where \(n=2r\).
### The functions \(\operatorname{tr}_{n}(x^{7})\)
We will lower bound the second-order nonlinearity of the monomial cubic functions \(\operatorname{tr}_{n}(x^{7})\). The algebraic degree of the derivatives of \(\operatorname{tr}_{n}(x^{7})\) is at most \(2\) since the degree of \(\operatorname{tr}_{n}(x^{7})\) is exactly \(3\). By using Carlet's method (i.e., Proposition 1), our goal is to calculate the nonlinearities of all its derivatives.
**Proposition 2**.: _Let \(f:\mathbb{F}_{2^{n}}\to\mathbb{F}_{2}\) be a quadratic function. For any \(a\in\mathbb{F}_{2^{n}}^{*}\), we have_
\[\mathcal{E}_{f}=\mathcal{E}_{f(ax)},\]
_where \(\mathcal{E}_{f}\) denotes the linear kernel of the \(f\) and \(\mathcal{E}_{f(ax)}\) denotes the linear kernel of the \(f(ax)\)._
Proof.: Let us prove \(\mathcal{E}_{f}\subseteq\mathcal{E}_{f(ax)}\) first. By definition, if \(b\in\mathcal{E}_{f}\), then \(f(x)+f(x+b)=0\) for all \(x\in\mathbb{F}_{2^{n}}\) or \(f(x)+f(x+b)=1\) for all \(x\in\mathbb{F}_{2^{n}}\). Note that \(x\mapsto ax\) is a bijection over \(\mathbb{F}_{2^{n}}\) for any \(a\in\mathbb{F}_{2^{n}}^{*}\), then we have
\[f(ax)+f(ax+b)=0\text{ for all }x\in\mathbb{F}_{2^{n}}\]
or
\[f(ax)+f(ax+b)=1\text{ for all }x\in\mathbb{F}_{2^{n}}\.\]
So \(b\in\mathcal{E}_{f(ax)}\).
Now let us prove \(\mathcal{E}_{f(ax)}\subseteq\mathcal{E}_{f}\). Let \(g(x)=f(ax)\). From the above, we have \(\mathcal{E}_{g}\subseteq\mathcal{E}_{g(a^{-1}x)}\), that is, \(\mathcal{E}_{f(ax)}\subseteq\mathcal{E}_{f}\).
We will need the following lemmas in the proof of Theorem 6.
**Lemma 3**.: _[_15_, Theorem 3.50]_ _Let \(q\) be a prime. Let \(P(x)=\sum_{i=0}^{n-1}a_{i}x^{q^{i}}\) be a \(q\)-polynomial, where \(a_{i}\in\mathbb{F}_{q^{n}}\). Then the distinct number of roots of \(P(x)\) in \(\mathbb{F}_{q^{n}}\) is a power of \(q\)._
**Lemma 4**.: _[_16_, page 37]_ _The number of solutions in \(\mathbb{F}_{2^{n}}\) of the quartic equation_

\[x^{4}+ax+b=0,\ \ a,b\in\mathbb{F}_{2^{n}},\ a\neq 0 \tag{4}\]

_is as follows:_
* _If_ \(n\) _is odd, then (_4_) has either no solution or exactly two solutions._
* _If_ \(n\) _is even and_ \(a\) _is not a cube, then (_4_) has exactly one solution._
* _If_ \(n\) _is even, and_ \(a\) _is a cube, then (_4_) has four solutions if_ \(\operatorname{tr}_{\mathbb{F}_{2^{n}}/\mathbb{F}_{4}}(\frac{b}{a^{\frac{4}{3}}})=0\)_, and no solutions if_ \(\operatorname{tr}_{\mathbb{F}_{2^{n}}/\mathbb{F}_{4}}(\frac{b}{a^{\frac{4}{3}}})\neq 0\)_._
We need some properties of trace functions in the proof.
**Theorem 6**.: _Let \(f(x)=\operatorname{tr}_{n}(x^{7})\). Let \(\mathcal{E}_{D_{a}f}\) be the linear kernel of \(D_{a}f\). We denote by \(\dim(\mathcal{E}_{D_{a}f})\) the dimension of \(\mathcal{E}_{D_{a}f}\). The distribution of \(\dim(\mathcal{E}_{D_{a}f})\) for all \(a\in\mathbb{F}_{2^{n}}^{*}\) is as follows:_
Proof.: For any \(a\in\mathbb{F}_{2^{n}}^{*}\), we have
\[(D_{a}f)(ax) = \mathrm{tr}_{n}((ax)^{7})+\mathrm{tr}_{n}((ax+a)^{7})\] \[= \mathrm{tr}_{n}((ax)^{7}+(ax+a)^{7})\] \[= \mathrm{tr}_{n}(a^{7}(x^{6}+x^{5}+x^{4}+x^{3}+x^{2}+x+1)).\]
Let \(g(x)=D_{a}f(ax)\). By Proposition 2, we know \(\mathcal{E}_{g}=\mathcal{E}_{D_{a}f}\), and \(\dim(\mathcal{E}_{g})\) equals the number of \(b\in\mathbb{F}_{2^{n}}\) such that \(D_{b}g\) is a constant. Note that
\[D_{b}g(x) = \mathrm{tr}_{n}(a^{7}(\sum_{i=0}^{6}x^{i}))+\mathrm{tr}_{n}(a^{7} (\sum_{i=0}^{6}(x+b)^{i}))\] \[= \mathrm{tr}_{n}(a^{7}((b^{2}+b)x^{4}+(b^{4}+b)x^{2}+(b^{4}+b^{2}) x))+\mathrm{tr}_{n}(\sum_{i=1}^{6}b^{i}).\]
So \(\dim(\mathcal{E}_{g})\) equals the number of \(b\in\mathbb{F}_{2^{n}}\) such that \(\mathrm{tr}_{n}(a^{7}((b^{2}+b)x^{4}+(b^{4}+b)x^{2}+(b^{2}+b^{4})x))\) is a constant.
Using the properties of the trace function, we have
\[\mathrm{tr}_{n}(a^{7}((b^{2}+b)x^{4}+(b^{4}+b)x^{2}+(b^{4}+b^{2})x)) \tag{5}\] \[= \mathrm{tr}_{n}(a^{7}(b^{2}+b)x^{4})+\mathrm{tr}_{n}(a^{7}(b^{4}+b)x^{2})+\mathrm{tr}_{n}(a^{7}(b^{4}+b^{2})x)\] \[= \mathrm{tr}_{n}((a^{7})^{\frac{1}{4}}(b^{2}+b)^{\frac{1}{4}}x)+\mathrm{tr}_{n}((a^{7})^{\frac{1}{2}}(b^{4}+b)^{\frac{1}{2}}x)+\mathrm{tr}_{n}(a^{7}(b^{4}+b^{2})x)\] \[= \mathrm{tr}_{n}\left(\left((a^{7})^{\frac{1}{4}}(b^{2}+b)^{\frac{1}{4}}+(a^{7})^{\frac{1}{2}}(b^{4}+b)^{\frac{1}{2}}+a^{7}(b^{4}+b^{2})\right)x\right).\]
Thus, (5) is a constant if and only if the coefficient of \(x\) is zero, that is,
\[(a^{7})^{\frac{1}{4}}(b^{2}+b)^{\frac{1}{4}}+(a^{7})^{\frac{1}{2}}(b^{4}+b)^{\frac{1}{2}}+a^{7}(b^{4}+b^{2})=0. \tag{6}\]
Taking the 4th power to both sides of (6), we have
\[0 = a^{7}(b^{2}+b)+a^{14}(b^{4}+b)^{2}+a^{28}(b^{4}+b^{2})^{4}\] \[= a^{28}(b^{2}+b)^{8}+a^{14}(b^{2}+b)^{4}+a^{14}(b^{2}+b)^{2}+a^{7 }(b^{2}+b)\] \[= \left((a^{7})^{2}(b^{2}+b)^{4}\right)^{2}+\left((a^{7})^{2}(b^{2} +b)^{4}\right)+\left(a^{7}(b^{2}+b)\right)^{2}+\left(a^{7}(b^{2}+b)\right)\] \[= \left((a^{7})^{2}(b^{2}+b)^{4}+a^{7}(b^{2}+b)\right)\left((a^{7}) ^{2}(b^{2}+b)^{4}+a^{7}(b^{2}+b)+1\right).\]
\begin{table}
\begin{tabular}{c|c|c|c} \hline n & dim(\(\mathcal{E}_{D_{a}f}\)) & The number of \(a\in\mathbb{F}_{2^{n}}^{*}\) \\ \hline \multirow{4}{*}{even \(n\)} & \multirow{2}{*}{\(3\nmid n\)} & 2 & \(\frac{11}{3}\cdot 2^{n-2}-\frac{2}{3}\) \\ \cline{3-4} & & 4 & \(\frac{1}{3}\cdot 2^{n-2}-\frac{1}{3}\) \\ \cline{2-4} & \multirow{2}{*}{\(3\mid n\)} & 2 & \(\frac{2}{3}(2^{n}-1)+\frac{1}{2}\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \cline{3-4} & & 4 & \(\frac{1}{3}(2^{n}-1)-\frac{1}{2}\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \hline \multirow{4}{*}{odd \(n\)} & \multirow{2}{*}{\(3\nmid n\)} & 1 & \(2^{n-1}\) \\ \cline{3-4} & & 3 & \(2^{n-1}-1\) \\ \cline{3-4} & & 1 & \(\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \cline{3-4} & & 3 & \(2^{n}-1-\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \hline \end{tabular}
\end{table}
Table 3: The distribution of \(\dim(\mathcal{E}_{D_{a}f})\)
For convenience, let \(P(a,b)=\left((a^{7})^{2}(b^{2}+b)^{4}+a^{7}(b^{2}+b)\right)\left((a^{7})^{2}(b^{2} +b)^{4}+a^{7}(b^{2}+b)+1\right)=Q(a,b)(Q(a,b)+1)\), where \(Q(a,b)=(a^{7})^{2}(b^{2}+b)^{4}+a^{7}(b^{2}+b)\).
We denote by \(\mathrm{N}(a)\) the number of \(b\in\mathbb{F}_{2^{n}}\) such that \(P(a,b)=0\); denote by \(\mathrm{N}_{1}(a)\) the number of \(b\in\mathbb{F}_{2^{n}}\) such that \(Q(a,b)=0\); denote by \(\mathrm{N}_{2}(a)\) the number of \(b\in\mathbb{F}_{2^{n}}\) such that \(Q(a,b)+1=0\). Obviously, \(\mathrm{N}(a)=\mathrm{N}_{1}(a)+\mathrm{N}_{2}(a)\).
It is clear that \(b=0,1\) are two solutions of \(Q(a,b)=0\). If \(b^{2}+b\neq 0\), \(Q(a,b)=0\) is equivalent to
\[(b^{2}+b)^{3}=(a^{7})^{-1}. \tag{7}\]
Observe that the degree (in variable \(b\)) of the polynomial \(P(a,b)\) is 16. So \(\mathrm{N}(a)\leq 16\). Since \(b=0\) or \(1\) are two distinct roots of \(Q(a,b)=0\), we have \(\mathrm{N}(a)\geq 2\). For any fixed \(a\in\mathbb{F}_{2^{n}}^{*}\), note that \(P(a,b)\) is a 2-polynomial in variable \(b\). By Lemma 3, \(\mathrm{N}(a)=2^{k}\) for some \(1\leq k\leq 4\). By Lemma 2, we know that \(\dim(\mathcal{E}_{g})\) and \(n\) have the same parity. Hence, we have \(\mathrm{N}(a)\in\{2^{2},2^{4}\}\) when \(n\) is even; \(\mathrm{N}(a)\in\{2^{1},2^{3}\}\) when \(n\) is odd.
Next, we will consider the cases according to the parity of \(n\) to determine the distribution of \(N(a)\), i.e., the distribution of \(\dim(\mathcal{E}_{D_{a}f})\).
**Case 1:**\(n\) is even. In this case, \(\mathrm{N}(a)\in\{2^{2},2^{4}\}\); it suffices to count the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) where \(\mathrm{N}(a)=16\). Note that the degree of \(Q(a,b)\), for any fixed \(a\in\mathbb{F}_{2^{n}}^{*}\), is 8. So we have \(\mathrm{N}_{1}(a)\leq 8\) and \(\mathrm{N}_{2}(a)\leq 8\). Hence, \(\mathrm{N}(a)=16\) if and only if \(\mathrm{N}_{1}(a)=\mathrm{N}_{2}(a)=8\).
For even \(n\), we have \(\gcd(2^{n}-1,3)=3\) and \(\gcd(2^{n}-2,3)=1\) since \(2^{n}\equiv 1\pmod{3}\). Let \(G=\{g^{3s}\mid 0\leq s\leq\frac{2^{n}-4}{3}\}\) be a multiplicative group of order \(\frac{2^{n}-1}{3}\), where \(g\) is a primitive element of \(\mathbb{F}_{2^{n}}^{*}\). If \(a^{7}\notin G\), there is no solution to (7), which implies that \(\mathrm{N}_{1}(a)=2\). If \(a^{7}\in G\), letting \(a^{7}=g^{3s}\), where \(0\leq s\leq\frac{2^{n}-4}{3}\), we have
\[b^{2}+b=g^{-s+\frac{(2^{n}-1)i}{3}}, \tag{8}\]
for \(i=0,1,2\). If \(\mathrm{N}_{1}(a)=8\), then (8) must have 2 solutions for each \(i=0,1,2\). As a result, \(\mathrm{tr}_{n}(g^{-s+\frac{(2^{n}-1)i}{3}})=0\) must hold for each \(i\). (It is known that \(x^{2}+x=b\) has two solutions if and only if \(\mathrm{tr}_{n}(b)=0\), for instance, see the theorem in [1, page 536].)
Let \(c=g^{-s}\) and \(d=g^{\frac{2^{n}-1}{3}}\). We have \(g^{2^{n}-1-s}=c\), \(g^{\frac{2^{n}-1}{3}-s}=cd\) and \(g^{\frac{2(2^{n}-1)}{3}-s}=cd^{2}\). Furthermore, we have
\[\mathrm{tr}_{n}(c)+\mathrm{tr}_{n}(cd)+\mathrm{tr}_{n}(cd^{2})\] \[= \mathrm{tr}_{n}(c(1+d+d^{2})\] \[= \mathrm{tr}_{n}(c(1+d+d^{2})(1+d)(1+d)^{-1})\] \[= \mathrm{tr}_{n}(c(1+d^{3})(1+d)^{-1})\] \[= 0,\]
since \(d^{3}=g^{2^{n}-1}=1\). In other words,
\[\mathrm{tr}_{n}(g^{2^{n}-1-s})+\mathrm{tr}_{n}(g^{\frac{2^{n}-1}{3}-s})+ \mathrm{tr}_{n}(g^{2\frac{2^{n}-1}{3}-s})=0 \tag{9}\]
always holds for any \(0\leq s\leq\frac{2^{n}-4}{3}\). By (9), there are two possibilities:
* \(\mathrm{tr}_{n}(g^{2^{n}-1-s})=\mathrm{tr}_{n}(g^{\frac{2^{n}-1}{3}-s})= \mathrm{tr}_{n}(g^{2\frac{2^{n}-1}{3}-s})=0\),
* \(\mathrm{tr}_{n}(g^{\frac{(2^{n}-1)i_{1}}{3}-s})=\mathrm{tr}_{n}(g^{\frac{(2^{n} -1)i_{2}}{3}-s})=1\) and \(\mathrm{tr}_{n}(g^{\frac{(2^{n}-1)i_{3}}{3}-s})=0\) for distinct \(i_{1},i_{2},i_{3}\in\{0,1,2\}\).
To proceed, we consider the following two subcases.
**Subcase 1.1**.: \(3\nmid n\) and \(n\) is even. In this case, \(\gcd(2^{n}-1,7)=1\), so the linear function \(a\mapsto a^{7}\) is a bijection from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2^{n}}\). Hence, the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(N_{1}(a)=8\) is exactly the size of the set \(\{0\leq s\leq\frac{2^{n}-4}{3}\mid\mathrm{tr}_{n}(g^{-s+\frac{(2^{n}-1)i}{3}})=0\) for \(i=0,1,2\}\).
Denote \(s_{1}\) by the size of the set \(\{0\leq s\leq\frac{2^{n}-4}{3}\mid\operatorname{tr}_{n}(g^{2^{n}-1-s})= \operatorname{tr}_{n}(g^{\frac{2^{n}-1}{3}-s})=\operatorname{tr}_{n}(g^{\frac{2 (2^{n}-1)i_{3}}{3}-s})=0\}\); denote \(s_{2}\) by the size of the set \(\{0\leq s\leq\frac{2^{n}-4}{3}\mid\operatorname{tr}_{n}(g^{\frac{(2^{n}-1)i_{ 3}}{3}-s})=\operatorname{tr}_{n}(g^{\frac{(2^{n}-1)i_{2}}{3}-s})=1\text{ and } \operatorname{tr}_{n}(g^{\frac{(2^{n}-1)i_{3}}{3}-s})=0\text{ for distinct }i_{1},i_{2},i_{3}\in\{0,1,2\}\}\). Observe that \(\operatorname{wt}(\operatorname{tr}_{n}(x))=2^{n-1}\) because \(\operatorname{tr}_{n}(x)\) is an affine function, and the set \(\{g^{-s+\frac{(2^{n}-1)i}{3}}\mid 0\leq s\leq\frac{2^{n}-4}{3},0\leq i\leq 2\}\) is exactly \(\mathbb{F}_{2^{n}}^{*}\). So we have
\[\begin{cases}3(s_{1}+s_{2})=2^{n}-1\\ 2s_{2}=2^{n-1},\end{cases} \tag{10}\]
where the second equation holds because \(2s_{2}\) equals the weight of the affine function \(\operatorname{tr}_{n}(x)\), i.e., \(2s_{2}=\operatorname{wt}(\operatorname{tr}_{n}(x))=2^{n-1}\). Solving equations (10), we have \(s_{1}=\frac{2^{n-2}-1}{3}\) and \(s_{2}=2^{n-2}\). Thus the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}_{1}(a)=8\) is \(\frac{2^{n-2}-1}{3}\). Therefore, the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}(a)=16\) is \(\frac{2^{n-2}-1}{3}\), and the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}(a)=4\) is \(\frac{11}{3}\cdot 2^{n-2}-\frac{2}{3}\).
**Subcase 1.2**. \(3\mid n\) and \(n\) is even. In this case, we have \(7\mid 2^{n}-1\); thus the function \(a\mapsto a^{7}\) is a 7-to-1 mapping from \(\mathbb{F}_{2^{n}}^{*}\) to \(\mathbb{F}_{2^{n}}^{*}\). So \(\{a^{7}\mid a^{7}\in G\}=\{g^{3s}\mid 0\leq s\leq\frac{2^{n}-4}{3}\text{ and }7\mid s\}\). Denote by \(s_{1}\) the size of the set \(\{0\leq s\leq\frac{2^{n}-4}{3}\mid\operatorname{tr}_{n}(g^{-s+\frac{(2^{n}-1) i_{3}}{3}-s})=0\text{ for all }i=0,1,2,\text{ and }7\mid s\}\) and by \(s_{2}\) the size of the set \(\{0\leq s\leq\frac{2^{n}-4}{3}\mid\operatorname{tr}_{n}(g^{\frac{(2^{n}-1)i_{ 3}}{3}-s})=\operatorname{tr}_{n}(g^{\frac{(2^{n}-1)i_{3}}{3}-s})=1\text{ and } \operatorname{tr}_{n}(g^{\frac{(2^{n}-1)i_{3}}{3}-s})=0,\text{for distinct }i_{1},i_{2},i_{3}\in\{0,1,2\},\text{ and }7\mid s\}\). One can easily verify that
\[\begin{cases}s_{1}+s_{2}=\frac{2^{n}-1}{21},\\ 14s_{2}=\operatorname{wt}(\operatorname{tr}_{n}(x^{7})).\end{cases} \tag{11}\]
Solving equations (11), we have \(s_{1}=\frac{2^{n}-1}{21}-\frac{\operatorname{wt}(\operatorname{tr}_{n}(x^{7}))}{14}\) and \(s_{2}=\frac{\operatorname{wt}(\operatorname{tr}_{n}(x^{7}))}{14}\). Hence, the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}_{1}(a)=8\) is \(7s_{1}=\frac{2^{n}-1}{3}-\frac{\operatorname{wt}(\operatorname{tr}_{n}(x^{7}))}{2}\). The number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}(a)=16\) is \(\frac{2^{n}-1}{3}-\frac{\operatorname{wt}(\operatorname{tr}_{n}(x^{7}))}{2}\), and the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}(a)=4\) is \(\frac{2}{3}(2^{n}-1)+\frac{\operatorname{wt}(\operatorname{tr}_{n}(x^{7}))}{2}\).
**Case 2:**\(n\) is odd. In this case, we have \(\operatorname{N}(a)\in\{2^{1},2^{3}\}\); it suffices to count the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}(a)=8\). For odd \(n\), we have \(3\mid(2^{n}-2)\) and \(\gcd(3,2^{n}-1)=1\). So \(a\mapsto a^{3}\) is a bijection in \(\mathbb{F}_{2^{n}}^{*}\). By (7), we have
\[b^{2}+b=(a^{7})^{\frac{2^{n}-2}{3}}. \tag{12}\]
When \(b\not\in\{0,1\}\), equation (12) has two distinct solutions if and only if \(\operatorname{tr}_{n}((a^{7})^{\frac{2^{n}-2}{3}})=0\). Hence, the number of solutions of \(Q(a,b)=0\) is at most 4, i.e., \(\operatorname{N}_{1}(a)\leq 4\).
Note that \(Q(a,b)\) is a 2-polynomial (in variable \(b\)) of degree 8 and \(b=0,1\) are two roots of \(Q(a,b)=0\). So we have \(\operatorname{N}_{1}(a)\in\{2,2^{2}\}\). By Lemma 4, for odd \(n\), the number of distinct \(b^{2}+b\) satisfying \(Q(a,b)+1=0\) is 0 or 2. So the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(Q(a,b)+1=0\) is \(0,2,4\), that is, \(\operatorname{N}_{2}(a)\in\{0,2,4\}\). Thus \(\operatorname{N}(a)=\operatorname{N}_{1}(a)+\operatorname{N}_{2}(a)=8\) if and only if \(\operatorname{N}_{1}(a)=4\).
**Subcase 2.1**: \(3\nmid n\) and \(n\) is odd. In this case, we have \(\gcd(2^{n}-1,7)=1\), so the mapping \(a\mapsto a^{7}\) is a bijection from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2^{n}}\). Since \(\gcd(2^{n}-1,\frac{2^{n}-2}{3})=1\), the mapping \(a\mapsto a^{\frac{2^{n}-2}{3}}\) is also a bijection from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2^{n}}\). Note that \(\operatorname{N}_{1}(a)=4\) if and only if \(\operatorname{tr}_{n}((a^{7})^{\frac{2^{n}-2}{3}})=0\). As such, the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}_{1}(a)=4\) equals the size of the set \(\{x\in\mathbb{F}_{2^{n}}^{*}\mid\operatorname{tr}_{n}(x)=0\}\). Note that \(\operatorname{tr}_{n}(x)\) is an affine function, so the number of \(x\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{tr}_{n}(x)=0\) is \(2^{n-1}-1\). Thus the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}_{1}(a)=4\) equals \(2^{n-1}-1\).
**Subcase 2.2**: \(3\mid n\) and \(n\) is odd. In this case, we have \(\gcd(2^{n}-1,2^{n}-2)=1\), so the mapping \(a\mapsto a^{\frac{2^{n}-2}{3}}\) is a bijection from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2^{n}}\). Since \(\gcd(2^{n}-1,7)=7\), the mapping \(a\mapsto a^{7}\) is a 7-to-1 mapping from \(\mathbb{F}_{2^{n}}^{*}\) to \(\mathbb{F}_{2^{n}}^{*}\). Hence, the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{tr}_{n}((a^{7})^{\frac{2^{n}-2}{3}})=\operatorname{tr}_{n}((a^{\frac{2^{n}-2}{3}})^{7})=0\) equals the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{tr}_{n}(a^{7})=0\), which is \(2^{n}-1-\operatorname{wt}(\operatorname{tr}_{n}(x^{7}))\). Therefore, the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) with \(\operatorname{N}(a)=8\) is \(2^{n}-1-\operatorname{wt}(\operatorname{tr}_{n}(x^{7}))\), and the number with \(\operatorname{N}(a)=2\) is \(\operatorname{wt}(\operatorname{tr}_{n}(x^{7}))\), as stated in Table 3.
By Lemma 1 and Theorem 6, the following corollary is immediate.
**Corollary 1**.: _Let \(f=\mathrm{tr}_{n}(x^{7})\). Denote by \(\mathrm{nl}(D_{a}f)\) the nonlinearity of \(D_{a}f\). For any \(a\in\mathbb{F}_{2^{n}}^{*}\), the distribution of \(\mathrm{nl}(D_{a}f)\) is as follows:_
**Theorem 7**.: _(The Weil bound, for example, Theorem 5.38 in [15]) Let \(f\in\mathbb{F}_{q}[x]\) be of degree \(d\geq 1\), where \(\gcd(d,q)=1\). Let \(\mathcal{X}\) be a nontrivial additive character of \(\mathbb{F}_{q}\). Then_
\[\left|\sum_{x\in\mathbb{F}_{q}}\mathcal{X}(f(x))\right|\leq(d-1)q^{\frac{1}{2}}.\]
**Lemma 5**.: _Let \(d\geq 1\) be an odd number. We have \(\mathrm{wt}(\mathrm{tr}_{n}(x^{d}))\geq 2^{n-1}-\frac{d-1}{2}\cdot 2^{\frac{n}{2}}\)._
Proof.: Let \(\mathcal{X}(x)=e^{\frac{2\pi i\mathrm{tr}_{n}(x)}{p}}=(-1)^{\mathrm{tr}_{n}(x)}\) for \(p=2\). Applying the Weil bound, i.e., Theorem 7, we have
\[\left|\sum_{x\in\mathbb{F}_{2^{n}}}\mathcal{X}(x^{d})\right| = \left|\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{tr}_{n}(x^{d})}\right|\] \[\leq (d-1)2^{\frac{n}{2}}.\]
Since \(\mathrm{wt}(\mathrm{tr}_{n}(x^{d}))=2^{n-1}-\frac{1}{2}\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{tr}_{n}(x^{d})}\geq 2^{n-1}-\frac{1}{2}\left|\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{tr}_{n}(x^{d})}\right|\), we have
\[\mathrm{wt}(\mathrm{tr}_{n}(x^{d}))\geq 2^{n-1}-\frac{d-1}{2}\cdot 2^{\frac{n}{2}}.\]
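For a quick numerical illustration of Lemma 5 (not needed for the proof), one can tabulate \(\operatorname{wt}(\operatorname{tr}_{n}(x^{7}))\) for a small \(n\) and compare it with the bound; the sketch below does this for \(n=8\), realizing \(\mathbb{F}_{2^{8}}\) with the irreducible polynomial \(x^{8}+x^{4}+x^{3}+x+1\).

```python
N, MOD = 8, 0b100011011            # GF(2^8) with x^8 + x^4 + x^3 + x + 1

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def tr(x):
    s, t = 0, x
    for _ in range(N):
        s ^= t
        t = gf_mul(t, t)
    return s

d = 7
weight = sum(tr(gf_pow(x, d)) for x in range(1 << N))
bound = 2 ** (N - 1) - (d - 1) / 2 * 2 ** (N / 2)
print(weight, bound)               # the weight should be at least the bound
```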
Now we are ready to prove Theorem 1, which gives a lower bound on the second-order nonlinearity of \(\mathrm{tr}_{n}(x^{7})\).
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline \multicolumn{1}{c|}{n} & \multicolumn{1}{c|}{\(\mathrm{nl}(D_{a}f)\)} & \multicolumn{1}{c}{The number of \(a\in\mathbb{F}_{2^{n}}^{*}\)} \\ \hline \multirow{5}{*}{even \(n\)} & \multirow{2}{*}{\(3\nmid n\)} & \(2^{n-1}-2^{\frac{n}{2}}\) & \(\frac{11}{3}\cdot 2^{n-2}-\frac{2}{3}\) \\ \cline{3-4} & & \(2^{n-1}-2^{\frac{n+2}{2}}\) & \(\frac{1}{3}\cdot 2^{n-2}-\frac{1}{3}\) \\ \cline{2-4} & & \(2^{n-1}-2^{\frac{n}{2}}\) & \(\frac{2}{3}(2^{n}-1)+\frac{1}{2}\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \cline{2-4} & & \(2^{n-1}-2^{\frac{n+2}{2}}\) & \(\frac{1}{3}(2^{n}-1)-\frac{1}{2}\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \hline \multirow{5}{*}{odd \(n\)} & \multirow{2}{*}{\(3\nmid n\)} & \(2^{n-1}-2^{\frac{n-1}{2}}\) & \(2^{n-1}\) \\ \cline{3-4} & & \(2^{n-1}-2^{\frac{n+1}{2}}\) & \(2^{n-1}-1\) \\ \cline{1-1} \cline{2-4} & & \(2^{n-1}-2^{\frac{n-1}{2}}\) & \(\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \cline{1-1} \cline{2-4} & & \(2^{n-1}-2^{\frac{n+1}{2}}\) & \(2^{n}-1-\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: The distribution of \(\mathrm{nl}(D_{a}f)\)
Proof.: (of Theorem 1) By Proposition 1 and Corollary 1, when \(n\) is even and \(3\nmid n\), we have
\[\mathrm{nl}_{2}(f) \geq 2^{n-1}-\frac{1}{2}\sqrt{2^{2n}-2\sum_{a\in\mathbb{F}_{2^{n}}} \mathrm{nl}(D_{a}f)}\] \[= 2^{n-1}-\frac{1}{2}\sqrt{2^{2n}-2((2^{n-1}-2^{\frac{n}{2}})( \frac{11}{3}\cdot 2^{n-2}-\frac{2}{3})+(2^{n-1}-2^{\frac{n+2}{2}})(\frac{1}{3} \cdot 2^{n-2}-\frac{1}{3}))}\] \[= 2^{n-1}-\frac{1}{2}\sqrt{\frac{13}{3}\cdot 2^{\frac{3}{2}n-1}+2^ {n}-\frac{1}{3}\cdot 2^{\frac{n}{2}+3}}\] \[= 2^{n-1}-2^{\frac{3n}{4}-\frac{3}{2}+\frac{1}{2}\log_{2}13-\frac {1}{2}\log_{2}3}-O(2^{\frac{n}{4}}).\]
Similarly, when \(n\) is even and \(3\mid n\), we have
\[\mathrm{nl}_{2}(f) \geq 2^{n-1}-\frac{1}{2}\sqrt{\frac{1}{3}\cdot 2^{\frac{3}{2}n+3}+2^ {n}-\frac{1}{3}\cdot 2^{\frac{n}{2}+3}-\mathrm{wt}(\mathrm{tr}_{n}(x^{7})) \cdot 2^{\frac{n}{2}}}\] \[\geq 2^{n-1}-\frac{1}{2}\sqrt{\frac{13}{3}\cdot 2^{\frac{3}{2}n-1}+2^ {n+2}-\frac{1}{3}\cdot 2^{\frac{n}{2}+3}}\] \[= 2^{n-1}-2^{\frac{3n}{4}-\frac{3}{2}+\frac{1}{2}\log_{2}13-\frac {1}{2}\log_{2}3}-O(2^{\frac{n}{4}}),\]
where the second step is because \(\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\geq 2^{n-1}-3\cdot 2^{\frac{n}{2}}\) by Lemma 5.
By Proposition 1 and Corollary 1, for odd \(n\) and \(3\nmid n\) we have
\[\mathrm{nl}_{2}(f) \geq 2^{n-1}-\frac{1}{2}\sqrt{2^{2n}-2\sum_{a\in\mathbb{F}_{2^{n}}} \mathrm{nl}(D_{a}f)}\] \[= 2^{n-1}-\frac{1}{2}\sqrt{2^{2n}-2((2^{n-1}-2^{\frac{n-1}{2}})(2^ {n-1})+(2^{n-1}-2^{\frac{n+1}{2}})(2^{n-1}-1))}\] \[= 2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n+1}{2}}+2^{\frac{3n-1}{2}}+2 ^{n}-2^{\frac{n+3}{2}}}\] \[\geq 2^{n-1}-2^{\frac{3n-5}{4}+\frac{1}{2}\log_{2}3}-O(2^{\frac{n}{4}}).\]
Similarly, when \(n\) is odd and \(3\mid n\), we have
\[\mathrm{nl}_{2}(f) \geq 2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n+3}{2}}+2^{n}-2^{\frac{n+3}{ 2}}-\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\cdot 2^{\frac{n+1}{2}}}\] \[\geq 2^{n-1}-\frac{1}{2}\sqrt{3\cdot 2^{\frac{3n-1}{2}}+2^{n}+3 \cdot 2^{n+\frac{1}{2}}-2^{\frac{n+3}{2}}}\] \[\geq 2^{n-1}-2^{\frac{3n-5}{4}+\frac{1}{2}\log_{2}3}-O(2^{\frac{n}{4}}),\]
where the second step is because \(\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\geq 2^{n-1}-3\cdot 2^{\frac{n}{2}}\) by Lemma 5.
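The distribution in Theorem 6 (Table 3) can also be verified directly for small \(n\). The sketch below does this by brute force for \(n=5\) (odd, \(3\nmid n\)), realizing \(\mathbb{F}_{2^{5}}\) with the irreducible polynomial \(x^{5}+x^{2}+1\); it is only a numerical sanity check and plays no role in the proof.

```python
from collections import Counter

N_BITS, MOD = 5, 0b100101          # GF(2^5) with x^5 + x^2 + 1
SIZE = 1 << N_BITS

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N_BITS):
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def tr(x):
    s, t = 0, x
    for _ in range(N_BITS):
        s ^= t
        t = gf_mul(t, t)
    return s

f = [tr(gf_pow(x, 7)) for x in range(SIZE)]      # truth table of tr_5(x^7)

dims = Counter()
for a in range(1, SIZE):
    d_a = [f[x] ^ f[x ^ a] for x in range(SIZE)]                  # D_a f
    kernel_size = sum(1 for b in range(SIZE)
                      if len({d_a[x] ^ d_a[x ^ b] for x in range(SIZE)}) == 1)
    dims[kernel_size.bit_length() - 1] += 1
# Table 3 predicts dimension 1 for 2^{n-1} = 16 values of a and 3 for 2^{n-1}-1 = 15
print(dims)
```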
### Functions of the type \(\mathrm{tr}_{n}(x^{2^{r}+3})\) for \(n=2r\)
In [20], Yan and Tang proved lower bounds on the second-order nonlinearity of the functions \(\mathrm{tr}_{n}(x^{2^{r+1}+3})\), where \(n=2r\). This class of functions was first studied by Cusick and Dobbertin [1]. We study a similar, but different, class of functions, that is, \(\mathrm{tr}_{n}(x^{2^{r}+3})\) for \(n=2r\). In terms of techniques, our proof is similar to [20], and the lower bound is the same as that in [20]. Our main contribution is to _identify_ this class of functions for the first time.
Let \(f=\mathrm{tr}_{n}(x^{2^{r}+3})\). By Proposition 1, we can estimate the second-order nonlinearity \(\mathrm{nl}_{2}(f)\) by calculating the nonlinearity of the derivatives of \(f\), denoted by \(D_{a}f\). We have
\[D_{a}f(x) = \mathrm{tr}_{n}(x^{2^{r}+3}+(x+a)^{2^{r}+3})\] \[= \mathrm{tr}_{n}(a^{2^{r}}x^{3}+a^{2}x^{2^{r}+1}+ax^{2^{r}+2})+ \mathrm{tr}_{n}(a^{3}x^{2^{r}}+a^{2^{r}+1}x^{2}+a^{2^{r}+2}x+a^{2^{r}+3}),\]
where \(\mathrm{tr}_{n}(a^{3}x^{2^{r}}+a^{2^{r}+1}x^{2}+a^{2^{r}+2}x+a^{2^{r}+3})\) is an affine function.
**Theorem 8**.: _Let \(\mathcal{E}_{D_{a}f}\) be the linear kernel of \(D_{a}f(x)\). For odd \(r\), we have_
\[\dim(\mathcal{E}_{D_{a}f})=\begin{cases}r+1,&a\in\mathbb{F}_{2^{r}}^{*},\\ 2,&a\in\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{r}}.\end{cases}\]
_Let \(G=\{g^{3s}\mid 0\leq s\leq\frac{2^{r}-4}{3}\}\) and \(g\) is a primitive element of \(\mathbb{F}_{2^{r}}\). For even \(r\), we have_
\[\dim(\mathcal{E}_{D_{a}f})=\begin{cases}r+2,&a\in G,\\ r,&a\in\mathbb{F}_{2^{r}}^{*}\setminus G,\\ 2,&a\in\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{r}}.\end{cases}\]
Proof.: Let \(g_{a}(x)=\mathrm{tr}_{n}(a^{2^{r}}x^{3}+a^{2}x^{2^{r}+1}+ax^{2^{r}+2})\). By the definition of the linear kernel, we have
\[\mathcal{E}_{D_{a}f}=\mathcal{E}_{g_{a}}=\{x\in\mathbb{F}_{2^{n}}\mid B(x,y)= g_{a}(0)+g_{a}(x)+g_{a}(y)+g_{a}(x+y)=0,\text{ for all }y\in\mathbb{F}_{2^{n}}\}.\]
Using the properties of the trace function and the fact that \(n=2r\), we have
\[0 = B(x,y) \tag{13}\] \[= g_{a}(0)+g_{a}(x)+g_{a}(y)+g_{a}(x+y)\] \[= \mathrm{tr}_{n}(a^{2^{r}}(x^{2}y+xy^{2})+a(x^{2^{r}}y^{2}+x^{2}y^ {2^{r}})+a^{2}(x^{2^{r}}y+xy^{2^{r}}))\] \[= \mathrm{tr}_{n}((a^{2^{r}}x^{2}+a^{2}x^{2^{r}})y+(a^{2^{r}}x+ax^{ 2^{r}})y^{2}+(ax^{2}+a^{2}x)y^{2^{r}})\] \[= \mathrm{tr}_{n}((a^{2^{r}}x^{2}+a^{2}x^{2^{r}}+a^{2^{r-1}}x^{2^{ n-1}}+a^{2^{n-1}}x^{2^{r-1}}+a^{2^{r}}x^{2^{r+1}}+a^{2^{r+1}}x^{2^{r}})y).\]
Equation (13) holds for all \(y\in\mathbb{F}_{2^{n}}\) if and only if the coefficient of \(y\) is zero, that is,
\[a^{2^{r}}x^{2}+a^{2}x^{2^{r}}+a^{2^{r-1}}x^{2^{n-1}}+a^{2^{n-1}}x^{2^{r-1}}+a^ {2^{r}}x^{2^{r+1}}+a^{2^{r+1}}x^{2^{r}}=0. \tag{14}\]
Let \(\begin{cases}y=x^{2^{r}}\\ b=a^{2^{r}}\end{cases}\). Thus \(\begin{cases}x=y^{2^{r}}\\ a=b^{2^{r}}\end{cases}\). Equation (14) becomes
\[bx^{2}+a^{2}y+b^{\frac{1}{2}}x^{2^{n-1}}+a^{2^{n-1}}y^{\frac{1}{2}}+by^{2}+b^ {2}y=0.\]
Squaring both sides of the above equation, we have
\[0 = b^{2}x^{4}+a^{4}y^{2}+bx+ay+b^{2}y^{4}+b^{4}y^{2} \tag{15}\] \[= b^{2}(x+y)^{4}+y^{2}(a^{4}+b^{4})+bx+ay\] \[= a^{2^{r+1}}(x^{4}+x^{2^{r+2}})+x^{2^{r+1}}(a^{4}+a^{2^{r+2}})+a^ {2^{r}}x+ax^{2^{r}}. \tag{16}\]
Thus \(\mathcal{E}_{D_{a}f}\) is the set of \(x\in\mathbb{F}_{2^{n}}\) such that (15) is satisfied. We consider the following cases.
**Case 1**: \(a\notin\mathbb{F}_{2^{r}}\), i.e., \(a\neq b\).
**Subcase 1.1**: \(x\in\mathbb{F}_{2^{r}}\), i.e. \(x=y\). In this case, (15) is equivalent to
\[0 = (a^{4}+b^{4})x^{2}+(a+b)x \tag{17}\] \[= (a+b)x((a+b)^{3}x+1).\]
The solutions to (17) are \(x\in\{0,(a+b)^{2^{n}-4}\}\).
**Subcase 1.2**: \(x\notin\mathbb{F}_{2^{r}}\). Since \((a^{2^{r}}x+ax^{2^{r}})^{2^{r}}=a^{2^{r}}x+ax^{2^{r}}\), we have \(a^{2^{r}}x+ax^{2^{r}}\in\mathbb{F}_{2^{r}}\). From (16), we have
\[a^{2^{r+1}}(x^{4}+x^{2^{r+2}})+x^{2^{r+1}}(a^{4}+a^{2^{r+2}})=a^{2^{r}}x+ax^{2^ {r}},\]
which implies that \(a^{2^{r+1}}(x^{4}+x^{2^{r+2}})+x^{2^{r+1}}(a^{4}+a^{2^{r+2}})\in\mathbb{F}_{2^{r}}\). Since any element \(\alpha\in\mathbb{F}_{2^{r}}\) satisfies equation \(\alpha^{2^{r}}=\alpha\), we have
\[0 = \left(a^{2^{r+1}}(x^{4}+x^{2^{r+2}})+x^{2^{r+1}}(a^{4}+a^{2^{r+2}}) \right)^{2^{r}}+\left(a^{2^{r+1}}(x^{4}+x^{2^{r+2}})+x^{2^{r+1}}(a^{4}+a^{2^{r+ 2}})\right) \tag{18}\] \[= (a^{2}+a^{2^{r+1}})(x^{2}+x^{2^{r+1}})^{2}+(x^{2}+x^{2^{r+1}})(a^{ 2}+a^{2^{r+1}})^{2}\] \[= (a^{2}+a^{2^{r+1}})(x^{2}+x^{2^{r+1}})(a^{2}+a^{2^{r+1}}+x^{2}+x^{ 2^{r+1}}).\]
Since \(a,x\notin\mathbb{F}_{2^{r}}\), we have \(a^{2}+a^{2^{r+1}}\neq 0\) and \(x^{2}+x^{2^{r+1}}\neq 0\). Thus, by (18), we have \(a^{2}+a^{2^{r+1}}+x^{2}+x^{2^{r+1}}=0\), that is, \((x+a)^{2}=(x+a)^{2^{r+1}}\). As such, we have \(x+a=(x+a)^{2^{r}}\), that is, \(y=x+a+b\). So we claim that if \(x\in\mathcal{E}_{D_{a}f}\), then we must have \(y=x+a+b\). On the other hand, let us solve (15) assuming \(y=x+a+b\) is satisfied, which, in fact, must be satisfied, as we have shown. Plugging \(y=x+a+b\) into equation (15), we have
\[0 = b^{2}(a+b)^{4}+(x+a+b)^{2}(a+b)^{4}+bx+a(x+a+b)\] \[= (a+b)^{4}x^{2}+a^{2}(a+b)^{4}+(a+b)x+a(a+b)\] \[= (a+b)^{4}(x+a)^{2}+(a+b)(x+a)\] \[= (a+b)(x+a)((a+b)^{3}(x+a)+1),\]
which implies that \(x=a\) or \(x=(a+b)^{2^{n}-4}+a\).
In Case 1 where \(a\notin\mathbb{F}_{2^{r}}\), we conclude that \(\mathcal{E}_{D_{a}f}=\{0,(a+b)^{2^{n}-4},a,(a+b)^{2^{n}-4}+a\}\) and \(\dim(\mathcal{E}_{D_{a}f})=2\).
**Case 2**: \(a\in\mathbb{F}_{2^{r}}^{*}\). In this case, \(b=a\), equation (15) becomes
\[0 = a^{2}(x+y)^{4}+a(x+y) \tag{19}\] \[= a(x+y)(a(x+y)^{3}+1),\]
which implies that \(y=x\) or \((x+y)^{3}=a^{2^{r}-2}\).
**Subcase 2.1**: \(y=x\), i.e., \(x^{2^{r}}=x\). In this case, \(x^{2^{r}}=x\) if and only if \(x\in\mathbb{F}_{2^{r}}\). Thus \(\mathbb{F}_{2^{r}}\subseteq\mathcal{E}_{D_{a}f}\).
**Subcase 2.2**: \((x+y)^{3}=a^{2^{r}-2}\). In this case, we consider the following two subcases according to the parity of \(r\).
* If \(r\) is odd, we have \(2^{r}-1\equiv 1\pmod{3}\) and \(\gcd(2^{r}-2,3)=3\). Thus \((x+y)^{3}=a^{2^{r}-2}\) implies \[x^{2^{r}}+x=a^{\frac{2^{r}-2}{3}}.\] (20) Since \(\mathbb{F}_{2^{n}}\) is a field extension of \(\mathbb{F}_{2^{r}}\) of degree \(2\), \(x^{2^{r}}+x\) is the trace function from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2^{r}}\), which is a \(2^{r}\)-to-\(1\) mapping. So the number of solutions to (20) is \(2^{r}\). Combining with Subcase 2.1, we conclude that when \(r\) is odd and \(a\in\mathbb{F}_{2^{r}}^{*}\), we have \(\dim(\mathcal{E}_{D_{a}f})=r+1\).
* If \(r\) is even, we have \(\gcd(2^{r}-1,3)=3\). Let \(G=\{g^{3s}\mid 0\leq s\leq\frac{2^{r}-4}{3}\}\) be the multiplicative group of order \(\frac{2^{r}-1}{3}\) and \(g\) is a primitive element of \(\mathbb{F}_{2^{r}}\). If \(a\notin G\), then \(a^{2^{r}-2}\) is not a cube, that is, \((x+y)^{3}=a^{2^{r}-2}=a^{-1}\) has no roots. Combining with Subcase 2.1, we deduce that \(\dim(\mathcal{E}_{g_{a}})=r\) when \(a\notin G\) and \(r\) is even. If \(a\in G\), we have \[x+y=x^{2^{r}}+x=g^{-s+\frac{2^{r}-1}{3}i},\text{ for }i=0,1,2,\] (21) for some \(0\leq s\leq\frac{2^{r}-4}{3}\). Similarly, we can prove, for each \(i=0,1,2\), equation (21) has exactly \(2^{r}\) solutions. Thus we have \(\dim(\mathcal{E}_{g_{a}})=r+2\) when \(a\in G\) and \(r\) is even.
Summarizing all the cases above, we complete the proof.
Combining Proposition 1 and Theorem 8, we can prove the lower bound on the second-order nonlinearity of \(\mathrm{tr}_{n}(x^{2^{r}+3})\) for \(n=2r\).
Proof.: (of Theorem 2) When \(r\) is odd, by Lemma 1 and Theorem 8, we have
\[\mathrm{nl}(D_{a}f(x))=\begin{cases}2^{n-1}-2^{\frac{n+r-1}{2}},&a\in\mathbb{F }_{2^{r}}^{*},\\ 2^{n-1}-2^{\frac{n}{2}},&a\in\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{r}}. \end{cases} \tag{22}\]
By Proposition 1 and (22), we have
\[\mathrm{nl}_{2}(f) \geq 2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n}{2}+1}+2^{\frac{5n}{4}+\frac {1}{2}}-2^{n}-2^{\frac{3n}{4}+\frac{1}{2}}}\] \[= 2^{n-1}-2^{\frac{3n}{4}-\frac{1}{2}}-O(2^{\frac{n}{2}}).\]
When \(r\) is even, similarly, we can prove
\[\mathrm{nl}(D_{a}f(x))=\begin{cases}2^{n-1}-2^{\frac{n+r}{2}},&a\in G,\\ 2^{n-1}-2^{\frac{n+r}{2}-1},&a\in\mathbb{F}_{2^{r}}^{*}\setminus G,\\ 2^{n-1}-2^{\frac{n}{2}},&a\in\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{r}}. \end{cases} \tag{23}\]
where \(G=\{g^{3s}\mid 0\leq s\leq\frac{2^{r}-4}{3}\}\) is the multiplicative group of order \(\frac{2^{r}-1}{3}\) and \(g\) is a primitive element of \(\mathbb{F}_{2^{r}}\). By Proposition 1 and (23), we have
\[\mathrm{nl}_{2}(f) \geq 2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3}{2}n+1}+\frac{1}{3}\cdot 2^{\frac{5}{4}n+2}-2^{n}-\frac{1}{3}\cdot 2^{\frac{3}{4}n+2}}\] \[= 2^{n-1}-2^{\frac{3n}{4}-\frac{1}{2}}-O(2^{\frac{n}{2}}).\]
## 4 Third-order nonlinearity
The following proposition is proved by applying Proposition 1 twice.
**Proposition 3**.: _[_1_]_ _Let \(f\) be any \(n\)-variable function and \(r\) a positive integer smaller than \(n\). We have_
\[\mathrm{nl}_{r}(f)\geq 2^{n-1}-\frac{1}{2}\sqrt{\sum_{a\in\mathbb{F}_{2^{n}}^{ *}}\sqrt{2^{2n}-2\sum_{b\in\mathbb{F}_{2^{n}}}\mathrm{nl}_{r-2}(D_{a}D_{b}f)}}.\]
By the above proposition, our goal is to estimate the nonlinearities of the second-order derivatives of \(\mathrm{tr}_{n}(x^{15})\). Observe that
\[\sum_{a\in\mathbb{F}_{2^{n}}}\sqrt{2^{2n}-2\sum_{b\in\mathbb{F}_ {2^{n}}}\mathrm{nl}_{r-2}(D_{a}D_{b}f)} = \sum_{a\in\mathbb{F}_{2^{n}}}\sqrt{2^{2n}-2\sum_{b\in\mathbb{F}_ {2^{n}}}\mathrm{nl}_{r-2}(D_{a}D_{ab}f)}\] \[= \sum_{a\in\mathbb{F}_{2^{n}}}\sqrt{2^{2n}-2\sum_{b\in\mathbb{F}_ {2^{n}}}\mathrm{nl}_{r-2}(D_{ab}D_{a}f)}.\]
Thus it is equivalent to estimate the first-order nonlinearity of \(D_{ab}D_{a}f\) for all \(a,b\in\mathbb{F}_{2^{n}}\).
**Lemma 6**.: _Let \(f=\mathrm{tr}_{n}(x^{15})\). For any \(a\in\mathbb{F}_{2^{n}}\) and \(b\in\mathbb{F}_{2^{n}}\), element \(x\in\mathbb{F}_{2^{n}}\) is in the linear kernel of \(D_{ab}D_{a}f\) if and only if \(P(x,a,b)=0\), where_
\[P(x,a,b)=Q(x,a,b)(Q(x,a,b)+1) \tag{24}\]
_and_
\[Q(x,a,b)=(b^{2}+b)^{-4}R(x,a,b)(R(x,a,b)+1) \tag{25}\]
_and_
\[R(x,a,b)=a^{30}(b^{2}+b)^{6}\left((x^{2}+x)^{2}+(x^{2}+x)(b^{2}+b)\right)^{4}+a ^{15}(b^{2}+b)^{5}\left((x^{2}+x)^{2}+(x^{2}+x)(b^{2}+b)\right). \tag{26}\]
Proof.: For any \(a,b\in\mathbb{F}_{2^{n}}\), we have
\[(D_{a}f)(ax) = \mathrm{tr}_{n}((ax)^{15})+\mathrm{tr}_{n}((ax+a)^{15})\] \[= \mathrm{tr}_{n}((ax)^{15}+(ax+a)^{15})\] \[= \mathrm{tr}_{n}(a^{15}(\sum_{i=0}^{14}x^{i})),\]
and
\[D_{b}((D_{a}f)(ax))\] \[= (D_{ab}D_{a}f)(ax)\] \[= \mathrm{tr}_{n}(a^{15}((b^{2}+b)x^{12}+(b^{4}+b)x^{10}+(b^{4}+b^{ 2})x^{9}+(b^{8}+b)x^{6}+(b^{8}+b^{2})x^{5}+(b^{8}+b^{4})x^{3}))+l(x),\]
where \(l(x)\) is an affine function. By Proposition 2, we have \(\mathcal{E}_{D_{ab}D_{a}f(ax)}=\mathcal{E}_{D_{ab}D_{a}f}\) for any \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) and \(a\in\mathbb{F}_{2^{n}}^{*}\). (When \(a=0\) or \(b\in\{0,1\}\), \(D_{ab}D_{a}f\) becomes \(0\), so the conclusion holds obviously.)
For convenience, let \(g(x)=D_{ab}D_{a}f(ax)\). We have \(\mathcal{E}_{D_{ab}D_{a}f(ax)}=\{x\in\mathbb{F}_{2}^{n}\mid B(x,y)=g(0)+g(x)+ g(y)+g(x+y)=0\) for all \(y\in\mathbb{F}_{2}^{n}\}\) by definition. By somewhat tedious computation, we have
\[B(x,y) = g(0)+g(x)+g(y)+g(x+y)\] \[= \mathrm{tr}_{n}(a^{15}((b^{2}+b)x^{4}+(b^{4}+b)x^{2}+(b^{4}+b^{2 })x)y^{8})\] \[+\mathrm{tr}_{n}(a^{15}((b^{2}+b)x^{8}+(b^{8}+b)x^{2}+(b^{8}+b^{2 })x)y^{4})\] \[+\mathrm{tr}_{n}(a^{15}((b^{4}+b)x^{8}+(b^{8}+b)x^{4}+(b^{8}+b^{4 })x)y^{2})\] \[+\mathrm{tr}_{n}(a^{15}((b^{4}+b^{2})x^{8}+(b^{8}+b^{2})x^{4}+(b^ {8}+b^{4})x^{2})y).\]
Using the properties of the trace function, we have
\[B(x,y) = \mathrm{tr}_{n}(((a^{15}((b^{2}+b)x^{4}+(b^{4}+b)x^{2}+(b^{4}+b^{ 2})x))^{2^{-3}}\] \[+(a^{15}((b^{2}+b)x^{8}+(b^{8}+b)x^{2}+(b^{8}+b^{2})x))^{2^{-2}}\] \[+(a^{15}((b^{4}+b)x^{8}+(b^{8}+b)x^{4}+(b^{8}+b^{4})x))^{2^{-1}}\] \[+a^{15}((b^{4}+b^{2})x^{8}+(b^{8}+b^{2})x^{4}+(b^{8}+b^{4})x^{2})) y).\]
It is clear that \(B(x,y)=0\) for all \(y\in\mathbb{F}_{2^{n}}\) if and only if the coefficient of \(y\) is zero, that is,
\[0 = (a^{15}((b^{2}+b)x^{4}+(b^{4}+b)x^{2}+(b^{4}+b^{2})x))^{2^{-3}}\] \[+ (a^{15}((b^{2}+b)x^{8}+(b^{8}+b)x^{2}+(b^{8}+b^{2})x))^{2^{-2}}\] \[+ (a^{15}((b^{4}+b)x^{8}+(b^{8}+b)x^{4}+(b^{8}+b^{4})x))^{2^{-1}}\] \[+ a^{15}((b^{4}+b^{2})x^{8}+(b^{8}+b^{2})x^{4}+(b^{8}+b^{4})x^{2}).\]
Raising both sides of the above equation to the 8th power, we get \(P(x,a,b)=0\), as desired.
Let \(\mathrm{N}_{P}(a,b)\) denote the number of \(x\in\mathbb{F}_{2^{n}}\) such that \(P(x,a,b)=0\) where \(a\neq 0\) and \(b\neq 0,1\); let \(\mathrm{N}_{Q}(a,b)\) denote the number of \(x\in\mathbb{F}_{2^{n}}\) such that \(Q(x,a,b)=0\); let \(\mathrm{N}_{Q+1}(a,b)\) denote the number of \(x\in\mathbb{F}_{2^{n}}\) such that \(Q(x,a,b)+1=0\); let \(\mathrm{N}_{R}(a,b)\) denote the number of \(x\in\mathbb{F}_{2^{n}}\) such that \(R(x,a,b)=0\), and \(\mathrm{N}_{R+1}(a,b)\) denote the number of \(x\in\mathbb{F}_{2^{n}}\) such that \(R(x,a,b)+1=0\).
**Lemma 7**.: _Let \(a\in\mathbb{F}_{2^{n}}^{*}\) and \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\), and let polynomials \(P(x,a,b)\), \(Q(x,a,b)\) and \(R(x,a,b)\) be defined as in Lemma 6. We have \(\mathrm{N}_{P}(a,b)=\mathrm{N}_{Q}(a,b)+\mathrm{N}_{Q+1}(a,b)\), and \(\mathrm{N}_{Q}(a,b)=\mathrm{N}_{R}(a,b)+\mathrm{N}_{R+1}(a,b)\). In addition,_
* \(\mathrm{N}_{Q}(a,b)\in\{2,2^{2},2^{3},2^{4},2^{5}\}\) _and_ \(\mathrm{N}_{Q+1}(a,b)\leq 32\)_._
* \(\mathrm{N}_{R}(a,b)\in\{2^{2},2^{3},2^{4}\}\) _and_ \(\mathrm{N}_{R+1}(a,b)\leq 16\)_._
* _When_ \(n\) _is even,_ \(\mathrm{N}_{P}(a,b)\in\{2^{2},2^{4},2^{6}\}\)_; when_ \(n\) _is odd,_ \(\mathrm{N}_{P}(a,b)\in\{2^{1},2^{3},2^{5}\}\)_._
Proof.: Notice that \(P(x,a,b)\) is a \(2\)-polynomial (in variable \(x\)) of degree \(64\); the number of roots for equation \(P(x,a,b)=0\) is at most \(64\). By Lemma 2, we know that \(\dim(\mathcal{E}_{D_{ab}D_{a}f})\) and \(n\) have the same parity. Therefore, when \(n\) is even, \(\mathrm{N}_{P}(a,b)\in\{2^{2},2^{4},2^{6}\}\); when \(n\) is odd, \(\mathrm{N}_{P}(a,b)\in\{2^{1},2^{3},2^{5}\}\). (Note that, when \(b\notin\{0,1\}\), \(R(x,a,b)=0\) has at least \(4\) roots \(0,1,b,b+1\), which implies that \(P(x,a,b)=0\) has at least \(4\) roots.)
Since \(P(x,a,b)=Q(x,a,b)(Q(x,a,b)+1)\), we have \(\mathrm{N}_{P}(a,b)=\mathrm{N}_{Q}(a,b)+\mathrm{N}_{Q+1}(a,b)\). Observe that \(Q(x,a,b)\) is a \(2\)-polynomial of degree \(32\), we have \(\mathrm{N}_{Q}(a,b)\in\{2,2^{2},2^{3},2^{4},2^{5}\}\).
From (25), we have \(\mathrm{N}_{Q}(a,b)=\mathrm{N}_{R}(a,b)+\mathrm{N}_{R+1}(a,b)\), when \(b\notin\{0,1\}\). Clearly, \(R(x,a,b)\) is a \(2\)-polynomial of degree \(16\) in variable \(x\). Note that \(x=0,1,b,b+1\) are the four different roots whenever \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\), then \(\mathrm{N}_{R}(a,b)\in\{2^{2},2^{3},2^{4}\}\). On the other hand, the degree of \(R(x,a,b)+1\) is \(16\), so \(\mathrm{N}_{R+1}(a,b)\leq 16\). Since \(Q(x,a,b)\) is a \(2\)-polynomial of degree \(32\), we have \(\mathrm{N}_{Q}(a,b)\in\{2^{2},2^{3},2^{4},2^{5}\}\). In the following, we lower bound the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\mathrm{N}_{P}(a,b)\leq 16\) for a fixed \(a\in\mathbb{F}_{2^{n}}^{*}\).
**Lemma 8**.: _Let \(n\) be even. Let \(a\in\mathbb{F}_{2^{n}}^{*}\) and \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\). If \(\mathrm{N}_{R}(a,b)=4\), then \(\mathrm{N}_{P}(a,b)\leq 16\)._
Proof.: Since \(\mathrm{N}_{R}(a,b)=4\), we have \(\mathrm{N}_{Q}(a,b)\leq 4+\deg(R+1)=20\). Note that \(\mathrm{N}_{Q}(a,b)=\mathrm{N}_{R}(a,b)+\mathrm{N}_{R+1}(a,b)\in\{2^{2},2^{3},2^{4},2^{5}\}\) by Lemma 7. So we have \(\mathrm{N}_{Q}(a,b)\leq 16\).
Note that \(\mathrm{N}_{P}(a,b)=\mathrm{N}_{Q}(a,b)+\mathrm{N}_{Q+1}(a,b)\leq 16+32=48\), and \(\mathrm{N}_{P}(a,b)\in\{2^{2},2^{4},2^{6}\}\) by Lemma 7. So we have \(\mathrm{N}_{P}(a,b)\leq 16\).
By Lemma 8, when \(n\) is even, to lower bound the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) where \(\dim(\mathcal{E}_{D_{ab}D_{a}f})\leq 4\), it suffices to lower bound the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) where \(\mathrm{N}_{R}(a,b)=4\).
**Theorem 9**.: _Let \(n\) be even. For any \(a\in\mathbb{F}_{2^{n}}^{*}\), there are at least \(\frac{1}{3}\cdot(2^{n+1}-2^{\frac{n}{2}+1}-4)\) elements \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\mathrm{N}_{R}(a,b)=4\)._
Proof.: When \(x\notin\{0,1,b,b+1\}\), we have \((x^{2}+x)^{2}+(x^{2}+x)(b^{2}+b)\neq 0\). Let \(y=(x^{2}+x)^{2}+(x^{2}+x)(b^{2}+b)\). Since \(R(x,a,b)=0\), we can deduce that
\[y^{3}=\frac{1}{a^{15}(b^{2}+b)}. \tag{27}\]
Let \(G=\{g^{3s}\mid 0\leq s\leq\frac{2^{n}-4}{3}\}\), where \(g\) is a primitive element. If \(b^{2}+b\not\in G\), it is clear that (27) has no solution, which implies that \(\mathrm{N}_{R}(a,b)=4\). Next, we prove there are at least \(\frac{1}{3}\cdot(2^{n+1}-2^{\frac{n}{2}+1}-4)\) elements \(b\) such that \(b^{2}+b\not\in G\), which will complete our proof.
We estimate the number of elements \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(b^{2}+b\notin G\). Let \(s_{1}\) denote the number of elements \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(b^{2}+b\notin G\); let \(s_{2}\) denote the number of elements \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(b^{2}+b\in G\).
Consider equation
\[x^{3}=b^{2}+b. \tag{28}\]
Observe that
* Equation (28) has (at least) a solution in variable \(x\) if and only if \(b^{2}+b\in G\).
* It is well known that (for example, see page 536 in [11]) equation \(b^{2}+b=c\) has two solutions if and only if \(\mathrm{tr}_{n}(c)=0\), otherwise the equation has no solution. Thus, equation (28) in variable \(b\) has a solution if and only if \(\mathrm{tr}_{n}(x^{3})=0\).
For any fixed \(b\), denote the set of solutions by \(X_{b}\), and let \(X=\cup_{b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}}X_{b}\). Consider the mapping \(\phi(x):X\to\mathbb{F}_{2^{n}}^{*}\), where \(\phi(x)=x^{3}\). Notice that \(\phi(x):\mathbb{F}_{2^{n}}^{*}\to\mathbb{F}_{2^{n}}^{*}\) is a 3-to-1 mapping on \(\mathbb{F}_{2^{n}}^{*}\). Furthermore, mapping \(\phi:X\to\mathbb{F}_{2^{n}}^{*}\) is also a 3-to-1 mapping on \(X\). Otherwise, there exist \(x_{1}\in X\), \(x_{2}\notin X\) such that \(x_{1}^{3}=x_{2}^{3}\) and \(\mathrm{tr}_{n}(x_{1}^{3})\neq\mathrm{tr}_{n}(x_{2}^{3})\), which is a contradiction.
Recall that there exists \(b\) such that \(x^{3}=b^{2}+b\) if and only if \(\mathrm{tr}_{n}(x^{3})=0\). Therefore, \(|X|=2^{n}-1-\mathrm{wt}(\mathrm{tr}_{n}(x^{3}))\). Combining with the fact that \(\phi(x):X\to\mathbb{F}_{2^{n}}^{*}\) is a 3-to-1 mapping, we have
\[|\{b^{2}+b:b^{2}+b\in G,\,b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\}|=\frac{1}{3}\cdot(2^{n}-1-\mathrm{wt}(\mathrm{tr}_{n}(x^{3}))).\]
Since \(b\mapsto b^{2}+b\) is 2-to-1 mapping on \(\mathbb{F}_{2^{n}}\setminus\{0,1\}\), we have \(s_{2}=2\cdot|\{b^{2}+b:b^{2}+b\in G\}|=\frac{1}{3}\cdot(2^{n+1}-2-2\mathrm{wt }(\mathrm{tr}_{n}(x^{3})))\). So \(s_{1}=2^{n}-2-s_{2}=\frac{1}{3}\cdot(2^{n}-4+2\mathrm{wt}(\mathrm{tr}_{n}(x^{ 3})))\). By Lemma 5, we have \(\mathrm{wt}(\mathrm{tr}_{n}(x^{3}))\geq 2^{n-1}-2^{\frac{n}{2}}\). So \(s_{1}\geq\frac{1}{3}\cdot(2^{n+1}-2^{\frac{n}{2}+1}-4)\).
Hence the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that (27) has no solution is at least \(\frac{1}{3}\cdot(2^{n+1}-2^{\frac{n}{2}+1}-4)\).
By Theorem 9, the following theorem is immediate.
**Theorem 10**.: _Let \(n\) be even. For any \(a\in\mathbb{F}_{2^{n}}^{*}\), there are at least \(\frac{1}{3}\cdot(2^{n+1}-2^{\frac{n}{2}+1}-4)\) elements \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\mathrm{N}_{P}(a,b)\leq 16\), that is, \(\dim(\mathcal{E}_{D_{ab}D_{a}f})\leq 4\)._
**Lemma 9**.: _Let \(n\) be odd. Let \(a\in\mathbb{F}_{2^{n}}^{*}\) and \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\). If \(\mathrm{N}_{R}(a,b)=4\), then \(\mathrm{N}_{P}(a,b)\leq 8\)._
Proof.: Let \(n\) be odd and let \(\mathrm{N}_{R}(a,b)=4\). We will prove the following step by step:
* \(\mathrm{N}_{R+1}(a,b)\leq 8\).
* \(\mathrm{N}_{Q}(a,b)\leq 8\).
* \(\mathrm{N}_{Q+1}(a,b)\leq 16\).
* \(\mathrm{N}_{P}(a,b)\leq 8\).
First, let us prove \(\mathrm{N}_{R+1}(a,b)\leq 8\). If \(R(x,a,b)+1=0\), we have
\[0 = a^{30}(b^{2}+b)^{6}\big{(}(x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\big{)} ^{4}+a^{15}(b^{2}+b)^{5}\big{(}(x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\big{)}+1 \tag{29}\] \[= a^{30}(b^{2}+b)^{6}y^{4}+a^{15}(b^{2}+b)^{5}y+1,\]
where \(y=(x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\). The latter equation can be converted to
\[y^{4}+\frac{y}{a^{15}(b^{2}+b)}+\frac{1}{a^{30}(b^{2}+b)^{6}}=0. \tag{31}\]
By Lemma 4, since \(n\) is odd, equation (31) in variable \(y\) has no solution or exactly two solutions. Furthermore, since \((x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\) is a polynomial of degree 4, the number of \(x\in\mathbb{F}_{2^{n}}\) such that \((x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)=c\), for any \(c\), is at most 4. So equation (29) in variable \(x\) has at most 8 solutions, that is, \(\mathrm{N}_{R+1}(a,b)\leq 8\).
Second, we prove \(\mathrm{N}_{Q}(a,b)\leq 8\). Note that \(\mathrm{N}_{Q}(a,b)=\mathrm{N}_{R}(a,b)+\mathrm{N}_{R+1}(a,b)\leq 12\). On the other hand, by Lemma 7, \(\mathrm{N}_{Q}(a,b)\in\{2,2^{2},2^{3},2^{4},2^{5}\}\). So we have \(\mathrm{N}_{Q}(a,b)\leq 8\).
Next, we prove \(\mathrm{N}_{Q+1}(a,b)\leq 16\). Suppose \(Q(x,a,b)+1=0\). We have \((b^{2}+b)^{4}Q(x,a,b)+(b^{2}+b)^{4}=0\), that is,
\[R(x,a,b)(R(x,a,b)+1)+(b^{2}+b)^{4}=0. \tag{32}\]
Viewing (32) as a quadratic equation in variable \(R\), we know that (32) has at most 2 solutions, denoted by \(c_{1},c_{2}\). We shall prove that, for each \(i=1,2\), \(R(x,a,b)=c_{i}\) has at most 8 solutions.
Let \(R(x,a,b)=c_{i}\) and let \(y=(x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\), where \(c_{i}\in\mathbb{F}_{2^{n}}^{*}\). Then we have
\[a^{30}(b^{2}+b)^{6}y^{4}+a^{15}(b^{2}+b)^{5}y+c_{i}=0,\]
that is
\[y^{4}+\frac{y}{a^{15}(b^{2}+b)}+\frac{c_{i}}{a^{30}(b^{2}+b)^{6}}=0. \tag{33}\]
By Lemma 4, equation (33), in variable \(y\), has no solution or exactly two solutions. Furthermore, since \((x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\) is a polynomial of degree 4, the number of \(x\in\mathbb{F}_{2^{n}}\) such that \((x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)=d\) is at most 4. In total, equation \(R(x,a,b)=c_{i}\) has at most 8 solutions. Thus, equation \(Q(x,a,b)+1=0\) has at most 16 solutions, that is, \(\mathrm{N}_{Q+1}(a,b)\leq 16\).
Finally, we prove \(\mathrm{N}_{P}(a,b)\leq 8\). Note that \(\mathrm{N}_{P}(a,b)=\mathrm{N}_{Q}(a,b)+\mathrm{N}_{Q+1}(a,b)\leq 24\). By Lemma 7, \(\mathrm{N}_{P}(a,b)\in\{2,8,32\}\). So we have \(\mathrm{N}_{P}(a,b)\leq 8\).
**Theorem 11**.: _Let \(n\) be odd. For any \(a\in\mathbb{F}_{2^{n}}^{*}\), there are at least \(3\cdot 2^{n-4}+10\) elements \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\mathrm{N}_{P}(a,b)\leq 8\), that is, \(\dim(\mathcal{E}_{D_{ab}D_{a}f})\leq 3\)._
Proof.: Since \(n\) is odd, we have \(3\mid(2^{n}-2)\). When \(x\notin\{0,1,b,b+1\}\), we have \((x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\neq 0\). If \(R(x,a,b)=0\) for \(a\in\mathbb{F}_{2^{n}}^{*}\) and \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\), we have
\[a^{15}(b^{2}+b)\big{(}(x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\big{)}^{3}=1,\]
which is
\[(x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)=a^{-5}(b^{2}+b)^{\frac{2^{n}-2}{3}}. \tag{34}\]
Multiplying both sides of (34) by \(\frac{1}{(b^{2}+b)^{2}}\), we get
\[\left(\frac{x^{2}+x}{b^{2}+b}\right)^{2}+\frac{x^{2}+x}{b^{2}+b}=a^{-5}(b^{2} +b)^{\frac{2^{n}-2}{3}-2}. \tag{35}\]
If \(\mathrm{tr}_{n}(a^{-5}(b^{2}+b)^{\frac{2^{n}-2}{3}-2})=1\), then \(t^{2}+t=a^{-5}(b^{2}+b)^{\frac{2^{n}-2}{3}-2}\) has no solution, where \(t=\frac{x^{2}+x}{b^{2}+b}\). So \(\mathrm{N}_{R}(a,b)=4\). Thus it suffices to lower bound the number of elements \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\mathrm{tr}_{n}(a^{-5}(b^{2}+b)^{\frac{2^{n}-2}{3}-2})=1\).
Let \(n=2r+1\). Note that \(\frac{2^{n}-2}{3}-2=\sum_{i=1}^{r-1}2^{n-2i}\). So we have
\[(b^{2}+b)^{\frac{2^{n}-2}{3}-2} = (b^{2}+b)^{\sum_{i=1}^{r-1}2^{n-2i}}\] \[= \prod_{i=1}^{r-1}(b^{2}+b)^{2^{n-2i}}\] \[= \prod_{i=1}^{r-1}(b^{2^{n-2i+1}}+b^{2^{n-2i}})\] \[= \sum_{d_{1},d_{2},\ldots,d_{r-1}\in\{0,1\}}b^{\sum_{i=1}^{r-1}2^{n -2i+d_{i}}}.\]
Expanding the trace function using its definition, we have
\[\mathrm{tr}_{n}(a^{-5}(b^{2}+b)^{\frac{2^{n}-2}{3}-2}) \tag{36}\] \[= \mathrm{tr}_{n}(a^{-5}\sum_{d_{1},\ldots,d_{r-1}\in\{0,1\}}b^{ \sum_{i=1}^{r-1}2^{n-2i+d_{i}}})\] \[= \sum_{d_{1},\ldots,d_{r-1}\in\{0,1\}}\mathrm{tr}_{n}(a^{-5}b^{ \sum_{i=1}^{r-1}2^{n-2i+d_{i}}})\] \[= \sum_{d_{1},\ldots,d_{r-1}\in\{0,1\}}\sum_{j=0}^{n-1}(a^{-5})^{2^ {j}}b^{\sum_{i=1}^{r-1}2^{n-2i+d_{i}+j}}\] \[= \sum_{d_{1},\ldots,d_{r-1}\in\{0,1\}}\sum_{j=0}^{n-1}(a^{-5\cdot 2 ^{j}})b^{\sum_{i=1}^{r-1}2^{n-2i+d_{i}+j}}.\]
For convenience, let
\[h(b)=\sum_{d_{1},\ldots,d_{r-1}\in\{0,1\}}\sum_{j=0}^{n-1}(a^{-5\cdot 2^{j}})b^{ \sum_{i=1}^{r-1}2^{n-2i+d_{i}+j}}. \tag{37}\]
Next, we will analyze the highest and lowest degree terms of the polynomial \(h(b)\) as they are closely related to the number of roots of \(h(b)=0\).
**Lemma 10**.: _The maximum degree of \(h(b)\) is \(\frac{5}{3}\cdot 2^{n-1}-\frac{32}{3}\) for \(n\geq 6\)._
**Lemma 11**.: _The minimum degree of the monomial of \(h(b)\) is \(\frac{1}{3}\cdot(2^{n-4}+1)\), which implies \(h(b)=b^{\frac{1}{3}\cdot(2^{n-4}+1)}p(b)\), where \(b\nmid p(b)\)._
The proofs of these two lemmas can be found in Appendix A and B.
By Lemma 10 and 11, the number of elements \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) for which \(h(b)=0\) is at most \(13\cdot 2^{n-4}-12\). Hence, the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(h(b)=1\) is at least \(3\cdot 2^{n-4}+10\).
Hence, the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\mathrm{N}_{P}(a,b)=8\) is at least \(3\cdot 2^{n-4}+10\), since the set of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\mathrm{tr}_{n}(a^{-5}(b^{2}+b)^{\frac{2^{n}-2}{3}-2})=1\) is exactly the set of roots of the equation \(h(b)=1\). That is, the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\dim(\mathcal{E}_{D_{ab}D_{a}f})\leq 3\) is at least \(3\cdot 2^{n-4}+10\), and the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\dim(\mathcal{E}_{D_{ab}D_{a}f})=5\) is at most \(13\cdot 2^{n-4}-12\).
By Theorem 10 and 11, the following corollary is immediate.
**Corollary 2**.: _Let \(f=\mathrm{tr}_{n}(x^{15})\), and let \(\mathrm{nl}(D_{ab}D_{a}f)\) denote the nonlinearity of \(D_{ab}D_{a}f\). For any \(a\in\mathbb{F}_{2^{n}}^{*}\), the distribution of \(\mathrm{nl}(D_{ab}D_{a}f)\) over \(b\in\mathbb{F}_{2^{n}}\) is given in Table 7._
Now we are ready to prove Theorem 3, which gives a lower bound on the third-order nonlinearity of \(\operatorname{tr}_{n}(x^{15})\).
Proof.: (of Theorem 3) By Proposition 3 and Corollary 2, for even \(n\), we have
\[\operatorname{nl}_{3}(f) \geq 2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{2^{2n}-2\left((2^{n-1}-2^{\frac{n}{2}+1})\cdot\frac{2^{n+1}-2^{\frac{n}{2}+1}-4}{3}+(2^{n-1}-2^{\frac{n}{2}+2})\cdot\frac{2^{n}+2^{\frac{n}{2}+1}-2}{3}\right)}+2^{n}}\] \[= 2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{\frac{1}{3}\cdot 2^{\frac{3}{2}n+4}+\frac{7}{3}\cdot 2^{n+1}-\frac{1}{3}\cdot 2^{\frac{n}{2}+5}}+2^{n}}\] \[\geq 2^{n-1}-2^{\frac{7n}{8}-\frac{1}{4}\log_{2}3}-O(2^{\frac{3n}{8}}).\]
By Proposition 3 and Corollary 2, when \(n\) is odd and \(n>6\), we have
\[\operatorname{nl}_{3}(f) \geq 2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{2^{2n}-2\left((2^{n-1}-2^{\frac{n+1}{2}})(3\cdot 2^{n-4}+10)+(2^{n-1}-2^{\frac{n+3}{2}})(13\cdot 2^{n-4}-12)\right)}+2^{n}}\] \[= 2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{\frac{29}{8}\cdot 2^{\frac{3n+1}{2}}+2^{n+1}-7\cdot 2^{\frac{n+5}{2}}}+2^{n}}\] \[\geq 2^{n-1}-2^{\frac{7n}{8}-\frac{13}{8}+\frac{1}{4}\log_{2}29}-O(2^{\frac{3n}{8}}).\]
### Comparison
We list the lower bound values on the third-order nonlinearity of \(\operatorname{tr}_{n}(x^{15})\) for \(7\leq n\leq 20\) in Table 8 and 9. Our lower bound outperforms all the existing lower bounds [1, 2, 3], both asymptotically and for all concrete \(n\).
\begin{table}
\begin{tabular}{c|c|c} \hline n & \(\operatorname{nl}(D_{ab}D_{a}f)\) & The number of \(b\in\mathbb{F}_{2^{n}}\) \\ \hline even \(n\) & \(0\) & \(2\) \\ \cline{2-3} & \(\geq 2^{n-1}-2^{\frac{n}{2}+1}\) & \(\geq\frac{1}{3}\cdot(2^{n+1}-2^{\frac{n}{2}+1}-4)\) \\ & \(\leq 2^{n-1}-2^{\frac{n}{2}+2}\) & \(\leq\frac{1}{3}\cdot(2^{n}+2^{\frac{n}{2}+1}-2)\) \\ \hline odd \(n\) & \(0\) & \(2\) \\ \cline{2-3} & \(\geq 2^{n-1}-2^{\frac{n+1}{2}}\) & \(\geq 3\cdot 2^{n-4}+10\) \\ & \(\leq 2^{n-1}-2^{\frac{n+3}{2}}\) & \(\leq 13\cdot 2^{n-4}-12\) \\ \hline \end{tabular}
\end{table}
Table 7: The distribution of \(\operatorname{nl}(D_{ab}D_{a}f)\)
## 5 Higher-order nonlinearity
In this section, we lower bound the \(r\)-th order nonlinearity for Boolean functions \(\operatorname{tr}_{n}(x^{2^{r+1}-1})\) and \(\operatorname{tr}_{n}(x^{2^{n}-2})\).
Applying Proposition 1 \(t\) times, we have
**Proposition 4**.: _[_1_]_ _Let \(f\) be any \(n\)-variable Boolean function and \(r\) a positive integer smaller than \(n\). We have_
\[\operatorname{nl}_{r}(f)\geq 2^{n-1}-\frac{1}{2}\sqrt{\sum_{a_{1}\in\mathbb{F}_{2^{n}}}\sqrt{\sum_{a_{2}\in\mathbb{F}_{2^{n}}}\cdots\sqrt{2^{2n}-2\sum_{a_{t}\in\mathbb{F}_{2^{n}}}\operatorname{nl}_{r-t}(D_{a_{t}}D_{a_{t-1}}\ldots D_{a_{1}}f)}}}.\]
By Proposition 4, to lower bound the \(r\)-th order nonlinearity for functions \(\operatorname{tr}_{n}(x^{2^{r+1}-1})\), our strategy is to lower bound the first-order nonlinearity \(\operatorname{nl}(D_{a_{r-1}}D_{a_{r-2}}\ldots D_{a_{1}}f)\) for all distinct \(a_{1},a_{2},\ldots,a_{r-1}\in\mathbb{F}_{2^{n}}^{*}\). We will need the following lemma in the proof of Lemma 13.
The following lemma is proved in [10]; we state a special case of interest using different notations. Let \(a,b\) be two positive integers, where \(a=\sum_{i\geq 0}2^{i}a_{i}\) and \(b=\sum_{i\geq 0}2^{i}b_{i}\) be the binary representations of \(a\) and \(b\) respectively. Define a partial order \(\preceq\) between two positive integers as follows: \(a\preceq b\) if and only if \(a_{i}\leq b_{i}\) for all \(i\geq 0\); \(a\prec b\) if and only if \(a\preceq b\) and \(a\neq b\). Lucas's theorem says that \(\binom{b}{a}\equiv 1\pmod{2}\) if and only if \(a\preceq b\).
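As a quick illustration of the partial order and Lucas's theorem, \(5=(101)_{2}\preceq 13=(1101)_{2}\) and indeed \(\binom{13}{5}=1287\) is odd, whereas \(6=(110)_{2}\not\preceq 13\) and \(\binom{13}{6}=1716\) is even.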
**Lemma 12**.: _(Lemma 4 in [10]) Let \(f=\operatorname{tr}_{n}(x^{2^{r+1}-1})\). For any distinct \(a_{1},a_{2},\ldots,a_{t}\in\mathbb{F}_{2^{n}}^{*}\), where \(1\leq t\leq r\), we have_
\[D_{a_{t}}D_{a_{t-1}}\ldots D_{a_{1}}f(x)=\operatorname{tr}_{n} \Bigl{(}\sum_{\begin{subarray}{c}0\prec d_{t}\prec d_{t-1}\prec\ldots\prec d _{1}\prec d_{0}=2^{r+1}-1\\ \operatorname{wt}(d_{k})=r+1-k,\ k=1,2,\ldots,t\end{subarray}}x^{d_{t}}\prod _{i=1}^{t}a_{i}^{d_{i-1}-d_{i}}\Bigr{)}+p(x), \tag{38}\]
_where \(\deg(p)\leq r-t\)._
The next lemma gives a lower bound on the first-order nonlinearity for the \((r-1)\)-th order derivatives of \(\operatorname{tr}_{n}(x^{2^{r+1}-1})\).
**Lemma 13**.: _Let \(f=\operatorname{tr}_{n}(x^{2^{r+1}-1})\). For any distinct \(a_{1},a_{2},\ldots,a_{r-1}\in\mathbb{F}_{2^{n}}^{*}\), we have_
\[\operatorname{nl}(D_{a_{r-1}}D_{a_{r-2}}\ldots D_{a_{1}}f)\geq 2^{n-1}-2^{ \frac{n+2r-2}{2}}.\]
Proof.: Let \(g(x)=D_{a_{r-1}}D_{a_{r-2}}\ldots D_{a_{1}}f(x)\). Applying Lemma 12 with \(t=r-1\), we have
\[g(x)=\operatorname{tr}_{n}\Bigl{(}\sum_{\begin{subarray}{c}0\prec d_{r-1} \prec d_{r-2}\prec\ldots\prec d_{1}\prec d_{0}=2^{r+1}-1\\ \operatorname{wt}(d_{k})=r+1-k,\ k=1,2,\ldots,r-1\end{subarray}}x^{d_{r-1}} \prod_{i=1}^{r-1}a_{i}^{d_{i-1}-d_{i}}\Bigr{)}+p(x),\]
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline \(n\) & 7 & 9 & 11 & 13 & 15 & 17 & 19 \\ \hline \(\operatorname{nl}_{3}\) & 12 & 80 & 429 & 2096 & 9660 & 42923 & 186092 \\ \hline \end{tabular}
\end{table}
Table 8: Lower bounds in Theorem 3 for odd \(n\)
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline \(n\) & 8 & 10 & 12 & 14 & 16 & 18 & 20 \\ \hline \(\operatorname{nl}_{3}\) & 30 & 183 & 944 & 4484 & 20308 & 89180 & 383411 \\ \hline \end{tabular}
\end{table}
Table 9: Lower bounds in Theorem 3 for even \(n\)
where \(\deg(p)\leq 1\).
Let \(B(x,y)=g(0)+g(x)+g(y)+g(x+y)\). We have
\[B(x,y) = \operatorname{tr}_{n}\Bigl{(}\sum_{\begin{subarray}{c}0\prec d_{r- 1}\prec d_{r-2}\prec\ldots\prec d_{1}\prec d_{0}=2^{r+1}-1\\ \operatorname{wt}(d_{k})=r+1-k,\ k=1,2,\ldots,r-1\end{subarray}}x^{d_{r-1}} \prod_{i=1}^{r-1}a_{i}^{d_{i-1}-d_{i}}\Bigr{)}+ \tag{39}\] \[\operatorname{tr}_{n}\Bigl{(}\sum_{\begin{subarray}{c}0\prec d_{r- 1}\prec d_{r-2}\prec\ldots\prec d_{1}\prec d_{0}=2^{r+1}-1\\ \operatorname{wt}(d_{k})=r+1-k,\ k=1,2,\ldots,r-1\end{subarray}}y^{d_{r-1}} \prod_{i=1}^{r-1}a_{i}^{d_{i-1}-d_{i}}\Bigr{)}+\] \[\operatorname{tr}_{n}\Bigl{(}\sum_{\begin{subarray}{c}0\prec d_{ r-1}\prec d_{r-2}\prec\ldots\prec d_{1}\prec d_{0}=2^{r+1}-1\\ \operatorname{wt}(d_{k})=r+1-k,\ k=1,2,\ldots,r-1\end{subarray}}(x+y)^{d_{r- 1}}\prod_{i=1}^{r-1}a_{i}^{d_{i-1}-d_{i}}\Bigr{)}\] \[= \operatorname{tr}_{n}\Bigl{(}\sum_{\begin{subarray}{c}0\prec d_{ r-1}\prec d_{r-2}\prec\ldots\prec d_{1}\prec d_{0}=2^{r+1}-1\\ \operatorname{wt}(d_{k})=r+1-k,\ k=1,2,\ldots,r\end{subarray}}x^{d_{r-1}-d_{r }}y^{d_{r}}\prod_{i=1}^{r-1}a_{i}^{d_{i-1}-d_{i}}\Bigr{)}.\]
Let \(e_{i}=d_{i-1}-d_{i}\) for \(i=1,2,\ldots,r\). Let \(e_{r+1}=d_{r}\). Note that
\[0\prec d_{r}\prec d_{r-1}\prec d_{r-2}\prec\ldots\prec d_{1}\prec d_{0}=2^{ r+1}-1\]
and \(\operatorname{wt}(d_{k})=r+1-k\) for \(k=1,2,\ldots,r\). So \(e_{1},e_{2},\ldots,e_{r+1}\) are distinct, and \(\operatorname{wt}(e_{k})=1\) for \(k=1,2,\ldots,r+1\). Rewriting (39), we have
\[B(x,y)=\operatorname{tr}_{n}\Bigl{(}\sum_{\begin{subarray}{c}\text{distinct }e_{1},e_{2},\ldots,e_{r+1}\in\{2^{0},2^{1},\ldots,2^{r}\}\\ \operatorname{wt}(e_{k})=1,\ \forall k\in\{1,2,\ldots,r+1\}\end{subarray}}( \prod_{i=1}^{r-1}a_{i}^{e_{i}})x^{e_{r}}y^{e_{r+1}}\Bigr{)}. \tag{40}\]
According to (40), \(B(x,y)=0\) holds for all \(y\) if and only if the coefficient of \(y\) is zero, that is,
\[\sum_{\begin{subarray}{c}\text{distinct }e_{1},e_{2},\ldots,e_{r+1}\in\{2^{0},2^{1},\ldots,2^{r}\}\\ \operatorname{wt}(e_{k})=1,\ \forall k\in\{1,2,\ldots,r+1\}\end{subarray}}\bigl(a_{1}^{e_{1}}a_{2}^{e_{2}}a_{3}^{e_{3}}\ldots a_{r-1}^{e_{r-1}}x^{e_{r}}\bigr)^{e_{r+1}^{-1}}=0. \tag{41}\]
Raising both sides of (41) to the \(2^{r}\)th power, we have
\[\sum_{\begin{subarray}{c}\text{distinct }e_{1},e_{2},\ldots,e_{r+1}\in\{2^{0},2^{1},\ldots,2^{r}\}\\ \operatorname{wt}(e_{k})=1,\ \forall k\in\{1,2,\ldots,r+1\}\end{subarray}} \bigl{(}a_{1}^{e_{1}}a_{2}^{e_{2}}a_{3}^{e_{3}}\ldots a_{r-1}^{e_{r-1}}x^{e _{r}}\bigr{)}^{2^{r}\cdot e_{r+1}^{-1}}=0. \tag{42}\]
Observe that each monomial in the left hand side of (42) has degree at most \(2^{2r}\), because \(e_{r}\leq 2^{r}\) and \(2^{r}\cdot e_{r+1}^{-1}\leq 2^{r}\). So the degree of (42) is at most \(2^{2r}\), which implies that (42) has at most \(2^{2r}\) solutions. Therefore, the dimension of the linear kernel of \(B(x,y)\) is at most \(2r\). By Lemma 1, we have
\[\operatorname{nl}(D_{a_{r-1}}D_{a_{r-2}}\ldots D_{a_{1}}f)\geq 2^{n-1}-2^{ \frac{n+2r-2}{2}}.\]
We will need the following lemma in the proof of Theorem 13.
**Lemma 14**.: _Let integer \(r\geq 1\). Let \(\alpha_{1}>\alpha_{2}>\ldots>\alpha_{r}>0\) and \(c_{1},c_{2},\ldots,c_{r}>0\). We have_
\[c_{1}\cdot 2^{\alpha_{1}n}+c_{2}\cdot 2^{\alpha_{2}n}+\ldots+c_{r}\cdot 2^{ \alpha_{r}n}\leq(\sqrt{c_{1}}\cdot 2^{\frac{1}{2}\cdot\alpha_{1}n}+\frac{c_{2}}{2 \sqrt{c_{1}}}\cdot 2^{(\alpha_{2}-\frac{1}{2}\cdot\alpha_{1})n}+\ldots+\frac{c_{r}}{2 \sqrt{c_{1}}}\cdot 2^{(\alpha_{r}-\frac{1}{2}\cdot\alpha_{1})n})^{2}\]
Proof.: By straightforward calculation, we have
\[\mathrm{R.H.S} = c_{1}\cdot 2^{\alpha_{1}n}+\ldots+c_{r}\cdot 2^{\alpha_{r}n}+ \sum_{i=2}^{r}\frac{c_{i}^{2}}{4c_{1}}\cdot 2^{(2\alpha_{i}-\alpha_{1})n}+ \sum_{i,j=2}^{r}\frac{c_{i}\cdot c_{j}}{2c_{1}}2^{(\alpha_{i}+\alpha_{j}- \alpha_{1})n}\] \[\geq \mathrm{L.H.S}\]
In the following, we lower bound the \(r\)-th order nonlinearity for functions \(\mathrm{tr}_{n}(x^{2^{r+1}-1})\).
**Theorem 12**.: _Let \(f=\mathrm{tr}_{n}(x^{2^{r+1}-1})\) and \(r\geq 1\). We have_
\[\mathrm{nl}_{r}(f)\geq 2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}-O(2^{\frac{n }{2}}).\]
Proof.: (of Theorem 4) Let \(l_{0}=\mathrm{nl}_{r}(f)\) and
\[l_{i}=\min_{\mathrm{distinct}\ a_{1},\ldots,a_{i}\in\mathbb{F}_{2^{n}}^{*}} \mathrm{nl}_{r-i}(D_{a_{i}}\ldots D_{a_{1}}f)\]
for \(i=1,2,\ldots,r-1\).
By Proposition 1, we have
\[l_{i} = \min_{\mathrm{distinct}\ a_{1},\ldots,a_{i}\in\mathbb{F}_{2^{n}} ^{*}}\mathrm{nl}_{r-i}(D_{a_{i}}\ldots D_{a_{1}}f) \tag{43}\] \[\geq \min_{\mathrm{distinct}\ a_{1},\ldots,a_{i}\in\mathbb{F}_{2^{n} }^{*}}2^{n-1}-\frac{1}{2}\sqrt{2^{2n}-2\sum_{a_{i+1}\in\mathbb{F}_{2^{n}}^{*} \setminus\{a_{1},a_{2},\ldots,a_{i}\}}\mathrm{nl}_{r-i-1}(D_{a_{i+1}}\ldots D_ {a_{1}}f)}\] \[\geq 2^{n-1}-\frac{1}{2}\sqrt{2^{2n}-2(2^{n}-(i+1))l_{i+1}},\]
for \(i=0,1,\ldots,r-2\). Let \(u_{i}=2^{n-1}-l_{i}\). Replacing \(l_{i}\) by \(2^{n-1}-u_{i}\) in (43), we have
\[u_{i}\leq\frac{1}{2}\sqrt{2^{n}(i+1)+2^{n+1}u_{i+1}}. \tag{44}\]
**Claim 1**.: \[u_{i}\leq\frac{1}{2}\left(2^{(1-2^{-(r-i)})n+\frac{r}{2^{r-i-1}}}+\sum_{j=1}^{r-i-1}(j+i)\cdot 2^{\frac{2^{j}-1}{2^{r-i}}n-\frac{2^{j}-1}{2^{r-i-1}}r-j}\right).\] (45)
_for \(0\leq i\leq r-2\)._
Proof.: (of Claim 1) We prove by induction on \(i\). For the base step, we prove the claim for \(i=r-2\). By (44), we have
\[u_{r-2}\leq\frac{1}{2}\sqrt{2^{n}(r-1)+2^{n+1}u_{r-1}}. \tag{46}\]
By definition of \(l_{r-1}\) and Lemma 13, we have \(l_{r-1}\geq 2^{n-1}-2^{\frac{n+2r-2}{2}}\), that is, \(u_{r-1}\leq 2^{\frac{n+2r-2}{2}}\). Plugging \(u_{r-1}\leq 2^{\frac{n+2r-2}{2}}\) into (46), we have
\[u_{r-2} \leq \frac{1}{2}\sqrt{2^{\frac{1}{2}(3n+2r)}+(r-1)2^{n}}\] \[\leq \frac{1}{2}(2^{\frac{3}{4}n+\frac{r}{2}}+(r-1)2^{\frac{n}{4}- \frac{r}{2}-1}),\]
where the last step follows from Lemma 14.
For the induction step, assuming inequality (45) holds for \(i+1\), we prove (45) for \(i\), where \(i=r-3,r-4,\ldots,0\). We have
\[u_{i} \leq \frac{1}{2}\sqrt{2^{n}(i+1)+2^{n+1}u_{i+1}}\] \[\leq \frac{1}{2}\sqrt{2^{n}(i+1)+2^{n}\cdot\left(2^{(1-2^{-(r-i-1)})n+\frac{r}{2^{r-i-2}}}+\sum_{j=1}^{r-i-2}(j+i+1)\cdot 2^{\frac{2^{j}-1}{2^{r-i-1}}n-\frac{2^{j}-1}{2^{r-i-2}}r-j}\right)}\] \[\leq \frac{1}{2}\left(2^{(1-2^{-(r-i)})n+\frac{r}{2^{r-i-1}}}+\sum_{j=1}^{r-i-1}(j+i)\cdot 2^{\frac{2^{j}-1}{2^{r-i}}n-\frac{2^{j}-1}{2^{r-i-1}}r-j}\right),\]
as desired, where the third step follows from Lemma 14.
Turn back to the proof of Theorem 4. By Claim 1, we have
\[\mathrm{nl}_{r}(f) = 2^{n-1}-u_{0}\] \[\geq 2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}-\sum_{j=1}^{r-1}j \cdot 2^{\frac{2^{j}-1}{2^{r}}n-\frac{2^{j}-1}{2^{r-1}}r-(j+1)}\] \[\geq 2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}-O(2^{\frac{n}{2}}).\]
**Remark 1**.: _By Theorem 12, we deduce that_
\[\mathrm{nl}_{r}(f) \geq 2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}-O(2^{\frac{n}{2}})\] \[= 2^{n-1}(1-\exp(-\frac{\alpha\cdot n}{2^{r}})),\]
_where \(\alpha\approx\log_{2}e\) when \(r\ll\log_{2}n\)._
Similarly, for the inverse function, we prove the following nonlinearity lower bound. This is studied by Carlet in [1], who claims that the \(r\)-th order nonlinearity is asymptotically lower bounded by \(2^{n-1}-2^{(1-2^{-r})n}\). We credit the lower bound, i.e., Theorem 13, to Carlet, since our proof closely follows the method in [1] by working out the calculations carefully. The proof of the following theorem is in Appendix C.
**Theorem 13**.: _Let \(f_{\mathrm{inv}}=\mathrm{tr}_{n}(x^{2^{n}-2})\). For any \(r\geq 1\), we have \(\mathrm{nl}_{r}(f_{\mathrm{inv}})\geq 2^{n-1}-2^{(1-2^{-r})n-2^{-(r-1)}}-O(2^{ \frac{n}{2}})\)._
Note that the bound in Theorem 12 is slightly better than that in Theorem 13.
### Comparison
Babai, Nisan and Szegedy [1] proved that the \(r\)-th nonlinearity of the generalized inner product function
\[\mathrm{GIP}_{r+1}(x_{1},x_{2},\ldots,x_{n})=\prod_{i=1}^{r+1}x_{i}+\prod_{i=r +2}^{2(r+1)}x_{i}+\ldots+\prod_{i=n-r}^{n}x_{i}\]
is lower bounded by \(2^{n-1}(1-\exp(-\Omega(\frac{n}{r+4^{r}})))\). Bourgain [1] and Green _et al._[1] proved that the \(r\)-th nonlinearity of the mod\({}_{3}\) function is at least \(2^{n-1}(1-\exp(-\frac{n}{8^{r}}))\); Viola [12] and Chattopadhyay [1] improved this bound to \(2^{n-1}(1-\exp(-\frac{n}{4^{r}}))\). Viola [12] exhibited an explicit function \(f\in P\) (which relies on explicit small-bias generators) with \(r\)-th nonlinearity at least \(2^{n-1}(1-\exp(-\frac{\alpha\cdot n}{2^{r}}))\), where \(\alpha<\frac{1}{4}\cdot\log_{2}e\); the lower bound is also proved in [13] using similar argument.
By Theorem 12, we prove that the \(r\)-th order nonlinearity of \(\mathrm{tr}_{n}(x^{2^{r+1}-1})\) is at least \(2^{n-1}(1-\exp(-\frac{\beta\cdot n}{2^{r}}))\), where \(\beta\approx\log_{2}e\) when \(r\ll\log_{2}n\). Previous to our work, the best lower bound is \(2^{n-1}(1-\exp(-\frac{\alpha\cdot n}{2^{r}}))\)[12, 13], where \(\alpha<\frac{1}{4}\cdot\log_{2}e\).
## 6 Conclusion
Using algebraic methods, we lower bound the second-order, third-order, and higher-order nonlinearities of some trace monomial Boolean functions. For the second-order nonlinearity, we study Boolean functions \(\mathrm{tr}_{n}(x^{7})\) and \(\mathrm{tr}_{n}(x^{2^{r}+3})\) for \(n=2r\); the latter class of Boolean functions is studied for the first time. Our lower bounds match the best proven lower bounds on the second-order nonlinearity among all trace monomial functions [1, 10]. For the third-order nonlinearity, we prove the lower bound for functions \(\mathrm{tr}_{n}(x^{15})\), which is the best provable third-order nonlinearity lower bound. For higher-order nonlinearity, we prove the lower bound
\[\mathrm{nl}_{r}(f)\geq 2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}-O(2^{\frac{n }{2}})\]
for functions \(\mathrm{tr}_{n}(x^{2^{r+1}-1})\). When \(r\ll\log n\), this is the best lower bound, compared with all the previous works, e.g., [1, 1, 2, 10, 11, 12].
|
2309.16546 | Correcting for heterogeneity in real-time epidemiological indicators | Auxiliary data sources have become increasingly important in epidemiological
surveillance, as they are often available at a finer spatial and temporal
resolution, larger coverage, and lower latency than traditional surveillance
signals. We describe the problem of spatial and temporal heterogeneity in these
signals derived from these data sources, where spatial and/or temporal biases
are present. We present a method to use a ``guiding'' signal to correct for
these biases and produce a more reliable signal that can be used for modeling
and forecasting. The method assumes that the heterogeneity can be approximated
by a low-rank matrix and that the temporal heterogeneity is smooth over time.
We also present a hyperparameter selection algorithm to choose the parameters
representing the matrix rank and degree of temporal smoothness of the
corrections. In the absence of ground truth, we use maps and plots to argue
that this method does indeed reduce heterogeneity. Reducing heterogeneity from
auxiliary data sources greatly increases their utility in modeling and
forecasting epidemics. | Aaron Rumack, Roni Rosenfeld, F. William Townes | 2023-09-28T15:57:18Z | http://arxiv.org/abs/2309.16546v1 | Correcting for heterogeneity in real-time epidemiological indicators
## Abstract
Auxiliary data sources have become increasingly important in epidemiological surveillance, as they are often available at a finer spatial and temporal resolution, larger coverage, and lower latency than traditional surveillance signals. We describe the problem of spatial and temporal heterogeneity in these signals derived from these data sources, where spatial and/or temporal biases are present. We present a method to use a "guiding" signal to correct for these biases and produce a more reliable signal that can be used for modeling and forecasting. The method assumes that the heterogeneity can be approximated by a low-rank matrix and that the temporal heterogeneity is smooth over time. We also present a hyperparameter selection algorithm to choose the parameters representing the matrix rank and degree of temporal smoothness of the corrections. In the absence of ground truth, we use maps and plots to argue that this method does indeed reduce heterogeneity. Reducing heterogeneity from auxiliary data sources greatly increases their utility in modeling and forecasting epidemics.
## 1 Introduction
Understanding the burden of epidemics is a critical task for both public health officials and modelers. However, traditional surveillance signals are often not available in real-time, due to delays in data collection as well as data revisions. Alternative data sources can provide more timely information about an epidemic's current state, which can be useful for modeling and forecasting. We can use these data sources to create _indicators_, which provide a single number quantifying some measure of epidemic burden for a given location and time. An indicator usually estimates the disease burden at a certain severity level (e.g. symptomatic infections, hospitalizations) when the ground truth is unobserved. During the COVID-19 pandemic, the Delphi group published a repository of several real-time indicators of COVID-19 activity [1].
Many, if not all, of these indicators suffer from heterogeneity. That is, the relationship between the indicator and unobserved ground truth changes over space or time. To define heterogeneity, let \(X\in\mathbb{R}^{N\times T}\) be the matrix containing the indicator values for \(N\) locations and \(T\) time values, and \(Z\in\mathbb{R}^{N\times T}\) be the matrix containing the corresponding ground truth values. We say that spatial heterogeneity is present when
\[\mathbb{E}[X_{i_{1}t}]-Z_{i_{1}t}\neq\mathbb{E}[X_{i_{2}t}]-Z_{i_{2}t}\text{ for some }i_{1}\neq i_{2},t.\]
Likewise, temporal heterogeneity is present when
\[\mathbb{E}[X_{it_{1}}]-Z_{it_{1}}\neq\mathbb{E}[X_{it_{2}}]-Z_{it_{2}}\text{ for some }i,t_{1}\neq t_{2}.\]
Note that we define heterogeneity not simply as a bias in the indicator, but rather that the bias is dependent on location or time. The causes of heterogeneity vary depending on the indicator, but we can consider as an example an indicator based on insurance claims that seeks to estimate incidence of COVID-19 outpatient visits. Insurance claims could be higher relative to COVID-19 incidence in locations where the population in the insurance dataset is older, or where the doctors have more liberal coding policies in labeling a probable COVID case. Even the signal of reported cases, which purportedly reflects COVID-19 infections directly, will suffer from heterogeneity. If a few locations suffer from a shortage of tests, or from a new strain which tests are less accurate in detecting or that has a different fraction of symptomatic cases, those locations will have a different relationship between reported cases and true cases. Similar causes can result in temporal heterogeneity. Test shortages, changing demographics, coding practices can also vary over time within a single location. For example, spatial heterogeneity has been documented in CDC's ILINet due to different mixtures of reporting healthcare provider types in the network [2].
We use real-time indicators for three main functions: modeling the past, mapping the present, and forecasting the future. Correcting for heterogeneity is important for all of these applications. Any statistical conclusions we make about spatiotemporal spread of a disease may be distorted if the underlying data is subject to heterogeneity. In the presence of spatial heterogeneity, the indicator values are not comparable across locations, and a choropleth map displaying the current values of the indicator will be misleading. Similarly, in the presence of temporal heterogeneity, displaying a time series of the indicator may be misleading. Heterogeneity affects forecasts as well, as biases in the features of a forecasting model will lead to forecast inaccuracy. Our goal is to remove heterogeneity in an indicator in order to make it more reliable for these three uses.
Heterogeneity has been described and modeled in the field of econometrics [3]. Nearly all of the work involving heterogeneity in econometrics deals with the implications in regression. If only spatial heterogeneity is present, then a fixed or random effects model can be used [4, 5]. Others have developed parametric methods that assume heterogeneity is also time-varying [6]. The main reason that these methods cannot be transferred to our domain is that they identify heterogeneity only through strict assumptions on the error terms in the regression model. Additionally, we are not performing regression in our application. Rather, we are trying to remove the heterogeneity in the indicator.
A challenge of correcting for heterogeneity is that the problem doesn't have a clear formulation. In nearly every practical application, we lack access to the ground truth and our best option is to compare our indicator with another signal that is a noisy estimate of the ground truth, and often suffers from heterogeneity itself. We will call this signal a "guide" to emphasize that it is not a target for prediction. We believe that the indicator is strongly related with the guide, so they should be correlated across time and space. However, they don't measure the same value, so the correlation should not be 1 even in the absence of noise. Another challenge is that we present the problem in a retrospective setting, without a clear division for training and testing.
In this paper, we investigate removing heterogeneity from two indicators using a different guide for each. The first indicator is based on insurance claims data, and we use reported cases as a guide signal. The second indicator is based on Google search trends of specific queries related to COVID-19. We use the COVID-19 Trends and Impact Surveys (CTIS) as a guide. All of these signals (indicators and guides) are available in COVIDCast [1].
Because heterogeneity is present in a wide variety of indicators, we desire a solution that is general and flexible. Another desired property is that the temporal corrections are smooth across time, because we want to accommodate situations where the relationship between the indicator and guide can drift slowly over time. The model should be flexible enough to allow for abrupt changes, but these should be limited in number. If the corrections are jagged in time, the model may be overadjusting to the guide signal rather than identifying and removing the true heterogeneity.
Lastly, the method should generalize well to a variety of indicators and guides. It should not rely on specific domain knowledge of a single indicator-guide pair because we want the method to be applicable to any current or future indicator and guide. If we believe the indicator and guide have a stronger relationship, then we might want the model to use the guide matrix more and make a stronger bias correction. If we believe that there is more noise in the guide variable, that heterogeneity is mild, or that the inherent signals are more divergent, we might want the model to make a weaker bias correction. Additionally, the temporal smoothing constraint will be stronger or weaker, depending on the application.
The model should have hyperparameters to control the strength of the guide signal in fitting as well as the strength of the temporal smoothness constraint. These can be conceptualized as "knobs". For the indicator-guide relationship, the knob turns between one extreme of not using the guide signal at all and the other extreme of fitting maximally to the guide signal (in some models, fitting exactly to the guide signal). For the temporal smoothness constraint, the knob turns between the extremes of applying no smoothing and enforcing a constant temporal correction factor across time.
In the rest of this paper, we will provide three methods to correct for heterogeneity for a general indicator and guide signal. We then demonstrate their performance in simulated experiments and on several actual epidemiological data sources.
## 2 Methods
Let \(X\in\mathbb{R}^{N\times T}\) be the matrix containing the indicator values for \(N\) locations and \(T\) time points, and \(Y\in\mathbb{R}^{N\times T}\) be the matrix containing the corresponding guide values. We want to transform \(X\) to a matrix \(\tilde{X}\), with the spatial and temporal biases mitigated. As mentioned above, the simplest way to do so is to set \(\tilde{X}=Y\), but this is the most extreme version of overadjustment and removes any unique information contained in \(X\). We will present three methods to remove heterogeneity by using \(Y\) as a guide. The first uses a simple low-rank approximation, and the second and third add elements which ensure that the biases removed are smooth in time. In all of our methods, we detect heterogeneity by examining the difference \(Y-X\). We assume that the signal in this difference matrix is the heterogeneity between \(X\) and \(Y\).
### Bounded Rank Approach
In this approach, we assume that the heterogeneity between \(X\) and \(Y\) is of low rank. We begin without making any assumptions on the smoothness of the temporal biases. Therefore, we solve the following optimization:
\[\hat{A},\hat{B}=\arg\min_{A,B}\|(X+AB^{T})-Y\|_{F}^{2},\]
where \(A\in\mathbb{R}^{N\times K}\), \(B\in\mathbb{R}^{T\times K}\), \(K\leq\min(N,T)\), and \(\|\cdot\|_{F}\) is the Frobenius norm. This optimization can be solved by performing singular value decomposition on the difference matrix \(Y-X\) and keeping the vectors with the \(K\) highest singular values. The corrected matrix is \(\tilde{X}=X+AB^{T}\).
### Fused Lasso Approach
In addition to the low rank assumption, here we further assume that the temporal biases are mostly piecewise constant over time. Therefore, we solve the following optimization:
\[\hat{A},\hat{B}=\arg\min_{A,B}\|(X+AB^{T})-Y\|_{F}^{2}+\lambda\|\Delta_{t}B\|_{1},\]
where \(A\in\mathbb{R}^{N\times K}\), \(B\in\mathbb{R}^{T\times K}\), and \(K\leq\min(N,T)\), and \(\Delta_{t}B\) contains the first differences of B along the time axis. The \(\Delta_{t}B\) penalty is inspired by the fused lasso [7] and encourages \(B\) to be piecewise constant along the time axis.
We solve this optimization using penalized matrix decomposition algorithms described in [8]. We reproduce the algorithm as applicable to our case here:
1. Let \(Z^{1}=Y-X\).
2. For \(k=1,\ldots,K\):
    a. Initialize \(v_{k}\) to have \(L_{2}\) norm 1.
    b. Iterate until convergence:
        i. If \(v_{k}=0\), then \(u_{k}=0\). Otherwise, let \(u_{k}=\frac{Z^{k}v_{k}}{\|Z^{k}v_{k}\|_{2}}\).
        ii. Let \(v_{k}\) be the solution to \[\min_{v}\frac{1}{2}\|Z^{kT}u_{k}-v\|_{2}^{2}+\lambda\sum_{j=2}^{T}\|v_{j}-v_{j-1}\|_{1}.\]
    c. Let \(d_{k}=u_{k}^{T}Z^{k}v_{k}\).
    d. Let \(Z^{k+1}=Z^{k}-d_{k}u_{k}v_{k}^{T}\).
3. \(A\) is the matrix whose \(k^{th}\) column is \(d_{k}u_{k}\), and \(B\) is the matrix whose \(k^{th}\) column is \(v_{k}\).
Step 2b) ii) is a fused lasso problem and can be solved using the alternating direction method of multipliers (ADMM) [9]. All of the other steps are trivial to compute.
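The numpy sketch below illustrates this procedure; the ADMM sub-solver for step 2b) ii), the fixed iteration counts, and the random initialization are our own illustrative choices rather than the authors' implementation.

```python
import numpy as np

def fused_lasso_prox(z, lam, rho=1.0, n_iter=200):
    """Solve min_v 0.5*||z - v||_2^2 + lam * sum_j |v_j - v_{j-1}| by ADMM."""
    T = len(z)
    D = np.diff(np.eye(T), axis=0)                 # first-difference operator, (T-1) x T
    M_inv = np.linalg.inv(np.eye(T) + rho * D.T @ D)
    w = np.zeros(T - 1)
    u = np.zeros(T - 1)
    v = z.copy()
    for _ in range(n_iter):
        v = M_inv @ (z + rho * D.T @ (w - u))      # v-update: ridge-type linear solve
        Dv = D @ v
        w = np.sign(Dv + u) * np.maximum(np.abs(Dv + u) - lam / rho, 0.0)  # soft-threshold
        u += Dv - w                                # dual update
    return v

def fused_lasso_decomposition(X, Y, K, lam, n_outer=50, seed=0):
    """Rank-K penalized decomposition of Y - X with a fused lasso penalty on B."""
    Z = Y - X
    N, T = Z.shape
    A = np.zeros((N, K))
    B = np.zeros((T, K))
    rng = np.random.default_rng(seed)
    for k in range(K):
        v = rng.normal(size=T)
        v /= np.linalg.norm(v)
        u = np.zeros(N)
        for _ in range(n_outer):
            Zv = Z @ v
            u = np.zeros(N) if np.allclose(Zv, 0) else Zv / np.linalg.norm(Zv)  # guard against a zero direction
            v = fused_lasso_prox(Z.T @ u, lam)
        d = u @ Z @ v
        A[:, k] = d * u
        B[:, k] = v
        Z = Z - d * np.outer(u, v)                 # deflate before the next component
    return A, B
```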
This optimization has two hyperparameters which can be considered as "knobs": \(K\) and \(\lambda\). The matrix rank \(K\) controls the degree to which we match the guiding signal \(Y\). When \(K=0\), we keep \(X\) exactly as is and apply no correction. As \(K\) increases, we use more information from \(Y\), and when \(K=\min(N,T)\), we transform \(X\) to equal \(Y\) exactly (when \(\lambda=0\)). The lasso penalty \(\lambda\) enforces smoothness along the time axis of \(B\). At \(\lambda=0\), we apply no smoothing at all, and the model is equivalent to the Bounded Rank Model above. As \(\lambda\) approaches \(\infty\), \(B\) contains a constant value across each row.
### Basis Spline Approach
An alternative way to enforce smoothness on the temporal bias correction is to transform the temporal corrections by using B-spline basis functions. These functions \(S\) are determined by setting the polynomial degree \(d\) and a set of knots \(\{t_{1},\ldots,t_{m}\}\)[10]:
\[S_{i,0}(x)=1,\text{ if }t_{i}\leq x<t_{i+1},\text{ otherwise }0,\]
\[S_{i,k}(x)=\frac{x-t_{i}}{t_{i+k}-t_{i}}S_{i,k-1}(x)+\frac{t_{i+k+1}-x}{t_{i+k +1}-t_{i+1}}S_{i+1,k-1}(x),\]
for \(i\in\{1,\ldots,m\}\) and \(k\in\{1,\ldots d\}\). We can use these basis functions to create a fixed spline transformation matrix \(C\in\mathbb{R}^{L\times T}\), where \(C_{i,t}\equiv S_{i,d}(t)\) and \(L\) is a function of \(d\) and \(m\).
We now solve the following optimization:
\[\hat{A},\hat{B}=\arg\min_{A,B}\|(X+AB^{T}C)-Y\|_{F}^{2},\]
where \(A\in\mathbb{R}^{N\times K}\), \(B\in\mathbb{R}^{L\times K}\), and \(K\leq\min(N,L)\), and \(C\) is the spline transformation matrix determined by the given polynomial degree and knots. This problem can be reformulated and solved by reduced rank regression, using the algorithm described in [11]. In this approach, we do not need to apply a penalty to the components of \(B\); the spline basis transformation will ensure that the temporal correction matrix \(B^{T}C\) is smooth.
In this approach, the hyperparameter \(K\) is understood the same way as above. The temporal smoothing hyperparameters are different, however. The degree of smoothing is determined by the polynomial degree \(d\) and knots \(t\). For simplicity, we will set \(d\) as a constant 3; this results in the commonly used cubic spline transformation. We will also enforce that the knots are uniformly spaced, leaving us with the knot interval as the only temporal hyperparameter. The larger the knot interval, the smoother the temporal corrections will be. Note that due to the transformation matrix \(C\), we are no longer able to fit \(\tilde{X}=Y\) exactly, even with unbounded \(K\).
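As a sketch, the spline transformation matrix \(C\) can be built directly from the Cox–de Boor recursion above, and the reduced rank regression solved in a standard way by projecting \(Y-X\) onto the row space of \(C\) and truncating the fitted values to rank \(K\); the knot padding and function names below are our own illustrative choices.

```python
import numpy as np

def bspline_transform(T, knot_interval, degree=3):
    """Spline transformation matrix C (L x T) built from the Cox-de Boor recursion."""
    # Uniformly spaced knots, extended beyond [1, T] so every time point is covered.
    knots = np.arange(1.0 - degree * knot_interval,
                      T + (degree + 1) * knot_interval + 1e-9, knot_interval)
    x = np.arange(1, T + 1, dtype=float)
    m = len(knots)
    # Degree-0 basis functions: indicators of the half-open knot intervals.
    C = np.array([(knots[i] <= x) & (x < knots[i + 1]) for i in range(m - 1)], dtype=float)
    for k in range(1, degree + 1):
        C_next = np.zeros((m - k - 1, T))
        for i in range(m - k - 1):
            left = (x - knots[i]) / (knots[i + k] - knots[i])
            right = (knots[i + k + 1] - x) / (knots[i + k + 1] - knots[i + 1])
            C_next[i] = left * C[i] + right * C[i + 1]
        C = C_next
    return C

def basis_spline_correction(X, Y, K, knot_interval):
    """Rank-K reduced rank regression of Y - X onto the spline basis C."""
    D = Y - X
    C = bspline_transform(X.shape[1], knot_interval)
    fitted = D @ C.T @ np.linalg.pinv(C @ C.T) @ C   # least squares fit with rows in the span of C
    U, s, Vt = np.linalg.svd(fitted, full_matrices=False)
    correction = (U[:, :K] * s[:K]) @ Vt[:K]         # best rank-K fit; plays the role of A B^T C
    return X + correction
```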
We note that we can parameterize the Basis Spline Approach to be equivalent to the Fused Lasso Approach. By setting the basis spline degree to be \(d=0\), the spline transformation matrix \(C\) results in a vector that is piecewise constant. If we place a knot at every time point and apply an \(\ell_{1}\) penalty to the first differences of the spline components, then the Basis Spline Approach is equivalent to the Fused Lasso Approach. Analogous equivalences hold for higher order splines. If the basis spline degree is \(d=1\), the method is equivalent to trend filtering [12], and so on for higher polynomial degrees.
### Preprocessing Indicator Values
All of the models above assume that the heterogeneity corrections should be additive, that is, \(\tilde{X}=X+AB^{T}\). Depending on the application, it may be more reasonable to apply a multiplicative correction. In such a case, we can fit the models using \(\log X\) and \(\log Y\). If \(X\) or \(Y\) contain zeros, then we can add a pseudocount and fit using \(\log(X+\epsilon)\). We optimize
\[\min_{A,B}\|(\log X+AB^{T})-\log Y\|_{F}^{2}\]
for the Bounded Rank Model, and the temporal penalties are straightforward for the Fused Lasso and Basis Spline models. Our corrected indicator is \(\tilde{X}=X\odot\exp(AB^{T})\), where \(\odot\) represents the Hadamard product and exponentiation is element-wise. One caveat to note is that the optimization minimizes the mean squared error between the indicator and guide on the log scale.
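In code, the multiplicative variant is a thin wrapper around an additive fit on the log scale. The sketch below is illustrative: `fit_fn` stands for whichever fitting routine returns the correction factors \(A\) and \(B\) from the two log-scale matrices, and the pseudocount default is an assumption.

```python
import numpy as np

def multiplicative_correction(X, Y, fit_fn, eps=1.0):
    """Fit the correction on the log scale and map it back multiplicatively."""
    A, B = fit_fn(np.log(X + eps), np.log(Y + eps))
    return X * np.exp(A @ B.T)        # Hadamard product with an elementwise exp
```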
### Hyperparameter Selection
Each of our three models has one or two hyperparameters that control how the guide signal is used. A user may have domain knowledge which suggests that a certain rank is appropriate, in which case, \(K\) can be selected manually. A rank could also be selected via various heuristics, such as an elbow plot of the principal components of \(Y-X\). Alternatively, multiple values of \(K\) could be selected for sensitivity analysis. In this section, we provide a quantitative method of selecting hyperparameters as a default option, as an alternative to manual selection.
In our setting, several factors complicate the usually straightforward application of cross validation. First, the data is structured in a two dimensional matrix. Our optimization method does not allow missingness in the matrices, so we cannot simply
remove a random subset of data and run the optimization procedure. We can remove entire columns (time points) either randomly or in blocks, but we will need to interpolate the values for the missing time points. We can use mean squared error between \(\tilde{X}\) and \(Y\) as the error metric, but it is not clear that this is an ideal choice. The indicator and guide measure different quantities, and success should not be defined as matching \(\tilde{X}\) to \(Y\) exactly.
Despite these challenges, we will select hyperparameters by using a cross validation framework with mean squared error as the error metric. In order to reduce the temporal dependencies inherent in the data, we leave out blocks of time for testing, as illustrated in Fig 1. We use linear interpolation to populate the rows of \(B\) in the test set, as illustrated in Fig 2. Our error metric is the mean squared error between \(\tilde{X}\) and \(Y\) on the test set.
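The fold construction and the interpolation of the held-out temporal corrections can be sketched as follows. The ten-day test blocks and five-day gaps follow Fig 1 and the linear interpolation mirrors Fig 2, but the exact placement of the blocks and the handling of the time-series boundaries are illustrative assumptions.

```python
import numpy as np

def blocked_time_folds(T, n_folds=6, test_len=10, gap_len=5):
    """Return (train_idx, test_idx) pairs with buffered, held-out time blocks."""
    starts = np.linspace(0, T - test_len, n_folds).astype(int)
    folds = []
    for s in starts:
        test = np.arange(s, s + test_len)
        excluded = np.arange(max(0, s - gap_len), min(T, s + test_len + gap_len))
        folds.append((np.setdiff1d(np.arange(T), excluded), test))
    return folds

def interpolate_temporal_corrections(B_train, train_idx, T):
    """Linearly interpolate each column of B from the training time points to all T points."""
    B_full = np.empty((T, B_train.shape[1]))
    for k in range(B_train.shape[1]):
        B_full[:, k] = np.interp(np.arange(T), train_idx, B_train[:, k])
    return B_full
```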
In the penalized regression context, it is common to apply the "one standard error rule" to cross validation, in which we select the most parsimonious model whose cross validation error is within one standard error of the minimum cross validation error across all models [13]. A common justification for this rule is that the cross validation errors are themselves subject to variance, and it is preferable to take a conservative approach against more complex models [13].
Fig 1: We use cross-validation for hyperparameter selection. The red blocks (ten days each) are held out for testing, and the yellow blocks (five days each) are held out to reduce dependencies between the training data and the test data. We repeat for 6 folds.
Fig 2: We need to interpolate the test indices of the temporal adjustment matrix \(B\) in order to calculate \(\tilde{X}\). We do this by linear interpolation between the values of \(B\) on the boundaries of the blocks of training indices. This figure shows interpolation for a single column of matrix \(B\), as an example.
Our setup provides further motivation to apply this rule. Unlike in standard cross validation, our goal is not to find the model which fits best to \(Y\), but rather to use \(Y\) as a guiding signal to mitigate heterogeneity. Additionally, there is likely a slight dependence between the training data and test data due to the temporal structure of the data. Applying the "one standard error rule" will prevent overadjustment to \(Y\).
In order to use the "one standard error rule", we will need to calculate the number of parameters for a given model. For the Bounded Rank and Basis Spline models, this is straightforward. For the Bounded Rank Model, the number of degrees of freedom is \(K(N+T-1)\), and for the Basis Spline Model, it is \(K(N+L-1)\), where \(L\) is the dimensionality of the basis spline transformation matrix \(C\). For the Fused Lasso Model, we cannot simply calculate the number of entries in the matrices \(A\) and \(B\). We will use a result that applies to generalized lasso problems under weak assumptions [14]. In our case, we will estimate the degrees of freedom in matrix \(B\) as \(\|\Delta_{t}B\|_{0}\), or the count of non-zero successive differences along the time axis of \(B\). The total degrees of freedom for the Fused Lasso Model is \(K(N-1)+\|\Delta_{t}B\|_{0}\).
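These counts translate into a small helper, sketched below; the tolerance used to decide whether a successive difference counts as non-zero is an assumption.

```python
import numpy as np

def degrees_of_freedom(model, K, N, T=None, L=None, B=None, tol=1e-8):
    """Degrees of freedom used when applying the one standard error rule."""
    if model == "bounded_rank":
        return K * (N + T - 1)
    if model == "basis_spline":
        return K * (N + L - 1)
    if model == "fused_lasso":
        # K*(N-1) for A plus the count of non-zero successive differences in B
        return K * (N - 1) + int(np.sum(np.abs(np.diff(B, axis=0)) > tol))
    raise ValueError(f"unknown model: {model}")
```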
We note that the theorem in [14] applies only to generalized lasso problems, and in our case, we use an iterative approach of which the fused lasso is just a subroutine. Therefore, the results may not hold precisely in our case. However, we are using the "one standard error rule" merely as a heuristic, and we do not require absolute accuracy in estimating the degrees of freedom.
We reiterate that this is a general rule, and the user can use any rule to select hyperparameters. If a user has domain knowledge which suggests that a certain rank is appropriate, then they could simply select that rank. If a user wants a more parsimonious model, they could use a two standard error rule or a three standard error rule. In some cases, the cross validation error may have a clear elbow, which could suggest an ideal rank. The "one standard error rule" is used simply as a baseline when no obvious choice exists.
## 3 Results
### Simulation Experiments
We first performed experiments on simulated data, where the true rank of the difference matrix \(Y-X\) was known. We fit each of the three models to the difference matrix and evaluate performance through cross validation. The simulation setup is as follows:
1. Generate \(A\) as a \(N\times K\) matrix, where \(A_{ij}\stackrel{{\text{iid}}}{{\sim}}\text{Unif}(-1,1)\).
2. Generate \(B\) as a \(T\times K\) matrix. For each column \(k\), select nine random breakpoints \((b_{1}^{k},...,b_{9}^{k})\) between \(1\) and \(T-1\). Set \(b_{0}^{k}=0\) and \(b_{10}^{k}=T\). Set \(B_{b_{i}^{k}:b_{i+1}^{k},k}\) to be a random constant between \(0\) and \(1\). Thus each column of \(B\) is piecewise constant with \(10\) pieces.
3. Let \(C=AB^{T}\), then normalize to have standard deviation \(1\) across all elements.
4. Let the simulated difference matrix \(Y-X\) be \(D\), where \(D_{ij}=C_{ij}+\epsilon_{ij}\) and \(\epsilon_{ij}\stackrel{{\text{iid}}}{{\sim}}N(0,\sigma=0.1)\). Note that we do not have to simulate \(X\) or \(Y\) individually, only the difference.
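A compact sketch of this generator is given below; where the description leaves details open (for example the distribution of the piecewise-constant levels, drawn here as Unif(0, 1), and how ties among breakpoints are avoided), the choices are illustrative.

```python
import numpy as np

def simulate_difference_matrix(N=51, T=699, K=5, sigma=0.1, seed=0):
    """Generate the simulated difference matrix D = C + noise described above."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-1, 1, size=(N, K))                      # step 1
    B = np.zeros((T, K))
    for k in range(K):                                       # step 2
        breaks = np.sort(rng.choice(np.arange(1, T), size=9, replace=False))
        bounds = np.concatenate(([0], breaks, [T]))
        for b0, b1 in zip(bounds[:-1], bounds[1:]):
            B[b0:b1, k] = rng.uniform(0, 1)                  # 10 constant pieces per column
    C = A @ B.T
    C /= C.std()                                             # step 3: unit standard deviation
    return C + rng.normal(0.0, sigma, size=(N, T))           # step 4

D = simulate_difference_matrix(K=5)                          # example usage
```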
In our simulations, we used \(N=51\) and \(T=699\) as in our real-world analysis. We experimented with \(K\in\{5,10,40\}\). For the simulations, we will discuss the Fused Lasso Model (including \(\lambda=0\), which is equivalent to the Bounded Rank Model). The Basis
Spline Model does not perform well in the simulations, as will be discussed in the next section.
For \(K=5\), the rank selected by cross validation is 4, as shown in Fig 3. In applications where the signal-to-noise ratio is low, our methods will have difficulty in detecting all of the heterogeneity. In this simulation, we can decompose the signal and noise exactly, and perform SVD on the signal and noise separately to determine the signal-to-noise ratio. The first 4 singular values of the signal matrix \(C\) are larger than those of the noise matrix \(\epsilon\), but the remainder are smaller. We do not see a strong signal in all 5 singular values partially because the rows of \(AB^{T}\) were not constructed to be orthogonal. This supports the hypothesis that the optimal rank was not found due to the low signal-to-noise ratio. We see a similar pattern for \(K=10\) (Fig 4), where the rank selected by cross validation is 8 and the first 8 singular values of \(C\) are larger than those of \(\epsilon\). Once \(K\) exceeds the optimal rank, the cross validation error slowly increases, just as would be expected if the model were overfitting.
Fig 5 shows that for \(K=40\), the rank with the minimum cross validation error is indeed 40, using the Fused Lasso Model with \(\lambda=1\). Even though the signal-to-noise ratio is higher than 1 for only the first 11 singular values, the correct rank is selected. This is a somewhat surprising result, and might be attributable to the penalty encouraging the rows of \(B\) to be piecewise constant. This may allow the model to detect even parts of the signal that are weaker than the noise.
### COVID-19 Insurance Claims and Reported Cases
Insurance claims are a useful data source in modeling and forecasting epidemics. They provide information about how many people are sick enough to seek medical care, which is potentially more useful than simply the number of people who are infected but potentially asymptomatic. They can also be available at high geographic and temporal resolution, as well as cover a large proportion of the total population. In this section, we will use a dataset of aggregated insurance claims provided by Optum. The signal is the fraction of all outpatient claims with a confirmed COVID-19 diagnosis code, followed by smoothing and removal of day-of-week effects [15]. Despite the advantages of claims
Fig 3: Although the true rank of the correction matrix is 5, the optimal rank selected is 4 and \(\lambda=1\). For the Bounded Rank Model (BR) and small values of \(\lambda\) for the Fused Lasso Model (FL), a clear overfitting curve appears. Even when applying the one standard error rule (1se), the same model is selected (as denoted by the square and triangle).
Fig 4: Although the true rank of the correction matrix is 10, the optimal rank selected is 8 and \(\lambda=0.1\) (denoted by the square). For the Bounded Rank Model (BR) and small values of \(\lambda\) for the Fused Lasso Model (FL), a clear overfitting curve appears. When applying the one standard error rule (1se), the model selected has rank 7 and \(\lambda=1\) (denoted by the triangle).
Fig 5: The true rank of the correction matrix is 40, and the optimal rank selected is 40 with \(\lambda=1\) (denoted by the square). For the Bounded Rank Model (BR) and small values of \(\lambda\) for the Fused Lasso Model (FL), a clear overfitting curve appears with the minimum significantly lower than the optimal rank. When applying the one standard error rule (1se), the rank selected is 13 with \(\lambda=1\) (denoted by the triangle).
datasets, they are often subject to spatial and temporal heterogeneity, as we will demonstrate.
We used reported COVID-19 cases from Johns Hopkins [16] as our guide signal to correct for heterogeneity in the insurance claims signal. As in the simulation experiments, we used the hyperparameter selection scheme described in Section 2.5. Because we believe that the effects of heterogeneity here are multiplicative rather than additive, we applied preprocessing steps as described in Section 2.4. We set \(X\) to be the log of the insurance claims signal, and \(Y\) to be the log of the reported cases signal, each with a pseudocount of \(\epsilon=1\) to account for zeros.
Unlike in the simulation experiments, we do not see a clear overfitting curve. As shown in Fig 6, the cross validation error decreases as \(K\) increases and \(\lambda\) decreases (as the model's complexity increases) and then flattens. The model with the best cross validation error has \(K=50\), where the rank of the difference matrix is \(51\). Clearly, we do not want to use this model, since we do not believe that the heterogeneity present in the claims signal has rank \(50\) out of a possible \(51\). This is where the "one standard error rule" is useful. It selects the Fused Lasso Model with rank \(K=12\) and \(\lambda=1\). Although this model still has a higher rank than we may have thought appropriate, it is much simpler than the model which minimizes cross validation error.
The Basis Spline Models perform poorly on this dataset, as shown in Fig 7. For very small knot intervals, some models are candidates for selection under the "one standard error rule", but their degrees of freedom are larger than those of the Fused Lasso Models. We examine the behavior of the Basis Spline Models in Fig 8, where we see that there is overfitting if the knot interval is too short (many parameters) and underfitting if the knot interval is too long (fewer parameters). After performing linear interpolation, the overfitting model ends up with reasonable accuracy. However, the basis splines themselves do not accurately represent the temporal corrections. We conclude that the assumption of cubic splines is too rigid in this case. The splines simply cannot fit well to the data, likely due to abrupt changepoints that the Fused Lasso Models are able to handle better.
In Fig 9, we illustrate the benefit of applying heterogeneity corrections. The raw
Fig 6: Cross validation error is optimized at \(K=50\) using the Bounded Rank Model (BR), i.e. \(\lambda=0\), indicated by the square. However, when applying the one standard error rule, we select the Fused Lasso Model (FL) with \(K=12\) and \(\lambda=1\), indicated by the triangle. This results in a great reduction in parameters with a small decrease in cross validation accuracy.
insurance claims signal is quite different than the reported case signal in late summer in 2020. The state with the highest claims signal is New York, even though New York has one of the lowest rates of confirmed cases. After applying heterogeneity correction using cases as a guide, the insurance claims signal looks more similar to the reported case signal, improving the comparability of the insurance claims signal across states.
### Evaluating Preprocessing Assumptions
As mentioned above, we applied a log transform to the data, assuming that the heterogeneity effects are multiplicative rather than additive. We can test that assumption by comparing the following three models.
1. Bounded Rank Model with rank \(k=1\) (BR-1): \[\min_{a,b}\sum_{i=1}^{N}\sum_{t=1}^{T}(\log X_{it}+a_{i}\cdot b_{t}-\log Y_{it })^{2}\]
2. Additive Model in log space (AL): \[\min_{a,b}\sum_{i=1}^{N}\sum_{t=1}^{T}(\log X_{it}+a_{i}+b_{t}-\log Y_{it})^{2}\]
Fig 9: Applying a rank-2 correction improves similarity between reported COVID-19 cases and the insurance claims signal. On the left, the average daily confirmed COVID-19 cases between August 15 and September 15, 2020 are displayed in a choropleth map. On the right, we display the value of the insurance claims signal for the same time period before (top) and after (bottom) applying a rank-2 heterogeneity correction using the Bounded Rank Model. The pre-correction %CLI map is not similar to the cases map, but the post-correction %CLI map is.
3. Additive Model in count space (AC): \[\min_{a,b}\sum_{i=1}^{N}\sum_{t=1}^{T}\left(\frac{X_{it}+a_{i}+b_{t}}{Y_{it}}-1 \right)^{2}\]
All of these models have \(N+T\) parameters and a total of \(N+T-1\) degrees of freedom, with a single parameter for each location and a single parameter for each day, with no regularization. In the first two models, the heterogeneity is assumed to be additive in the log space, or multiplicative in the count space. In the AC model, the heterogeneity is assumed to be additive in the count space. In the BR-1 model, the heterogeneity parameters are multiplied together, whereas in the AL model, they are added. Note that we minimize the relative error for the AC model so that the objectives of all three models are directly comparable.
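The three comparison fits have simple closed or nearly closed forms; a sketch is given below. The BR-1 factors come from a rank-one SVD, the AL parameters from row and column means, and the AC parameters from an alternating weighted least squares loop, which is an illustrative solver choice rather than the one used here.

```python
import numpy as np

def fit_br1(logX, logY):
    """BR-1: best rank-one a b^T with log X + a b^T ~ log Y."""
    R = logY - logX
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return U[:, 0] * s[0], Vt[0]

def fit_al(logX, logY):
    """AL: additive model log X + a_i + b_t ~ log Y (row/column mean solution)."""
    R = logY - logX
    return R.mean(axis=1) - R.mean(), R.mean(axis=0)

def fit_ac(X, Y, n_iter=100):
    """AC: minimize sum_it ((X_it + a_i + b_t) / Y_it - 1)^2 by alternating updates."""
    r, w = X - Y, 1.0 / Y**2
    a, b = np.zeros(X.shape[0]), np.zeros(X.shape[1])
    for _ in range(n_iter):
        a = -(w * (r + b)).sum(axis=1) / w.sum(axis=1)
        b = -(w * (r + a[:, None])).sum(axis=0) / w.sum(axis=0)
    return a, b
```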
We display the mean squared error between \(\log\tilde{X}\) and \(\log Y\) for each of the three models below, with the standard error in parentheses. The models BR-1 and AL, which assume that heterogeneity is multiplicative, perform much better than AC, which assumes that heterogeneity is additive. This supports our initial assumption that the effects of heterogeneity in this particular signal pair are multiplicative.
The AL model performs slightly better than the BR-1 model, which weakly suggests that the spatial and temporal parameters should be added instead of multiplied together. However, the AL model cannot be generalized to higher rank corrections. Therefore, we cannot use this model in practice, as we believe that the effects of heterogeneity are too complex to be modeled solely by a single parameter for each location and for each time point.
\begin{tabular}{|c|c|c|c|} \hline Model & BR-1 & AL & AC \\ \hline MSE & 0.2205 (0.00220) & **0.2138 (0.00235)** & 0.7611 (0.00919) \\ \hline \end{tabular}
### Google Trends and CTIS Survey
Google has made public an aggregated and anonymized dataset of Google search queries related to COVID-19 [17]. An indicator derived from this dataset roughly measures the relative prevalence of a specific set of search queries in a given location and time. Ideally, this indicator could inform us approximately how many people are symptomatic at a given time at a very minimal cost. However, search query behavior is affected by many other factors other than whether a person is symptomatic. People may be more likely to search for COVID-19 related terms if someone famous was reported as infected, or if public health measures were enacted or lifted. These can create both spatial and temporal heterogeneity in the indicator.
We used the COVID-19 Trends and Impact Survey (CTIS) as a guide signal. Specifically, our guide is the estimated percentage of people with COVID-like illness from the survey. We used the hyperparameter selection scheme as above.
The results here, shown in Fig 10, look more similar to the results in the simulated dataset. The model that performs best in cross validation has a rank of 7, and when applying the one standard error rule, the optimal model is a Fused Lasso Model with rank \(K=4\) and \(\lambda=1\). With increasing \(K\), the cross validation errors increase, indicating that some overfitting can occur. Here as well, the Basis Spline Models perform poorly (not pictured), mirroring the pattern seen in the insurance claims experiment.
We examine the temporal components of the optimal model in Fig 11. As expected, the components are piecewise constant across time. The first (most important) component is mostly negative in the beginning of the pandemic and spikes during the
Omicron wave. By using the CTIS survey as a guide, we correct the Google Trends signal downwards in the beginning of the pandemic and upwards during the Omicron wave.
One possible explanation for this heterogeneity is the decline in public attention and anxiety regarding the COVID-19 pandemic. In the beginning and middle of 2020, many asymptomatic people entered COVID-related searches into Google, resulting in a positively biased signal. Throughout most of 2021, minimal corrections are made and the two signals are at their strongest agreement. During the Omicron wave around the beginning of 2022, our method applies a strong positive correction to the Google Trends signal. According to the CTIS signal, COVID-19 cases are highest at this point, but the Google Trends signal does not increase to the same extent, so a further positive correction is needed. One possible explanation is that fewer symptomatic individuals were appearing in the Google signal, potentially because they were more confident that they indeed had a COVID-19 infection, or because they were less anxious. Another explanation could be that fewer non-symptomatic individuals were appearing in the Google signal, potentially because they were less interested in the pandemic. Whatever the exact reason, the corrections show that the Google signal suffers from temporal heterogeneity, which can be corrected by using the CTIS survey as a guide signal.
## 4 Discussion
As explained above, we define heterogeneity as the presence of location-dependent or time-dependent bias between an indicator and its unobserved ground truth. Indicators are useful sources of information for modeling, mapping, and forecasting epidemics, but conclusions derived from the indicators in the presence of heterogeneity may be suspect. The problem of heterogeneity is poorly suited to translate into an optimization problem in the absence of any ground truth data. Therefore, we use another signal as a guide, and present a method that can use the guide strongly or weakly.
Our method appears to be useful on several pairs of COVID-19 indicators. As Fig 9 shows, the raw COVID-19 insurance claims signal gives a very different picture than reported cases. If we were to use the insurance claims signal to understand the current
Fig 10: When correcting the Google Trends signal, cross validation error is optimized at \(K=7\) using the Fused Lasso Model (FL) with \(\lambda=0.1\), indicated by the square. However, when applying the one standard error rule, we select \(K=4\) and \(\lambda=1\), indicated by the triangle.
COVID-19 burden across the United States, we could be very misinformed.
The flexibility of our approach is both its main strength and main weakness. On the one hand, the models discussed in this paper can be used for any generic signal and corresponding guide. The user can choose the appropriate parameters based on domain knowledge, exploratory data analysis (e.g. an elbow plot), or the cross validation scheme described above. Because heterogeneity is not straightforward to quantify, we require flexibility to cover a variety of use cases.
However, this flexibility requires a method to select hyperparameters. In simulations, cross validation yields a reasonable choice of hyperparameters, but in a real-world setting, the hyperparameters selected by cross validation lead to a model that seems to overadjust. Cross validation might lead to model overadjustment because there are dependencies between the left-out data and the training data. In this case, just as we would expect to overfit in a normal prediction setup, we would expect to overadjust to the guide signal. Additionally, the error metric also encourages overadjustment, since we minimize the squared error between the corrected signal and the guide.
Another significant limitation of this approach is that the guide signal needs to be more reliable than the indicator we are trying to correct. Using Fig 9 as an example again, we see that \(Y\) is low in New York but \(X\) is high, and that after applying our heterogeneity correction, \(\tilde{X}\) is low. This is only an improvement if \(Y\) is correct, that true COVID-19 activity in New York is actually low. In this case, we have domain knowledge to suggest that reported cases suffer from spatial heterogeneity less than insurance claims. However, were we to treat cases as \(X\) and insurance claims as \(Y\), then our "corrected" case signal would be incorrectly high in New York.
An important extension to this approach would be modifying the hyperparameter selection scheme. A better scheme would not default to overadjustment so strongly and would not use an error metric that is optimized when fit exactly to the guide signal. Another extension would be the use of multiple guide signals \(Y_{1},\ldots,Y_{m}\). A simple start would be to set \(Y=\alpha_{1}Y_{1}+\cdots+\alpha_{m}Y_{m}\) and then apply the heterogeneity correction using \(Y\) as the guide signal. Intuitively, if the sources of heterogeneity in the various guides are uncorrelated, then they will tend to cancel out as a result of this averaging,
Fig 11: We plot the temporal components of the heterogeneity correction between the CTIS and Google Trends signal, using the Fused Lasso model with \(K=4\) and \(\lambda=1\). The most prominent corrections occur around January 2022, corresponding with the Omicron wave.
resulting in a spatially and temporally more homogeneous guide. Alternatively, we could view \(X,Y_{1},Y_{2},\ldots,Y_{m}\) as \(m+1\) different signals, and use them with the models discussed above to jointly estimate the underlying latent quantity to which they are all related. Using multiple guide signals will likely also reduce the overadjustment problem, and a more creative approach to incorporating multiple signals might avoid using the error with the guide signal as a performance metric for hyperparameter selection.
Our current setup fits the adjustment matrix in a batch setting, but a future direction would be to modify the algorithm in an online setting. Indicators are commonly used in real-time, so an online algorithm which makes adjustments as new data arrives may be more appropriate for many use cases.
Of the three models we propose, the Fused Lasso Model performs best in both simulated and real-world experiments. However, it is quite expensive computationally, whereas the other two models can be solved rapidly using SVD-based approaches. Given that the Bounded Rank Model usually performs well, it may be preferable to simply use the Bounded Rank Model in some applications. The Basis Spline Model is slightly more sophisticated without a meaningful increase in computation time. However, the assumptions that lie behind the Basis Spline Model seem to be too strong, specifically when there are abrupt changepoints in temporal heterogeneity.
|
2309.10137 | Spatz: Clustering Compact RISC-V-Based Vector Units to Maximize
Computing Efficiency | The ever-increasing computational and storage requirements of modern
applications and the slowdown of technology scaling pose major challenges to
designing and implementing efficient computer architectures. In this paper, we
leverage the architectural balance principle to alleviate the bandwidth
bottleneck at the L1 data memory boundary of a tightly-coupled cluster of
processing elements (PEs). We thus explore coupling each PE with an L0 memory,
namely a private register file implemented as Standard Cell Memory (SCM).
Architecturally, the SCM is the Vector Register File (VRF) of Spatz, a compact
64-bit floating-point-capable vector processor based on RISC-V's Vector
Extension Zve64d. Unlike typical vector processors, whose VRF are hundreds of
KiB large, we prove that Spatz can achieve peak energy efficiency with a VRF of
only 2 KiB. An implementation of the Spatz-based cluster in GlobalFoundries'
12LPP process with eight double-precision Floating Point Units (FPUs) achieves
an FPU utilization just 3.4% lower than the ideal upper bound on a
double-precision, floating-point matrix multiplication. The cluster reaches 7.7
FMA/cycle, corresponding to 15.7 GFLOPS-DP and 95.7 GFLOPS-DP/W at 1 GHz and
nominal operating conditions (TT, 0.80V, 25^oC) with more than 55% of the power
spent on the FPUs. Furthermore, the optimally-balanced Spatz-based cluster
reaches a 95.0% FPU utilization (7.6 FMA/cycle), 15.2 GFLOPS-DP, and 99.3
GFLOPS-DP/W (61% of the power spent in the FPU) on a 2D workload with a 7x7
kernel, resulting in an outstanding area/energy efficiency of 171
GFLOPS-DP/W/mm^2. At equi-area, our computing cluster built upon compact vector
processors reaches a 30% higher energy efficiency than a cluster with the same
FPU count built upon scalar cores specialized for stream-based floating-point
computation. | Matheus Cavalcante, Matteo Perotti, Samuel Riedel, Luca Benini | 2023-09-18T20:26:25Z | http://arxiv.org/abs/2309.10137v1 | # Spatz: Clustering Compact RISC-V-Based Vector Units to Maximize Computing Efficiency
###### Abstract
The ever-increasing computational and storage requirements of modern applications and the slowdown of technology scaling pose major challenges to designing and implementing efficient computer architectures. In this paper, we leverage the architectural balance principle to alleviate the bandwidth bottleneck at the L1 data memory boundary of a tightly-coupled cluster of Processing Elements (PEs). We thus explore coupling each PE with an L0 memory, namely a private register file implemented as Standard Cell Memory (SCM). Architecturally, the SCM is the Vector Register File (VRF) of Spatz, a compact 64-bit floating-point-capable vector processor based on RISC-V's Vector Extension Zve64d. Unlike typical vector processors, whose VRFs are hundreds of KiB large, we prove that Spatz can achieve peak energy efficiency with a VRF of only 2 KiB. An implementation of the Spatz-based cluster in GlobalFoundries' 12LPP process with eight double-precision Floating Point Units (FPUs) achieves an FPU utilization just 3.4% lower than the ideal upper bound on a double-precision, floating-point matrix multiplication. The cluster reaches 7.7F/FCA/cycle, corresponding to 15.7GFLOPSp and 95.7GFLOPSp/W at 1 GHz and nominal operating conditions (TT, 0.80 V, 25\({}^{\circ}\)C), with more than 55% of the power spent on the FPUs. Furthermore, the optimally-balanced Spatz-based cluster reaches a 95.0% FPU utilization (7.6FMA/cycle), 15.2GFLOPSp, and 99.3GFLOPSp/W (61% of the power spent in the FPU) on a 2D workload with \(7\times 7\) kernel, resulting in an outstanding area/energy efficiency of \(171\,\)GFLOPSp/W/mm\({}^{2}\). At equi-area, our computing cluster built upon compact vector processors reaches a 30% higher energy efficiency than a cluster with the same FPU count built upon scalar cores specialized for stream-based floating-point computation.
RISC-V, Vector Processors, Computer Architecture, Embedded Systems-on-Chip, Machine Learning.
## I Introduction
The pervasiveness of Artificial Intelligence (AI) and Machine Learning (ML) applications triggered an explosion of computational requirements across many application domains. The required computing of the largest ML model doubles every 3.4 months, while its parameter count doubles every 2.3 months [1, 2]. As a result, large-scale computing systems struggle to keep up with the increasing complexity of such ML models. In fact, the performance of the fastest supercomputers only doubles every 1.2 years [2], while their power budget is capped around 20 MW by infrastructure and operating cost constraints. Furthermore, smart devices running AI applications at the Internet of Things (IoT) edge [3] are also tightly constrained in their power budget due to battery lifetime and passive cooling requirements. Therefore, small and large modern computing architectures must optimize their compute and data movement energy and delay [4].
Another major issue for present computer architectures stems from the drastic slowdown of technology scaling, particularly Static Random-Access Memory (SRAM) area scaling. For example, the SRAM bit cell area on TSMC's cutting-edge N3E technology node did not scale compared to its previous N5 node, still coming at 0.021 \(\upmu\)m\({}^{2}\), while logic area scaled down by 70% [5]. The flattening of SRAM scaling challenges almost every hardware design. However, it is particularly disastrous for AI hardware, which exploits SRAMs to implement high-bandwidth, low-latency on-chip storage. As a result, modern AI accelerators resort to large swaths of Standard Cell Memories (SCMs) [6], which dominate their total area. Furthermore, interconnects have trouble keeping up with transistor scaling [7]. The increasing memory and bandwidth requirements of modern computing applications demand large interconnect networks, leading to a considerable area overhead due to the interconnect between Processing Elements (PEs) and memories.
Designers often rely on the extreme domain specialization of hardware architectures to boost their energy efficiency and performance at the price of loss of flexibility. This is not sustainable given the rapidly evolving nature of applications and AI models. This paper focuses on fully programmable architectures based on instruction processors. Particularly, we tackle the interconnect and memory scaling issue on the shared-L1 cluster of Figure 1, a generic template for programmable computing architectures. Each shared-L1 cluster contains a set of PEs sharing tightly-coupled L1 memory through a low-latency interconnect [8]. The cluster's L1 memory is typically implemented as a multi-banked SRAM data cache or SPM. In addition, each PE includes private high-bandwidth L0 SCM.
Figure 1: A simple shared-L1 cluster with \(C\) PEs and a multi-banked L1 SPM implementation with \(M\) SRAM banks. |
2309.14351 | To build or not to build -- A queueing-based approach to timetable
independent railway junction infrastructure dimensioning | Many infrastructure managers have the goal to increase the capacity of their
railway infrastructure due to an increasing demand. While methods for
performance calculations of railway line infrastructure are already well
established, the determination of railway junction capacity remains a
challenge. This work utilizes the concept of queueing theory to develop a
method for the capacity calculation of railway junctions, solely depending on
their infrastructure layout along with arrival and service rates. The
implementation of the introduced approach is based on advanced model-checking
techniques. It can be used to decide which infrastructure layout to build, i.e.
whether an overpass for the analysed railway junction is needed. The developed
method hence addresses the need for fast and reliable timetable independent
junction evaluation in the long-term railway capacity calculation landscape. | Tamme Emunds, Nils Nießen | 2023-09-20T12:51:10Z | http://arxiv.org/abs/2309.14351v1 | To build or not to build - A queueing-based approach to timetable independent railway junction infrastructure dimensioning
###### Abstract
Many infrastructure managers have the goal to increase the capacity of their railway infrastructure due to an increasing demand. While methods for performance calculations of railway line infrastructure are already well established, the determination of railway junction capacity remains a challenge. This work utilizes the concept of queueing theory to develop a method for the capacity calculation of railway junctions, solely depending on their infrastructure layout along with arrival and service rates. The implementation of the introduced approach is based on advanced model-checking techniques. It can be used to decide which infrastructure layout to build, i.e. whether an overpass for the analysed railway junction is needed. The developed method hence addresses the need for fast and reliable timetable independent junction evaluation in the long-term railway capacity calculation landscape.
keywords: junction capacity, Markov Chain, queueing, model-checking, railway infrastructure +
Footnote †: journal:
## 1 Introduction
The amount of goods and passengers transported by rail is predicted to increase significantly, hence infrastructure managers not only need to build new railway infrastructure but also find possibilities to increase traffic volume on already existing relations. Both of these tasks require sufficient methods to determine the capacity of all sub-units, i.e. lines, junctions and stations, in a railway network.
While Capacity Determination techniques for railway lines, junctions and stations have already been developed and continue to contribute to more efficient rail transportation today, only some methods actually describe timetable independent approaches, allowing for sophisticated infrastructure-dimensioning
in early stages of the planning process, which is traditionally done in multiple steps (UIC, 2013). Strategical design decisions on a network level may take place decades before operation and are an ongoing process for most infrastructure managers. The technical infrastructure planning does take some time and is done several years in advance, while the definitive timetable construction may be done several months or years prior to operation. Therefore, timetable independent methods determining the capacity of railway infrastructure are of particular importance for the early process stages including 'strategical network design' and 'technical infrastructure planning'.
In this work, classical queueing-based capacity analysis methodology is extended to railway junctions by introducing a Continuous-Time Markov Chain formulation based on routes through the infrastructure of a railway junction. Dependent only on resource conflicts and arrival/service rates of the trains travelling along considered routes, the number of trains waiting for the assignment of their requested infrastructure are estimated by calculating state-probabilities in the introduced Continuous-Time Markov Chain. To the best of our knowledge, this work is the first to utilize state-of-the-art probabilistic model-checking software to perform the computations on models describing railway junction infrastructure. The presented approach does not require timetable data to determine the capacity of a railway junction, enabling early-stage infrastructure dimensioning. Consequently, technical planning decisions, i.e. whether a railway overpass is to be built, can be assisted by capacity calculations with the described approach. With its versatile range of parameters, multiple applications of capacity research, i.e. infrastructure dimensioning for a fixed traffic demand or capacity determination of a fixed junction infrastructure, may be realised. In comparison with other approaches to determine timetable independent capacity, the presented method does not rely on simulations or sampling of possible sequences for timetable-compression, but rather on queueing-systems, formulated for multiple service channels. Further separating this approach from other analytical methods, no deep railway-specific knowledge for a preceding analysis of the considered junction infrastructure is needed, only conflicting routes and either necessary service times or planned operating programs are required as input.
While Section 2 includes a literature review regarding other approaches to the determination of railway capacity, Section 3 introduces formal problem formulations and proposes the new capacity calculation method. In Section 4, the proposed method is compared to a simulation approach with respect to computation times and accuracy of the obtained solutions. A Case Study, highlighting the applicability of the introduced approach, is performed in Section 5. This work is concluded with a summary and discussion in Section 6.
## 2 Related Work
This section provides an overview regarding the state-of-the-art for railway capacity analysis. After the definition of some key types of performance analysis
and frequently used terms, selected examples of literature are matched regarding their associated methodology and capacity definition.
### Terminology of railway performance analysis
With their various requirements to the grade of detail, different stages of the planning process can require an analysis of diverse definitions of railway capacity. While a first definition of railway capacity as the _maximal number of trains traversing a given infrastructure in a given time period under some fixed assumptions_ summarizes the concept in a straightforward manner, the interpretation of the included assumptions determines different levels of capacity. In detail, three types of railway capacity may be distinguished (see also Jensen et al., 2020):
* _Theoretical capacity_: The maximum number of requests (i.e. trains, train-route enquiries), that can be scheduled without conflicts on the given infrastructure under consideration of driving dynamics and installed railway control systems.
* _Timetable capacity_ (sometimes referred to as _maximal capacity_): The maximum number of requests, that can traverse the given infrastructure in acceptable quality when compared to a specified threshold, not only considering driving dynamics, railway control systems, but also operating program specific settings, such as train-mix and arrival processes.
* _Operational capacity_ (sometimes referred to as _practical capacity_): The maximum number of trains, that can traverse the given infrastructure in acceptable operational quality when compared to a specified threshold, considering driving dynamics, railway control systems, operating programs, and additionally respecting disturbances and (knock-on) delays.
Additionally, research has not only been focused on determining the capacity (of any kind), but also on the calculation of the _capacity utilization_, i.e., the amount of available capacity consumed in a given timetable.
While methods determining capacity utilization are mostly dependent on a timetable (_timetable dependent_) or at least a fixed order of a given train set, some methods are capable of determining performance indicators without the need for a previously set timetable (_timetable independent_), making them valuable for infrastructure dimensioning in early stages of infrastructure planning processes. Some approaches build on a given timetable and find maximal subsets of a set of trains that can additionally be scheduled (_timetable saturation_), building a category in between both dependency expressions. In contrast to a timetable, an _operating program_ specifies the demand of train types on given lines or routes, hence being indispensable for most performance analyses.
Depending on the stage of railway infrastructure planning, a timetable may already be available, such that methodologies like _timetable compression_, i.e. routing trains through the infrastructure with a separation of only the minimum headway possible, are useable.
The various railway infrastructure planning stages make use of varying capacity definitions and hence distinct methodologies for the analysis of railway infrastructure. They additionally differ in terms of timetable dependency or granularity of the described infrastructure. During the following subsection, the methodologies _max-plus-algebra_, _optimisation_, _operational data analysis_, _simulation_ and _analytical_ are differentiated.
Some additional distinctions of relevant railway performance analyses are made regarding their analysed infrastructure (lines, junctions, stations or networks), their infrastructure decomposition and their utilized solution methods, e.g. _mixed integer programming_ (MIP) or matrix calculations.
### Literature Review
The described terminology is used in Table 1 to partition some relevant related research regarding their associated category. Additionally, selected utilization, optimisation and delay-propagation methods are briefly introduced in the following, while our contribution is classified along the strongly related analytical approaches. An additional table, categorizing the considered literature can be found in the Appendix in Table A.5.
Describing the determination of **capacity utilization** and introducing threshold values for sufficient operational quality, the UIC Code 406 (UIC, 2004, 2013) is widely used internationally for capacity assessments on railway lines (Abril et al., 2008; Landex, 2009; Goverde et al., 2013) and stations (Landex and Jensen, 2013; Weik et al., 2020). Utilizing the theory of capacity occupation, Max-Plus Automata (Goverde, 2007; Besinovic and Goverde, 2018) give measures for the assessment of railway infrastructure and timetable structure. While
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} & & & & & & & **application** \\ & **capacity** & **timetable** & **microscopic** & **stochastic** & **stochastic** & **specific** \\ & **capacity** & **type** & **timetable** & **junction** & **arrival** & **service** & **infrastructure** \\ & **type** & **independent** & **evaluations** & **process** & **process** & **decomposition** & **technique** \\ \hline timetable & \multirow{2}{*}{util} & \multirow{2}{*}{no} & \multirow{2}{*}{(yes)} & \multirow{2}{*}{no} & \multirow{2}{*}{no} & \multirow{2}{*}{no} & \multirow{2}{*}{no} & \multirow{2}{*}{composation} \\ compression & & & & & & & \\ \hline randomized timetable & \multirow{2}{*}{util} & \multirow{2}{*}{yes} & \multirow{2}{*}{(yes)} & \multirow{2}{*}{no} & \multirow{2}{*}{no} & \multirow{2}{*}{no} & \multirow{2}{*}{no} & \multirow{2}{*}{combinatorial} \\ compression & & & & & & & \\ \hline max-plus-algebras & \multirow{2}{*}{util} & \multirow{2}{*}{(yes)} & \multirow{2}{*}{yes} & \multirow{2}{*}{no} & \multirow{2}{*}{no} & \multirow{2}{*}{no} & \multirow{2}{*}{no} \\ & & & & & & & \\ \hline timetable & \multirow{2}{*}{theo} & \multirow{2}{*}{(yes)} & \multirow{2}{*}{(yes)} & \multirow{2}{*}{no} & \multirow{2}{*}{no} & \multirow{2}{*}{no} & \multirow{2}{*}{no} \\ saturation & & & & & & & \\ \hline capacity & \multirow{2}{*}{theo, util} & \multirow{2}{*}{(yes)} & \multirow{2}{*}{(yes)} & \multirow{2}{*}{no} & \multirow{2}{*}{no} & \multirow{2}{*}{no} & \multirow{2}{*}{mostly MIP} \\ optimisation & & & & & & & \\ \hline operational & \multirow{2}{*}{op} & \multirow{2}{*}{no} & \multirow{2}{*}{no} & \multirow{2}{*}{(yes)} & \multirow{2}{*}{(yes)} & \multirow{2}{*}{no} & \multirow{2}{*}{\begin{tabular}{*}{\begin{tabular}{*}{\end{tabular}{tabular}{*}}} \\ and machine learning \\ \end{tabular} \\ data analysis & \multirow{2}{*}{op} & \multirow{2}{*}{(yes)} & \multirow{2}{*}{no} & \multirow{2}{*}{(yes)} & \multirow{2}{*}{yes} & \multirow{2}{*}{no} & \multirow{2}{*}{\begin{tabular}{*}{\end{tabular}{*}}} \\ analysis & & & & & & \\ \hline simulation & \multirow{2}{*}{op} & \multirow{2}{*}{no} & \multirow{2}{*}{yes} & \multirow{2}{*}{(yes)} & \multirow{2}{*}{yes} & \multirow{2}{*}{no} & \multirow{2}{*}{\begin{tabular}{*}{\end{tabular}}} \\ \hline analytical & \multirow{2}{*}{tt, op} & \multirow{2}{*}{yes} & \multirow{2}{*}{no} & \multirow{2}{*}{yes} & \multirow{2}{*}{yes} & \multirow{2}{*}{no} & \multirow{2}{*}{\begin{tabular}{*}{\end{tabular}}} \\ line capacity & & & & & & \\ \hline \begin{tabular}{l} (prize) analytical \\ node capacity \\ \end{tabular} & \multirow{2}{*}{tt, op} & \multirow{2}{*}{yes} & \multirow{2}{*}{yes} & \multirow{2}{*}{yes} & \multirow{2}{*}{yes} & \multirow{2}{*}{yes} & \multirow{2}{*}{no} & \multirow{2}{*}{\begin{tabular}{*}{\end{tabular}}} \\ \hline \hline introduced & \multirow{2}{*}{tt} & \multirow{2}{*}{yes} & \multirow{2}{*}{yes} & \multirow{2}{*}{yes} & \multirow{2}{*}{yes} & \multirow{2}{*}{yes} & \multirow{2}{*}{no} & \multirow{2}{*}{
\begin{tabular}{*}{\end{tabular}}} \\ here & & & & & & & \\ \hline \multicolumn{7}{l}{Remarks. Please note that a statement in leancaptics means that this feature is partially supported, i.e. for some methods within the group or in an incomplete interpretation only. Furthermore, capacity types utilization (util), theoretical (theo), operational (op) and timetable (tt) are abkirütal.} \\ \end{tabular}
\end{table}
Table 1: Literature considering railway performance estimations
traditional compression methods (UIC, 2004; Goverde, 2007; Abril et al., 2008; Landex, 2009; Goverde et al., 2013) require a timetable to calculate capacity consumption, other approaches have been proposed utilizing randomly generated sequences of train types to overcome timetable dependencies (Jensen et al., 2017, 2020; Weik et al., 2020), focusing on the determination of timetable capacity. An overview and comparison with other capacity consumption methods can be found in Zhong et al. (2023).
Furthermore, **optimisation methods** for the estimation of theoretical capacity have been developed. They mostly formulate (linear) mixed integer programming problems, including approaches for railway lines (Harrod, 2009; Yaghini et al., 2014), stations (Zwaneveld et al., 1996, 2001; Delorme et al., 2001) and networks (Burdett and Kozan, 2006; Burdett, 2015). Some approaches rely on solutions to the railway timetabling problem (Cacchiani and Toth, 2012; Leutwiler and Corman, 2022), building timetables utilizing a given infrastructure in a 'best', as defined by objective functions, manner. Methods may also estimate the capacity occupation of its solution while creating a timetable (Zhang and Nie, 2016). Other approaches are based on the saturation of given timetables, which may also include empty schedules, and optimise the amount of additional traffic (Burdett and Kozan, 2006; Harrod, 2009; Liao et al., 2021).
Going even further, optimisation methods may incorporate rolling stock information (Liao et al., 2021) to the construction of a saturated timetable or estimate the effects of emerging first-order delays to following trains (Mussone and Wolfler Calvo, 2013), handling the propagation of so called _knock on_ delays. For this, Mussone and Wolfler Calvo (2013) extend an approach by De Kort et al. (2003), utilizing max-plus algebras to calculate capacity on lines with some single-track sections, i.e. tunnels.
More detailed insights into operational parameters of specific timetables and detailed infrastructure, rolling stock and delay distribution data can be obtained by utilizing **simulations**. D'Acierno et al. (2019) provide a comprehensive literature review. While being subject to large computational times, simulations are versatile in their use case. As such, the influence on model performance of different parameters can be analysed, i.e. different buffer time distributions to operational capacity estimations (Zieger et al., 2018), analysing the propagation of delays on railway infrastructure.
An investigation of **delay propagation** properties has also been part of further analytical (Goverde, 2010; Buker, 2011) research and subject of machine-learning models (Sahin, 2017; Corman and Kecman, 2018), trained on historical operational data. Sahin (2017) calculates probabilities for different delay states with the help of a Markov Chain, while Corman and Kecman (2018) utilize Bayesian networks. Interested readers are referred to Spanninger et al. (2023) for a detailed review.
Recent approaches make additional use of **operational data** to identify capacity bottlenecks. While some research introduces train traffic data mining approaches to analyse actual operation on railway lines (Graffagnino, 2012) or stations (Armstrong and Preston, 2017), others (Weik, 2022; Corman and Henken, 2022) discuss the applicability of macroscopic fundamental diagrams.
For this, Weik (2022) introduces a simulation approach to highlight their benefit for further macroscopic research and Corman and Henken (2022) provide an overview of open research applications.
Taking randomly distributed inter-arrival and service times into consideration, **analytical methods**, based on queueing theory, have been developed for an efficient timetable or operational capacity analysis in the early planning stages. In (Potthoff, 1970) railway lines and stations are dimensioned, analysing loss probabilities depending on filling and service functions utilizing probability distributions for the arrival process of trains to the analysed station. While Schwanhauser (1974); Wakob (1984); Wendler (2007) introduce measures for an analytical determination of the capacity of lines, Schwanhauser (1978); Niessen (2008, 2013); Schmitz et al. (2017) analyse junctions, incorporating randomly distributed inter-arrival and service durations.
Schwanhauser (1974) formulates the STRELE method to calculate expected values for the waiting times of trains in a line infrastructure, which is extended to junction infrastructure in Niessen (2008). They hence give measures to calculate the **operational capacity** of lines and junctions by comparing the estimated waiting times with threshold values (Schwanhauser and Schultze, 1982).
Potthoff (1970); Schwanhauser (1978); Wendler (2007); Wakob (1984); Niessen (2008, 2013); Schmitz et al. (2017); Weik (2020), however, implement results for the **timetable capacity** of the analysed infrastructure by matching corresponding limits with estimated waiting times for trains without taking delays into account.
Calculating the expected waiting times via probabilities for the loss of an arriving train, Potthoff (1970); Niessen (2008, 2013) formulate methods for station (Potthoff, 1970) and junction infrastructure. For junction capacity estimations, Niessen (2008, 2013) tackles queueing systems with multiple channels, while Schwanhauser (1978) approximates the waiting times in a junction via a single-channel system. This single-channel system approximation is based on route-exclusion probabilities and is adapted in Weik (2020), joining it with line-capacity advancements in Wendler (2007), therefore utilising multi-state service processes and hence more flexible probability distributions when modelling the service process.
_Parameter estimations_ have been done (Wakob, 1984) to obtain a closed formula for approximating the actual waiting times on a railway line with general independent service and inter-arrival time distributions.
Also directly modelling general independent service and arrival processes, Schmitz et al. (2017) formulate multi-dimensional Markov Chains and include phase-type distributions in their approximation. However, their approach is limited to already partitioned junction infrastructure, hence dependent on additional input and deep system knowledge of the algorithm operator. Additionally, they distinguish between train and request types in a queue, resulting in issues with computational memory and scaling, when analysing a more complex infrastructure and/or operating program.
Overall, the methodology landscape is still lacking research combining major advantages of analytical junction capacity methods: Calculating timetable
capacity for multi-channel railway junctions while being easily applicable to a broad range of problem formulations, f.e. infrastructure dimensioning for a fixed traffic demand, and utilizing multi-dimensional Markov Chains, enabling the use of advanced queueing theory methodology, such as probabilistic model-checking.
In this work, infrastructure performance is measured by such an approach, analysing timetable capacity by modelling railway junction infrastructure with a multi-dimensional Continuous-Time Markov Chain. This later discussed model features a route-based infrastructure decomposition, hence being easily applicable to more complex junctions, while respecting arising resource conflicts without the need for additional infrastructure partitioning.
## 3 Methods
### Junction Layout
To evaluate the capacity of a railway system, all sub-units of the network need to be assessed. A common differentiation is made between line and node capacity, while methods describing node capacity can additionally be partitioned describing junction and station capacity.
A _railway line_ is a connection between two origins, usually equipped with single- or double-track infrastructure. If some traffic shares only part of the route and connects to a third origin, a _railway junction_ will be installed to divide the traffic regarding its destination. Unlike at junctions, trains may start and end their journey at _railway stations_. They can be used for passenger- and good-exchange or fulfil primarily operational needs, such as ensuring a possibility for over-takings.
In Figure 1 an infrastructure designs for a railway junction of two double-track lines is given. The _main line_ spans between the two origins A and B, while the _branching line_ connects C with A.
As in most European countries, we consider operation to be defaulting to right-hand traffic, which suggests the usage of route \(r_{1}\) for the direction A to B and routing traffic from B to A via route \(r_{3}\) for the main line. Consequently, routes \(r_{2}\) and \(r_{4}\) are used accordingly for traffic between A and B.
Figure 1: Track-layout for a double-track junction
When dividing traffic on the junction into routes, those four different trails may be considered for both junction layouts (Table 2). The infrastructure can be abstracted along those routes to model its operation.
### Problem Formulation
A railway junction infrastructure \(J=(R,C)\) consists of a set \(R\) of \(k\) routes \(r\in R\) and a _conflict-matrix_\(C\in\{0,1\}^{k\times k}\). Two routes \(r_{i},r_{j}\in R\) are described as _conflicting_, i.e. they cannot be used at the same time, by denoting \(C_{i,j}=1\), and as not conflicting by \(C_{i,j}=0\). Per default, the same route cannot be used twice at a time, hence \(C_{i,i}=1\) for all \(i\in\{1,\ldots,k\}\).
An _operating program_ on the railway junction \(J\) is a set \(A\) of _demands_\(a=(r,n)\) corresponding to a route \(r\in R\) and a total number of trains \(n\in\mathbb{N}\) of that demand in the time horizon \(U\). In order to service a train of demand \(a=(r,n)\in A\), the service time \(t_{\text{service}}(a)\) may be calculated using microscopic railway software. The service time of demand \(a\) is dependent on the infrastructure and safety technology in \(J\) (and adjacent railway lines and junctions), as well as some train-specific characteristics, such as braking percentages, acceleration capabilities, or total mass.
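To make the problem formulation concrete, the following sketch shows one possible Python representation of a junction \(J=(R,C)\) and an operating program \(A\); the class and variable names are purely illustrative, and the demand figures are example values rather than part of the formulation.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Junction:
    """Railway junction J = (R, C): route names and a symmetric conflict matrix."""
    routes: list
    conflicts: np.ndarray   # C[i, j] = 1 if routes i and j cannot be used at the same time


# Junction of Figure 1 / Table 2: r1 (A->B), r2 (A->C), r3 (B->A), r4 (C->A).
double_track_junction = Junction(
    routes=["r1", "r2", "r3", "r4"],
    conflicts=np.array([
        [1, 1, 0, 0],
        [1, 1, 1, 0],
        [0, 1, 1, 1],
        [0, 0, 1, 1],
    ]),
)

# Operating program A: demands (route, number of trains) within the time horizon U (minutes).
U = 60
operating_program = [("r1", 5), ("r3", 5), ("r2", 4), ("r4", 4)]
```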
The problem of dimensioning of railway junctions can be formulated by two inverse statements:
1. Given a railway junction \(J\) and a set of possible operating programs \(\{A_{1},\ldots,A_{l}\}\). Which is the _largest_ operating program \(\hat{A}\in\{A_{1},\ldots,A_{l}\}\) that can be completed on the junction \(J\) with acceptable quality?
2. Given an operating program \(A\) and a set of possible infrastructure layouts \(\{J_{1},\ldots J_{l}\}\). Which is the most affordable infrastructure \(\hat{J}\in\{J_{1},\ldots J_{l}\}\) that is sufficient to complete the desired operating program with acceptable quality?
Note that for both problem statements an ordering of some sort can be given for the set of possible solutions. The set of possible operating programs \(\{A_{1},\ldots,A_{l}\}\) can be sorted by the total number of trains in the operating program for statement (I). Hence, the _largeness_ (as in statement (I)) of an operating program may be evaluated by a given order. Regarding statement (II), the volume of funds needed for the construction of the infrastructure layouts may be the metric for the set of possible infrastructure layouts \(\{J_{1},\ldots J_{l}\}\).
While problem statement (I) can be used to assess the theoretical maximal capacity of a railway junction, railway infrastructure operators could be mostly
\begin{table}
\begin{tabular}{l|l|l|l} Name & Origin & Destination & Conflicts \\ \hline \(r_{1}\) & A & B & \(r_{2}\) \\ \(r_{2}\) & A & C & \(r_{1}\), \(r_{3}\) \\ \(r_{3}\) & B & A & \(r_{2}\), \(r_{4}\) \\ \(r_{4}\) & C & A & \(r_{3}\) \\ \end{tabular}
\end{table}
Table 2: Routes through the junction
interested in solutions to statement (II): The operating program can often be assessed beforehand, e.g. when it is fixed by external factors such as governments promoting a modal shift to railways (Europe's Rail, 2015), or by increasing demand when new industry locations are established.
With the described approach, both problem statements may be solved by using the infrastructure and operating program in the formulation of a queueing system and analyzing its characteristics.
### Queueing System
Queueing theory (Bolch et al., 2006; Zukerman, 2013) has been extensively used to describe the processes in a transportation system. An analysed entity is usually divided into a _service_ and a _waiting_ part (_queue_). Incoming requests may either be assigned to a processing _channel_ or, if none is available, to the next slot in the queue. The Kendall notation (Kendall, 1953) has been developed to abbreviate definitions of Queueing Systems with
\[A/B/n/m.\]
Arrival (\(A\)) and service processes (\(B\)) can be modelled with arbitrary probability distributions. The described Queuing System can contain one or more service channels (\(n\)) and any natural number of waiting slots (\(m\)) - including none or infinitely many. In this work, Exponential (\(M\)) and General independent (\(GI\)) distributions are considered to describe arrival and service processes, but generally, other probability distributions can be utilized as well.
Modelled Queueing Systems are analysed by a set of relevant parameters. For _Markovian_ (or _exponentially distributed_) models (_Markov Chains_), the future evolution of the process depends only on the state at the current point in time (see Zukerman (2013, Ch. 2.4)). Hence, transition rates between the states in a Markov Chain can be given. We denote _arrival rates_ by \(\lambda\) and _service rates_ by \(\mu\). They can be calculated with the use of expected values for the inter-arrival time \(ET_{A}\)
\[\lambda=\frac{1}{ET_{A}}, \tag{1}\]
and service times \(ET_{S}\)
\[\mu=\frac{1}{ET_{S}}. \tag{2}\]
Further, we describe the _occupancy rate_ of a queuing system by
\[\rho=\frac{\lambda}{\mu}. \tag{3}\]
Furthermore, the coefficients of variation of the arrival process \(v_{A}\) and of the service process \(v_{B}\) may be used for estimating supplementary characteristics. Additional parameters include the _estimated length of the queue_ \(EL_{W}\) and the _probability of loss_ \(p_{loss}\), describing the probability that incoming requests cannot be accommodated in the system because all service and waiting slots are occupied.
An example of the use of a Queueing System is the determination of the capacity of a railway line (Wakob, 1984; Wendler, 2007), which is usually described with a \(GI/GI/1/\infty\) system. As an analytical solution to those systems has not yet been found, one can model it as an \(M/M/1/\infty\) system and use an approximation formula (Gudehus, 1976) for the calculation of the expected queue length
\[EL_{W}(GI/GI/1/\infty)\approx EL_{W}(M/M/1/\infty)\cdot c=\frac{\rho^{2}}{1- \rho}\cdot\frac{v_{A}^{2}+v_{B}^{2}}{2} \tag{4}\]
with a factor \(c=\frac{v_{A}^{2}+v_{B}^{2}}{2}\).
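As a minimal illustration of equation (4), the following Python sketch evaluates the approximate expected queue length of a \(GI/GI/1/\infty\) system from the occupancy rate and the two coefficients of variation; the function name and example values are illustrative only.

```python
def expected_queue_length_gi_gi_1(rho, v_a, v_b):
    """Approximate E[L_W] of a GI/GI/1/inf queue: the M/M/1/inf result times the
    correction factor c = (v_A^2 + v_B^2) / 2 of equation (4)."""
    if not 0 <= rho < 1:
        raise ValueError("occupancy rate rho must lie in [0, 1) for a stable queue")
    return rho ** 2 / (1 - rho) * (v_a ** 2 + v_b ** 2) / 2


# Example: rho = 0.6, irregular arrivals (v_A = 0.8), fairly regular service (v_B = 0.3).
print(expected_queue_length_gi_gi_1(0.6, 0.8, 0.3))  # about 0.33 waiting trains on average
```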
Further approximations for the lengths of the queue in general independent arrival and/or service processes have been developed in Wakob (1984); Fischer and Hertel (1990); Wendler (2007); Weik (2020).
Figure 2 introduces a graphical representation of a Continuous-Time Markov Chain (see Ross (2014) for a definition) modelling the \(M/M/1/\infty\) system. It consists of a root state (without a label) as well as states with one currently serviced unit (denoted by 's') and some units in the queue (each denoted by 'w'). Ordering the states by the number of units in the system, i.e. serviced plus currently waiting units, transitions between subsequent states correspond to the arrival of a unit into the Queueing System (denoted by the arrival rate '\(\lambda\)') or the termination of the service of a unit (denoted by the service rate '\(\mu\)').
The following Section generalizes this concept to railway junctions, which allow more complex resource allocations than on a railway line without multiple possible routes.
### Modelling railway junctions
In this work, railway junctions are considered, which differ from railway lines in one central aspect: it may be feasible for multiple trains to use the infrastructure at the same time, depending on the routes they are scheduled to take.
#### 3.4.1 States
When modelling such infrastructure as a Continuous-Time Markov Chain \(MC=(S,T)\), the states \(s\in S\) need to be distinguishable regarding the currently serviced and waited-for route(s).
Figure 2: Markov Chain for a \(M/M/1/\infty\) Queueing System
The general set of possible states
\[\hat{S}=\left\{\left(q_{1},b_{1},\ldots,q_{k},b_{k}\right)|q_{i}\in\{0,\ldots,m\},b_{i}\in\{0,1\}\right\} \tag{5}\]
can be obtained by utilizing the information in the set of routes \(R\) of a junction \(J=(R,C)\). A state \(s=\left(q_{1},b_{1},\ldots,q_{k},b_{k}\right)\in\hat{S}\) contains information regarding the number of trains \(q_{i}\in\{0,\ldots,m\}\) waiting in the queue for route \(r_{i}\) and whether route \(r_{i}\) is currently serviced (\(b_{i}\in\{0,1\}\)).
The state-space \(\hat{S}\) can be further restricted to
\[S=\left\{\left(q_{1},b_{1},\ldots,q_{k},b_{k}\right)\in\hat{S}\,|\sum_{i=1}^{k}\sum_{j\neq i}\left(C_{i,j}b_{i}b_{j}\right)=0\right\} \tag{6}\]
by applying the conflicts described in the conflict-matrix \(C\); the diagonal entries \(C_{i,i}\) are already enforced by \(b_{i}\) being binary. In the following, entries \(q_{i}\) or \(b_{i}\) of a state \(s=\left(q_{1},b_{1},\ldots,q_{k},b_{k}\right)\in S\) are also referenced by \(q_{s,i}\) or \(b_{s,i}\).
#### 3.4.2 Transitions
Transitions \((u,v)=t\in T\) between two states \(u,v\in S\) can correspond to either the _arrival_ of a train for route \(r_{i}\), the _completion of service_ on route \(r_{i}\) or the _choice_ of which waiting train to service next.
The transition rates of arrival-transitions are given by the arrival rate \(\lambda_{i}\), those of service-transitions by the service rate \(\mu_{i}\), and those of choice-transitions by the maximum rate \(M\). These transition rates can be obtained from the described operating program \(A\) and the time horizon \(U\).
An arrival transition between
\[u=\left(q_{1},b_{1},\ldots,q_{i},b_{i},\ldots,q_{k},b_{k}\right) \tag{7}\]
and
\[v=\left(q_{1},b_{1},\ldots,q_{i}+1,b_{i},\ldots,q_{k},b_{k}\right) \tag{8}\]
utilizes the arrival rate
\[\lambda_{i}=\frac{\sum_{(r_{i},n)\in A}(n)}{U}, \tag{9}\]
where the number of trains using route \(r_{i}\), \(\sum_{(r_{i},n)\in A}(n)\), is divided by the time horizon \(U\).
Transitions for a service process can be modeled between
\[u=\left(q_{1},b_{1},\ldots,q_{i},1,\ldots,q_{k},b_{k}\right) \tag{10}\]
and
\[v=\left(q_{1},b_{1},\ldots,q_{i},0,\ldots,q_{k},b_{k}\right), \tag{11}\]
utilizing the service rate
\[\mu_{i}=\frac{1}{\frac{\sum_{(r_{i},n)\in A}(t_{\text{service}}((r_{i},n)) \cdot n)}{\sum_{(r,n)\in A}(n)}}, \tag{12}\]
which corresponds to the reciprocal of the average service time over all routes, weighted by the number of trains on each route.
Choice-transitions \(t=(u,v)\) exclusively start at states \(u\in S\) with
\[\sum_{i=1}^{k}b_{u,i}=0 \tag{13}\]
and at least two conflicting sets of routes \(R_{i},R_{j}\subset R\), with
\[\begin{split} q_{u,o}>0,\ \forall r_{o}\in R_{i}\\ q_{u,p}>0,\ \forall r_{p}\in R_{j},\end{split} \tag{14}\]
which are not operable simultaneously, and end at states \(v\in S\), with at least one serviced route. They correspond to the choice of which route to operate next when multiple options are possible.
Since the choice-transitions should not induce additional time in the system, the maximum rate \(M\) should be large enough such that the additional expected time in the system per choice-transition, \(1/M\), is sufficiently small. In this work, choice-transitions with identical transition rates are included to model the different decisions between the route(s) to be serviced next. Hence, the obtained results can be assumed to be independent of disposition strategies. The maximum rate is further chosen as \(M=600\), corresponding to a rate of \(10\) per second, which is deemed a sufficiently accurate approximation.
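A possible Python sketch of the rate derivation is given below. Equation (12) is read here as a train-weighted average service time over all demands, as described in the text; the data layout (tuples of route and train count, a dictionary of service times per demand) is an assumption made purely for illustration.

```python
def transition_rates(operating_program, service_times, horizon):
    """Arrival rate per route (Eq. (9)) and a common service rate (Eq. (12), read as a
    train-weighted average service time over all demands)."""
    total_trains = sum(n for _, n in operating_program)
    mean_service_time = sum(service_times[(route, n)] * n
                            for route, n in operating_program) / total_trains
    rates = {}
    for route in {route for route, _ in operating_program}:
        trains_on_route = sum(n for r, n in operating_program if r == route)
        rates[route] = {"lambda": trains_on_route / horizon,
                        "mu": 1.0 / mean_service_time}
    return rates
```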
#### 3.4.3 Example
The introduced Continuous-Time Markov Chain \(MC=(S,T)\) can be used to model Queueing Systems with more complex service regulations. In Figure 3, two different examples of railway track layouts are presented. While layout 3a corresponds to a short single-track segment with two possible routes, layout 3b illustrates the infrastructure of a crossover segment with three routes, two of which (\(r_{1}\) and \(r_{3}\)) being operable simultaneously.
Figures 4 and 5 give graphical representations of Continuous-Time Markov Chains modelling the single track segment (Figure 3a) and the crossover segment (Figure 3b). Starting from the leftmost state, arrivals on the two or three routes \(r_{i}\) are possible with their respective arrival rates \(\lambda_{i}\). Since the Queueing System is initially empty, an arriving train starts its service immediately after arriving. While every further train arriving in the single track system (Figure 3a) during the service time of the first train will be allocated to a waiting slot, the
Figure 3: Conflicting routes in different infrastructure
assignment of a second train in the Queueing System for the crossover segment depends on the used routes. If the first train uses \(r_{1}\) or \(r_{3}\) and the second train uses the other of the two, the second train can be serviced immediately. If the two trains share the same route or one of them is using route \(r_{2}\), the second train has to be assigned to a waiting slot.
Those implications of route exclusions continue in both graphs and all combinations of arriving and serviced trains are modeled with one limitation: While in theory, unlimited state spaces and therefore the modelling of \(m=\infty\) waiting positions are possible, an analysis of those models is very complex and in practical operation, the number of waiting trains will always be limited due to space restrictions. Hence, the examples in Figure 4 and 5 contain only one waiting slot per route. Trains arriving to a route while all waiting slots are occupied are therefore neglected.
Table 3 lists the number of states for the two models, rising with the number of waiting positions \(m\). Since the probability of loss \(p_{loss}=p_{loss}(m)\) is decreasing with the number of waiting positions per route, a sufficient number of waiting positions may be calculated by identifying a limit value \(p_{loss}^{*}\) and determining the lowest \(m\in\mathbb{N}\) satisfying
\[p_{loss}(m)\leq p_{loss}^{*} \tag{15}\]
Figure 4: Markov Chain modelling the Queueing System for the single track segment
\begin{table}
\begin{tabular}{l|c|c|c|c|c} number of waiting positions \(m\) & 1 & 2 & 4 & 8 & 16 \\ \hline single track segment & 10 & 23 & 67 & 227 & 835 \\ \hline crossover segment & 28 & 89 & 397 & 2261 & 15013 \\ \end{tabular}
\end{table}
Table 3: Number of states by the number of waiting positions in Queuing Systems for the examples in Figure 3.
iteratively or by testing relevant values of \(m\).
A model for the junction in Figure 1 incorporates more complex route conflicts and additional parameters, leading to substantially higher state numbers, e.g. \(|S|=128\) for an instance with only one waiting slot (\(m=1\)) per route. Hence, we refrain from including a graphical representation of the induced Continuous-Time Markov Chain. In the online repository (Emunds and Niessen, 2023), however, a full description of the used model, obtained with the definition in Section 3.4, is given.
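The state spaces of Table 3 can be reproduced with a short enumeration. The sketch below additionally assumes, in line with the transition rules above, that a waiting train enters service as soon as this is possible without a pending choice between conflicting routes; under this reading it returns the 10 and 28 states of the two examples in Figure 3 with \(m=1\).

```python
from itertools import product


def junction_states(conflicts, m):
    """Enumerate the CTMC states (q_1, b_1, ..., q_k, b_k) of a junction.

    Beyond the conflict restriction of equation (6), a state is kept only if every
    waiting train is actually blocked (its route is busy or conflicts with a serviced
    route), or if the junction is idle and a choice between at least two mutually
    conflicting waiting routes is pending (cf. the choice-transitions above).
    """
    k = len(conflicts)
    states = []
    for bs in product((0, 1), repeat=k):
        if any(conflicts[i][j] and bs[i] and bs[j]
               for i in range(k) for j in range(k) if i != j):
            continue                                  # conflicting routes serviced together
        for qs in product(range(m + 1), repeat=k):
            waiting = [i for i in range(k) if qs[i] > 0]
            if any(bs):
                keep = all(bs[i] or any(conflicts[i][j] and bs[j] for j in range(k))
                           for i in waiting)
            else:
                keep = (not waiting) or any(conflicts[i][j]
                                            for i in waiting for j in waiting if i != j)
            if keep:
                states.append(tuple(x for pair in zip(qs, bs) for x in pair))
    return states


# Examples of Figure 3 with one waiting slot per route (cf. Table 3):
single_track = [[1, 1], [1, 1]]
crossover = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]
print(len(junction_states(single_track, m=1)),   # 10
      len(junction_states(crossover, m=1)))      # 28
```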
### Determination of the estimated length of the queue
While closed-form analytical solutions exist for the estimated length of the queue \(EL_{W}\) for some Queueing Systems (e.g. equation (4); more examples in Fischer and Hertel (1990)), the analysis of more complex systems remains challenging. Since Continuous-Time Markov Chains may be used to model the behavior
Figure 5: Markov Chain modelling the Queueing System for the crossover segment.
of probabilistic programs, the calculation of state probabilities is a substantial part of the verification of software systems. Furthermore, special software tools have been developed to perform so-called _probabilistic model-checking_, which include sophisticated algorithms to calculate state probabilities and enable higher-order evaluations. This work utilizes the model checker _Storm_ (Hensel et al., 2022).
Storm parses Markov Chain models, i.e. formulated in the _PRISM modelling language_(Parker et al., 2000; Kwiatkowska et al., 2011), and gives tools to check specified _properties_ on them. Properties are formulas describing paths on sets of states of a Markov Chain and can be used to calculate probabilities or reachability statements of states. This process is described as probabilistic model-checking and Storm utilizes different _engines_ to perform this task, automatically detecting the most suitable engine for the specified input.
For this work, the introduced Continuous-Time Markov Chain model has been declared in the PRISM modelling language for multiple infrastructure layouts and read into Storm for the calculations of the expected length of the queue of each route. The formulated model can be found in the online repository (Emunds and Niessen, 2023).
Given the probabilities \(p:S\rightarrow[0,1]\) of all states \(s\in S\), the expected length of the queue \(E_{LW,i}\) of a route \(r_{i}\) can be calculated by summing the probabilities of all states containing elements in the queue
\[E_{LW,i}=\sum_{\Psi_{i}(s)>0}p(s)\cdot\Psi_{i}(s), \tag{16}\]
utilizing the function \(\Psi_{i}:S\rightarrow\mathbb{N}^{0}\), giving the number of elements in the queue of the route \(r_{i}\) for a state \(s\in S\).
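Given the generator matrix of the Markov Chain, the state probabilities and equation (16) can be evaluated directly. The sketch below does this for a deliberately small toy chain (an \(M/M/1\) queue truncated at one waiting slot) rather than for a full junction model; all names and values are illustrative.

```python
import numpy as np


def stationary_distribution(Q):
    """Stationary probabilities of a CTMC with generator matrix Q (rows sum to zero):
    solve pi Q = 0 together with sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi


def expected_queue_length(pi, queue_of_state):
    """Equation (16): sum of p(s) * Psi_i(s) over all states."""
    return float(sum(p * q for p, q in zip(pi, queue_of_state)))


# Toy chain: empty / one in service / one in service plus one waiting.
lam, mu = 0.1, 0.5
Q = np.array([[-lam,          lam,  0.0],
              [  mu, -(lam + mu),   lam],
              [ 0.0,           mu,  -mu]])
pi = stationary_distribution(Q)
print(expected_queue_length(pi, queue_of_state=[0, 0, 1]))  # about 0.03 waiting trains
```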
Notice that the calculation of the expected length of the queue \(E_{LW,i}\) on an arbitrary route \(r_{i}\) in (16) relies on the use of Markov Chains, which are solely capable of modelling queueing systems of type \(M/M/s/\infty\).
In order to analyse systems with general independent (\(GI\)) arrival or service processes, factors, utilizing the variation coefficient of the described process, can be introduced. According to Fischer and Hertel (1990), the expected length of a queue in a Queueing system of type \(GI/GI/s/\infty\) can be analysed by using a \(M/M/s/\infty\) system and modifying the results
\[E_{LW,r_{i}}(M/M/s/\infty)\cdot\frac{1}{\gamma}\approx E_{LW,r_{i}}(GI/GI/s/ \infty) \tag{17}\]
accordingly, using
\[\gamma=\frac{2}{c\cdot v_{B}^{2}+v_{A}^{2}} \tag{18}\]
and
\[c=\left(\frac{\rho}{s}\right)^{1-v_{A}^{2}}\cdot(1+v_{A}^{2})-v_{A}^{2}. \tag{19}\]
Here, \(v_{A}\) and \(v_{B}\) correspond to the coefficients of variation for the arrival and the service process respectively.
Following equations (17 - 19), a greater coefficient of variation in either the arrival or the service process yields a larger expected length of the queue. In addition to the introduced coefficients of variation, the factor \(\gamma\) depends on the occupancy rate \(\rho\) (see identity (3)) and the number of parallel service channels \(s\). Here, the length of the queue \(E_{LW,r_{i}}\) corresponds to arrivals on one route \(r_{i}\) only; hence \(s=1\) can be fixed.
This modification can be implemented in the design of threshold values that have been developed for the capacity analysis of railway infrastructure.
Regarding the modelled railway infrastructure (see Section 3.4), \(E_{LW,i}\) has to be calculated for every route \(r_{i}\). Using the threshold values introduced in the next section, every \(E_{LW,i}\) can be verified regarding its sufficiency for acceptable operating quality on the infrastructure.
### Threshold values
Different threshold values for theoretical and operational capacity have been discussed in the literature. In the UIC Code 406 (UIC, 2013) maximum occupancy rates have been introduced to limit capacity utilization. Potthoff (1970) introduces limits for the loss probabilities in railway stations, which are still in practical use for the dimensioning of track numbers in a railway station, i.e. at the German infrastructure manager DB Netz AG (2022).
In this work, the threshold value \(L^{*}_{W,limit}\), introduced in Schwanhauser and Schultze (1982) and likewise still in practical use (DB Netz AG, 2022), is utilized. It corresponds to the maximum number of trains that may be waiting in the analysed line section at any given time for a sufficient performance of the infrastructure. With the proportion of passenger trains among all considered trains, \(p_{pt}\), the threshold value
\[L^{*}_{W,limit}=0.479\cdot\exp(-1.3\cdot p_{pt}) \tag{20}\]
can be specified. A threshold value for the approximating queueing system
\[L_{W,limit}\approx\gamma\cdot L^{*}_{W,limit}=\gamma\cdot 0.479\cdot\exp(-1.3 \cdot p_{pt}) \tag{21}\]
can be obtained by utilizing (17) and (20).
Hence, the performance of railway infrastructure is judged based on the arrival and service processes, including their rates and variation, as well as on the operating program during the analysed time horizon.
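Equations (18)-(21) can be combined into a small helper that returns the threshold used for this comparison. The parameter defaults below follow the values assumed later in Section 5 (\(v_A=0.8\), \(v_B=0.3\)); the function names and the example call are illustrative.

```python
import numpy as np


def gi_gi_correction(rho, v_a=0.8, v_b=0.3, s=1):
    """Correction factor gamma of equations (18)-(19)."""
    c = (rho / s) ** (1 - v_a ** 2) * (1 + v_a ** 2) - v_a ** 2
    return 2.0 / (c * v_b ** 2 + v_a ** 2)


def queue_length_threshold(p_pt, rho, v_a=0.8, v_b=0.3):
    """Threshold L_W,limit of equation (21) for the approximating M/M/1 system."""
    l_star = 0.479 * np.exp(-1.3 * p_pt)              # equation (20)
    return gi_gi_correction(rho, v_a, v_b) * l_star


# Pure passenger traffic (p_pt = 1) at an occupancy rate of 0.5:
print(queue_length_threshold(p_pt=1.0, rho=0.5))      # about 0.37
```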
## 4 Model Performance
Using the formulation of Section 3.4, a Continuous-Time Markov Chain for a railway junction can be obtained. In this work, a maximum of \(m=5\) waiting slots per route has been utilized, as a trade-off between tractability (dependent on the model-size) and accuracy of the model (see Section 3.4.3). By calculating the estimated length of a queue and comparing it to the obtained threshold values (see Section 3.6), the capacity of modelled railway junctions may be assessed. To ensure the quality of the obtained model, the validity of the generated queue-length estimations has to be verified.
Aiming to survey solely the performance of the solution process using the introduced Continuous-Time Markov Chain, \(M/M/s/\infty\) systems have been considered. For this, simulations of a railway junction with multiple incoming railway lines have been built and run on sample data.
### Simulation Architecture
The Simulations have been implemented in Python 3.10.9 (Van Rossum and Drake Jr, 1995; Python Software Foundation, 2022), utilizing SimPy (Team SimPy Revision, 2023). A model of the junction in Figure 1 has been built, including four different routes with route-specific arrival and service processes.
To estimate inter-arrival times for every route, a pseudo-random number generator yields the next inter-arrival time from a specified exponential distribution with an expected value \(ET_{A,r}\) equal to the reciprocal of the mean arrival rate \(\lambda_{r}=\frac{1}{ET_{A,r}}\) of the modelled route. Trains are stored in a first-in-first-out queue for the route and serviced according to the service time acquired by a second pseudo-random number generator, utilizing another exponential distribution with an expected value \(ET_{S}\) equal to the reciprocal of the mean service rate \(\mu=\frac{1}{ET_{S}}\) of the system. During the service of a route \(r\), shared resources for every pair of routes \((r,r^{\prime})\in R\times R\) ensure that no conflicting route \(r^{\prime}\in R\) is able to start service.
An implementation can be found in the online repository (Emunds and Niessen, 2023). To assess the performance of the simulated process, a snapshot of every route's queue-length is taken in every simulated minute. Hence, the mean length of a queue can be obtained easily for every simulation run.
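A stripped-down version of such a simulation, reduced to the single-track segment of Figure 3a with one shared track resource and a combined queue, could look as follows; this is a sketch of the approach rather than the full four-route implementation of the repository, and all parameter values are illustrative.

```python
import random

import simpy


def train_generator(env, track, arrival_rate, service_rate):
    """Poisson arrivals on one route; conflicting routes share the `track` resource."""
    while True:
        yield env.timeout(random.expovariate(arrival_rate))
        env.process(serve(env, track, service_rate))


def serve(env, track, service_rate):
    with track.request() as request:
        yield request                                 # queue until the shared track is free
        yield env.timeout(random.expovariate(service_rate))


def monitor(env, track, samples):
    while True:
        samples.append(len(track.queue))              # queue-length snapshot every minute
        yield env.timeout(1.0)


random.seed(1)
env = simpy.Environment()
track = simpy.Resource(env, capacity=1)               # single-track segment of Figure 3a
samples = []
for _ in range(2):                                     # two conflicting routes
    env.process(train_generator(env, track, arrival_rate=0.1, service_rate=0.5))
env.process(monitor(env, track, samples))
env.run(until=20 * 60)                                 # 20 simulated hours
print(sum(samples) / len(samples))                     # mean number of waiting trains
```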
### Validation
Both, implementations of the simulation and the queueing-length estimations with the formulated Continuous-Time Markov Chain, have been run on a single core of an Intel Xeon Platinum 8160 Processor (2.1 GHz), utilizing a maximum of 3900 MB working memory.
Two different simulation setups have been considered:
1. A simulation with no limit to the number of trains being able to wait in the queues of their requested routes
2. A simulation with a maximum of 5 trains per route being able to wait at the same time for the service on their requested routes
In the second simulation and in the analytical setting, an arriving train is rejected, i.e. not inserted into the respective queue, if its arrival time lies within a time frame where 5 trains are already waiting for the release of the requested route. Here, all routes are capable of having 5 trains waiting for service, hence a total maximum of 20 trains may wait at the same time.
All three solution approaches, both simulation setups and the analytical method, have been set to compute the estimated length of the queue \(E_{LW,r}\) for the route \(r_{3}\) (see Table 2), conflicting with routes in direction of A and C (see Figure 1). Route \(r_{3}\) has been selected as it is one of the routes with the
most conflicts (with \(r_{2}\) and \(r_{4}\)) and it would directly benefit from an overpass construction.
Since those computations have been done to compare the results and running times of the approaches, only a small set of 10 different service rates, between 0.1 and 1.0 trains per minute, has been considered. An arrival rate of 0.1 trains per minute has been set for every route in all computed instances.
Each simulation has been run 100 times for every considered service time, simulating a total of 22 hours for every run. From those 22 hours, the first and last have not been considered, resulting in an evaluated time of 20 hours per simulation and 2000 hours in total for every service time investigated.
For both simulation setups the mean computing time per hour and the mean total run time per 22 hour simulation have been evaluated. Those can be found in Figure 6, which additionally includes the running time of the analytical solution for reference.
The computing time results in Figure 6 clearly indicate that the analytical approach is faster, even if compared to the run of a single simulation hour. Noting the logarithmic scale, for small service rates the analytical approach is faster by a factor of 5 to 10, depending on the considered service rate. Since simulations have to be conducted multiple times in order to receive sufficient results, a comparison with the total simulation times might be more realistic, yielding factors of \(10^{3}\) up to \(10^{4}\).
In general, the computing times of the simulation approach increase significantly with a decreasing service rate. This is probably due to a higher number of trains in the network at any given time, as the mean service time increases with a decreasing service rate, leading to more trains waiting for the release of their requested route. In contrast, simulation runs without a limit on the queue length required almost the same computing time as simulations subject to a maximum of 5 trains per queue.
Additional observations can be made by evaluating the accuracy of the
Figure 6: Computation times needed for the analytical compared to the simulation approach
introduced analytical approach. This can be done by comparing the obtained results of the simulation methods with the results of the queueing-based analytical approach. In Figure 7 the obtained results of the \(E_{LW,r_{3}}\) computations are depicted by including the exact analytical results and the standard deviation area of the simulation results, i.e. the area \(\left[\overline{E}_{LW,r_{3}}-\sigma,\overline{E}_{LW,r_{3}}+\sigma\right]\), surrounding the mean \(\overline{E}_{LW,r_{3}}\) by the standard deviation \(\sigma\).
Noting the logarithmic scale in Figure 7, the results show significant differences between the simulation setups. For the setup without a limit on the number of trains in a queue (Figure 7a), serious discrepancies between the simulation and the analytical results can be found for service rates of less than \(0.3\). The results of the analytical solution are more similar to the results of the simulation setup with a limit of \(m=5\) on the length of a queue (Figure 7b), matching the setting of the queueing-based analytical approach. Hence, it can be assumed that for railway junctions with the fixed predefinitions and an averaged service rate of less than \(0.3\), the number of trains in a queue will on average exceed the limit of \(5\). Consequently, arriving trains are likely to have to wait, resulting in a very poor service quality.
Taking into account the limit for \(M/M/s/\infty\) queueing systems (see Section 3.6), annotated as '\(L_{W,limit}\)' in Figure 7, the accuracy of the introduced queueing-based analytical approach seems to be sufficient for use in practical applications.
## 5 Computational Study
The introduced method for the calculation of queuing lengths can be used to guide infrastructure managers when dimensioning railway junctions. This work particularly focuses on choosing the right infrastructure layout for a given operating program, i.e. problem statement (II) in Section 3.2.
Figure 7: Accuracy of the analytical approach in comparison to the confidence interval of conducted simulations
### Setup
In detail, a case study has been conducted to decide whether or not an overpass should be built for a junction with a given operating program. An overpass is a way to reduce the number of route conflicts in a railway junction; an example of the track layout for the junction of Figure 1 is given in Figure 8.
To show the applicability of the described analytical approach, 23 different operating programs have been considered. All combinations of these 23 operating programs on the two railway lines adjacent to the junction have been built and analysed for their peak traffic hour, resulting in a total of \(23^{2}=529\) different examples. A detailed description of all considered operating program combinations can be found in B; here, we restrict ourselves to six exemplary railway lines.
The operating programs have been selected according to different types of adjacent railway lines; we distinguish between _mixed traffic lines_, _local train lines_, _long-distance train lines_, _freight train lines_, and _urban railway lines_. Additionally, two different loads have been considered for the mixed traffic line. Table 4 lists the selected operating programs.
In Figure 9 the considered combinations of those operating programs are listed. In every entry, the top bar corresponds to the operating program on the _main line_, i.e. the routes from A to B (\(r_{1}\)) and from B to A (\(r_{3}\)), while the bottom bar corresponds to the operating program on the _branching line_, i.e. the routes from C (\(r_{4}\)) and to C (\(r_{2}\)).
\begin{table}
\begin{tabular}{c|c|c|c} operating & \# regional & \# high speed & \# freight \\ program & trains & trains & trains \\ \hline low intensity mixed & 2 & 0 & 1 \\ long distance & 0 & 4 & 0 \\ local train & 4 & 1 & 0 \\ freight train & 0 & 0 & 5 \\ high intensity mixed & 4 & 2 & 2 \\ urban railway & 10 & 0 & 0 \\ \hline \end{tabular}
\end{table}
Table 4: Operating programs considered
Figure 8: Track-layout for a double-track junction with an overpass
The total number of trains \(n_{r}\) on a route is utilized to determine the arrival rate \(\lambda_{r}=n_{r}/U\) for this route, cf. equation (9). Next, Continuous-Time Markov Chains (see Section 3.4) can be formulated for any possible service rate \(\mu\in(0,1]\), using the defined arrival rates \(\lambda_{r}\) and the maximum number of waiting positions per queue \(m=5\), while also taking conflicting routes into consideration. To ensure a sufficient precision while also maintaining efficient computability, all service rates \(\mu\in[0.01,1]\) with a step size of \(0.01\) have been taken into consideration in this work.
Hence, 100 models have been analysed for both junction infrastructure layouts (with/without an overpass) for every considered combination of operating programs. The solving process has been automated using Python 3.10.9 (Van Rossum and Drake Jr, 1995; Python Software Foundation, 2022) and state-of-the-art model-checking software. For this work, the model-checker _Storm_ (Hensel et al., 2022) has been utilized. With its Python interface _Stormpy_ (Junges and Volk, 2023) it is easy to use while also accomplishing competitive results in qualitative benchmarks (Budde et al., 2021). The models have been formulated according to Section 3.4 and expressed in the PRISM modelling language (Parker et al., 2000; Kwiatkowska et al., 2011). Detailed information regarding the solving process and the used model files can be found in the online repository (Emunds and Niessen, 2023).
Figure 9: Operating program combinations considered
Aiming to gain insights about the performance benefit of the infrastructure layout including the overpass (see Figure 8), the expected length of the queue at route \(r_{3}\), from B to A, has been analysed. It has been chosen as it has two conflict points with other routes, one with the other route to A, \(r_{4}\) from C, the other with the route \(r_{2}\) from A to C, which is only conflicting in the junction layout without an overpass structure.
Resulting from the solving process is a grid of expected lengths of the queue for both infrastructure layouts for every considered service rate \(\mu\in[0.01,1]\). In Figure 10 the expected length of the queue at route \(r_{3}\) from B to A for both railway junction layouts of a local train main line and a long-distance train branching line is recorded. Both operating programs are only considering passenger transport, hence a ratio of passenger trains in all considered trains of \(p_{pt}=1\) can be used for the calculation of the threshold value \(L_{W,limit}\).
The combination is assumed to have to deal with an hourly load of 5 trains on the main line, from A to B and back, as well as 4 additional trains on the branching line, from A to C and back. This results in an arrival rate of \(\lambda_{r}=0.083\) for the routes \(r\in\{r_{1},r_{3}\}\) and of \(\lambda_{r^{\prime}}=0.067\) for the routes \(r^{\prime}\in\{r_{2},r_{4}\}\).
Furthermore, the threshold \(L_{W,limit}\) is depicted, modified according to Section 3.6, depending on the occupancy rate \(\rho=\frac{\lambda_{r_{3}}}{\mu}\) for route \(r_{3}\). In this work, the coefficients of variation are assumed to be \(v_{A}=0.8\) for the arrival process and \(v_{B}=0.3\) for the service process, in accordance with standard values in the literature (see Wendler (1999)).
Utilizing the grid of resulting queue-length estimations as well as the calculated threshold, a minimum mean service rate \(\mu_{\min}\), needed for sufficient infrastructure quality, may be obtained. Hence, the maximum mean service time
Figure 10: Estimated lengths of the queue on route \(r_{3}\) for the combination of a local train main line and a long-distance train branching line
\(b_{\max}=\frac{1}{\mu_{\min}}\) for the given operating program on the main and branching lines can be derived. This maximum mean service time can be used to investigate the needed infrastructure by comparing it to the actually achieved service times on the analysed junction, which depend on train- and control-system-specific parameters.
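The final selection step can then be reduced to a few lines: given the grid of service rates, the corresponding queue-length estimates from the Markov Chain models and the thresholds of equation (21), the smallest admissible service rate yields \(b_{\max}\). The sketch below assumes these arrays have already been computed elsewhere.

```python
import numpy as np


def max_mean_service_time(service_rates, queue_lengths, thresholds):
    """b_max = 1 / mu_min, with mu_min the smallest service rate on the grid whose
    estimated queue length stays below the threshold of equation (21)."""
    admissible = service_rates[queue_lengths <= thresholds]
    return 1.0 / admissible.min() if admissible.size else None


service_rates = np.arange(0.01, 1.01, 0.01)
# queue_lengths: one E_LW,r3 value per service rate, taken from the Storm runs;
# thresholds:    gamma(rho) * 0.479 * exp(-1.3 * p_pt) with rho = lambda_r3 / mu.
```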
### Results
The introduced derivation of a maximum mean service time has been applied to all 529 considered operating program combinations for the infrastructure settings with and without an overpass.
In Figures 11 and 12 the results for the selected 36 combinations are shown in heat-maps for both considered railway junction layouts. While both infrastructure settings share the same global distributions of service time requirements, results indicate that under the same load, as fixed by the operational program of the main and branching line, a railway junction including an overpass structure allows for a higher mean service time, while achieving the same operational quality as a railway junction without the overpass structure.
Further investigating the difference between the two infrastructure settings, Figure 13 includes the maximum mean service times (Figure 13a) for all 529 computed combinations of main and branching lines (see also B) and a histogram, representing the distribution of relative differences in the calculated maximum mean service times (Figure 13b).
Figure 11: Resulting maximum mean service times for the considered operating program combinations without an overpass
Taking the logarithmic scale in Figure 13a into account, a wide range of calculated maximum mean service times required for sufficient operational quality can be recognized. While some operating programs, i.e. examples with a low number of total trains in the considered peak hours, are granting sufficient operational quality even for mean service times as high as 10 to 20 minutes, the majority of considered examples require a mean service time of under 5 minutes, with very densely operated main and branching lines demanding maximum mean service times of even under 2 minutes.
By including an overpass in the layout of a planned railway junction, the requirement on the maximum mean service time can be relaxed by a significant margin. For the majority of considered examples, the relative difference in the maximum mean service time lies between 5% and 10%, but the achieved gain in maximum mean service time is diverse. Some examples show virtually no difference between the considered infrastructure layouts, others report maximum mean service time increases of up to 10%.
Crucially, service times depend on various factors, some of which are not under the control of infrastructure managers. Hence, substantial limitations on the lower bound of achievable service times remain, set by physical properties such as the length or the acceleration and braking performance of the rolling stock.
Concluding the computational experiment, differences between the considered infrastructure layouts have been analysed with the introduced method.
Figure 12: Resulting maximum mean service times for the considered operating program combinations with an overpass
For this, the calculation of many different operating program and service rate configurations has been conducted, resulting in substantiated estimations of infrastructure quality requirements. The achieved results indicate that railway junctions with a high total number of trains per hour can benefit from overpass structures to resolve some conflicts on requested routes.
## 6 Discussion and Outlook
This work introduced a novel method for analysing the timetable capacity of railway junctions based on queueing theory. It is applicable to both formulated problem statements (Section 3.2): timetable capacity determination for a given railway junction infrastructure (I) and dimensioning of junction infrastructure for a fixed operating program (II). By modelling railway junction routes as parts of a queueing system, while respecting their parallel service possibilities and taking resource conflicts into consideration, timetable-independent analyses of the examined infrastructure are enabled.
Utilizing classical queueing theory concepts, well established for railway line performance analysis (see Section 3.3), timetable capacity is determined by comparing queue-length estimates obtained from a Continuous-Time Markov Chain representing the considered railway junction (see Section 3.4) with threshold values (see Section 3.6) that depend on parameters of the considered service and arrival process distributions as well as on operating program specifics. In this work, estimations of queue lengths have been carried out by model-checking software, enabling a fast and reliable computation for complex infrastructure dependencies.
The performance of the introduced approach has been studied by an exemplary comparison of the computation times and results of the analytical solution with
Figure 13: Comparison of the obtained maximum mean service times
simulations (see Section 4). Those comparisons indicate that the introduced method is sufficiently accurate for analyses based on threshold values, while also being significantly faster than the applied simulations.
Applying the novel analytical model to operating programs in computational experiments (see Section 5), the selection of sufficient junction infrastructure has been tested. For this, 529 different operating program combinations have been considered, leading to substantiated estimates of the effect of an overpass structure on the timetable capacity of railway junctions. Hence, the introduced method has been shown to be applicable to conceptual questions at both abstract and practical scales.
Even though the concept is already applicable, additional research could still yield substantial benefit. For instance, the modelling of general independent stochastic processes with Markov processes and approximation factors might be improved, e.g. by including phase-type distributions. Similarly, the utilized thresholds for the expected length of a queue were introduced in the context of railway lines; updating these to more suitable measures could improve the real-world applicability. Additionally, the introduced concept could be extended to implementations for railway stations and eventually railway networks. By including delay distributions and delay propagation in the model, similar measures for operational capacity could be enabled.
Utilizing the developed timetable-independent method, infrastructure managers are able to identify bottlenecks in early stages of the planning process. With the achieved computing-time benefits compared to simulation approaches, they are equipped to give junction designers substantiated indicators for the required infrastructure. Hence, the introduced analytical, timetable-independent approach may prove to be a valuable addition to the capacity method landscape.
**CRediT authorship contribution statement**
**Tamme Emunds:** Conceptualization, Methodology, Software, Formal Analysis, Writing - original draft. **Nils Niessen:** Conceptualization, Supervision, Writing - review & editing.
**Declaration of competing interest**
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Acknowledgements
The authors thank Mr. Alexander Bork for his invaluable insights regarding model-checking techniques in general and regarding the software Storm in
particular. Furthermore, the authors thank Mr. Tobias Muller and Dr. Andreas Pfeifer for their supervision and guidance regarding practical implementations as well as providing application relevant case examples. Additionally, the authors thank DB Netz AG for the opportunity of applying the described theory in a practical project.
This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 2236/2. Computational Experiments were performed with computing resources granted by RWTH Aachen University under project rwth1413.
|
2309.11201 | Noise-induced transition from superfluid to vortex state in
two-dimensional nonequilibrium polariton condensates -- semi-analytical
treatment | We develop a semi-analytical description for the
Berezinskii-Kosterlitz-Thouless (BKT) like phase transition in nonequilibrium
Bose-Einstein condensates. Our theoretical analysis is based on a noisy
generalized Gross-Pitaevskii equation. Above a critical strength of the noise,
spontaneous vortex-antivortex pairs are generated. We provide a semi-analytical
determination of the transition point based on a linearized Bogoliubov
analysis, to which some nonlinear corrections are added. We present two
different approaches that are in agreement with our numerical calculations in a
wide range of system parameters. We find that for small losses and not too
small energy relaxation, the critical point approaches that of the equilibrium
BKT transition. Furthermore, we find that losses tend to stabilize the ordered
phase: keeping the other parameters constant and increasing the losses leads to
a higher critical noise strength for the spontaneous generation of
vortex-antivortex pairs. Our theoretical analysis is relevant for experiments
on microcavity polaritons. | Vladimir N. Gladilin, Michiel Wouters | 2023-09-20T10:38:03Z | http://arxiv.org/abs/2309.11201v1 | Noise-induced transition from superfluid to vortex state in two-dimensional nonequilibrium polariton condensates - semi-analytical treatment
###### Abstract
We develop a semi-analytical description for the Berezinskii-Kosterlitz-Thouless (BKT) like phase transition in nonequilibrium Bose-Einstein condensates. Our theoretical analysis is based on a noisy generalized Gross-Pitaevskii equation. Above a critical strength of the noise, spontaneous vortex-antivortex pairs are generated. We provide a semi-analytical determination of the transition point based on a linearized Bogoliubov analysis, to which some nonlinear corrections are added. We present two different approaches that are in agreement with our numerical calculations in a wide range of system parameters. We find that for small losses and not too small energy relaxation, the critical point approaches that of the equilibrium BKT transition. Furthermore, we find that losses tend to stabilize the ordered phase: keeping the other parameters constant and increasing the losses leads to a higher critical noise strength for the spontaneous generation of vortex-antivortex pairs. Our theoretical analysis is relevant for experiments on microcavity polaritons.
## I Introduction
The interest in nonequilibrium phase transitions of quantum many body systems has witnessed a rapid growth over the last decade thanks to the developments in Bose-Einstein condensation in optical systems (micro-cavity polaritons and photons in dye filled cavities) [1], circuit QED [2] and ultracold atomic gases [3]. One of the most elementary phase transitions in these systems is the onset of Bose-Einstein condensation, defined as the emergence of spontaneous long range phase coherence. Where at thermal equilibrium, long range phase coherence appears when the temperature is lowered below a density-dependent critical temperature, in nonequilibrium systems, the phase coherence is determined by the interplay between the hamiltonian and dissipative parts of the dynamics or even between competing dissipative mechanisms [4; 5].
Since quantum fluids of light are only available in one or two dimensions, true long range order is actually absent. In one-dimensional bose gases, both at thermal equilibrium and out of equilibrium, the spatial decay of the first order coherence function is always exponential [6; 7]. In two dimensions and at equilibrium there is the celebrated Berezinskii-Kosterlitz-Thouless phase transition [8; 9] that separates the normal and the superfluid state, with exponential and algebraic decay of the spatial coherence respectively. In equilibrium, the phase dynamics is in the XY universality class and the corresponding universal jump in the superfluid stiffness has been experimentally observed in \({}^{4}\)He [10]. More recently, the flexibility of the platform of ultracold atoms allowed a direct observation of the spontaneous formation of vortex-antivortex pairs above the BKT transition [11]. The ultracold atomic gases are in the weakly interacting regime, for which the transition temperature was computed by Prokof'ev and Svistunov by a clever combination of the linear Bogoliubov approximation and numerical Monte Carlo simulations [12].
For photonic systems out of equilibrium, the phase dynamics is actually in the Kardar-Parisi-Zhang universality class where a nonlinear term in the phase evolution is essential [13; 14]. For one-dimensional polariton systems, the spatial decay of the correlations remains qualitatively unaffected by the nonlinearity in the phase dynamics [15], but a specific spatiotemporal scaling emerges, that was recently observed experimentally [16].
In two dimensions, the KPZ phase dynamics was predicted to make long range phase coherence impossible in isotropic systems [13; 17]. Numerical studies on the other hand have shown a transition toward a state with algebraic decay of the coherence [18] and an associated disappearance of vortex-antivortex pairs [18; 19; 20; 21] without the formation of topological defects even when the spatiotemporal correlations feature KPZ scaling [22; 23]. Since computational resources limit the system sizes for numerical studies, the discrepancy between the renormalisation group studies could be due to finite size effects, but at present it does not seem that the issue is fully settled. Even when the numerically observed BKT transition is due to a limited system size, experimentally available systems necessarily also work with relatively small sizes, so that there is a clear interest in the nonequilibrium BKT transition. Compared to the equilibrium case, the current understanding of the dependence of the BKT critical point on the system parameters is much less mature. The reason here is twofold. First, out of equilibrium the standard Boltzmann-Gibbs ensemble can no longer be used and the steady state has to be characterized by a more involved simulation of the system dynamics. Second, the nonequilibrium dynamics is governed by more parameters: in addition to the system Hamiltonian and environment temperature, also the details of the coupling to the environment come into play in the non-equilibrium situation.
In our previous work on photon condensation [24], we have pinpointed the nonequilibrium BKT critical point with numerical simulations and developed a semi-analytical approach in order to get a better understanding of the location of the critical point. In our numerical simulations, the transition was approached from the ordered side with no vortices present in the initial state. Above a critical value of the noise strength in the stochastic classical field description of the dynamics, vortex-antivortex pairs spontaneously appear, signalling the BKT like transition to the disordered state. Our work involved both numerical simulations and analytical approximations that capture the dependences of the transition point on all the system parameters. The analytical approximation for photon condensates was based on the Bogoliubov approximation, combined with an infrared cutoff set by the inverse vortex core size [25]. In our previous study on the BKT transition for (interacting) polaritons [20], no such analytical estimate was given.
In the present article, we wish to fill this gap. Moreover, we extend our previous results to the regime of vanishing interactions, so that we can elucidate the effect of both the nonequilibrium condition and of interactions on the BKT transition point. When the interactions become small compared to the gain saturation nonlinearity, the vortex core size can significantly deviate from the usual healing length defined as \(\xi=\hbar/\sqrt{mg\vec{n}}\), where \(m\) is the mass, \(g\) the interaction constant and \(\vec{n}\) the density of polaritons in the condensate. The vortex core size appears in our treatment as a good proxy for the inverse of the infrared cutoff that we have to introduce to avoid the divergence of a momentum integral. We therefore carried out a systematic analysis of the vortex size and structure as a function of the strength of the interactions and of the driving and dissipation.
The structure of this paper is as follows. In Sec. II, we introduce our model for polariton condensates and derive the density and phase fluctuations within the linear (Bogoliubov) approximation. In Sec. III, we construct some approximate formulae for the BKT critical point with a few fitting parameters that are able to capture our numerical simulations. We start with a simple approach that is able to capture the main dependencies of the critical point on the system parameters and then present a more refined approach that allows for a very good fitting of the numerical results. Conclusions are drawn in Sec. IV and the vortex structure is discussed in appendix A.
## II Model and linearization
We consider nonresonantly excited two-dimensional polariton condensates. In the case of sufficiently fast relaxation in the exciton reservoir, this reservoir can be adiabatically eliminated and the condensate is described by the noisy generalized Gross-Pitaevskii equation [26; 27; 28; 29]
\[(\mathrm{i}-\kappa)\hbar\frac{\partial\psi}{\partial t}= \left[-\frac{\hbar^{2}\nabla^{2}}{2m}+g|\psi|^{2}\right. \tag{1}\] \[\left.+\frac{\mathrm{i}}{2}\left(\frac{P}{1+|\psi|^{2}/n_{s}}- \gamma\right)\right]\psi+\sqrt{D}\xi.\]
Here \(m\) is the effective mass and the contact interaction between polaritons is characterized by the strength \(g\). The imaginary term in the square brackets on the right hand side describes the saturable pumping (with strength \(P\) and saturation density \(n_{s}\)) that compensates for the losses (\(\gamma\)). We take into account the energy relaxation \(\kappa\) in the condensate [30]. The complex stochastic increments have the correlation function \(\langle\xi^{*}(\mathbf{r},t)\xi(\mathbf{r}^{\prime},t^{\prime})\rangle=2\delta(\mathbf{r}-\mathbf{r}^{\prime})\delta(t-t^{\prime})\). Eq. (1) is a classical stochastic field model that describes all the fluctuations in the system as classical. This model is therefore only valid in the weakly interacting regime \(gm/\hbar^{2}\ll 1\), where quantum fluctuations are small.
For \(\kappa=0\) and homogeneous pumping, the zero-momentum steady state of Eq. (1) is \(\psi_{0}(\mathbf{x},t)=\sqrt{n_{0}}e^{-ign_{0}t/\hbar}\), with \(n_{0}=n_{s}(P/\gamma-1)\). By expressing the particle density \(|\psi|^{2}\) in units of \(n_{0}\), dividing time by \(\hbar(1+\kappa^{2})/n_{0}\), length by \(\hbar/\sqrt{2mn_{0}}\), and noise intensity by \(\hbar^{3}n_{0}/(2m)\), Eq. (1) takes the form:
\[\frac{\partial\psi}{\partial t}= (i+\kappa)\left[\nabla^{2}-g|\psi|^{2}-\frac{i\gamma}{2n_{s}} \frac{1-|\psi|^{2}}{1+\nu|\psi|^{2}}\right]\psi\] \[+\sqrt{D}\xi, \tag{2}\]
where \(\nu=n_{0}/n_{s}\). The steady state density is then in the absence of noise given by [20]
\[\bar{n}=\sqrt{\left(\frac{\kappa+c}{2\kappa\nu}\right)^{2}+\frac{c}{\kappa\nu }}-\left(\frac{\kappa+c}{2\kappa\nu}\right) \tag{3}\]
with \(c\equiv\gamma/(2gn_{s})\).
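As a quick consistency check, Eq. (3) can be verified numerically: in the dimensionless units of Eq. (2), the noise-free homogeneous state satisfies the gain-loss balance \(c(1-\bar{n})=\kappa\bar{n}(1+\nu\bar{n})\), of which Eq. (3) is the positive root. The short Python sketch below does this check; the parameter values are purely illustrative.

```python
import numpy as np


def steady_state_density(kappa, nu, c):
    """Noise-free homogeneous density of Eq. (3), with c = gamma / (2 g n_s)."""
    a = (kappa + c) / (2 * kappa * nu)
    return np.sqrt(a ** 2 + c / (kappa * nu)) - a


# Gain-loss balance of the homogeneous state of Eq. (2): c (1 - n) = kappa n (1 + nu n).
kappa, nu, c = 0.05, 0.5, 0.3                      # illustrative values
n_bar = steady_state_density(kappa, nu, c)
print(n_bar, c * (1 - n_bar) - kappa * n_bar * (1 + nu * n_bar))   # residual ~ 0
```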
In order to gain some insight in the physics of the fluctuations induced by the noise in Eq. (2), one can consider in first approximation the linearized equations for the density and phase fluctuations around the steady state:
\[\psi(\mathbf{x},t)=\sqrt{\bar{n}+\delta n(\mathbf{x},t)}e^{-ig\bar{n}t+i \delta\theta(\mathbf{x},t)} \tag{4}\]
After a spatial Fourier transform, these obey the linearized equations of motion
\[\frac{\partial}{\partial t}\delta\theta_{\mathbf{k}} =-\kappa\epsilon_{\mathbf{k}}\delta\theta_{\mathbf{k}}-\frac{ \epsilon_{\mathbf{k}}}{2\bar{n}}\delta n_{\mathbf{k}}-(g-\kappa\tilde{\gamma} )\delta n_{\mathbf{k}}\] \[+\sqrt{\frac{D}{\bar{n}}}\xi_{\mathbf{k}}^{(\theta)}, \tag{5}\]
\[\frac{1}{\bar{n}}\frac{\partial}{\partial t}\delta n_{\mathbf{k}} =-\kappa\epsilon_{\mathbf{k}}\frac{\delta n_{\mathbf{k}}}{\bar{n }}+2\epsilon_{\mathbf{k}}\delta\theta_{\mathbf{k}}-2(\kappa g+\tilde{\gamma} )\delta n_{\mathbf{k}}\] \[+2\sqrt{\frac{D}{\bar{n}}}\xi_{\mathbf{k}}^{(n)}, \tag{6}\]
where
\[\tilde{\gamma}=\frac{\gamma(1+\nu)}{2n_{s}(1+\nu\bar{n})^{2}}. \tag{7}\]
Using the Ito formula [31], one can obtain from Eqs. (5) and (6) a set of three equations:
\[\frac{D}{\bar{n}\epsilon_{\mathbf{k}}} =2\kappa\left\langle\left|\delta\theta_{\mathbf{k}}\right|^{2} \right\rangle+\left\langle\frac{\delta\theta_{-\mathbf{k}}\delta n_{\mathbf{k}} }{\bar{n}}\right\rangle\] \[+\frac{2(g-\kappa\tilde{\gamma})\bar{n}}{\epsilon_{\mathbf{k}}} \left\langle\frac{\delta\theta_{-\mathbf{k}}\delta n_{\mathbf{k}}}{\bar{n}} \right\rangle, \tag{8}\]
\[\frac{D}{\bar{n}\epsilon_{\mathbf{k}}} =\left[\frac{\kappa}{2}+\frac{(\kappa g+\tilde{\gamma})\bar{n}}{ \epsilon_{\mathbf{k}}}\right]\left\langle\left|\frac{\delta n_{\mathbf{k}}}{ \bar{n}}\right|^{2}\right\rangle\] \[-\left\langle\frac{\delta\theta_{-\mathbf{k}}\delta n_{\mathbf{ k}}}{\bar{n}}\right\rangle, \tag{9}\]
\[\left[\epsilon_{\mathbf{k}}+2(g-\kappa\tilde{\gamma})\bar{n} \right]\left\langle\left|\frac{\delta n_{\mathbf{k}}}{\bar{n}}\right|^{2} \right\rangle=4\epsilon_{\mathbf{k}}\left\langle\left|\delta\theta_{\mathbf{ k}}\right|^{2}\right\rangle\] \[-4\left[\kappa\epsilon_{\mathbf{k}}+(\kappa g+\tilde{\gamma}) \bar{n}\right]\left\langle\frac{\delta\theta_{-\mathbf{k}}\delta n_{\mathbf{k} }}{\bar{n}}\right\rangle, \tag{10}\]
where
\[\epsilon_{\mathbf{k}}=k^{2}. \tag{11}\]
Eqs. (8)-(10) can be solved for the density and phase fluctuations and are accurate when they are small. Close to the BKT transition, this condition however breaks down. In the following, we will outline how these equations can still be used in order to obtain an estimate for the critical point, in analogy with our study of the BKT transition in photon condensates [24].
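For a given momentum, Eqs. (8)-(10) form a linear \(3\times 3\) system that can be solved directly; a minimal Python sketch (with illustrative parameter values in the example call) reads:

```python
import numpy as np


def bogoliubov_correlators(k, D, n_bar, g, kappa, gamma_tilde):
    """Solve Eqs. (8)-(10) for <|dtheta_k|^2>, <dtheta_-k dn_k>/n_bar and
    <|dn_k/n_bar|^2> at a single momentum k (dimensionless units of Eq. (2))."""
    eps = k ** 2
    a = (g - kappa * gamma_tilde) * n_bar
    b = (kappa * g + gamma_tilde) * n_bar
    A = np.array([
        [2 * kappa, 1 + 2 * a / eps,        0.0],                  # Eq. (8)
        [0.0,       -1.0,                   kappa / 2 + b / eps],  # Eq. (9)
        [4 * eps,   -4 * (kappa * eps + b), -(eps + 2 * a)],       # Eq. (10)
    ])
    rhs = np.array([D / (n_bar * eps), D / (n_bar * eps), 0.0])
    return np.linalg.solve(A, rhs)


print(bogoliubov_correlators(k=1.0, D=0.05, n_bar=0.9, g=1.0, kappa=0.05, gamma_tilde=0.2))
```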
## III Approximations for the BKT critical point
### Heuristic estimate of density-phase correlator
In order to obtain our estimate of the critical point, we start by integrating Eq. (8) over all momenta. In the right hand side, we then use that for a homogeneous system
\[\int d^{2}\mathbf{k}\langle\left|\delta\theta_{\mathbf{k}}\right| ^{2}\rangle =\left\langle\delta\theta(\mathbf{x})\,\delta\theta(\mathbf{x}) \right\rangle\equiv\left\langle\delta\theta^{2}\right\rangle \tag{12}\] \[\int d^{2}\mathbf{k}\langle\delta\theta_{-\mathbf{k}}\delta n_{ \mathbf{k}}\rangle =\left\langle\delta\theta(\mathbf{x})\,\delta n(\mathbf{x})\right\rangle \equiv\left\langle\delta\theta\delta n\right\rangle \tag{13}\]
When integrating the left-hand side of Eq. (8) over \(\mathbf{k}\), we assume the presence of a finite UV momentum (energy) cutoff \(k_{+}\) (\(\epsilon_{+}=k_{+}^{2}\)). Our numerical simulations are performed for a lattice with grid size \(h\), for which our UV cutoff equals \(k_{+}=\pi/h\) [i.e, \(\epsilon_{+}=(\pi/h)^{2}\)]. Furthermore, one has to take into account that for the systems, described by nonlinear equations similar to Eq. (2), the use of the linear approximation given by Eq. (11) is physically meaningful [24; 12] only for \(k\) above a certain IR momentum (energy) cutoff \(k_{-}\) (\(\epsilon_{-}=k_{-}^{2}\)). Then the Fourier transform of the left-hand side of Eq. (8) can be represented as \(D[C_{1}+\ln(\epsilon_{+}/\epsilon_{-})]/(4\pi\bar{n})\), where the fitting constant \(C_{1}\) approximates the contribution of momenta smaller than \(k_{-}\).
Physically, the correlator \(\left\langle\delta\theta\delta n\right\rangle\) expresses correlations between the density and current fluctuations (since the velocity is the spatial derivative of the phase). In nonequilibrium condensates, density and velocity fluctuations are correlated because of the particle balance equation: a local suppression of the density leads to a local reduction of particle losses, which is compensated by an outward flow of particles. In the context of the BKT transition, this physics plays an important role, because the density in a vortex core is reduced so that vortices are accompanied by outgoing radial currents. The magnitude of the density-phase correlator was estimated in Ref. [24] for nonequilibrium photon condensates. Following this approach, for the system under consideration here, we obtain
\[\left\langle\delta\theta\,\delta n\right\rangle=\frac{\tilde{\gamma}}{\bar{n} }\langle\delta N^{2}\rangle, \tag{14}\]
where \(\delta N=\int_{0}^{x}\delta n(x^{\prime})dx^{\prime}\). In the case of a plane density wave \(n=\bar{n}(1-a\cos kx)\) one has
\[\left\langle\delta N^{2}\right\rangle=\frac{a^{2}\bar{n}^{2}}{2k^{2}}. \tag{15}\]
At the BKT transition, vortices have to nucleate, which in a continuum model requires strong density fluctuations with amplitude \(\bar{n}\) (i.e. \(a=1\)) [24]. Those strong fluctuations have an appreciable probability only for relatively large momenta \(k\sim k_{+}\), as seen from the fact that the best fit in Ref. [24] corresponds to the effective momentum value \(k\approx 0.3k_{+}\) in Eq. (15). Therefore, we approximate the correlator \(\left\langle\delta\theta\delta n\right\rangle\) by \(C_{2}\bar{n}\tilde{\gamma}/\epsilon_{+}\), where \(C_{2}\sim 1\) is a fitting parameter.
Analogously, the Fourier transform of \(\left\langle\delta\theta_{-\mathbf{k}}\delta n_{\mathbf{k}}\right\rangle/ \epsilon_{\mathbf{k}}\) in the last term of Eq. (8) is approximated by \(C_{3}\bar{n}\tilde{\gamma}/\epsilon_{+}^{2}\) with a fitting constant \(C_{3}\). As a result, we obtain the following approximate expression for the critical noise
\[d_{\mathrm{BKT}} =\left\{2\kappa\langle\delta\theta^{2}\rangle_{\mathrm{BKT}}+ \left[C_{2}+\frac{2C_{3}(g-\kappa\tilde{\gamma})}{\epsilon_{+}}\right]\frac{ \tilde{\gamma}}{\epsilon_{+}}\right\}\] \[\times\frac{4\pi}{C_{1}+\ln(\epsilon_{+}/\epsilon_{-})}, \tag{16}\]
where \(d_{\mathrm{BKT}}\equiv\left.(D/\bar{n})\right|_{\mathrm{BKT}}\).
In line with Refs. [24; 12], we will assume that at the transition \(\left\langle\delta\theta^{2}\right\rangle_{\mathrm{BKT}}=1/2\). In the equilibrium case (and at \(\kappa^{2}\ll 1\)), the IR momentum cutoff is inversely proportional to the healing length, so that the corresponding energy cutoff is \(\sim g\bar{n}\). Since at equilibrium the healing length corresponds to the vortex core size, a natural generalization to the nonequilibrium situation is to take a cutoff based on an estimate of the vortex core size. Our estimate of the vortex core size, detailed in Appendix A, leads to
\[\epsilon_{-}=\bar{n}\left[g+B_{0}\tilde{\gamma}\left(\frac{B_{0}\tilde{\gamma}}{g+B _{0}\tilde{\gamma}}\right)^{3}\right], \tag{17}\]
where \(B_{0}=0.524\). The average density \(\bar{n}\) in Eq. (17) will be approximated by its steady-state value in the absence of noise, Eq. (3).
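For illustration, Eqs. (16) and (17) can be evaluated as sketched below; the fitting constants \(C_{1}\), \(C_{2}\), \(C_{3}\) are left as arguments and take the values quoted in the next paragraph, and this minimal sketch (with assumed parameter names) is not the fitting code used for the figures.

```python
import numpy as np

B0 = 0.524

def eps_minus(nbar, g, gamma_t):
    """IR energy cutoff of Eq. (17), based on the estimated vortex core size."""
    return nbar * (g + B0 * gamma_t * (B0 * gamma_t / (g + B0 * gamma_t)) ** 3)

def d_bkt_heuristic(nbar, g, gamma_t, kappa, h, C1, C2, C3, theta2_bkt=0.5):
    """Heuristic estimate of the critical noise, Eq. (16), with the UV cutoff
    eps_+ = (pi/h)**2 set by the grid step h and <dtheta^2>_BKT = 1/2."""
    eps_p = (np.pi / h) ** 2
    eps_m = eps_minus(nbar, g, gamma_t)
    bracket = (2 * kappa * theta2_bkt
               + (C2 + 2 * C3 * (g - kappa * gamma_t) / eps_p) * gamma_t / eps_p)
    return bracket * 4 * np.pi / (C1 + np.log(eps_p / eps_m))
```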
The results of fitting the numerical data for \(d_{\rm BKT}\) with Eq. (16) are represented by the dashed lines in Figs. 1 and 2, where the determined fitting parameters are \(C_{1}=8.87\), \(C_{2}=1.64\), and \(C_{3}=5.92\times 10^{-5}\). The small numerical value of \(C_{3}\) implies it can actually be set to zero without affecting the quality of the fits. The numerical data in Figs. 1(a) and 2(a) and the main panels in Figs. 1(b) and 2(b) are taken from Ref. [20]. To numerically solve Eq. (2), a finite-difference scheme was used. Specifically, we use periodic boundary conditions for a square of size \(L_{x}=L_{y}=40\) with grid step equal to \(0.2\). The location of the critical point is determined in the following way: after a long time evolution in the presence of noise, the system was evolved without noise for a short time (a few of our units of time) before checking for the presence of vortices. This noiseless evolution has the advantage of cleaning up the density and phase fluctuations, while it is too short for the unbound vortex-antivortex pairs to recombine. The propensity for their recombination is reduced [20] with respect to the equilibrium case thanks to outgoing radial currents that provide an effective repulsion between vortices and antivortices. To determine the critical noise for the BKT transition, \(D_{\rm BKT}\), we use the following criterion. If for a noise intensity \(D\) unbound vortex pairs are present after a noise exposure time \(t_{D}\) (and hence \(D>D_{\rm BKT}\)), while for a certain noise intensity \(D^{\prime}<D\) no vortex pairs appear even at noise exposures a few times longer than \(t_{D}\), then \(D^{\prime}\) lies either below \(D_{\rm BKT}\), or above \(D_{\rm BKT}\) but closer to \(D_{\rm BKT}\) than to \(D\). Therefore, the critical noise intensity can be estimated as \(D_{\rm BKT}=D^{\prime}\pm(D-D^{\prime})\).
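The decision logic of this criterion can be summarized schematically as follows; here has_unbound_vortices(D, t) is a hypothetical placeholder for the full stochastic simulation of Eq. (2), so the snippet only illustrates the bookkeeping, not the actual workflow used here.

```python
def estimate_d_bkt(noise_values, has_unbound_vortices, t_exposure):
    """Scan increasing noise intensities; return (D', D - D') once a pair
    D' < D is found with no vortices at D' (even for a longer exposure) but
    unbound vortex pairs at D, i.e. D_BKT = D' +/- (D - D')."""
    noise_values = sorted(noise_values)
    for d_prime, d in zip(noise_values[:-1], noise_values[1:]):
        if has_unbound_vortices(d, t_exposure) and not has_unbound_vortices(d_prime, 3 * t_exposure):
            return d_prime, d - d_prime
    return None
```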
As seen from the comparison between the dashed lines and the symbols in Figs. 1 and 2, Eq. (16) qualitatively reproduces the main trends in the behavior of the numerically determined \(d_{\rm BKT}(c,\kappa,\nu,h)\) at relatively small grid steps \(h\), when \(\epsilon_{+}\) is considerably larger than \(\epsilon_{-}\). This qualitative agreement is ensured, in particular, by taking into account the contributions related to density-phase correlation, which are zero in equilibrium systems but play a crucial role for the BKT transition out of equilibrium. At the same time, this simple and transparent heuristic estimate of these contributions does not appear sufficient for a good quantitative description of the numerical results.
### Bogoliubov theory with nonlinear correction
In order to obtain a better quantitative description of the numerics for the nonequilibrium BKT transition, we
Figure 1: Numerically (symbols) and semi-analytically (lines) determined renormalized critical noise \(d_{\rm BKT}=D_{\rm BKT}/n_{\rm BKT}\) as a function of \(c=\gamma/(2n_{s}g)\) (a), \(\kappa\) (b), and \(\nu\) (c). The insets in panels (b) and (c) show the dependence of \(d_{\rm BKT}\) on \(\kappa\) and \(\nu\), respectively, in the case of \(g=0\). The solid and dashed lines correspond to Eqs. (26) and (16), respectively.
develop below a different approach that leads to a slightly more involved expression. To this purpose, we start from the linear approximation for the phase fluctuations in the steady state, obtained by solving Eqs. (8)-(10). Inserting \(D/\bar{n}\) from Eq. (8) and \(\left<\left|\delta n_{\mathbf{k}}/\bar{n}\right|^{2}\right>\) from Eq. (10) into Eq. (9), we obtain the relation
\[\left[\epsilon_{\mathbf{k}}+3g\bar{n}+2\left(g^{2}+\tilde{\gamma }^{2}\right)\frac{\bar{n}^{2}}{\epsilon_{\mathbf{k}}}\right]\left<\frac{ \delta\theta_{-\mathbf{k}}\delta n_{\mathbf{k}}}{\bar{n}}\right>\] \[=2\tilde{\gamma}\bar{n}\left<\left|\delta\theta_{\mathbf{k}} \right|^{2}\right>. \tag{18}\]
Using Eq. (18), we express \(\left<\delta\theta_{-\mathbf{k}}\delta n_{\mathbf{k}}/\bar{n}\right>\) through \(\left<\left|\delta\theta_{\mathbf{k}}\right|^{2}\right>\) and insert the result into Eq. (8). For the phase fluctuations, this leads to the equation
\[\left<\left|\delta\theta_{\mathbf{k}}\right|^{2}\right>=\frac{D}{\bar{n}}f( \epsilon_{\mathbf{k}}), \tag{19}\]
where
\[f(\epsilon)=\frac{1}{2\kappa}\frac{\epsilon+3\bar{n}g+2\left(g^{2}+\tilde{ \gamma}^{2}\right)\bar{n}^{2}/\epsilon}{(\epsilon+\epsilon_{1})(\epsilon+ \epsilon_{2})}. \tag{20}\]
with
\[\epsilon_{1}=\bar{n}\left(g+\frac{\tilde{\gamma}}{\kappa}\right),\quad \epsilon_{2}=2\bar{n}g. \tag{21}\]
From Eqs. (19) and (20), one sees that the phase fluctuations are, as expected, proportional to the noise strength \(D\) and decrease as a function of the density \(\bar{n}\) and energy relaxation \(\kappa\). Regarding their energy dependence, Eq. (20) shows a \(1/\epsilon\) behavior at both small and large energies. As a consequence, the Fourier transform of the phase fluctuations, needed to obtain their real-space correlations, requires the introduction of an infrared cutoff \(\epsilon_{-}\), analogous to the treatment in Sec. III.1. As a result of the Fourier transformation, the local phase variance becomes
\[\left<\delta\theta^{2}\right>=\frac{D}{4\pi\bar{n}}(F+F_{-}) \tag{22}\]
where
\[F= \int\limits_{\epsilon_{-}}^{\epsilon_{+}}f(\epsilon)d\epsilon= \frac{1}{2}\frac{g^{2}+\tilde{\gamma}^{2}}{g(\kappa g+\tilde{\gamma})}\ln \left(\frac{\epsilon_{+}}{\epsilon_{-}}\right)\] \[+\frac{\tilde{\gamma}}{\tilde{\gamma}+\kappa g}\left(\frac{1}{2 \kappa}+\frac{\kappa\tilde{\gamma}}{\tilde{\gamma}-\kappa g}\right)\ln\left( \frac{\epsilon_{+}+\epsilon_{1}}{\epsilon_{-}+\epsilon_{1}}\right)\] \[-\frac{\tilde{\gamma}^{2}}{2g(\tilde{\gamma}-\kappa g)}\ln\left( \frac{\epsilon_{+}+\epsilon_{2}}{\epsilon_{-}+\epsilon_{2}}\right), \tag{23}\]
where the logarithmic dependence on the lower and upper energy cutoffs is a consequence of the \(1/\epsilon\) behavior of \(f(\epsilon)\) at low and high energies. The term
\[F_{-}=C_{-}\epsilon_{-}f(\epsilon_{-}) \tag{24}\]
in Eq. (22) approximates the contribution of the integral over \(\epsilon\) from \(0\) to \(\epsilon_{-}\), where \(C_{-}\) is a fitting parameter.
Expression (22), derived with the use of linearized equations for the phase and density fluctuations, is expected to be applicable when these fluctuations are small. As discussed above, at the BKT transition, where both phase and density fluctuations are large, the real-space correlator \(\left<\delta\theta\delta n\right>\) is mainly determined by the contributions of \(k\sim k_{+}\). According to Eq. (18), the quantity \(\left<\left|\delta\theta_{\mathbf{k}}\right|^{2}\right>\) contains a term that is exactly proportional to \(\left<\delta\theta_{-\mathbf{k}}\delta n_{\mathbf{k}}\right>\). This implies that at the BKT transition the expression for the phase fluctuations \(\left<\delta\theta^{2}\right>\), derived above, needs an additional "nonlinear correction", which would describe an enhanced contribution of large momenta \(k\sim k_{+}\) (large energies \(\epsilon\sim\epsilon_{+}\)). Here, we approximate this correction by adding to \(F\) the term
\[F_{+}=C_{+}\epsilon_{+}\,f(\epsilon_{+}), \tag{25}\]
Figure 2: Numerically (symbols) and semi-analytically (lines) determined renormalized critical noise \(d_{\text{BKT}}\) as a function of the grid step at \(\kappa\geq 0.1\) (a) and \(\kappa=0\) (b) for nonzero \(g\). Inset in panel (b): \(d_{\text{BKT}}\) as a function of the grid step at \(g=0\). The solid and dashed lines correspond to Eqs. (26) and (16), respectively.
where \(C_{+}\) is a fitting parameter. Then at the BKT point we have
\[d_{\rm BKT}=\langle\delta\theta^{2}\rangle_{\rm BKT}\ \frac{4\pi}{F+F_{-}+F_{+}}, \tag{26}\]
where again we take \(\langle\delta\theta^{2}\rangle_{\rm BKT}=1/2\).
Applying Eq. (26) to fit the numerical data for \(d_{\rm BKT}\), we obtain for the two fitting parameters: \(C_{-}=2.24\) and \(C_{+}=7.33\). As compared to the results of the heuristic approach described in the previous subsection (dashed lines in Figs. 1 and 2), the results corresponding to the more involved and accurate Eq. (26), which are shown by the solid lines in Figs. 1 and 2, demonstrate a much better quantitative agreement with the numerically determined \(d_{\rm BKT}\).
The semi-analytical expression for \(d_{\rm BKT}\), given by Eq. (26) together with Eqs. (17), (20), (21), and (23)-(25), can be considered as a function of three independent parameters: \(\tilde{\gamma}/g\), \(\kappa\) and \(\epsilon_{+}/\epsilon_{-}\). In Fig. 3, the renormalized critical noise \(d_{\rm BKT}/\kappa\), corresponding to Eq. (26), is plotted for a wide range of the parameters \(\tilde{\gamma}/g\) and \(\kappa\) at three different values of the ratio \(\epsilon_{+}/\epsilon_{-}\).
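As an illustration, Eq. (26) can be evaluated directly in the original (dimensionful) variables as sketched below; instead of transcribing the closed form of Eq. (23), \(F\) is obtained here by numerical quadrature of \(f(\epsilon)\), which gives the same value. Parameter names are assumptions, and this is not necessarily the code used to produce Fig. 3.

```python
import numpy as np
from scipy.integrate import quad

B0 = 0.524

def f_eps(eps, nbar, g, gamma_t, kappa):
    """Eq. (20), with eps_1 and eps_2 from Eq. (21)."""
    eps1 = nbar * (g + gamma_t / kappa)
    eps2 = 2 * nbar * g
    num = eps + 3 * nbar * g + 2 * (g**2 + gamma_t**2) * nbar**2 / eps
    return num / (2 * kappa * (eps + eps1) * (eps + eps2))

def d_bkt(nbar, g, gamma_t, kappa, h, C_minus=2.24, C_plus=7.33, theta2_bkt=0.5):
    """Critical noise from Eq. (26), with eps_- from Eq. (17) and eps_+ = (pi/h)**2."""
    eps_p = (np.pi / h) ** 2
    eps_m = nbar * (g + B0 * gamma_t * (B0 * gamma_t / (g + B0 * gamma_t)) ** 3)  # Eq. (17)
    F, _ = quad(f_eps, eps_m, eps_p, args=(nbar, g, gamma_t, kappa))              # Eq. (23)
    F_m = C_minus * eps_m * f_eps(eps_m, nbar, g, gamma_t, kappa)                 # Eq. (24)
    F_p = C_plus * eps_p * f_eps(eps_p, nbar, g, gamma_t, kappa)                  # Eq. (25)
    return theta2_bkt * 4 * np.pi / (F + F_m + F_p)
```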
For small losses and not too small \(\kappa\), the ratio \(d_{\rm BKT}/\kappa\) is of order one, in line with the equilibrium BKT transition, where according to the fluctuation-dissipation relation \(D=\kappa T\)[32] and where the critical temperature scales in first approximation as \(T_{\rm BKT}\sim n\). In line with our previous studies for polariton condensates [20] and photon condensates [24], we see that the losses stabilize the ordered phase: when \(\tilde{\gamma}\) is increased at fixed \(\kappa\), the noise required to make the transition to the state with free vortex-antivortex pairs increases. We explained this trend by the reduction of the density fluctuations for increased driving and dissipation [20], which manifests itself through density-phase correlations [24] [see discussions preceding Eq. (16) and Eq. (25)].
In the limit without losses (\(\tilde{\gamma}=0\)), our estimate for the critical point reduces to
\[n_{\rm BKT}=\frac{T_{\rm BKT}}{2\pi}\left[\log\left(\frac{1}{mh^{2}gn_{\rm BKT }}\right)+A_{1}\right]. \tag{27}\]
Here, we have used that \(T_{\rm BKT}=D_{\rm BKT}/\kappa\), defined \(A_{1}=C_{+}+C_{-}+\log(\pi^{2}/2)\approx 11.2\) and restored physical units. We can compare this expression with the equilibrium BKT transition for the weakly interacting lattice Bose gas (Eq. (12) in [12])
\[n_{\rm BKT}=\frac{mT_{\rm BKT}}{2\pi}\log\frac{A}{mh^{2}gT_{\rm BKT}}, \tag{28}\]
with \(A=6080\). This expression can be written as
\[n_{\rm BKT}=\frac{mT_{\rm BKT}}{2\pi}\left[\log\left(\frac{1}{mh^{2}gn_{\rm BKT }}\right)+A_{2}\right], \tag{29}\]
with
\[A_{2}=\log\left[\frac{A}{2\pi}\log\left(\frac{A}{m^{2}h^{2}gT_{\rm BKT}} \right)\right]. \tag{30}\]
Assuming here \(m^{2}h^{2}gT_{\rm BKT}\approx 1\), one obtains \(A_{2}\approx 9.1\), which is reasonably close to our \(A_{1}\approx 11.2\), given the simplicity of our approach and considering that the equilibrium case is actually a somewhat singular limiting case of our model where the gain and losses simultaneously tend to zero.
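These two constants are easily checked numerically (an illustrative snippet only; the exact value of \(A_{2}\) depends mildly on how the \(m^{2}h^{2}gT_{\rm BKT}\approx 1\) approximation is applied):

```python
import numpy as np

A = 6080
A1 = 7.33 + 2.24 + np.log(np.pi**2 / 2)   # C_+ + C_- + log(pi^2/2), about 11.2
A2 = np.log(A / (2 * np.pi) * np.log(A))  # Eq. (30) with m^2 h^2 g T_BKT set to 1, about 9
print(f"A1 = {A1:.2f}, A2 = {A2:.2f}")
```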
## IV Conclusions
In this paper, we have developed a semi-analytical approach to describe the BKT transition point for driven-dissipative weakly interacting Bose gases. We start from the linearized equations of motion for the density and phase fluctuations and subsequently correct phenomenologically for nonlinearities that are important close to the BKT transition. Our resulting analytical formulae contain a few fitting parameters that are fitted to a series of numerical simulations in a wide parameter range. The good fit to our numerical results indicates the validity of the physical intuition underlying our semi-analytical approach and establishes our formulae as a concise summary of the numerical results.
Of course, our numerical results were obtained for a finite-size system and we can therefore not settle what will happen for much larger system sizes, where it remains possible that the KPZ nonlinearity may destabilize the algebraically ordered phase [13; 17], even though recent numerical work has shown that KPZ scaling can be witnessed in 2D nonequilibrium condensates without the phase coherence being destabilized by the formation of vortex-antivortex pairs [22; 23].
## Acknowledgements
We thank Iacopo Carusotto for continuous stimulating discussions. VG was financially supported by the FWO-Vlaanderen through grant nr. G061820N.
|
2309.08988 | Multi-objective tuning for torque PD controllers of cobots | Collaborative robotics is a new and challenging field in the realm of motion
control and human-robot interaction. The safety measures needed for a reliable
interaction between the robot and its environment hinder the use of classical
control methods, pushing researchers to try new techniques such as machine
learning (ML). In this context, reinforcement learning has been adopted as the
primary way to create intelligent controllers for collaborative robots, however
supervised learning shows great promise in the hope of developing data-driven
model based ML controllers in a faster and safer way. In this work we study
several aspects of the methodology needed to create a dataset to be used to
learn the dynamics of a robot. For this we tune several PD controllers to
several trajectories, using a multi-objective genetic algorithm (GA) which
takes into account not only their accuracy, but also their safety. We
demonstrate the need to tune the controllers individually to each trajectory
and empirically explore the best population size for the GA and how the speed
of the trajectory affects the tuning and the dynamics of the robot. | Diego Navarro-Cabrera, Niceto R. Luque, Eduardo Ros | 2023-09-16T13:06:36Z | http://arxiv.org/abs/2309.08988v1 | # Multi-objective tuning for torque PD controllers of cobots
###### Abstract
Collaborative robotics is a new and challenging field in the realm of motion control and human-robot interaction. The safety measures needed for a reliable interaction between the robot and its environment hinder the use of classical control methods, pushing researchers to try new techniques such as machine learning (ML). In this context, reinforcement learning has been adopted as the primary way to create intelligent controllers for collaborative robots, however supervised learning shows great promise in the hope of developing data-driven model based ML controllers in a faster and safer way. In this work we study several aspects of the methodology needed to create a dataset to be used to learn the dynamics of a robot. For this we tune several PD controllers to several trajectories, using a multi-objective genetic algorithm (GA) which takes into account not only their accuracy, but also their safety. We demonstrate the need to tune the controllers individually to each trajectory and empirically explore the best population size for the GA and how the speed of the trajectory affects the tuning and the dynamics of the robot.
torque control, genetic algorithms, PD control
## I Introduction
Collaborative robotics is an emerging field that studies the creation and development of robots designed for safe human-machine interaction, i.e. human-robot collaboration. The motion control of these cobotic systems is a complex problem since it incorporates both active safety measures, such as torque control that aims to minimize the force applied by the joints, and passive measures, like the integration of elastic elements that provide a higher level of compliance in case of an impact with humans or objects in the environment. These measures hinder the calculation of the analytical dynamic model of the cobot, which prevents the use of classical torque-based control algorithms that rely on widely used simple rigid models. Furthermore, position-based control is not well suited for human-robot interaction (HRI) as the commanded motion can carry significant levels of inertia, posing a risk to human safety.
To overcome the reliance on an analytical definition of system dynamics in traditional control theory, machine learning (ML) is being used extensively [1]. ML offers promising control solutions for operating model-free dynamic systems, enabling accurate and safe task performance. Among various learning types, reinforcement learning emerges as the most prevalent due to its capability for generalization and data capture through practice [4]. However, this learning approach does come with certain drawbacks for real systems, including a lengthy learning period and an exploration stage that can pose risks to both the robot and its environment [4].
As a result, in this work, we focus on studying the methodology required to create a database that enables the data-driven learning of a cobot's dynamic model, rather than calculating it analytically [2]. Building upon the previous discussion, our main goal is to generate a dataset that facilitates the study and development of supervised learning models, so that they can be used to avoid risks during the learning stages with reinforcement learning or other adaptive control alternatives. This approach takes advantage of an optimized position control scheme for gathering data.
The database we propose captures the relationship between the reached position and velocity of the cobot and the corresponding applied torque values. Depending on the direction of this relationship (reached position to applied torque values or vice versa), the database can serve as either an inverse dynamic model or a forward dynamic model of the cobot. This database is obtained by executing a representative set of trajectories with the cobot operating in torque control, guided by a proportional-derivative (PD) controller. The PD controller is adjusted using a multi-objective GA that optimizes movement precision and torque values to ensure safety. The extracted data from this process will be used to train the subsequent ML controller, providing optimal torque sequences for the cobot to accurately perform the desired trajectories, akin to accurate position control, while minimizing torque requirements.
The PD torque control requires precise adjustment of the PD parameters for each target trajectory. Each data sequence of torque value-reached position, obtained from individual PD adjustments, is generated specifically to train a subsequent ML controller. This ML controller will be able to generalize the control action and adapt it to various types of trajectories [11].
## II Related work
The PD control architecture is widely used in robotic manipulators due to its simplicity [5]. This technique involves adjusting only two parameters per robotic joint and provides accurate control for simple tasks within a limited range of motion.
PD adjustment using GAs is widely used in industry, leading to a wide range of proposed GA techniques [3]. Most of these works focus on single-objective GA techniques; however, since in our collaborative robot approach the goal is not only to maximize controller accuracy but also to ensure HRI safety by minimizing torque values, PD adjustment requires the use of a multi-objective GA. An example of such a GA is NSGA-II [6], which enables the optimization of multiple control goals simultaneously.
The adjustment of PD controllers using multi-objective GAs has been previously addressed by [7], where a PID (proportional-integral-derivative) controller was tuned using NSGA-II. In [8], a multi-objective cuckoo search algorithm (MOCSA) is used for the same problem, but no comparison between algorithms is provided, so we cannot say whether MOCSA is more appropriate than NSGA-II for this problem. [9] also uses NSGA-II and compares it with a variation of the same algorithm which uses decision-maker preference information to reduce the decision parameters' search space. All of these results, while promising, were only tested in simulation with a relatively simplistic planar two-degree-of-freedom (d.o.f.) robot arm model. In this work, our aim is to validate the effectiveness of the NSGA-II solution using the more complex Kuka iiwa LBR robot arm, equipped with 7 d.o.f. and flexible joints [10].
Despite the proven usefulness of learned dynamic cobot models [11], to the best of our knowledge, there is currently no publicly available dataset that captures the relationship between torque values and motion of a cobot in a manner suitable for learning its dynamic model. Therefore, the objective of this work is to present and discuss the methodology used to collect the data required for learning a dynamic cobot model.
## III Proposed solutions
To ensure a balance between optimal torque utilization and accuracy in the collected data, we will utilize a custom-tuned PD controller. As mentioned earlier, PD adjustment can result in highly accurate torque-based control for specific trajectories. However, as we will demonstrate later, the accuracy diminishes significantly when performing dissimilar trajectories located far from the PD working point.
For the PD adjustment, we propose the use of a multi-objective GA to jointly optimize accuracy and safety, specifically maximizing accuracy while minimizing the torque values involved. To achieve this, we define two objective functions. The first weights the accuracy error, measured as the mean Euclidean distance between the end effector and the desired Cartesian coordinates, while the second weights the torque commands applied throughout the trajectory. The torque objective is calculated using Eq. (1), where \(U\) represents the vector of commanded torques, \(T\) denotes the number of steps in a trajectory, and \(u_{i}\) corresponds to the torque applied at time \(i\).
\[f_{t}(U)=\frac{1}{T}\sum_{i=1}^{T}(u_{i}-u_{i-1})^{2} \tag{1}\]
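For illustration, the two objectives could be implemented along the following lines; the array shapes, the per-joint summation and the function names are assumptions made for this minimal sketch, not necessarily the implementation used in this work.

```python
import numpy as np

def accuracy_objective(ee_xyz, target_xyz):
    """Mean Euclidean distance between the end-effector positions and the
    desired Cartesian coordinates along the trajectory (both T x 3 arrays)."""
    return float(np.mean(np.linalg.norm(np.asarray(ee_xyz) - np.asarray(target_xyz), axis=1)))

def torque_objective(U):
    """Eq. (1): mean squared difference between consecutive torque commands.
    U is a T x n_joints array; here the result is summed over the joints."""
    U = np.asarray(U)
    return float(np.mean(np.sum(np.diff(U, axis=0) ** 2, axis=1)))
```

Both values are then minimized jointly by the multi-objective GA.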
Our methodology divides the data collection process into four main layers, as shown in Figure 1:
* Sensor/Actuator layer: This layer comprises the sensors and actuators used by the cobot. It receives instructions from the controller and provides data on the joint states.
* Control layer: The PD controller is located at this layer and receives information regarding the next desired setpoint as well as the parameters (Kp and Kd) to be used. It sends corresponding torque commands.
* System layer: This layer sends data about the desired trajectory to the control layer.
* Analytic layer: The GA in this layer is used to adjust the PD controller gains based on the system performance.
The system, control and actuator/sensor layers work in a real-time loop at 500 Hz. In each cycle, the system layer sends the trajectory setpoint to the control layer, which sends the torque command to the cobot and receives the updated sensor data. The torque, position and velocity of each joint are registered in an array and written to a file once the trajectory is finished. Once the data file is created, the analytic layer reads it to evaluate performance and communicates asynchronously with the control layer to update the PD gains.
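Schematically, the 500 Hz loop can be summarized as follows; the robot interface (read_joint_state, send_torque) is a hypothetical placeholder rather than an actual ROS 2 API, and the snippet only illustrates the structure of the loop.

```python
import numpy as np

DT = 1.0 / 500.0  # 500 Hz real-time loop

def pd_torque(q, qd, q_ref, qd_ref, Kp, Kd):
    """Joint-space PD law: torque command from position and velocity errors."""
    return Kp * (q_ref - q) + Kd * (qd_ref - qd)

def run_trajectory(setpoints, robot, Kp, Kd):
    """setpoints: iterable of (q_ref, qd_ref) pairs provided by the system layer.
    robot: hypothetical interface exposing read_joint_state() and send_torque().
    Returns the per-step log of torque, position and velocity for the analytic layer."""
    log = []
    for q_ref, qd_ref in setpoints:                    # one setpoint every DT seconds
        q, qd = robot.read_joint_state()               # sensor layer
        tau = pd_torque(q, qd, q_ref, qd_ref, Kp, Kd)  # control layer
        robot.send_torque(tau)                         # actuator layer
        log.append(np.concatenate([tau, q, qd]))
    return np.array(log)                               # written to file, read by the GA
```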
This division facilitates the scalability of our methodology by separating the analytic and system layers from the control and sensor layers. Furthermore, it allows for the parallel utilization of multiple cobots with the same trajectory.
Regarding the trajectories included in the dataset, and following the findings in [11], we incorporate spiral and random trajectories. These trajectory types generate meaningful data sets while avoiding excessive data size, making them suitable for effective training of the ML controller. Additionally, we introduce pyramid-like trajectories that combine linear movements with sharp turns. These trajectories aim to better teach an ML controller how to function when working with high acceleration and velocity gradients, resulting in larger inertia values. Fig. 2 depicts some examples of the trajectory dataset.
Finally, to compare the GA solutions for the PD parameters, we utilize the hypervolume indicator metric [15]. This metric takes a reference point (e.g. the maximum objective values among all the controllers tested) and calculates the area between the Pareto front and this reference point. A visual example of this metric can be seen in Figure 3.
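For a two-objective minimization problem such as ours, the hypervolume can be computed directly by summing the rectangles between the Pareto front and the reference point; the sketch below is a self-contained illustration (higher-dimensional fronts would require a dedicated library) rather than the exact routine used here.

```python
import numpy as np

def hypervolume_2d(front, ref):
    """front: (n, 2) array of non-dominated objective vectors (both minimized).
    ref: reference point dominated by every point of the front, e.g. the maximum
    objective values among all tested controllers. Returns the dominated area."""
    pts = np.asarray(front, dtype=float)
    pts = pts[np.argsort(pts[:, 0])]            # sort by the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)    # strip between this point and the previous one
        prev_f2 = f2
    return hv

# Example: two controllers on the front, reference at the worst observed values
print(hypervolume_2d([[1.0, 3.0], [2.0, 1.0]], ref=(4.0, 4.0)))  # -> 7.0
```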
## IV Progress to date
The results presented in this work are obtained from a simulated environment (and the application and validation in a non-simulated environment are left for future work). For our robotic simulation platform, we use ROS2 (Robot Operating System) [14], and Gazebo [13] as our dynamic simulator. The close integration of Gazebo with ROS makes it a suitable choice for the performed study.
Each experiment in this section is repeated 5 times to account for the stochastic nature of the GA. This number of trials balances computation time (over a couple of weeks) and the reliability of the data and conclusions. Box plots are used to represent the locality and spread of the results. These experiments are conducted to demonstrate the feasibility of the proposed methodology.
In one set of experiments, various population sizes are compared to find the optimal balance between accuracy and computation time. Fig. 4 shows that the algorithm (NSGA-II) achieves the best results with a population size of around 30 individuals. Increasing the population size yields similar results, but with significantly longer execution times.
Once the GA is configured, we compare the accuracy achieved by a generic track controller and a specific track controller. The generic controller is adjusted to perform globally on all trajectories in the dataset, while the specific controller is tuned for a single trajectory. Fig. 5 demonstrates that the specific controller outperforms the generic controller, not only in terms of precision but also in minimizing the applied torque values. This indicates that while it is possible to achieve high accuracy (at least in simulation) by overloading the joint motors, achieving smooth and safe movements requires a well-tuned specific PD controller. Since the goal of the PD optimization is to be able to perform different movements with different optimal accuracy/torque profiles, it is key to use a specific optimized controller for each trajectory. Then, all the data gathered from the different trajectories (and specific controllers) will be added to the database. This specific optimization stage is required because, during the trajectory execution stage (gathering the dataset), both the robot dynamics and the properties of the controller used are captured. Thus optimizing specific controllers leads
Fig. 1: Architecture of the proposed system. General framework adapted from IMOCO4.E [12]. First the system layer sends the desired setpoint to the controller, then the control layer sends torque commands to the cobot and finally the sensor layer returns the position and velocity of each joint. The extracted data is saved into a file used asynchronously by the analytic layer to update the PD controller gains.
Fig. 4: Comparison of the number of evaluations needed for convergence (a) and the hypervolume of the obtained pareto front (b) based on population size.
Fig. 3: Diagram depicting the calculation of the hypervolume indicator. This metric is obtained measuring the area between the pareto front and a reference point.
Fig. 2: Examples of useful trajectories for data gathering.
to a richer database in terms of accuracy and torque trade-off (Figure 5).
Finally, the speed of the trajectory is one of the key factors that significantly impacts the dynamics of a cobot. Thus, we investigate the extent to which the speed of the trajectory influences the PD adjustment and determine the optimal speed at which the trajectories in our dataset should be executed.
To accomplish this, we create multiple variations of the same target trajectory (a spiral), each with a different duration ranging from 3 to 6 seconds. This range was selected based on the consideration that faster trajectories are not achievable, and slower trajectories would exhibit negligible differences in their dynamics. As the duration of the trajectory increases, the motion commands required for the cobot to track it become slower. Next, we adjust a set of PD controllers for each individual trajectory and assess their performance on the other trajectories. The results of this study are illustrated in Figure 6, where an "\(X\) controller" denotes a set of controllers that were specifically adjusted using a trajectory of \(X\) seconds.
From these results, two conclusions can be drawn. Firstly, the speed of the trajectory has a notable impact on the accuracy of the control, with accuracy rapidly decreasing at higher speeds. The accuracy stabilizes at around 5 seconds, making it the optimal duration for this trajectory as it strikes the best balance between execution time and controller accuracy.
Secondly, although there is a slight drop in performance when transferring a controller from one trajectory to another, the differences between sets of controllers are relatively small. This suggests that in this regime, the speed of the trajectory does not significantly affect the PD adjustment but rather the data gathered.
## V Conclusions and future research
The work presented here focused on defining a methodology to create a dataset from which most ML solutions would be able to capture the dynamic model of a cobot. To collect optimal tuples of torque-position/velocity data, we applied multi-objective GAs to finely adjust PDs that controlled the torque of a cobot throughout its working space, maximizing accuracy and minimizing torque values.
In future work, we aim to apply this methodology to a non-simulated cobot platform, bridging the sim2real gap and demonstrating how the main concepts indicated in this work also apply to real robots. Although the specific trajectories and optimized controllers may differ when addressing the GA, the presented work and results provide valuable insights. It is important to note that the intensive optimization effort presented in this work cannot be directly performed on a robotic platform due to the potential risk it poses to the robot's integrity.
## Acknowledgment
This study was supported by the EU with the IMOCOe4.0 [EU H2020RIA-101007311] project and by Spanish national funding [PCI2021-121925]. This study was also supported by SPIKEGEG [PID2020-113422GA-I00] by the Spanish Ministry of Science and Innovation MCIN/AEI/10.13039/501100011033, awarded to NRL; DLROB [TED2021-131294B-I00] funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR, awarded to NRL; MUSCLEBOT [CNS2022-135243] funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR, awarded to NRL.
|
2309.14379 | Machine-assisted mixed methods: augmenting humanities and social
sciences with artificial intelligence | The increasing capacities of large language models (LLMs) present an
unprecedented opportunity to scale up data analytics in the humanities and
social sciences, augmenting and automating qualitative analytic tasks
previously typically allocated to human labor. This contribution proposes a
systematic mixed methods framework to harness qualitative analytic expertise,
machine scalability, and rigorous quantification, with attention to
transparency and replicability. 16 machine-assisted case studies are showcased
as proof of concept. Tasks include linguistic and discourse analysis, lexical
semantic change detection, interview analysis, historical event cause inference
and text mining, detection of political stance, text and idea reuse, genre
composition in literature and film; social network inference, automated
lexicography, missing metadata augmentation, and multimodal visual cultural
analytics. In contrast to the focus on English in the emerging LLM
applicability literature, many examples here deal with scenarios involving
smaller languages and historical texts prone to digitization distortions. In
all but the most difficult tasks requiring expert knowledge, generative LLMs
can demonstrably serve as viable research instruments. LLM (and human)
annotations may contain errors and variation, but the agreement rate can and
should be accounted for in subsequent statistical modeling; a bootstrapping
approach is discussed. The replications among the case studies illustrate how
tasks previously requiring potentially months of team effort and complex
computational pipelines, can now be accomplished by an LLM-assisted scholar in
a fraction of the time. Importantly, this approach is not intended to replace,
but to augment researcher knowledge and skills. With these opportunities in
sight, qualitative expertise and the ability to pose insightful questions have
arguably never been more critical. | Andres Karjus | 2023-09-24T14:21:50Z | http://arxiv.org/abs/2309.14379v1 | Machine-assisted mixed methods: augmenting humanities and social sciences with artificial intelligence
###### Abstract
The increasing capacities of large language models (LLMs) present an unprecedented opportunity to scale up data analytics in the humanities and social sciences, augmenting and automating qualitative analysis previously typically allocated to human labor. This contribution goes beyond simply reporting on LLM task performance, proposing a systematic mixed methods framework to harness qualitative analytic expertise, machine scalability, and rigorous quantification, with attention to transparency and replicability. It builds on the mixed methods designs of quantification or integration, and feature analysis from linguistics. 16 machine-assisted case studies are show-cased as proof of concept, in 9 diverse languages and across multiple disciplines. Tasks include linguistic and discourse analysis, lexical semantic change detection, interview analysis, historical event cause inference, detection of political stance, text and idea reuse, genre composition in literature and film; social network inference from text, historical text mining, automated lexicography, missing metadata augmentation, and multimodal visual cultural analytics. They are based on novel test data as well as direct replications of past research. It is also shown how to replace opaque topic modeling popular as a "distant reading" method, with hypothesis-driven topic classification. In contrast to the focus on English in the emerging LLM applicability literature, many examples here deal with scenarios involving smaller languages and historical texts prone to digitization distortions. In all but the most difficult tasks requiring expert knowledge, (already currently available) generative LLMs can demonstrably serve as viable research instruments and an alternative to human-only analytics. LLM (and human) annotations may contain errors and variation, but the agreement rate can and should be accounted for in subsequent statistical modeling; a bootstrapping approach is discussed. The replications among the case studies illustrate how tasks previously requiring potentially months of team effort and complex computational pipelines, can now be accomplished by an LLM-assisted scholar in a fraction of the time. Importantly, this approach is not intended to replace, but to augment researcher knowledge and skills. With these opportunities in sight, qualitative expertise and the ability to pose insightful questions have arguably never been more critical.
## 1 Introduction
Developments in generative large language models (LLMs, sometimes dubbed as AI) have broadened their applicability to various research tasks. Of particular interest to the humanities and social sciences (H&SS) is the capacity to use them as on-demand instructable classifiers and inference engines. Classifying texts or images for various properties has been available for a while in the form of supervised machine learning (ML). Yet the necessity to train such models (or
tune pretrained models) on sufficiently large sets of already labeled examples may have been one factor hampering the wider adoption of ML tools as research instruments in H&SS. Unsupervised learning approaches like word or sentence embeddings and topic modeling do allow for explorative approaches, but often necessitate complex text preprocessing, convoluted pipelines to use for confirmatory inference, and as latent models, are typically opaque to interpret (see examples below).
Zero-shot learning, as implemented via instructable, generative pretrained LLMs, offers the best of both worlds, if used in a principled manner. If applied as an on-demand classifier, the classified features or inferred variables constitute quantitative data, which in turn necessitates systematic statistical modeling, to make sure the eventual claims and interpretations remain rigorous. This contribution takes a step in that direction by proposing a framework consisting of a qualitative annotation step (by humans or machines) and a subsequent quantitative modeling step. As discussed below, it can therefore be situated as a mixed methods approach. It is shown via case studies to be applicable across a wide range of tasks across a diverse set of disciplines. This includes traditionally qualitative areas, and those dealing with data like literary or historical text. While here the focus is on one particularly flexible (quantitization-driven) design, the machine-assisted components may well be applicable in other mixed designs as well. It is not directly applicable to (true) qualitative scholarship in the sense that the outputs of the quantification (coding) step necessitates quantification. But as discussed below, purely qualitative approaches may no longer be optimal in many areas dealing with empirical data, given the availability of more systematic frameworks, and now also scalability via machine-assistance.
This proposal involves substituting some aspects of traditionally human research labor with automation. However, qualitative thinking and expert knowledge is absolutely central to the framework's successful application. To be meaningful, it necessitates qualitative work including hypothesis, coding scheme and prompt design, expert annotation of evaluation data sets, and interpretation and contextualization of the final quantitative results (e.g. regression coefficients, clusters, frequency intervals, or other numerical data). The framework employs machines such as LLMs as tools -- or in a sense, (narrow) artificial intelligence assistants -- to augment expertise, enabling more scalable research, while fostering replicability and transparency through proposed good practices in unitizing, analysis and methods documentation.
### Related LLM applicability research
This section offers a brief overview of recent LLM applicability research that this work builds upon and complements. This term is used here to denote the exploration of the feasibility and performance of pre-trained LLMs as research and analytics tools -- as distinct from the machine learning domain of LLM engineering (reviews of which can be found elsewhere).
ML and natural language processing (NLP) supported research in the (digital) humanities and (computational) social sciences is nothing new. However, only until recently, a typical machine assisted text-focused research scenario would have involved either training a supervised learning classifier (or fine-tuning a pretrained LLM, see e.g. Majumder et al.2020; de la Rosa et al.2023) on a large set of annotated examples for a given task, or using output vectors from a word or sentence embedding LLM like BERT (Devlin et al.2019) for clustering or other tasks (e.g. Fonteyn2021; Sen et al.2023). What makes a difference now is recent advances in LLM technology, to the point that it has become feasible to use them as instructable zero-shot learners (Wei et al.2022; OpenAI 2023) for classification and inference. In a zero-shot learning scenario, an LLM is instructed to generate output using an input (prompt) that includes in some form both the classification or generation instructions, as well as the input data. The generated outputs are then parsed and quantified as necessary. This removes the need for laborious annotation work to create large training sets for every specific niche task. A second contributing factor to recent LLM popularity, both as chatbots and classifiers, is arguably accessibility. Running very large LLMs requires significant hardware, while cloud services like that of e.g. OpenAI's GPT interfaces have made them accessible, albeit at a cost, to those who do not have access to such hardware or the skills to operate one.
All of this has attracted attention across various research communities well beyond NLP. The latter is currently in the process of (re)evaluating which previous specialized tasks (or entire research subfields) can be zero-shot with large enough LLMs (cf. Qin et al.2023). Other interested parties include the humanities and social sciences. In a large benchmarking exercise involving 24 (English-language) tasks drawn from multiple disciplines, Ziems et al.(2023) show that both the slightly older FLAN-T5 (Chung et al.2022) and OpenAI's third generation GPT models (GPT-3 and 3.5) achieve moderate to good results across their annotation and classification benchmarks. Other contributions have focused on single tasks or domains like discourse annotation (Fan and Jiang2023), metalinguistic abilities (Begus et al.2023), diagnosis inference
(Wang et al. 2023), political stance and affiliation detection (Tornberg 2023; Zhang et al. 2023b), text retrieval and analytics (Zhu et al. 2023), and likely many more. Gilardi et al. (2023) compare the performance of GPT-3.5 to crowdsourced workers from the Amazon Mechanical Turk platform on four (English) text classification task and find that the LLM outperforms crowdworkers on accuracy and reliability while running a fraction of the crowdsourcing costs (see also Tornberg 2023; Huang et al. 2023; Wu et al. 2023). There is also the artificial cognition strand interested in comparing machine and human behavior (Futrell et al. 2019; Taylor and Taylor 2021; Acerbi and Stubbersfield 2023).
### Related feature analytic and mixed methods research
This contribution describes a general-purpose quantitizing-type mixed methods framework -- henceforth, QMM -- where the qualitative coding of (e.g. textual) data into a fixed number of categorical or numerical variables is followed by quantitative modeling of these inferred variables. This is sometimes also spelled 'quantizing' (Fetters et al. 2013; Hesse-Biber 2010; Sandelowski et al. 2009). It is a design where the research questions are answered primarily based on the results of the quantification step (e.g. a statistical model), not the data annotation step. 'Quantitization' is used here to distinguish the process of annotating data with the explicit purpose of subsequent rigorous quantification, from annotation for any other purposes. 'Coding' is indeed frequently used in many domains, but unhelpfully also has many other meanings.
As QMM combines both qualitative analysis and quantification, it can be described as mixed methods, although it is likely not positioned in its mainstream. Much of mixed methods research is mixed mostly in the sense of using multiple data types in a single study (and therefore a method for each). These designs include e.g. sequential, concurrent, convergent, triangulating, following-a-thread (Hesse-Biber 2010; Tashakkori and Teddlie 2010; O'Cathain et al. 2010) and various unspecified designs (Huynh et al. 2019). The quantitizing design and related variants are referred to with a variety of other terms in mixed methods and related literature, including "integrated" (Tashakkori and Teddlie 2010; Creamer 2018; O'Halloran et al. 2019), "integration through data transformation" design (not to be confused with "transformative mixed methods," Mertens 2008), "qualitative/quantitative" (Young and Jaganath 2013), and "converting" (Creamer 2018). Parks and Peters (2023) propose a 'dialogical' machine-assisted framework which is similar in using NLP tools as part of the pipeline, but is not a quantitizing approach.
A similar mixed approach can also be found within content analysis (Schreier 2012), where the quantitizing is again called "coding". However, the CA community does not consider subsequent quantification of the coded variables as "a defining criterion" of the paradigm (Krippendorff 2019), as it also includes more holistic or interpretative-qualitative approaches (Hsieh and Shannon 2005), and quantification limited to counting or simple pairwise tests (cf. Morgan 1993; Schreier 2012). Similarly limited quantification can be found in discourse analysis (e.g. O'Halloran et al. 2019). Thematic analysis also does coding but typically does not apply any statistical modeling to the distributions of codes (cf. Braun and Clarke 2012; Trahan and Stewart 2013). As a machine-assisted example, 'distant reading' in digital humanities relies on word counts or topic clusters for similar purposes (Moretti 2013).
Issues with quantification and biased sampling affect rigor and replicability (Parks and Peters 2023). Approaches that make use of quantitizing in a limited way (which can easily lead to spurious results), either by using impressionistic claims like "more", "less", "some" without actual quantification, or by explicitly counting but stopping short of proper statistical modeling -- will all be referred to as pseudo-mixed going forward. In contrast, the present proposal emphasizes the need for rigorous statistics in the quantitative step to estimate uncertainty and to be able to deal with issues like confounding variables, interactions, multicollinearity, and repeated measures (e.g. of the same participant).
The mixed approach of combining qualitative coding with subsequent quantification is widespread, if not the default, in strands of usage-based (corpus-based, variationist, cognitive) linguistics. It is usually not referred to as mixed, however. Where it is named explicitly, "usage feature analysis" has been used, sometimes prepended with "multi-factorial" (cf. Glynn and Fischer 2010). "Behavioral profiles" refers to the same (Gries and Divjak 2009). The quantitizing, here too referred to as "coding", is typically conducted by the researchers themselves (often requiring expert linguistics knowledge). When it comes to coding schemes, standard variables from past literature may be applicable and reused, for example grammatical categories (Szmrecsanyi et al. 2014), but may also be developed for a specific research question (cf. Glynn 2010). Developing a standardized coding scheme or taxonomy for future research on a given topic can also be the sole aim, as in branches of psychology (where similar methods are also used; Hennessy et al. 2016). Unlike some of the methods mentioned above, a great deal of attention is paid to rigorous statistical procedures in the quantitative modeling step.
### This paper
The framework described in this contribution is essentially that, the QMM or feature analytic design, but extended beyond a single discipline like linguistics to the larger H&SS, and augmented with machine learning to solve the scaling problem of the laborious human-annotation bottleneck. "Machine-assisted mixed methods" or MAMM will be used as a shorthand, with particular reference to the quantitizing design. In other words, here the MAMM is a QMM that uses machines for the qualitative analysis step, but the machine-assisted component could very well also be integrated in the other mixed designs mentioned above.
As discussed above, the idea of applying machine learning to humanities or social science data as a form of mixed methods research is not new as such, nor is quantifying the resulting annotations. However, it is hoped that this contribution will be nevertheless useful in casting the machine analysis step explicitly as a qualitative (but quantitizing) analysis task, and bringing the aforementioned aspects together under a unified framework that would be easily referenceable, implementable, replicable, and teachable. As pseudo-mixed approaches appear to be still relatively common in the H&SS, sections in the Methods are dedicated to briefly summarize why statistical rigor in the quantitative step is not only recommended but necessary to avoid spurious and unreplicable results.
While previous similar frameworks have focused on one or a small set of disciplines (e.g. feature analysis in linguistics), this contribution is about compatibility: it is argued (and shown via the case studies) to be generally applicable to a large set of problems and questions. It can also readily be incorporated in discourse or content analysis research or approaches building on social semiotics (Halliday 1978). Usage of the MAMM or QMM does not exclude using additional, sequential, convergent, etc. analyses either. The focus here is on using machines to augment the quantitizing type designs (suitable for inherently qualitative data), but the same principles can be applied to automate pipelines in other designs. While the examples here are from academic research, the same approach may be used in analytics in business, marketing, media, etc. (for related work, see Dell'Acqua et al. 2023).
In summary, this contribution has three aims, seeking to fill one research gap and complement two others. Firstly, it encourages wider adoption of the QMM pipeline, and in particular the MAMM, given its inherent advantages over alternatives in many applications (see Methods). The general QMM approach can also be fruitfully applied across disciplines using just human annotators. However, there are demonstrable benefits to augmenting it with machine learning, in particular generative LLMs as on-demand instructable annotators. This augmentation translates to scalability to magnitudes of data that would be unfeasible in purely qualitative or human-quantitizing paradigms (see Discussion).
Secondly, the case studies here go beyond the otherwise fairly Anglo-centric focus of LLM applicability research, with tasks in 9 languages: Estonian, Finnish, German, Italian, Japanese, Latin, Russian, Turkish, and English in four varieties (19th century US, 18th century UK, contemporary standard US, and nonstandard US American as used on social media).
Thirdly, the case studies complement already existing LLM applicability research discipline-wise, with a set of 16 case studies covering research tasks across roughly nine domains -- linguistics, discourse analysis, literature, media and film studies, history, social network science, discourse analysis, lexicography -- and finally look to the future by exemplifying possible applications of multi-modal models to visual analytics. The case studies include replications of past research, synthetic examples created for this contributions, and one benchmark. While some exemplify the MAMM pipeline, others include general practical tasks like data augmentation and content filtering, and explorative tasks like literary translation analysis and critique. Their results collectively yield an answer to the question: is an artificial intelligence or machine-assisted methodology actually applicable to complex cultural and social data, and already feasible given current LLM technology?
Unlike most LLM engineering and LLM applicability research, the focus here is not on public benchmarks or shared tasks. One reason is data contamination (Aiyappa et al. 2023). The currently most capable LLMs are trained on vast datasets likely mined at least partially from the open Internet, which may well include public benchmarks, either intentionally or not. In the one NLP benchmark utilized here (one highly unlikely to cause contamination), the zero-shot approach scores 1.5-2x above the state of the art. The second reason: a focus on benchmarks would simply not be particularly representative of research practice in the proposed framework, which encourages researchers to build their own task
specific miniature test sets -- so that they can be used to estimate machine error rates and directly incorporate them in the statistical estimates in the quantification step (see Methods). They can also be used to compare and choose models, including eventual fine-tuned models for specific tasks, or personal or research group specific models (see Discussion). The code and test sets are made public though, to complement the Anglo-centric benchmarking scene and to provide a starting point for researchers working on similar topics to experiment with this approach.
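To make this concrete, one simple way of folding the machine error rate estimated on such a test set into the downstream uncertainty estimate is a joint bootstrap over the test set and the machine-annotated sample. The sketch below (binary labels, a naive label-flipping scheme, illustrative variable names) is only one possible instantiation, not necessarily the exact procedure described in the Methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_share(machine_labels, gold_human, gold_machine, n_boot=5000):
    """Estimate the share of the positive class in a machine-annotated corpus,
    propagating (a) the uncertainty of the machine error rate estimated on a
    small human-coded test set and (b) sampling uncertainty in the corpus."""
    machine_labels = np.asarray(machine_labels)   # 0/1 labels for the full corpus
    gold_human = np.asarray(gold_human)           # human codes on the test set
    gold_machine = np.asarray(gold_machine)       # machine codes on the same test set
    shares = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(gold_human), len(gold_human))  # resample test set
        err = np.mean(gold_human[idx] != gold_machine[idx])      # error rate draw
        lab = rng.choice(machine_labels, size=len(machine_labels), replace=True)
        flip = rng.random(len(lab)) < err                        # simulate misclassification
        shares.append(np.mean(np.where(flip, 1 - lab, lab)))
    return np.mean(shares), np.percentile(shares, [2.5, 97.5])
```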
The Methods section below explicates the components of the framework and how it is universally applicable to a large variety of research questions, disciplines and data types. Practical implementation suggestions will also be provided. The Results section then illustrates this through a number of case studies.
### Three disclaimers
Some disclaimers are however in order, as "artificial intelligence" has recently attracted a significant uptick of public attention and corporate hype. Firstly, this paper explicitly does not deal with topics like data copyright, related ethical issues, possible biases in the models, environmental concerns, "AGI", or AI tractability. These issues have been and will be discussed elsewhere (Bender et al., 2021; Lund et al., 2023; Rooij et al., 2023; Tomlinson et al., 2023; Liesenfeld et al., 2023; Motoki et al., 2023; Feng et al., 2023). There is also a growing literature centered around demonstrating what LLMs as such are not or should not be capable of (Asher et al., 2023; Rooij et al., 2023; Dinh et al., 2023; Barone et al., 2023; Sclar et al., 2023).
In contrast, this contribution focuses on the very pragmatic approach of using current generative LLMs and any suitably instructable future models as a class of zero-shot machine learning tools, which can be (carefully, with expert guidance and evaluation) applied to scale up otherwise laborious and time-consuming data annotation and analysis procedures, or replace otherwise complex computational pipelines. The case studies below focus on empirical performance of some already available models on realistic tasks and replications of past research, rather than their possible theoretical limitations. Whether or not the language models or machine learning classifiers in general are referred to as "AI" is not particularly important, what matters is if they work.
Second disclaimer: this is not about replacing researchers or research assistants (cf. Erscoi et al., 2023), but about augmenting, complementing and empowering them, while promoting transparent and replicable research practices, and ultimately reducing repetitive labor and leaving more time for meaningful work. To put it another way: human labor does not scale well, machines do; human time is valuable, machine time is cheap. Ziems et al. (2023) suggest that "LLMs can radically augment but not entirely replace the traditional [computational social science] research pipeline." This contribution agrees with this sentiment; indeed larger gains are likely to be made from combining expert humans and powerful machines as annotators and assistants.
Third disclaimer: the LLM test results and classification accuracies reported in the case studies should only be seen as the _absolute minimum baseline_. Prompt optimization is not the main goal here, and most prompts consisted of fairly simple 1-2 sentence instructions (see Appendix). As a rule of thumb, precise and more detailed prompts tend to yield better results. Also, LLM technology is rapidly improving, as also evident from the comparisons of two subsequent GPT (generative pre-trained transformer) versions here. The accuracy rates are therefore not the point, although they are reported -- to illustrate tasks with (already present) potential for automation, and to show how to incorporate them in subsequent statistical modeling.
## 2 A machine-assisted mixed methods framework
This contribution describes a framework for analyzing qualitative data -- text, images, etc. -- in both exploratory and confirmatory settings, readily augmentable with machine assistance for automation and scaling purposes. As a quantitizing framework, it focuses on cases where the data is qualitative but can be quantitized (also called annotating, coding) into one or more discrete or numeric variables, which can then be used in quantitative modeling, followed by qualitative interpretation. The qualitative annotation step can be completed either by (or in conjunction with) humans or machines, such as supervised learning or zero-shot learning using generative LLMs, which are now reaching human performance in many relevant tasks. A typical pipeline can be summarized as follows (illustrated visually in Figure 1).
1. Research question or hypothesis design; designing or adopting a coding scheme and instructions to apply it (may re-iterate after data collection)
2. Data collection (from corpora, databases, interviews, fieldwork, etc.)
3. Cleaning, parsing, unitizing, sampling data, as necessary, into a reasonably sized sample of reasonably sized examples
4. Qualitative annotation (quantitizing) of these examples according to the coding scheme: each example is translated into one or more categorical or numeric variables
    1. If this is delegated to artificial intelligence, then also: human annotation of a test set (unless one already exists)
5. Quantitative (statistical) analysis of the inferred variables and their relationships according to the research question(s), with control for (often present) repeated measures; quantification of uncertainty or generalizability of the results
    1. If the previous step involved AI annotators, incorporate their error rates in the uncertainty modeling
6. Qualitative interpretation of the quantitative results (regression coefficients, p-values, counts, etc.), potentially in combination with qualitative analysis of examples from data or theory.
### Data preparation and coding
Unitizing is a crucial step for data without natural units (for good practices, see the references in the Introduction and Krippendorff 2019). It can be helpful to think of units as rows in a table where the columns are the variables and the first column contains the example units. Given an art collection, a painting is likely a useful unit of analysis. There may be multiple paintings per artist, but the unit is fairly non-controversial, and the subsequent statistical analysis, even if the goal is to compare said artists, can and should take into account this grouping of units (see mixed effects modeling discussion below). In contrast, an entire book can be, but is unlikely to be, a useful unit that can be distilled into a single data point in a variable. That is, unless the goal is just to count pages, authors, or variables applying to an entire book (but finer unitizing may well lead to better results in the latter case as well). If the interest is in content, a likely unit of comparison would be a paragraph or a sentence. The same applies to interview-based research: the unit, the data point, is unlikely to be an interview or a respondent, but all their (topic-relevant) utterances or answers (which can be grouped by respondent in the quantitative step).
A coding scheme consists of variables and their definitions (again see the literature in the Introduction for good practices). Categorical variables have preset levels (values) and definitions. The scheme may be entirely or partially derived from preceding research and theory, or engineered by the domain expert from scratch for a given study. The qualitative analysis proceeds according to this scheme, but the scheme may be, and often is in practice, iteratively improved based on small initial data samples or a pilot study (see also Schreier 2012). The number of levels of a categorical variable is fixed and typically kept to a minimum, to ease interpretation in the quantification step.
For example, if the data are newspaper texts, the unit a paragraph, and the hypothesis that negative stances are foregrounded, then the variables and levels might be the dependent variable of stance (positive, negative), the main predictor the page (numeric; or binomial, front page or not), perhaps a control variable for type (news, opinion), and variables for text author and publication. The first three would be considered fixed effects and the last two random effects in the mixed effects statistical modeling sense; these would ideally need to be controlled for in the case of repeated measures (which is more often than not the case in H&SS research; see below).
Figure 1: A typical QMM pipeline. Qualitative elements are outlined in yellow, quantitative (statistical) procedures in blue. Steps where machine learning or other automation can be applicable are in bold font, in particular the automatable qualitative annotation step (which would make this a MAMM). Annotating a (small) additional test set is optional but strongly recommended in the case of either using multiple human annotators (e.g. crowdsourcing) or machine annotators.
### Setting up an annotator machine
While any suitable machine learning or statistical machine can be plugged into the MAMM framework, this section focuses on instructable LLMs in a zero-shot learning context as the currently most flexible option. The case studies below are not focused on model comparison (like e.g. Ziems et al. 2023; Bandarkar et al. 2023). Two models are used here, primarily OpenAI's GPT-4, with occasional comparisons with the previous-generation GPT-3.5. The model choice is mostly for practical reasons. Running inference on this cloud service is easy and fairly affordable, and does not require setting up a local LLM, which would require either hardware beyond the consumer grade, or a suitably powered and configurable cloud service. GPT-4 is also highly multi-lingual, compared to current open-source alternatives, which, based on limited attempts, did not recognize some of the smaller languages of the intended case studies. However, more and larger open-source models are continuously becoming available, and optimization research is ongoing to make them run on resource-constrained hardware (e.g. Taori et al. 2023). In the meanwhile, this section uses the cloud-based GPTs as an example, but the suggestions should be fairly generalizable.
In the case of the OpenAI models as a service, analyzing texts consists of making a call to their API (application programming interface), which takes a number of (fairly well documented) parameters. The associated Python packages openai and tiktoken can be freely used for easy implementation, and also make it easy to keep an eye on costs (which are calculated per input and output tokens). While prompts can be tried out over the web-based ChatGPT interface, a chatbot is obviously not well suited for systematic data analysis, and likely has an above-zero temperature setting (its "code interpreter" plugin, now relabeled as 'advanced data analysis', was not found suitable either). Currently, some programming is required to use these models, both cloud and locally-run LLMs, but as this technology is gradually integrated into various software, zero-code approaches may well become available in the near future, e.g. via software like MAXQDA, or AI integration in Google Sheets and similar. This contribution comes with an open code base to foster replication and enable researchers to easily experiment with these tools.
The simplest input prompt consists of the instructions and the data to be analyzed, for example, _Tag this text as positive or negative sentiment. Text: I love ice cream_. Multiple examples can be batched into a single prompt with a request to output multiple tags, but this can easily degrade classification accuracy and induce hallucinations -- these are, after all, just text generation engines. This appeared less of a problem for GPT-4 than 3.5, and may be worth experimenting with as a cost-optimization strategy. If the input data is long, e.g. an entire book, then the window size of the chosen model must be kept in mind. Inputs that do not fit into a single prompt can be chunked and their result later aggregated. In most practical applications however, proper unitizing (e.g. paragraphs or chapters instead of entire books) is expected to yield more useful and fine-grained results anyhow.
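For instance, a long document can be pre-split into window-sized chunks before being sent to the model. Below is a minimal sketch using the tiktoken package mentioned above; the model name and chunk size are illustrative assumptions, and in practice splitting on paragraph or sentence boundaries is usually preferable to raw token slices.

```python
import tiktoken

def chunk_text(text: str, model: str = "gpt-4", max_tokens: int = 3000) -> list[str]:
    """Split a long text into pieces that fit the model's context window,
    leaving room for the instruction prompt and the output."""
    enc = tiktoken.encoding_for_model(model)
    tokens = enc.encode(text)
    # Decode each token slice back into text; the chunks can then be
    # classified separately and the results aggregated afterwards.
    return [enc.decode(tokens[i:i + max_tokens])
            for i in range(0, len(tokens), max_tokens)]
```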
Relevant parameters for a classification task include temperature, bias and output length (for details, see this white paper: OpenAI 2023). These may slightly differ between models but the same principles hold. "Temperature" controls the randomness or variety of the output of a neural network: in a chatbot like ChatGPT, a moderate to high value is desirable, while 0 makes sense for classification. Defining token bias allows for steering the model towards generating certain outputs with a higher probability. This is useful in a classification scenario where the output should be one of the class labels (setting their token values to 100 worked well), but should not be used where an open-ended output is expected. Finally, it is useful to limit model output to the maximum (token) length among the class labels, to make sure the model, generative as it is, does not add commentary and that the output is easy to parse (using single-token class labels where possible worked well). If using prompting strategies like chain-of-thought etc. (Zhang et al. 2023; Chen et al. 2023), longer outputs must be allowed of course, and parsed accordingly. One option is to instruct to output a machine-readable format such as JSON (for a guide, see Ziems et al. 2023). Fairly short and simple, often single-sentence prompts were used in the case studies below (prompts in the Appendix; short inputs also save on cloud service usage costs).
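As a concrete illustration, a zero-shot classification call with these settings might look as follows, using the openai Python package (the pre-1.0 ChatCompletion interface current at the time of writing). The prompt, labels and model name here are illustrative assumptions rather than the exact ones used in the case studies, and an OPENAI_API_KEY is assumed to be set in the environment.

```python
import openai
import tiktoken

MODEL = "gpt-4-0613"
LABELS = ["positive", "negative"]  # hypothetical class labels

# Nudge the model towards emitting the label tokens (bias capped at 100),
# and never allow more output tokens than the longest label needs.
enc = tiktoken.encoding_for_model(MODEL)
logit_bias = {tok: 100 for label in LABELS for tok in enc.encode(label)}
max_len = max(len(enc.encode(label)) for label in LABELS)

def classify(text: str) -> str:
    response = openai.ChatCompletion.create(
        model=MODEL,
        messages=[{"role": "user", "content":
                   f"Tag this text as positive or negative sentiment. Text: {text}"}],
        temperature=0,        # no sampling randomness for classification
        max_tokens=max_len,   # label-length output only, no added commentary
        logit_bias=logit_bias,
    )
    return response["choices"][0]["message"]["content"].strip().lower()

print(classify("I love ice cream"))  # expected output: positive
```

The logit bias should of course be dropped for open-ended outputs such as chain-of-thought reasoning or JSON.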
One way or another, if a generative LLM is used as a classifier, it is important to keep in mind that it is actually not a classifier in the traditional ML sense, and may generate nonstandard outputs. This issue may also arise when the LLM detects potentially sensitive or harmful content in the input and refuses to give an output (the GPT models for example all appear to have quite extensive guardrails of that nature built in), or when a cloud service simply times out. In any case, it is good practice to build contingencies for that into the pipeline.
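One simple contingency, sketched here under the same assumptions as the classification example above, is to accept only known labels and fall back to a retry or an explicit missing-value code that can later be inspected manually.

```python
def classify_robust(text: str, retries: int = 2, fallback: str = "NA") -> str:
    """Accept only expected labels; retry on unexpected output or API errors."""
    for _ in range(retries + 1):
        try:
            out = classify(text).strip(" .").lower()
            if out in LABELS:
                return out
        except Exception:  # e.g. service timeouts or content-filter refusals
            pass
    return fallback        # flag the case for manual inspection or exclusion
```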
This is the technical side of things. The most important component however -- just like in QMM designs such as usage feature analysis -- is the qualitative coding scheme design, which precedes any annotation work. In the MAMM case, this also involves translating the coding instructions into a good prompt. In turn, the prerequisite for a good scheme and variables is a good question or hypothesis. The machine-assisted step can only augment and scale up the
qualitative expertise, and the quantification step can only estimate uncertainty to make sure the claims are reasonably likely to replicate (see next section). LLM tech at its current stage is unlikely to be a substitute for this careful and systematic qualitative work, theory grounding and expert knowledge that precedes and follows the fairly straightforward data annotation process and statistical machinery in the middle.
### Engaging in rigorous statistical modeling to avoid unintentional pseudo-mixed designs
The QMM or MAMM approaches only make sense if the inferred variables are subsequently modeled in a rigorous statistical framework to estimate the (un)certainty of the estimates, be it prevalence counts or complex multivariate relationships between the variables. In a typical hypothesis-driven research scenario, this entails minimally accounting for possible confounding variables and interactions, repeated measures (not necessarily but very often applicable) and any repeated testing. None of these issues are exclusive to quantitative research; they just appear to be often ignored in qualitative designs.

There is not enough space to delve into each of these issues (and handbooks exist which do; cf. Introduction). The bottom line is, lack of control for these issues can easily lead to false, overestimated conclusions or even diametrically opposite results. One such example is Simpson's Paradox: if the underlying grouping or hierarchical structure of a dataset is not accounted for, estimates can quite literally reverse direction (see Kievit et al. 2013). Again, this is not a problem of statistics but equally applicable to qualitative research; it is just inherently impossible to systematically control for the effects of repeated measures in the latter.
Any quantitative claim (including in a mixed methods study) should be accompanied by an estimate of confidence or uncertainty and if possible, effect size. The majority of H&SS works with samples, not populations (in the statistical sense) and the samples are often small. If claims are made about the population based on a sample, it is crucial to estimate the reliability or replicability of a claimed difference, association, tendency, prevalence, etc. The smaller the samples the more important that is, to avoid making claims based on what may just be sampling noise. Estimating effect size of e.g. difference, similarity, trend, correlation is simply impossible in qualitative designs. This is however important in quantitative modeling, to avoid making sweeping yet spurious claims based on quantification which may actually describe only a small portion of variance in a (typically complex) social or cultural system with many interacting variables. This is also the reason simple pairwise tests (Chi-squared, t-test) are often an insufficient modeling solution in H&SS. More versatile models like multiple regression enable estimating the uncertainty of the main hypothesis while controlling for confounds, interactions, and in the mixed effects (multilevel) models case, repeated measures (cf. Gries 2015; Clark and Linzer 2015; Winter 2020; McElreath 2020). This may not immediately look like an issue in some disciplines. Examples include those focused on specific cases where there is no intended extrapolation to the larger world, like micro-history or biographies. Then again, even historical data points about a single person are also a sample from the population of (often not fully known) data about their life.
Repeated measures are also very common in H&SS. Survey and interview data typically contain multiple (often differing numbers of) responses from multiple respondents. Literary or artistic examples may be sourced from multiple works from multiple authors from multiple further groupings, like eras or nationalities. Linguistic examples are sourced from corpora (with underlying sources) or elicited from multiple informants. Social media data often contains multiple data points from one user. One of the case studies below deals with a common scenario of analyzing interview data, exemplified with synthetically generated responses. In the Appendix, as an extension to this section, there is another constructed example based on the same dataset illustrating how opposing conclusions may easily be reached if the underlying respondent structure is ignored (modeling only fixed effects). The second example in the Appendix concerns confounding variables: without controlling for a relevant confound, the researcher could easily conclude support for a hypothesis -- it is shown how including the relevant variable can make the initial hypothesized effect disappear completely.
In summary, systematic and rigorous statistical modeling is a crucial part of applying QMM or MAMM. This is not, however, something that should be seen as a complicating factor or a requirement to make the researcher's life harder. On the contrary, it makes your life easier: instead of having to worry if an analysis is replicable and representative, the uncertainty can be estimated, enabling more principled final (qualitative) interpretation and decision-making.
### Incorporating classification error in statistical modeling
In a quantitizing research design, regardless of whether the annotation step is completed by humans or machines, inter-rater (dis)agreement should be accounted for in any subsequent operationalization of these new data, to avoid overconfident estimates and an elevated likelihood of making Type I errors. It is far from atypical for (also human) annotation tasks to have only moderate agreement in H&SS research. This aspect is typically ignored, even in applications of QMM like linguistic usage feature analysis, which otherwise strives for statistical rigor.
As discussed in the Introduction, no methodological element in this proposal is new on its own, including that of using zero-shot LLMs to annotate data. What appears to be not yet common is the suggestion to systematically use expert knowledge to delegate coding, analysis, or annotation tasks to machines such as LLMs, while -- importantly -- also making sure the machine error rates are incorporated in statistical modeling and uncertainty estimates. Unless a closely comparable test set already exists, this will typically require a subset of data to be manually coded by human annotator(s) for evaluating the chosen machine(s).
Annotation error can be accounted for in a number of ways. If the goal is confirmatory, then options include errors-in-variables (EIV) type regression models (if the variables are numerical), directly modeling measurement errors using an MCMC or Bayesian approach (Carroll et al., 2006; Goldstein et al., 2008), or prevalence estimation techniques (often referred to as "quantification" in machine learning literature; Gonzalez et al., 2017). Distributional shift naturally remains a problem, although there are proposals to work around that too (Guillory et al., 2021).

Keeping it simple, a bootstrapping approach is considered here which applies straightforwardly to exploratory and confirmatory scenarios and makes use of the rich error information available via the confusion matrix between ground truth test set labels and the machine predictions or human annotations. The procedure is simple, involving simulating the annotation procedure by sampling from the confusion matrix (see Figure 2 for illustration; ground truth is rows and predictions are columns).
1. Compute the confusion matrix \(m\), of machine vs ground truth, for the variable of interest \(V\) which has a set of categorical levels or classes \(C\)
2. For each class \(i\in C\), normalize the count distribution of predictions against ground truth ("rows" of \(m\)) as probability distributions \(d_{i}\)
3. Perform bootstrapping, creating \(N\) synthetic replicates of the data, where each predicted value \(V_{j}\) is replaced with a simulated value
    1. For each case \(V_{j}\) with a value \(C_{i}\), perform random weighted sampling from \(C\) using the corresponding \(d_{i}\) as weights
    2. After simulating all values of \(V\), perform the statistical analysis of interest (counts, prevalence, regression, etc.) on this synthetic dataset, and record the output(s)
4. Calculate \(\pm\) confidence intervals for the statistic(s) of interest based on the estimate of the error (e.g. standard deviation) in the bootstrapped outputs.
For example, if the goal is to estimate confidence of a percentage of class \(C_{i}\) in \(V\): perform bootstrapping on the raw new (classified or annotated) data some large number of times (e.g. 10000), by sampling from the test set confusion matrix for each case in \(V\), and calculating the statistic (percentage) in each replicate; then, calculate e.g. 95% confidence intervals as \(1.96\cdot\sigma\). The intuition is: if the outputs of the (human or machine) annotator match with the designated ground truth 100%, then there will be no variance in the sampling either, and the confidence intervals will be \(\pm\)0. The more confusion in the confusion matrix, the more variance in the replicates, leading to higher error estimate or wider confidence intervals. See the first case study in the Results section for a practical demonstration. This is the simplest approach, and potentially better and more robust procedures may be considered where possible.

Figure 2: A bootstrapping-driven pipeline for estimating the uncertainty in a machine-annotated categorical data variable \(V\). The crucial component is the test set for comparing human expert annotation (ground truth) and machine predictions (or that of human coders). This provides an estimate of annotator accuracy and class confusion within the variable, which can then be used in bootstrapping the confidence intervals for the statistic of interest.
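A minimal sketch of this bootstrapping procedure follows, using numpy and pandas; the dataframes test (human-annotated test set with truth and pred columns) and corpus (the full machine-annotated data), as well as the class of interest, are illustrative assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

def row_distributions(truth, pred, classes):
    """Confusion matrix with ground truth as rows and predictions as columns,
    row-normalized into the probability distributions d_i."""
    m = pd.crosstab(pd.Categorical(truth, categories=classes),
                    pd.Categorical(pred, categories=classes), dropna=False)
    m = m.reindex(index=classes, columns=classes, fill_value=0)
    # Every class should occur in the test set, otherwise its row stays all zeros.
    return m.div(m.sum(axis=1), axis=0)

def bootstrap_statistic(values, d, classes, stat, n_boot=10000):
    """Replace each annotated value with a simulated one (weighted sampling from
    its class distribution), recompute the statistic, repeat n_boot times."""
    results = []
    for _ in range(n_boot):
        simulated = [rng.choice(classes, p=d.loc[v].to_numpy()) for v in values]
        results.append(stat(simulated))
    return np.array(results)

# Example: 95% confidence interval for the share of a class of interest.
# d = row_distributions(test["truth"], test["pred"], classes)
# boots = bootstrap_statistic(corpus["pred"], d, classes,
#                             stat=lambda v: np.mean(np.array(v) == "Social"))
# ci = 1.96 * boots.std()
```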
### Ensuring transparency and replicability of qualitative and machine-assisted research
One criticism that can be raised against qualitative research, quantitizing mixed methods, as well as any machine-assisted designs, is that they are not completely transparent and replicable. All of these approaches involve subjective decision making, and therefore annotator and analyst biases, and inherent stochasticity in the case of machine annotators, if models that are not 100% deterministic are used (such as the current GPTs).
The way to get around these issues and still allow for research on cultural, humanistic and qualitative data, is to strive towards maximal transparency and good open science practices in all phases of a given study (McKiernan et al., 2016; Vicente-Saez and Martinez-Fuentes, 2018; Nosek et al., 2018; Kapoor et al., 2023). In the QMM case, this includes describing and publishing the coding scheme and unitization principles (possibly as a pre-registration), the annotated or analyzed unitized data, and code or instructions to reproduce the quantitative analysis step. MAMM adds the need to publish the prompts and information about the model that was used. As open-source LLMs become more capable and available, it is not unfeasible that the open data accompanying a study would include the (potentially fine-tuned) model as well.
In cases where the source data itself is of sensitive nature and cannot be publicized, the coded variables (which typically do not reveal identities in the source) are still enough for the quantification and subsequent interpretations to be reproducible. In cases where even that would be an issue, synthetic data generation or anonymization methods can be used to produce comparable data (James et al., 2021).
To avoid underestimating the model error rates and subsequent uncertainty estimates (discussed above), setting up the test data can follow the proven machine learning philosophy of using independent training, evaluation and test sets. In the zero-shot case, there is no training set, but any prompt engineering should be ideally completed on a separate evaluation set, only then to be tested (once) on the test set, in order to avoid overfitting and biasing confidence of the eventual analysis where the test set results will be used to estimate confidence or uncertainty.
There are large discrepancies between disciplines when it comes to open science practices. While publishing data and the concept of replicability are still practically unheard of in some, others like psychology have learned their lessons from the "replication crisis" (Shrout and Rodgers, 2018; Scheel et al., 2021). It is vital to adopt transparent practices when using the MAMM to avoid such pitfalls.
### Why use this framework?
Machine-assisted mixed methods is a proposal for analyzing qualitative data at scale. It incorporates most of the advantageous aspects of the involved approaches (traditional qualitative, quantitative) while overcoming their inherent limitations, as summarized below. Quantitative can be seen to include methods referred to in (digital) humanities as "distant reading", and qualitative includes "close reading". Mixed methods below refer primarily to the integrating or quantitizing type, like the usage feature analysis discussed in the Introduction (where pseudo-mixed is also discussed). The word "primarily" is used, as any quant research can be seen as "mixed" in the sense that everything ultimately requires a final qualitative interpretation. To summarize the pros and cons:
* Qualitative methods:
* Typically deeply focused, can consider wider context, reception, societal implications, etc. and self-reflections by the author
* Hard to generalize and estimate uncertainty of claims; typically hard to replicate, practically impossible to reproduce; involves inherently subjective analysis
* Very hard to scale to large data
* Pseudo-mixed quantitizing methods:
* Same as above; focused, contextual, reflexive, etc.
* Systematic codes, if present, make it easier to replicate (if documented), but relationships and their uncertainty remain impressionistic
* Otherwise same downsides as above, incl. hard to scale. False confidence in quantitative results without uncertainty modeling can lead to spurious results
* Primarily quantitative methods:
* Applicable to big data and scalable; relationships and their uncertainty can be estimated; may be seen as more objective
* Easier to replicate (or reproduce if data and procedures are all made available)
* May lack the nuance and depth of qualitative analysis of meaning, context and power relationships, especially for complex societal or cultural phenomena. Only applicable to counted or directly quantifiable data types.
* Quantitizing mixed methods (e.g. feature analysis)
* Inclusion of the qualitative step comes with most if not all benefits of qualitative-only analysis; including ability to handle virtually any human-readable data type
* While the qualitative step involves subjectivity, it can be replicated, and the quantitative reproduced (given data and procedures); relationships and their uncertainty can be estimated
* Hard to scale to large data
* Machine-assisted (quantitizing) mixed methods (MAMM)
* All the benefits of qualitative analysis
* All the benefits of mixed methods, rigorous quantification, replicability
* Yet applicable to big data and scalable
The list above is obviously simplified and these archetypes may not describe every research scenario. Importantly however, this framework is general enough to be applicable to both exploratory and confirmatory designs, a variety of questions and data types, including free running text, regardless of discipline. It is well suited for any empirical research scenario which requires more in-depth contextualization and interpretation than basic quantification allows for, but where the size of the dataset makes manual-only analysis laborious.
Even without the machine assistance component, quantitizing mixed methods provides a systematic framework promoting replicability and rigorous quantification, that is likely more practical compared to alternatives limited in these aspects. Incorporating machine learning, in particular instructable generative LLMs, enables simplification of previously complicated computational pipelines (see case study examples below) and easy scaling to much larger datasets than before.
This includes typically "small" datasets like interviews or fieldwork data which may be in principle manageable by hand, but can still benefit from systematic coding and at least a first pass of machine annotation. It also includes approaches aimed at getting at the "bigger picture" or themes, like thematic analysis: unitizing and subsequent systematic quantitative modeling can only improve the induction of overarching themes, and unlike the pseudo-mixed practices, also help control for confounds, repeated measures issues etc.
Metaphorically: it is true that to someone with a hammer, everything looks like a nail -- it's just that this is a particularly efficient, multi-purpose hammer with robot arms. The rest of this contribution is dedicated to demonstrating that these arms already work, even given the present level of generative LLM technology (which is likely to improve).
## 3 Results of case studies
This section summarizes the results of 16 case studies. These range from brief explorations to emulated tasks based on synthetic data, to replications of existing published research. Tasks covered include explorative and confirmatory research as well as practical technical tasks such as automated data augmentation. This section has two goals: to demonstrate the applicability of (currently already available) LLMs as suitable annotator machines or "artificial assistants" in a machine-assisted mixed methods framework, and to illustrate the pipeline proposed in the Methods section with realistic research cases. The case studies rely on two models, gpt-3.5-turbo-0613 and gpt-4-0613, current at the time of writing. They will be referred to simply as GPT-3.5 and GPT-4 (OpenAI 2023).
Accuracy and Cohen's kappa are used as evaluation metrics in most case studies (summarized in Table 1). The interest here is in agreement with human-annotated ground truth, rather than information retrieval metrics e.g. F1. Tasks with ordinal outputs use Spearman's rho instead. The kappa is adjusted agreement, taking into account observed and expected agreement that can arise by chance: \((p_{o}-p_{e})/(1-p_{e})\). While accuracy illustrates empirical performance, kappa takes into account that some tasks are easier than others due to different number of possible classes in a given classification task.
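Both metrics are readily available off the shelf, for example in scikit-learn; a minimal sketch with made-up label vectors:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

truth = ["politics", "sports", "social", "sports"]  # human-annotated ground truth
pred = ["politics", "sports", "social", "social"]   # machine annotations

print(accuracy_score(truth, pred))     # raw agreement: 0.75
print(cohen_kappa_score(truth, pred))  # chance-adjusted agreement
```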
Most of the case studies emulate or replicate only a segment of a typical research project pipeline. This is by design, to investigate the applicability of LLMs for a variety of different tasks in different stages of research. Many of the examples in the case studies below boil down to multi-class classification. This is natural, as much of science too is concerned about measurement, classification and taxonomies, to be able to make predictions and discover connections. Almost none of the tasks exhibit 100% agreement between human and machine annotations and analyses. This is not unexpected -- less so because these machines have room to improve, but more so because these are mostly qualitative tasks requiring some degree of subjective judgment. In many if not most cases, it would be unrealistic to expect even for multiple human annotators to agree 100%. The upper bound of machine accuracy can be estimated by comparing the agreement of multiple human raters (which some cases below do).
### Confirmatory topic classification using LLMs instead of latent topic modeling
In fields like digital humanities, computational literature studies and computational social science among others, topic modeling is a commonly used tool to discover themes, topics and their historical trends in texts such as newspapers and literature. The bag-of-words LDA (Blei et al., 2003) is still commonly used, as well as more recent sentence embedding driven methods (Angelov, 2020; Grootendorst, 2022). They are also used in what is called "distant reading" (Moretti, 2013; Jänicke et al., 2015). They all boil down to various forms of soft or hard clustering. While good for exploration, it is suboptimal for confirmatory research. Often, historical or humanities researchers may have hypotheses in mind, but they are difficult to test when they need to be aligned to ephemeral latent topics. Instead of such an exercise in reading tea leaves, one could classify texts by predefined topics of interest. Until now, this would however have required training (or tuning an LLM as) a classifier based on labeled training data. Annotating such data is time-consuming -- and the easy, out-of-the-box unsupervised applicability of methods like LDA has therefore remained attractive (cf. Jelodar et al., 2019; Jacobs and Tschotschel, 2019; Sherstinova et al., 2022).

| Task | Language | Acc | Adj | Data domain | Complexities |
|---|---|---|---|---|---|
| Topic prediction | Russian | 0.88 | 0.85 | Cultural history, media | Historical, abbreviations |
| Event cause detection | Estonian | 0.88 | 0.83 | Maritime history | Historical, abbreviations |
| Interview analytics | English | 1 | 1 | Discourse/content analysis | |
| Relevance filtering | English | 0.92 | 0.82 | Text mining, history, media | Low quality OCR |
| Text & idea reuse | Eng, Rus | 1 | 1 | History of ideas | Multilingual |
| Usage feature analysis | Eng (18th c) | 0.94 | 0.89 | Linguistics, culture | Historical |
| Semantic change | English | ρ 0.81 | | Linguistics, NLP | Historical |
| Semantic change | German | ρ 0.75 | | Linguistics, NLP | Historical |
| Semantic change | Latin | ρ 0.1 | | Linguistics, NLP | Historical |
| Semantic variation | English | ρ 0.6 | | Sociolinguistics | Social media text, emoji |
| Stance: relevance | Estonian | 0.95 | 0.91 | Media analytics | |
| Stance: polarity | Estonian | 0.95 | 0.92 | Media analytics | |
| Lit. genre detection | English | 0.8 | 0.73 | Literature | Books mix genres |
| Translation analysis, censorship detection | Eng, Italian, Japanese | 0.96 | 0.95 | Translation studies, culture | Multilingual |
| Novel sense inference | Eng, Est, Turkish | ~1 | | Lexicography, linguistics | Minimal context |
| Data augmentation | Finnish | 0.72 | | Media studies | Minimal context |
| Visual analytics | - | * | * | Film & art, cultural analytics | Multi-modal |
| Social network inference | English | * | * | Network science, literature | Many characters, ambig. references |

Table 1: Summary of case studies in this contribution, replicating and emulating various humanities and social sciences research tasks. The Acc column displays raw accuracy of the best-performing LLM at the task (compared to human-annotated ground truth; results marked with ρ are Spearman's rho values instead of accuracy). The Adj column shows the kappa or baseline-chance adjusted agreement where this is applicable. Open-ended results are marked with an asterisk*.
With instructable LLMs, laboriously annotating training data is no longer necessary, and topics in textual data can be predicted instead of derived via clustering processes. Good prompt engineering is still necessary, but this is where qualitative scholars can be expected to shine the brightest. Ziems et al. (2023) worry that topic modeling may be challenging for transformer-based language models, given their context window size limitations. The latter is becoming less of an issue with newer models (the GPT-4 model used here has a window size of 8000 tokens or about 6000 words), but longer texts can always be split into smaller units and results later aggregated.
The zero-shot topic classification approach is exemplified here using a historical dataset of short Russian-language newsreel summaries from the Soviet Union. For more details on this dataset and the history of newsreels, see (Oiva et al., 2023). In short, the dataset consists of synopses of story segments that make up the roughly 10-minute weekly newsreel video clips from 1945-1992 (12707 stories across 1745 issues in total; the synopses are short, about 16 words on average). As part of the aforementioned collaboration (Oiva et al., 2023), an expert cultural historian predetermined a set of 8 topics of interest based on preceding research and close viewing of a subset of the reels: politics (including domestic and international relations), military and wars, science (including industrial, space, aviation progress), social (including lifestyle, arts, culture, education, health), disasters (which was not found in the actual dataset), sports, agriculture, industry and economy (see the Appendix for the prompt and definitions; note that we used an English-language prompt despite the data being in Russian, as this yielded better accuracy in early experiments). An additional "miscellaneous" topic was defined in the prompt to subsume all other topics. Such an "everything else" or negative category is incidentally a naturally available feature in the zero-shot approach, that would require much more complicated modeling in traditional supervised machine learning.
The expert annotated a test set of 100 stories for these topics, one topic tag per story. Two OpenAI models were applied here, GPT-3.5 and GPT-4, with preliminary experiments with various prompting strategies. In general, a single example per instruction prompt yielded the highest accuracy of 0.88 (kappa 0.85, for GPT-3.5; 0.84 or kappa 0.8 for GPT-4), but is of course the more expensive option when using cloud services like that of OpenAI that charge per input and output length. While this deserves further, more systematic investigation, batching multiple examples (preceded by an instruction prompt, requesting multiple output tags) generally seemed to reduce classification accuracy, although less so for the newer GPT-4. This can however be used as a cost-optimization strategy, saving on the number of times the prompt has to be parsed.
While the 88% accuracy is not perfect, it should be kept in mind that this is on a 9-class problem on a historical dataset rife with archaic terms and abbreviations that may not all exist in the training data of a present-day LLM. The synopses are also short, yet may contain mentions of different themes and topics. For example, a featured tractor driver in an agricultural segment may also be lauded as a Soviet war hero or a local sports champion. In other words, it is unlikely that humans would have perfect inter-rater agreement on this task either. On qualitative inspection, we did not come across any misclassifications that would be entirely off the mark in that sense.
Following testing, GPT-3.5 was applied to the rest of the corpus of 12707 stories, producing an estimation of topics in the newsreels covering most of the Soviet period (Figure 3.A). Among the trends, there is a notable increase in the Social topic towards the end of the period. Given the uncertainty of the classifier, and the fact that there are fewer issues and therefore fewer data points in the latter years, this could potentially be sampling noise. To test this, one can fit for example a logistic regression model to the period of interest (1974-1989), predicting topic (as a binomial variable, Social vs everything else) by year. This model indicates there is an effect of \(\hat{\beta}=0.064,p<0.0001\): each passing year multiplies the odds of encountering a Social topic in the reels by a factor of \(e^{0.064}=1.07\).
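The trend model itself is straightforward to fit with standard statistics libraries; a minimal sketch using statsmodels, where df is a hypothetical dataframe with one row per story, a numeric year column and a binary social column (1 = Social topic predicted, 0 = anything else):

```python
import numpy as np
import statsmodels.formula.api as smf

# df: machine-annotated corpus (hypothetical, see above)
sub = df[(df["year"] >= 1974) & (df["year"] <= 1989)]
fit = smf.logit("social ~ year", data=sub).fit()
beta = fit.params["year"]
print(beta, np.exp(beta))  # log-odds change per year and the odds multiplier per year
```

Combined with the confusion-matrix bootstrap sketched in the Methods section, the same model can be refit on each simulated replicate to obtain the confidence intervals reported below.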
However, this is based on the predicted topics, and not all predictions may be accurate. One way to incorporate this uncertainty in the inference would be to use bootstrapping, as discussed in the Methods section: simulate the classification procedure by sampling from the test set confusion matrix (our annotated 100 synopses), then rerun the statistical model over and over on a large number of bootstraps of the simulated data. This yields bootstrapped distributions for each statistic of interest, from which confidence intervals can be calculated. In this case, since the classifier is fairly accurate,
the 95% confidence interval around the log odds estimate is \(\pm 0.02\), and for the p-value, \(\pm 0.00002\) (in other words, the upper bound is still well below the conventional \(\alpha=0.05\)).
The same procedure can be applied to percent estimates on graphs like Figure 3.A: simulate, aggregate into percentages, bootstrap, infer confidence intervals. Latent topic models may still be useful for exploration and discovery, but this exercise shows that zero-shot topic prediction is a viable alternative for testing confirmatory hypotheses about topical trends and correlations.
In a limited exploratory exercise, a sample of about 200 random synopses (about 8000 words in Russian, or 16k tokens) was fed into a GPT-3.5 version with the larger context window (gpt-3.5-turbo-16k), prompting it to come up with first any number, and then a fixed number, of general topics. By their own assessment, these lists were quite similar to the one initially produced by our expert historian.
### Historical event and cause detection
Detecting and extracting events, their participants and causes from texts, including historical documents, is not only an interest central to many fields of humanities but also to method-oriented NLP researchers (Sprugnoli and Tonelli 2019; Lai et al. 2021). Ziems et al. (2023) experimented with applying zero-shot LLMs to binary event classification and event argument extraction in English. Here, GPT-4 is applied to detecting the causes of shipwrecking events in an Estonian-language dataset of maritime shipwrecks in the Baltic sea (part of the Estonian state hydrograph database HIS 2023). Each entry contains a description of the incident based on various sources and fieldwork (n=513, ships wrecked between 1591-2006, but mostly from the 20th century). The dataset has already been enriched by domain experts, providing a ground truth to test against. One such variable is the primary cause of the wrecking, as a term or a phrase. There were 54 unique values, which were simplified here into four groups: warfare-related causes like assaults and torpedo hits; mines (both particularly frequent during the two world wars); mechanical faults like leaks, including intentional abandonment; and broadly navigational errors, such as getting caught on shallows, or in a storm or fog. Naturally, some categories may interact and contribute jointly as causes, complicating the inference task.
The descriptions used as input to the LLM range from very brief statements such as _Sunk by a mine at the mouth of Lahepere Bay on July 21, 1941_ to longer stories such as _Perished en route from Visby to Norrkoping on the rocks of Vastervik in April of 1936. After beaching in Gotland, Viljandi had been repaired and had set sail from Visby to Norrkoping around mid-month. In a strong storm, the ship suffered damage to its rudder near Storklappen, losing its ability to steer. The injured vessel drifted onto the rocks of Slado Island, where it was abandoned. Local fisherman Ossian Johansson rescued two men from the ship in his boat. One of them was the ship's owner, Captain Sillen. The wreck sank to a depth of 12 meters._ (this is marked as a navigation and weather related wrecking).
Figure 3: (A) Zero-shot prediction of predefined topics in the corpus of Soviet newsreel synopses. Vertical axis shows yearly aggregate percentages. Bootstrapped confidence intervals are added on the trend of the Social topic. There are fewer data points in the latter years, reflected in the wider intervals. (B) Wrecking causes of ships found in the Baltic sea, mostly in Estonian waters, as annotated by experts based on field notes and historical documents (left), compared to zero-shot prediction of said categories based on the same data, with bootstrapped confidence intervals on the counts. Due to fairly good classification accuracy, the counts end up roughly similar.
The model accuracy is fairly high: the (albeit simplified) primary cause prediction matches with human annotation 88% of the time (kappa 0.83). This is very good for this task where there are often multiple interacting causes and the primary one may be somewhat arbitrary. Some classes are easier than others to detect: for example the "mine" class has a 100% recall. Figure 3.B illustrates how much a predicted distribution of causes would differ from an expert-annotated one, with bootstrapped confidence intervals added to the counts using the same approach as in the previous section on topic prediction. This exercise shows even current LLMs are already fairly capable of completing annotation and inference tasks that would otherwise have required manual work by domain experts.
### LLM-powered interview analysis
Interview-based studies across many disciplines are often qualitative. Despite this, researchers may make approximate quantitative claims about "more", "less", "many" etc., without systematic quantification or statistical modeling of the (un)certainty of such claims (cf. Hrastinski and Aghaee 2012; Norman et al. 2021; Paasonen et al. 2023). Even in explicit mixed-methods approaches, modeling is often limited to counting coded themes or variables. This contribution encourages the usage of the more systematic QMM approach. This section demonstrates how to incorporate a machine annotator in analyzing interview data, and how to quantify the outcomes.
The data are synthetic, generated using GPT-4, which was prompted to output a variety of responses, as if uttered by college students, concerning benefits and downsides of doing group assignments online as opposed to meeting live. This emulates a scenario where students would be interviewed about their study experiences, and the researcher has already unitized the data as relevant passages or responses, and extracted those discussing online vs live. The latter step could be done either manually, by searching for keywords, or as another machine classification step (e.g. prompting an LLM to determine if a given passage or response is relevant for a research question or not). The synthetic data includes examples such as: _You know, one of the things that bothers me about online meetings is that it's harder to have those spontaneous moments of laughter or fun that make the work enjoyable, and that's something I really miss._ In this synthetic dataset, responses are randomly grouped by "respondents" (multiple responses per student, who are also assigned an age each) and assigned to either on-campus or off-campus living group (with a bias, to simulate a difference). The resulting data has 192 responses (rows of data) from 53 "students", where 109 off-campus responses are split 36/73 negative-positive; 64/19 for on-campus. Given the simulated nature of the data, the main variable of interest, stance towards online group work, is already known (as if coded by a human annotator).
The example (admittedly simplistic) hypothesis is: controlling for age, students living on campus see more negative aspects in doing group assignments online than those off campus. This can be tested using a mixed effects binomial regression model; the random effects structure is used to take into account the repeated measures. The model can be conveniently run using the popular lme4 package in R with the following syntax:
Logistic regression: \[\log\left(\frac{p_{ij}}{1-p_{ij}}\right)=\beta_{0}+\beta_{1} \cdot\text{campus}_{ij}+\beta_{2}\cdot\text{age}_{ij}+u_{j}\] In lme4 Syntax: \[\text{online}\sim\text{campus}+\text{age}+(1|\text{id})\]
In the constructed model, "off-campus" is the reference category for the campus variable and "off" for the response. Living on campus is associated with a decrease in the log-odds of a positive response, \(\hat{\beta}=-1.9\), corresponding to an odds ratio of about 0.14, \(p<0.001\). Regression model assumptions were checked and were found to be met. The p-value indicates the probability of observing that effect is exceedingly small (0.00000000438) if the null hypothesis (no effect of living on campus) were true, so the null can be rejected in favor of the alternative hypothesis (on-campus students don't like online).
This test was conducted directly on the synthetic data, equivalent to a scenario where a human annotated the interpretations. The same could be completed by an LLM instructed to determine the stance or attitude of the student towards online group assignments in each response (regardless of overall sentiment of the response). The LLM accuracy results are easy to report here: a suitably instructed GPT-4 detected stance towards online learning from the narrative-form responses with a 100% accuracy; i.e. the machine interpretations did not differ from ground truth in this case. Note that this exercise was independent of the data synthesis. The fact that GPT-4 both generated the initial data and was used for classification has no bearing on the accuracy, which likely stems from the combination of relatively clear stance expressions in the data and the ease of inferring them for GPT-4. If the accuracy was considerably lower -- in a real research scenario, measured using e.g. a small human-annotated test set -- then the error rate should be incorporated in the regression modeling to avoid biased estimates. See the Methods section for one approach to doing that.
While this example focused on a confirmatory case, interview-based research could benefit from machine assistance on other tasks as well. The retrieval of relevant examples was mentioned; another may be clustering interviewees (cf. Kandel et al. 2012). This could indeed be done using e.g. latent topic models, but as in the confirmatory topic classification example above, can be approached in a more principled way by having the LLM annotate responses for specific themes of interest, and then using those for clustering.
### Social network inference from literary texts
This short section showcases the potential of using LLMs as information retrieval engines. Figure 4.A depicts a character network manually constructed from "Les Misérables" by Victor Hugo, often used as a textbook example in network science and related fields. Figure 4.B is a network of interacting characters inferred automatically from the full text of the same book, by feeding each chapter into GPT-3.5 with the prompt to list pairs of characters who directly converse in the chapter. The result may well have some errors -- some anomalous pairs like street names and unspecific characters ("people" etc.) were filtered out post-hoc. Better results may well be achieved by better prompts and using more capable models like GPT-4. Still, the result is also much richer than the smaller manual version, including non-plot characters discussed by Hugo in tangential sections of the book. This limited exercise shows that LLMs can be used for information retrieval tasks like this in H&SS contexts, while preprocessing with specialized models (named entity recognition, syntactic parsing, etc.) is no longer strictly required (cf. Elson et al. 2010).
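A sketch of this kind of extraction loop follows, using the networkx package and the same pre-1.0 openai interface as in the Methods section; the prompt wording, the output format and the chapters list are illustrative assumptions.

```python
import networkx as nx
import openai

def conversing_pairs(chapter_text: str) -> list[tuple[str, str]]:
    """Ask the model for pairs of characters who directly converse in a chapter."""
    prompt = ("List the pairs of characters who directly converse with each other "
              "in this chapter, one pair per line, formatted as: Name1 ; Name2\n\n"
              + chapter_text)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    pairs = []
    for line in response["choices"][0]["message"]["content"].splitlines():
        if ";" in line:
            a, b = (part.strip() for part in line.split(";", 1))
            if a and b:
                pairs.append((a, b))
    return pairs

G = nx.Graph()
for chapter in chapters:  # chapters: hypothetical list of chapter texts of the novel
    G.add_edges_from(conversing_pairs(chapter))
# Post-hoc filtering of anomalous nodes (street names, generic "people", etc.) follows.
```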
### Relevance filtering and OCR correction in digitized newspaper corpora
Digitization efforts of historical textual data such as newspapers and books have made large-scale, computer-assisted diachronic research feasible in many domains where this was not possible before. However, finding relevant examples from vast swathes of digitized data, more so if it is plagued by optical character recognition (OCR) errors, can be a challenge. A part of the pipeline from a recent work (Kanger et al. 2022) is replicated here. The study sought to measure central trends in dominant ideas and practices of industrial societies with a focus on the topic of nature, environment and technology, based on digitized newspapers from multiple countries.
Their pipeline for retrieving relevant examples on the topic for further analysis consisted of assembling a set of broad keywords, extracting passages where these occur, and estimating if they match the topic (as "nature" can also refer to "human nature", etc.). They used LDA topic modeling, which required cleaning and lemmatizing the texts, and annotating the latent LDA topics for relevance. Such pipelines can be streamlined into a single operation on an LLM. The authors (Kanger et al. 2022) kindly provided a human-annotated test set of 99 excerpts for this exercise. The experiments here include both GPT-3.5 and GPT-4, and also evaluate the effect of adding an OCR-correction step before the classification step. While many of the corpus texts are fairly readable, they also contain examples such as this:
_principally to casing in \(\Rightarrow\) u j allan consolidated bonds nine Issues Siorln \(\approx\) falli and on'y two Issues galnl \(\approx\) 8 The liteti Included the 3. per cent 1942 in which lagi pa'ack were bou.ht The Syd Iii, banks lollungulabel a small gait of recent trim Arinstnatru-raleacilon in \(t\). limited \(S\), \(r\) of issues the main body of Indu- irai continued to find keen support._

The GPT-4-cleaned version: _principally to using in Australian consolidated bonds; nine issues showing a fall and only two issues gaining. The latter included the 3 per cent 1942, in which large parcels were bought. The Sydney banks relinquished a small part of recent gains. As a natural reaction in the limited set of issues, the main body of industrial continued to find keen support._

Figure 4: Social networks of directly interacting characters in "Les Misérables" by Victor Hugo, manually constructed textbook example on the left (A) and as automatically inferred using LLMs on the right (B; GPT-4 was used to infer the gender of the character; men are blue and women are orange).
While such operations may suffer from LLM hallucination issues, we can test if this step degrades or improves the downstream classification results.
The case turns out to be the latter. Given a single-sentence prompt to classify a given input as having mentioned "nature or environment in the biological natural world sense, including nature tourism, landscape, agriculture, environmental policy" (see Appendix), the results are as follows. Without the cleaning, GPT-3.5 gets 0.79 accuracy (0.49 kappa) and GPT-4: 0.9 (0.77). With cleaning, GPT-3.5 gets 0.82 (0.56) and GPT-4: 0.92 (0.82 kappa). This is again on a task with very limited, often historical period-specific contexts. More precise prompting would likely help. There were for example cases such as "atmosphere of natural gas [on Pluto]" and "nature provides nourishment for the newborn [fetus]" where the machines and humans disagreed.
In summary however, using zero-shot or fine-tuned LLMs may well provide a simpler and faster alternative to complex processing and annotating pipelines such as those described in Kanger et al. (2022), as well as obviate the need for parameterizing and carrying out mathematical operations to make embedding vectors usable (cf. Sen et al. 2023). LLMs can also assist in situations with distorted data such as OCR-processed text. The combination of initial rough search (keywords or regex) with a follow-up LLM-based filtering step may well be a fruitful approach, as running an entire corpus through a cloud service LLM like GPT-4 can be very costly (and time-consuming, even if using a local model).
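A minimal sketch of such a two-step filter, with a cheap regular-expression shortlist followed by an LLM relevance decision (the keyword pattern, prompt and model name are illustrative assumptions):

```python
import re
import openai

NATURE_RE = re.compile(r"\bnatur\w*|\benvironment\w*", re.IGNORECASE)

def relevant(passage: str) -> bool:
    """Regex shortlist first; LLM relevance check only on the keyword hits."""
    if not NATURE_RE.search(passage):
        return False
    prompt = ("Does this text mention nature or environment in the biological, "
              "natural-world sense? Answer yes or no.\n\nText: " + passage)
    answer = openai.ChatCompletion.create(
        model="gpt-4-0613",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
        max_tokens=1,
    )["choices"][0]["message"]["content"].strip().lower()
    return answer.startswith("yes")
```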
### Text and idea reuse detection
Political studies, history of ideas, cultural history, and the science of science are among disciplines interested in the spread and reuse of ideas, texts and other content (see Chaturvedi et al. 2018; Linder et al. 2020; Salmi et al. 2021; Gienapp et al. 2023). Automated detection of reuse may be based on keywords, latent embeddings or hybrid approaches, but is considered hard for a variety of reasons (Chaturvedi et al. 2018; Manjavacas et al. 2019). While tracking verbatim reuse of an entire news article or a passage is not difficult, reuse and spread of smaller units and abstract ideas is, more so if it crosses the boundaries of languages. A synthetic test set of pseudohistory-style blog posts is used here, modeled directly after Oiva and Ristila (2022), who surveyed the landscape of pseudo-historical ideas and their outlets in Russian-language online spaces. The classification task involves detecting the occurrence or "reuse" of the idea that "Russians are descendants of the Huns", apparently common in such user groups.
The data is generated as follows. GPT-4 was first instructed to compile 50 short paragraphs in English on various other pseudohistorical topics drawn from Oiva and Ristila (2022) that would include this claim, and 50 that would not. These 100 items were then modulated and distorted in a variety of ways, again using GPT-4: rephrasing the claim, inducing "OCR errors", translating into Russian -- and combinations thereof. As an example of original text and its maximal modulation:
_It's an often-overlooked fact that all the weapons used in seventeenth-century Europe were produced by the Russians. This massive weapons production and export reflect an advanced civilization, attesting to the fact that Russians are descendants of the Huns. It's a narrative that resists the distortions of history, reaffirming Russian heritage._ This becomes:
_Zho nagagn
also catch such items. In summary, as shown here, text and idea reuse detection is very much feasible using instructable LLMs such as GPT-4, including cases of idea transfer across languages.
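A sketch of how such a reuse-detection run can be scored per modulation condition is below, assuming an `ask()` helper that wraps a GPT-4 call (as in the earlier sketch) and a test set of (text, condition, gold) triples; the prompt wording and condition names are illustrative.

```python
from collections import defaultdict

CLAIM = "Russians are descendants of the Huns"

def contains_claim(text, ask):
    q = ("Does the following text assert, in any language or wording, the idea that "
         f"'{CLAIM}'? Answer only yes or no.\n\n" + text)
    return ask(q).lower().startswith("yes")

def accuracy_by_condition(test_set, ask):
    # test_set: (text, condition, gold) triples; condition e.g. "original",
    # "rephrased", "ocr-distorted", "russian", "russian+ocr"
    hits, totals = defaultdict(int), defaultdict(int)
    for text, cond, gold in test_set:
        totals[cond] += 1
        hits[cond] += int(contains_claim(text, ask) == gold)
    return {cond: hits[cond] / totals[cond] for cond in totals}
```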
### Linguistic usage feature analysis
As discussed in the introduction, machine learning, including large language models, has found various uses in branches of linguistics (beyond the explicitly labeled computational one). The application of LLMs in the usage feature analysis framework appears to be still novel. This case study replicates a part of the pipeline of a recent work on linguistic value construction in 18th century British English advertisement texts (Mulder et al., 2022). The researchers were interested in modifiers such as adjectives as a way of expressing appreciation in language, and in testing hypotheses about historical prevalence trends of both modifier usage and the advertised objects themselves.
The paper goes into detail about the process of developing the categories of Evaluative and Descriptive modifiers via systematic annotation exercises, and of normalizing spelling in the historical texts (heterogeneous by nature and plagued by OCR errors) via a process involving edit distance metrics, word embeddings and manual evaluation. While cleverly utilizing computational tools, it is evident that no small amount of manual effort was expended in that project. Most of such manual work can be streamlined and automated using zero-shot LLMs. As shown above in the relevance filtering and text reuse sections, models like GPT-4 are quite capable both at fixing low-quality OCR and at working with OCR-distorted texts.
Replicating the annotation step consisted of instructing GPT-4 to detect whether a given phrase such as _servants stabling_ or _fine jewelry_ is objectively descriptive or subjective (evaluative) in nature. The model achieves strong agreement with the human annotations in the paper (accuracy 0.94, kappa 0.89). For context, in the first iteration of the annotation process, the paper reports the kappa agreement between the two researcher annotators to have been 0.84. This is clearly not an easy task either and may require subjective decisions; e.g. _servants horse_ is tagged objective yet _gentleman's saddle_ as subjective in their final dataset, which may be debatable. This exercise used an example from the lexical semantics domain with links to cultural history, but the same approach could equally be used to automate or augment linguistic feature analyses in domains like grammar and syntax (cf. Begus et al., 2023; Qin et al., 2023; Beuls and Van Eecke, 2024).
### Lexical semantic change detection
Unsupervised lexical change or shift detection is a task and research area in computational linguistics that attempts to infer changes in the meanings of words over time, typically based on large diachronic text corpora (Gulordava and Baroni, 2011; Hamilton et al., 2016; Dubossarsky et al., 2019). Data from the domain of historical linguistics or annotated test sets may be used to evaluate different approaches (Schlechtweg et al., 2018). Such results may be of interest to lexicologists, NLP scientists looking to improve the robustness of their language models, or linguists looking to understand language change. Schlechtweg et al. (2020) reports on a shared task at the SemEval 2020 conference, where a large number of competing approaches were pitted against an annotated test set covering four languages and centuries of language change. There were two subtasks: 1) binary classification to determine which words have and have not lost or gained senses between the given time periods, and 2) a graded change detection task which was evaluated by comparing the rankings of the test words, ranked according to how much they had changed. There were 27-38 test words per language.
This sparked follow-up research in the field: for example, while type-based word embeddings were (somewhat surprisingly) more successful in that task than more recent token-based (contextual, BERT-like) models, later research has shown how to adapt LLMs to such tasks (Rosin and Radinsky, 2022). The latter is the highest-scoring approach on subtask 2 reported since the original controlled shared task, evaluated on the same test set, according to a recent large-scale survey on the topic (Montanelli and Periti, 2023).
As a simple experiment setup, GPT-4 was instructed to determine if the meaning of a given target word in two example sentences is either the same, closely related, distantly related or unrelated (see Appendix). This is loosely based on the DURel schema used in annotating original test data to produce the gold standard classes and rankings (cf. Schlechtweg et al., 2018). The task dataset contains pairs of moderately sized, randomly sampled subcorpora for each language, representing two distinct time periods each (e.g. 1810-1860 vs 1960-2010 from the Corpus of Historical American English). The procedure involved sampling 30 sentence pairs for each word in the test set (with replacement, as not all words would occur frequently enough). For the classification task, a threshold of 2 or more "unrelated" judgments was used to
indicate that a sense has emerged or disappeared (an optimized threshold might improve the results).
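The aggregation logic is simple enough to sketch, assuming a `judge_pair()` helper that returns one of the four DURel-style labels for a given target word and sentence pair (e.g. via a GPT-4 prompt); the scale mapping and the threshold follow the description above, the rest is illustrative.

```python
import random
from scipy.stats import spearmanr

SCALE = {"unrelated": 1, "distantly related": 2, "closely related": 3, "same": 4}

def judgments_for_word(word, sents_t1, sents_t2, judge_pair, n_pairs=30):
    # sample sentence pairs (with replacement) across the two time periods
    pairs = [(random.choice(sents_t1), random.choice(sents_t2)) for _ in range(n_pairs)]
    return [judge_pair(word, s1, s2) for s1, s2 in pairs]

def binary_change(labels, threshold=2):
    # a word counts as changed if at least `threshold` pairs were judged unrelated
    return int(sum(lab == "unrelated" for lab in labels) >= threshold)

def graded_change(labels):
    # lower mean relatedness means more change; negate so that larger = more change
    return -sum(SCALE[lab] for lab in labels) / len(labels)

def graded_correlation(scores, gold_ranking):
    # scores and gold_ranking: dicts mapping each test word to a value
    words = sorted(gold_ranking)
    return spearmanr([scores[w] for w in words],
                     [gold_ranking[w] for w in words]).correlation
```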
In the binary classification task, this simple zero-shot approach, based on evaluating just a handful of examples, performs as well as the best model reported in the SemEval task in English (70% accuracy; Figure 5.A). It is just above the random (majority) baseline but below SOTA in German; and practically at random for Latin. In the second, semantic change subtask however, it goes well beyond the best SemEval result for English (\(\rho=0.81\) vs \(0.42\), a \(2\)x improvement; Figure 5.B). It also surpasses the more recent Rosin and Radinsky (2022) LLM-based architecture that had a \(0.52\) correlation with the test set. In German, the result of \(0.75\) is between that and the best SemEval model (\(0.76\) and \(0.73\), respectively). Judging by the trend, it may improve if given more than just \(30\) examples (the SemEval models used entire training corpora; better prompting may also increase scores). Latin at \(0.1\) performs below both comparisons, but above random (which given it is Spearman correlation would be \(0\)).
There are two takeaways here. Zero-shot, using a large enough generative LLM, can perform on par with or surpass purpose-built architectures based on smaller LLMs or embeddings, while requiring minimal effort to carry out. Setting up this experiment here required writing a one-sentence prompt to be iterated with the example pairs on the GPT-4 API. In contrast, the authors of the models featured in the SemEval task paper (Schlechtweg et al., 2020) clearly put no small amount of work into developing their various embedding, ensemble and LLM-based architectures (each spawning at least one paper of its own). Rosin and Radinsky (2022) is a full-length paper in a high-ranking NLP conference. On the flip side, pretrained instructable LLMs are only as good as their training data. Clearly, there is not enough Latin in GPT-4 for it to perform well here.
### Challenging linguistic data annotation
Recent work has shown that large enough LLMs like GPT-4 can perform at near human annotator level in various linguistic and textual tasks (Begus et al., 2023; Gilardi et al., 2023; Fan and Jiang, 2023; Huang et al., 2023; Qin et al., 2023; Ziems et al., 2023). One such use case is reported here, using a setup similar to the lexical change detection case study above. In a separate study focusing on linguistic divergence in US American English (Karjus and Cuskley, 2023), we looked into modeling differences between two groups of users, those aligned with the political "left" and those with the "right". The data was mined from the social media platform Twitter (now "X"). We experimented with using word embeddings of the type that performed well in the shared task discussed above (Schlechtweg et al., 2020), as well as annotating a small dataset by hand following the DURel framework (cf. Schlechtweg et al., 2018) mentioned above for evaluation purposes. This involved comparing the usage and therefore meaning of a target word, phrase or emoji in contexts derived from the tweet corpus, for example (the examples have been rephrased here in order to preserve author anonymity):
_We have a kitten right now who is in a bad condition, need to get him to a **vet**, got many more here like this --_ compared to _-- This is a president that knows how to withdraw forces when necessary. Perhaps if more **vets** ran for office we would have people in charge who can do what is needed._
Figure 5: Lexical semantic change detection using GPT-4 on two tasks, binary classification (A) and graded change (B), in three languages. The trend lines illustrate how well the zero-shot approach performs given an increasing number of example pairs (bootstrapped average values). The top results from the SemEval task are highlighted with solid lines. The gray lines are random baselines for binary classification. A later LLM-based result in (B) is shown with the dotted line. GPT-4 performs best in English as expected, even surpassing past approaches on the second task.
We also applied GPT-4 to the same annotation task in the role of a 'third annotator'. There were 8 target words and emoji, three comparison sets for each to determine difference as well as in-group polysemy; 320 pairs in total. The two human annotators had a good agreement rate of \(\rho=0.87\) (measured in Spearman's rho, given the ordinal DURel scale). GPT-4 achieved moderate agreement of \(\rho=0.45\) with one and 0.6 with the second annotator. The lower rate compared to the human inter-rater agreement was partially affected by the emoji in the test set, which the humans also found difficult to annotate. There was also very little context to go on, and social media texts can include rather non-standard language and abbreviations.
This exercise shows that while LLMs have become good enough for many textual and linguistic annotation and analysis tasks, it is important to check their accuracy rates against human annotator and preferably expert judgments. This exercise did not involve iteratively improving the simple single-sentence prompt -- better agreement may be achieved with more detailed, and potentially iterative or step-wise prompting (Chen et al. 2023a).
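For reference, the agreement figures reported above reduce to a few lines, assuming each annotator's ordinal DURel judgments are stored in the same pair order; `spearmanr` is from scipy.

```python
from scipy.stats import spearmanr

def pairwise_agreement(ratings):
    # ratings: dict mapping annotator name -> list of ordinal judgments over the same pairs
    names = list(ratings)
    return {(a, b): spearmanr(ratings[a], ratings[b]).correlation
            for i, a in enumerate(names) for b in names[i + 1:]}

# e.g. pairwise_agreement({"annotator1": h1, "annotator2": h2, "gpt4": g}) returns
# Spearman's rho for every annotator pair, human or machine
```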
### Stance and opinion detection
In a recent paper (Mets et al. 2023), we investigated the feasibility of using pretrained LLMs for stance detection in socio-politically complex topics and lower-resource languages, on the example of stances towards immigration in Estonia. Estonian is not low-resource in the sense that there are written corpora and some NLP tools available, but given that the number of speakers is small (the population of Estonia is 1.3M), the amount of training data available is limited compared to English or German. We experimented with fine-tuning the previous generation of BERT-class models (Devlin et al. 2019) on a hand-annotated dataset of thousands of examples of pro/against/neutral stances towards immigration. The best-ranking fine-tuned RoBERTa model performed on par with a zero-shot GPT-3.5 approach (which had just come out, in late 2022). Naturally, zero-shot is a much cheaper alternative, obviating the need for costly manual training set construction and LLM fine-tuning (which either requires beyond consumer-grade hardware or paying for a cloud service). Emergent LLM applicability research reported similar results (Zhang et al. 2023a; Gilardi et al. 2023).
In another upcoming work (Karjus in prep), we report on a collaboration with the Estonian Police and Border Guard Board on a cross-sector project to analyze large media datasets to determine societal stances towards the institution of police and police personnel. Estonia is a multilingual society: while the media primarily caters to the Estonian-speaking majority, there are newspapers, TV and Radio stations in Russian as well as outlets with news translated into English. This necessitates a multilingual approach. We apply a pipeline similar to the immigration case study: a first pass of keyword search across corpora of interest followed by LLM-based filtering of the found examples for relevancy, and LLM-powered stance analysis applied to this filtered set. Finding contextually relevant examples from simpler keyword matches is crucial for accurate stance detection. For example, if the target of interest is Estonian Police, articles discussing police news from other countries, metaphorical expressions ('fashion police') and fictional contexts (films featuring police) should be excluded.
While in the recent past accurate stance detection or aspect-based sentiment analysis would have required complex machine learning pipelines (Kucuk and Can 2020; Nazir et al. 2022; Rezapour et al. 2023) and model tuning, this can now be solved with zero-shot LLMs. We annotated a 259-sentence test set in Estonian; there were 90 non-relevant examples, and of the relevant 31 were negative, 199 neutral, 19 positive. This is quite representative: most police-related reporting is neutral about the police itself. In detecting relevant examples, GPT-3.5 only gets to 76% accuracy (kappa=0.4, i.e. accounting for baseline chance; mean F1 at 0.54) but GPT-4 achieves 95% accuracy (kappa=0.9, F1=0.9). We included both the target sentence and the title of the source article in the prompt for context, but some cases are difficult, e.g. where it is only implied indirectly that the police of another country is discussed. More context (e.g. paragraphs or a fixed larger context window) might help, but is more costly (more tokens to parse). In stance detection, GPT-3.5 agrees with human annotations at a rate of 78% (kappa=0.36; F1=0.51), while GPT-4 gets to 95% accuracy (kappa=0.88; mean F1=0.92). Again, this is quite good, as many examples are ambiguous. For example a sentence can be overtly negative while the police may be mentioned as a neutral participant; or the police might be reported to have done their job, which could be seen as neutral or positive, depending on perspective. Yet LLMs show promise as universal NLP tools in media monitoring contexts.
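A sketch of such a two-stage media-monitoring pipeline is below; the keyword stem, prompt wording and label set are illustrative assumptions, not the exact ones used in the project, and the openai Python client (v1 interface) is assumed.

```python
import re
from openai import OpenAI

client = OpenAI()
KEYWORDS = re.compile(r"\bpolitsei\w*", re.IGNORECASE)  # illustrative Estonian stem

def ask(prompt, model="gpt-4"):
    resp = client.chat.completions.create(
        model=model, temperature=0,
        messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content.strip().lower()

def is_relevant(title, sentence):
    q = (f"Article title: {title}\nSentence: {sentence}\n"
         "Is this sentence about the Estonian police as an institution or its personnel "
         "(not foreign police, metaphor or fiction)? Answer only yes or no.")
    return ask(q).startswith("yes")

def stance(title, sentence):
    q = (f"Article title: {title}\nSentence: {sentence}\n"
         "What stance does the sentence express towards the police? "
         "Answer with one word: positive, neutral or negative.")
    return ask(q)

def pipeline(corpus):
    # corpus: iterable of (title, sentence) pairs
    hits = [(t, s) for t, s in corpus if KEYWORDS.search(s)]
    return [(t, s, stance(t, s)) for t, s in hits if is_relevant(t, s)]
```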
### Genre detection in literature and film scenes
Computational literature studies is an emerging field within digital humanities. In their large-scale LLM applications paper, Ziems et al. (2023) discuss the computational analysis of themes, settings, emotions, roles and narratives, and benchmark some of these tasks. Instead of testing against another benchmark, a real world study (Sobchuk and Sela 2023) is replicated here, to illustrate how instructable LLMs can be used as a simpler alternative to complex computational pipelines, which often require extensive parameterization. The latter study seeks to compare approaches of capturing and clustering thematic (genre) similarity between literary texts. This is tested by comparing how well clustering of automatically extracted features matches a manually assigned genre system of Detective, Fantasy, Romance and Sci-Fi. They evaluate a large set of combinations of text embedding algorithms (bag-of-words, topic models, embeddings), their parameters, preprocessing steps and distance measures for the final step of clustering. The target measure is the Adjusted Rand Index (or ARI; Hubert and Arabie 1985).
Given that Cohen's kappa score is comparable to the Adjusted Rand (Warrens 2008), the performance of an LLM set up to classify genres can be directly compared to their clustering task results. The authors generously shared their labeled 200-book test set for this purpose. For this exercise, rather than parsing entire books, 25 random passages were sampled from each (5000 in total). GPT-3.5 was instructed to label each presented passage as one of the 4 genres (briefly defined in the prompt; see Appendix), and the assigned label for a book was simply the most frequent label (better results may well be achieved by parsing more data). The best-performing parameter and model combination in Sobchuk and Sela (2023) used strong preprocessing, a 300-dimensional doc2vec model (Le and Mikolov 2014), and cosine similarity. The preprocessing involved lemmatizing, named entity detection and removal, part-of-speech tagging for stopword removal, and lexical simplification (replacing infrequent words with more frequent synonyms using an additional word embedding). This combination yielded an ARI of 0.7.
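A sketch of the passage-sampling and majority-vote step, assuming a `classify_passage()` function that wraps the genre-labeling prompt; passage length and counts are illustrative.

```python
import random
from collections import Counter
from sklearn.metrics import cohen_kappa_score

def sample_passages(book_text, n=25, length=1500):
    # draw n random character windows from the book; window size is an arbitrary choice
    starts = [random.randrange(max(1, len(book_text) - length)) for _ in range(n)]
    return [book_text[s:s + length] for s in starts]

def book_genre(book_text, classify_passage):
    # classify_passage returns one of the four genre labels for a single passage
    labels = [classify_passage(p) for p in sample_passages(book_text)]
    return Counter(labels).most_common(1)[0][0]

def evaluate(books, gold_labels, classify_passage):
    # books: list of full texts; gold_labels: the manually assigned genres
    pred = [book_genre(text, classify_passage) for text in books]
    return cohen_kappa_score(gold_labels, pred)
```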
Our simple zero-shot LLM approach here achieved a (comparable) kappa of 0.73 (0.8 accuracy) without any of the preprocessing (and only judging a small subset of random passages per book, using the cheaper GPT-3.5 instead of 4). Some genres were easier than others, e.g. Fantasy had 100% recall, while books combining multiple genres complicate the task. These results echo the message of the first case study in this section: instead of clustering or topic modeling, zero-shot learning enables direct prediction and classification, and using LLMs obviates or at least eases the need for complex processing pipelines (see also Chaturvedi et al. 2018; Sherstinova et al. 2022).
Instead of labeling entire books with a single genre label, on-demand classification like this can yield a more informative distribution of genres for each work of fiction under examination. Figure 6 illustrates a related proof of concept of another application of zero-shot text classification. In a similar way to the genre classification exercise above, two texts were split into manageable chunks: P.K. Dick's "Do Androids Dream Of Electric Sheep?", and the script of "Blade Runner" based on the former, by H. Fancher and D. Peoples (the 1981 version with the happy ending and voice-overs). The script is split up by scenes (but merging very short scene descriptions) and the book into equally sized chunks (a larger number, as the book is 3 times longer). Each segment is classified using GPT-3.5 with the same prompt as above (with the addition of the thriller class). Here things are kept simple and the classifier accuracy is not incorporated in the visualization, assuming it to be good enough for explorative purposes.
Differences between the book and adaption are revealed: the movie is more of a thriller with sci-fi and detective story elements, while the book delves into various other topics. Both have most of the detective elements in the first half, and
Figure 6: Zero-shot classification of genre across one book and its film adaption, split into equally-sized segments and scenes, respectively. Frames from the film are added for illustration. Differences and similarities become readily apparent, and can provide basis for follow-up qualitative or quantitative comparisons.
romantic elements around the middle. The one segment labeled as "fantasy" does include the following: _"The donkey and especially the toad, the creatures most important to him, had vanished, had become extinct; only rotting fragments, an eyelash head here, part of a hand there, remained. At last a bird which had come there to die told him where he was. He had sunk down into the tomb world."_
This exercise is of course only a very rough approximation -- one could also take into account running time, or try to align a book and its adaption (cf. Yi et al. 2023). Still, this exercise illustrates the potential of using zero-shot LLMs to explore qualitative data, without the need of training a specialized classifier for genre, mood, action, etc. Multi-modal models (explored in the last subsection below) can add another dimension of zero-shot scene analytics.
### Automated literary translation analysis and a semantic edit distance
This section describes two case studies, one explorative and the other testing the accuracy of LLMs as multilingual semantic distance evaluators. The first experiment consists of automatically aligning and then qualitatively evaluating the English and translated Italian version of the first paragraphs of G. Orwell's "1984" (until the "war is peace, freedom is slavery, ignorance is strength" part). This involved using two tools. BERTalign (Liu and Zhu 2023) was used to split and align the sentences of the source and translation, yielding 47 sentence pairs. The second step was to prompt GPT-4 to examine each pair, outputting whether there are any significant lexical or stylistic differences, and if so, briefly explaining them. The outcome was then examined by two native Italian-speaking literature scholars (see Acknowledgments). Both concluded that the alignment as well as GPT-4's inferences were largely correct and insightful, with no significant misinterpretations. While this is only a qualitative initial assessment, it shows that the approach of combining multilingual LLM-driven aligners such as BERTalign with generative LLM-driven interpretation can easily enable scaling up translation and literary analysis to much larger datasets than a single human researcher could manually read in their lifetime.
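A sketch of the second, interpretation step, assuming the sentence alignment (e.g. with BERTalign) has already produced a list of source-translation pairs, and reusing an `ask()` helper that wraps a GPT-4 call; the prompt wording is illustrative.

```python
def compare_pair(src, tgt, ask):
    prompt = ("Compare this English source sentence and its Italian translation. "
              "Are there significant lexical or stylistic differences? "
              "If yes, explain briefly; if not, answer 'none'.\n"
              f"Source: {src}\nTranslation: {tgt}")
    return ask(prompt)

def analyze_translation(aligned_pairs, ask):
    # aligned_pairs: list of (source_sentence, translated_sentence) tuples
    return [(src, tgt, compare_pair(src, tgt, ask)) for src, tgt in aligned_pairs]
```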
Since generative LLMs can be prompted to classify anything on demand, here is also an experiment to implement a kind of "semantic edit distance". String edit distances are widely used in linguistics, NLP and information retrieval among others (Manning et al. 2008; Wichmann et al. 2010). For example, Levenshtein distance operationalizes string distance as the optimal required number of additions, deletions or substitutions (of e.g. letters) to transform one string (word) to another. The distance of _dog_ to _log_ is 1 (1 substitution); to _cogs_ it is 2 (1 substitution, 1 addition). This approach works for comparing texts in the same language, but not across different languages, nor can it capture semantic equivalence if synonyms are used.
While machine translation algorithms or multilingual sentence embeddings can output a numeric similarity between two sentences in different languages, it would be useful to have a more fine-grained, interpretable metric, for example in fields like literary and translation studies. As an experiment, GPT-4 is prompted here to determine if a given source and translation differ, and if so -- inspired by Levenshtein -- whether it is an addition, deletion or substitution. The test set is synthetic, generated also using GPT-4, prompted to output sentences from a children's story about a rabbit in a forest, in English and Japanese. 25 pairs match closely, but in 25 the rabbit is replaced with a bear in the Japanese version, in 25 a moose character is added, and in the last 25 the rabbit kisses his rabbit girlfriend in English, which is redacted in the Japanese version (emulating censorship scenarios not uncommon in the real world; cf. Inggs 2011). As an "edit distance", the translations as a text would have a ground truth total distance of 75/100. The results are very good, with accuracy at 96% across the four classes (0.95 kappa; the simple sum of non-close classes, or "distance", is 74/100, i.e. 1 off). This demonstrates the applicability of LLMs to translation studies and other scenarios which require semantic comparison of source texts to translated, altered or censored variants, but beyond simple numeric similarity scores.
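The scoring logic of such a distance can be sketched as follows, assuming a `judge_pair()` helper that prompts the model to label a source-translation pair as close, addition, deletion or substitution.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

def semantic_edit_distance(pairs, judge_pair):
    # judge_pair(source, translation) returns one of:
    # "close", "addition", "deletion", "substitution"
    labels = [judge_pair(src, tgt) for src, tgt in pairs]
    distance = sum(lab != "close" for lab in labels)
    return labels, distance

def evaluate(pairs, gold_labels, judge_pair):
    pred, dist = semantic_edit_distance(pairs, judge_pair)
    return accuracy_score(gold_labels, pred), cohen_kappa_score(gold_labels, pred), dist
```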
### Zero-shot lexicography for novel words
In this and the next section, two practical applications of instructable LLMs are considered which can but do not need to be part of a larger MAMM approach. Both computational approaches, such as word embeddings or LLMs, and qualitative approaches can be used for determining the meaning of novel words such as borrowings, be it for linguistic or lexicographic purposes such as dictionary building. Here the utility of using generative LLMs as a "zero-shot lexicographer" is demonstrated, using a synthetic test set. This was generated also by GPT-4, instructed to compile a set of unrelated sentences that would use one of these three target senses: _bear_, _glue_ and _thief_ (representing both animate and inanimate subjects, countable and mass nouns), in three languages: English, Turkish and Estonian (representing different
language families and speaker population sizes). Each target sense is instructed to be expressed with a placeholder word, _zoorplick_, instead. This was chosen as a word that would be unlikely to be in the training data. GPT-4 was also queried to guess its "meaning" and the machine came up with nothing. The context is intentionally just a sentence to make the task harder.
In the testing phase, GPT-4 was instructed to infer the meaning of the placeholder given the separately generated contexts. The LLM output was not constrained, making this an open-ended exercise. Some leeway was given: _adhesive_ would be accepted as correct for _glue_, and _burglar_, _robber_, _pickpocket_ as types of _thief_. The results, illustrated in Figure 7, are promising: _glue_ and _thief_ can be correctly inferred in all three languages already based on 3-4 examples. _bear_ is more difficult, as with only sentence-length contexts, the LLM mistakes it for various other wild predators, but accuracy improves with more examples. This exercise shows lexicography and dictionary making can benefit from applying zero-shot generative models either in lieu of or in conjunction with specialized models or human lexicographers.
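A sketch of how the guessing accuracy can be traced against the number of example sentences, assuming an `ask_llm()` wrapper around a GPT-4 call; the accepted-synonym sets, prompt wording and trial counts are illustrative.

```python
import random

ACCEPTED = {
    "glue":  {"glue", "adhesive"},
    "thief": {"thief", "burglar", "robber", "pickpocket"},
    "bear":  {"bear"},
}

def guess_meaning(examples, ask_llm):
    prompt = ("The made-up word 'zoorplick' stands for a real concept. Based on these "
              "sentences, what does it mean? Answer with a single word.\n"
              + "\n".join(examples))
    return ask_llm(prompt).strip().lower()

def accuracy_by_n_examples(sentences, sense, ask_llm, max_n=10, trials=20):
    # sentences: the generated contexts for one sense in one language
    acc = {}
    for n in range(1, max_n + 1):
        hits = sum(guess_meaning(random.sample(sentences, n), ask_llm) in ACCEPTED[sense]
                   for _ in range(trials))
        acc[n] = hits / trials
    return acc
```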
### LLMs for missing data augmentation
Working with large but incomplete databases poses a challenge across many fields. While numerical prediction-based missing data imputation approaches exist, they can lead to biased estimates. Here is an experiment with LLM-driven semantic imputation on a real dataset. In a recent public service media study (Ibrus et al. 2022), we explored a large dataset of television metadata from a broadcast management system (BMS; essentially a production database as well as an archive) of the channels of the Estonian Public Broadcasting (ERR). The study covered 201k screen time hours across 408k program entries and investigated dynamics of content types and production countries between 2004-2020 among other things. In a follow-up study (in prep.), this is being compared to a similar BMS dataset of neighboring Finland's public broadcaster YLE. The data is again partial, with notably production countries missing from about 23% of daily programming entries. Such missing data could be manually added by reading through the rest of the metadata like program title, synopsis or description entries and searching for the origin of the shows and films from additional sources. This would of course be incredibly time consuming.
This is an attempt to infer production country directly from limited metadata using GPT-4, on a randomized test set of 200 unique program entries where the true production country is actually present (15 different countries ended up in the test set). The task is complicated by the fact that the entire BMS is in the Finnish language, including the titles and the (very) short descriptions which are used to prompt the LLM. Besides many country names like _Yhdysvallat_ (USA), smaller place names can also be translated, e.g. Lake Skadar (also Skadarsko, Scutari; in the Balkans) is referred to as Skutarijärvi in one of the synopses, which might as well be a Finnish place name. Non-Finnish names are also modified according to Finnish morphology, e.g. _Jamie odottaa tuomionsa toimeenpanoa **Wentworthin** linnassa, mutta pian häntä odottaa kuolemaakin hurjempi kohtalo. Claire panee henkensä likoon pelastaakseen miehensä sadistisen **Randallin** kynsistä._ This synopsis is also typical in length; the median in the test set is 206 characters.
The task is set up without constraining output classes to a fixed set like in most other classification tasks here, to give the LLM free rein to take an educated guess. Despite these complexities and the open-ended nature of the task, the results are promising, with accuracy at 72%. Most mismatches make sense too: mixing up English-speaking countries is the most common source of errors, followed by the German-speaking, and also the Nordic countries. This illustrates the applicability of LLMs for data imputation and augmentation in complex social and media science datasets, but also the necessity to account for error rates in any subsequent statistical modeling based on the augmented data, to avoid biases,
Figure 7: Zero-shot lexicography: inferring the meaning of the placeholder word _zoorplick_ for three target senses in three languages, given an increasing number of example sentences.
as discussed in Methods. If augmented data are added to an existing database, they should of course be transparently flagged as such.
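A sketch of this kind of imputation step, again assuming the openai client; the prompt wording and field names are illustrative, and imputed values are flagged so that downstream models can account for the known error rate.

```python
from openai import OpenAI

client = OpenAI()

def infer_country(title, synopsis, model="gpt-4"):
    prompt = (f"Title: {title}\nSynopsis (in Finnish): {synopsis}\n"
              "In which country was this TV programme or film most likely produced? "
              "Answer with the country name only.")
    resp = client.chat.completions.create(
        model=model, temperature=0,
        messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content.strip()

def impute_missing(rows):
    # rows: dicts with 'title', 'synopsis' and a possibly empty 'country' field
    for row in rows:
        if not row.get("country"):
            row["country"] = infer_country(row["title"], row["synopsis"])
            row["country_imputed"] = True  # transparently flag augmented values
    return rows
```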
### Visual analytics at scale using multimodal AI
The case studies above have focused on text analytic capabilities of large language models. There is a clear direction towards multimodal models though, and GPT-4 is, technically, one of them (OpenAI 2023). At the time of writing, these capabilities were accessible via Microsoft's Bing AI web app, running on a version of GPT-4, according to Microsoft. Figure 8 depicts four examples of utilizing multimodal AI for image analytics. To avoid the possibility of the LLM drawing too much on "memorized" content in the training data, all images in Figure 8 were generated (using an image model, Stable Diffusion XL1.0), except for The Matrix lobby scene still, which was also captured by the author.
While these are all toy examples, scaling up such questions and inquiries to large datasets holds promise of an unprecedented scale of analytics in fields like film studies, art history, visual anthropology, etc. The narrative descriptions of images may not be useful for quantification as such, but illustrate the already available capacities of this class of models, which are able to reason about multilingual multimodal jokes (Figure 8.A) and produce coherent descriptions of realistic scenes as well as abstract visuals. Detecting and segmenting objects on images (cf. Chen et al. 2023b; Kirillov et al. 2023) or inferring art styles and aesthetics (Mao et al. 2017; Karjus et al. 2023) is nothing new as such. What LLMs bring to the table is the ability to "reason" (see Figure 8.A,B) and perform zero-shot classification (8.C). The example results are not necessarily perfect: indeed, Neo is not falling backwards but doing a cartwheel, and it may be a stretch to call any of the dresses pink on the painting in C. These are however early models and will likely improve.
Figure 8: Examples of visual analytics using a multimodal LLM (Microsoft Bing AI, August 2023 version). See the Appendix for the full prompts and outputs which have been truncated here. (A) Prompt: Explain this joke. — Output: The joke is based on a wordplay that involves removing the letter B from the word “bananas” and getting the word “ananas”, which is the French word for ”pineapple”. (B) This is a still from a movie. Analyze it like a film scholar. — The image depicts a scene of violence and conflict, as indicated by the presence of guns, masks, and broken glass. The scene is likely part of an action or thriller genre, which typically features such elements to create tension and excitement for the audience. The scene also suggests a power imbalance between the four men, as three of them are shooting at one, who is falling backwards. The falling man is in a vulnerable and defeated position, while the other three are in a dominant and aggressive stance. This could imply that the falling man is either the protagonist or an ally of the protagonist, who is facing a threat or a challenge from the antagonists. The scene is shot from a low angle, which enhances the sense of drama and danger. /.../ The image is slightly blurry, which suggests fast movement and adds to the feeling of chaos and urgency. The setting of the scene is a gray stone building with columns, which could indicate a public or official place, such as a government office, a bank, or a museum. /.../ The debris and broken glass scattered around the scene add to the sense of destruction and disorder caused by the shootout. (C) Make a nested list of all prominent objects and characters in this painting. — [painting [sky [clouds]] [buildings [roofs] [walls] [windows] [doors]] [street [cobblestones] ] [people [man in black suit and top hat] [woman in blue dress and white hat] [boy in brown shirt and pants] [girl in pink dress and bonnell] [animals [horse] [cart] [chickens] [rabbits]]. (D) Analyze this painting like an art scholar. — This painting is an example of abstract expressionism, a style of art that emerged in the 1940s and 1950s in the United States. The painting does not depict any recognizable objects or scenes, but rather uses colors, shapes, and textures to create a visual impact. The painting has a dynamic and energetic composition, with a burst of colors that radiate from the center. The colors are mostly warm (orange, yellow, red) and cool (blue, green, purple), creating a contrast and a sense of tension. The black and white colors add depth and contrast to the painting, as well as suggesting light and shadow. The painting has a thick and textured surface, with visible brushstrokes that show the movement and force of the artist’s hand. /.../ The possible interpretations of this painting are varied and subjective, depending on the viewer’s perspective and imagination: /.../ an explosion or a firework /.../ a flower or a sun /.../ a galaxy or a star /.../ or the artist’s psyche or emotions.
Discussion
There is no substitute for expert human judgment, reasoning and calibration, when it comes to designing, conducting and interpreting research. However, human time is a bottleneck. With suitable instructable machines, these capacities can be augmented and scaled up to enable research on much larger volumes of data than a human alone could process, enabling researchers to elevate to more representative sample sizes as well as ask new questions that were perhaps not yet feasible in the past. Humanities, social sciences, and other neighbors of philosophy are very well positioned to make use of this opportunity, with long traditions in theory building, qualitative reasoning and expert-knowledge-driven analytics. These are all competencies crucial in the application of a MAMM approach, which requires solid questions or hypotheses, a coding scheme that corresponds well to them, expert-annotated test sets for evaluation, and last but not least meaningful interpretation of the results of quantifying potentially very large datasets.
The quantitizing mixed methods approach, as exemplified by usage feature analysis in linguistics, provides a flexible and replicable framework, as a more rigorous alternative for analyzing qualitative data in a systematic quantitative manner, compared to pseudo-mixed methods, as discussed above. The MAMM is an augmentation of the QMM with machine learning. Here the machines of choice were instructable LLMs as flexible zero-shot classifiers and reasoners -- but any suitable model applies.
Continuing to use any potentially pseudo-mixed designs would thus seem difficult to justify, when objectively more efficient and transparent methods are available. Purely qualitative research naturally has its place; but applying qualitative designs in empirical scenarios, if the actual goal is quantification and extrapolation, can lead to unintentional pseudo-mixed practices and spurious results. Using (and extrapolating based on) small sub-samples is no longer necessary either, as previously time-consuming annotation and analytic tasks can now be delegated to a (suitably expert-instructed) LLM. This all is of course not to say LLMs or ML or AI should be applied to everything everywhere all at once. Allocating research tasks to a machine is rather an optimization problem between monetary resources, human time, and output quality. However, as shown above and in recent literature, using currently available LLMs does not necessarily decrease, and can in some cases even improve upon, output quality compared to human annotators (cf. Gilardi et al. 2023; Tornberg 2023).
### Limitations
#### 4.1.1 Technological limitations
A possible technical factor limiting the applicability of current LLMs is that their instruction-training process typically involves at least some form of censorship to stop the final model from generating harmful or offensive content. The extent of this varies, but it can also hinder using the model as a classifier in valid contexts: if a given input with potentially sensitive content triggers such an adverse reaction, the model may fail to respond or return a refusal message instead of the requested output. However, contingencies for such occasions can be built into the analytic pipeline (see Methods). Current text-centric models are also limited in applicability to multimodal data. For example, natural human communication is inherently multimodal: not just uttered words but gesture, tone and other factors play a role (cf. Rasenberg et al. 2022). This may well improve in the near future however.
As stated in the Introduction, this contribution is limited in scope in terms of prompt optimization or model comparison, which have and are being done elsewhere. To emphasize once more, the case study results should not be considered the upper limit of the accuracy and capacity of current and future LLMs, but the baseline of what is already possible.
#### 4.1.2 Proficiency-based limitations in applying a machine-assisted framework
The SAGE Handbook of Mixed Methods in Social & Behavioral Research (Tashakkori and Teddlie 2010) lists the following hindrances to mixed methods research: "costs of conducting it, unrealistic expectations regarding an individual researcher's competence in both QUAL and QUAN methodology, complexity of putting together teams to carry out such research when groups (rather than individuals) conduct MMR, and (last, but not least) the impossibility of an individual or even a team's examining issues from different perspectives/worldviews."
The same, by extension, applies to the MAMM framework. While zero-code applications may well become available in the future, the low-code pipeline described in the Methods does require some proficiency in a suitable programming
language, and in either using APIs or deploying local models. The quantification step furthermore necessitates a basic understanding of statistics and the skills to conduct modeling in a software package or a programming language like R. There are two options here: either the scholar takes time to learn the basics of programming and statistics, or collaborates with somebody who already has them. However, investment in learning (and teaching students) basic programming and statistics is worthwhile, with the added effect of broadening career prospects.
#### 4.1.3 Other arguments against LLMs as research instruments, and ways forward
One critique leveled against the use of machine learning such as LLMs to annotate data is that they can be unreliable or unreplicable, because their outputs may be stochastic. This can be due to the nature of the underlying neural network model, or because updates to cloud service LLMs may not be well documented and traceable. A related critique is that LLMs, like all trained ML models, can be biased due to some skewed distributions or content in their (in commercial cases, often unknown) training data (Feng et al., 2023). This is more so an issue with closed-source models like GPT-4 where training data and procedures are not fully known. However, as pointed out by Tornberg (2023), these issues are not categorically unique to machines, and also apply to human analysts (and crowd-worker annotators, research assistants). To put it another way, humans too are stochastic and closed source.
Engaging in analytic tasks requiring subjective judgments and reasoning can propagate and amplify biases. There is no way around this in qualitative (and by extension, mixed methods) research. The solution is to be mindful, reflect and acknowledge this, follow good open science practices, and generally strive towards transparency and foster replicability where possible -- regardless if using machine or human analysts. These are unfortunately not yet seen as relevant issues in all fields of H&SS. While qualitative research can only account for uncertainty and bias informally, quantitative approaches can furthermore enable systematic accounting and modeling of biases and other issues (see Methods).
Using open-source LLMs based on well documented training procedures and data is preferable in that it can help with transparency and replicability (cf. Liesenfeld et al., 2023). Running a fixed version of a local model can ease the replication issues that current cloud services may have, if the model is public (in a persistent manner) or can be publicized along with the research. However, this is not always feasible, such as at the time of writing this paper, where the only models capable of working with the smaller languages were the commercial closed-source ones.
One might also criticize using LLMs in research for the fact that using them can cost money -- either in the form of commercial cloud service fees or investments into hardware capable of running the bigger models locally. The ecological footprint of using LLMs has also been raised. Then again, arguably any research activity has costs and a footprint, including hiring a research assistant or crowd-workers to annotate data, or using one's own time -- the most valuable resource -- to complete a given analysis (see also Tomlinson et al., 2023).
One way or another, LLMs are already being used in research, likely in ways also described in this contribution. The cost and effort of running a typical "paper-sized" study has therefore significantly decreased in many disciplines, especially those not requiring experimentation or primary data collection. The writing process is also aided by LLM-based tools like ChatGPT. Anecdotally: the core of a typical usage-based linguistics paper applying feature analysis consists of (in addition to the write-up) the annotation of anywhere around 500-5000 linguistic examples, often sourced from a corpus; a PhD thesis about thrice that. Such a task can now be completed in hours using an LLM (at least at some level of quality). If a discipline (or a journal) allows itself to be flooded by low-effort, low-insight papers, this is bound to eventually erode its reputation and trustworthiness and hinder real progress. Transparent practices and replicability (including in review) have thus never been more important than now, and research evaluation should focus less on volume (as scaling is cheap now) and more on insight and intellectual contribution.
### Future research and opportunities
While the case studies here covered a number of disciplines and task types, this contribution is by no means comprehensive in that regard. Using LLMs and eventual multimodal models as zero-shot classifiers and inference machines holds obvious potential for fields including humanities, social sciences and cultural analytics, which routinely deal with complex textual, visual and otherwise "qualitative" data. As demonstrated in the case studies, already currently readily available LLMs can be plugged into research pipelines for classification and analysis as well as data processing and filtering. As shown, a single LLM prompt can often do an equally good (or better) job as complex, multi-model
preprocessing pipelines -- which obviously were necessary up until very recently, to the point of sometimes being research goals themselves (cf. Chaturvedi et al. 2018; Sherstinova et al. 2022; Ash et al. 2023; Sobchuk and Sela 2023). If a researcher or research group makes regular use of LLMs, it may well make sense to deploy a custom model on in-house hardware or a private cloud, and fine-tune it for their domain and most common use cases, or a set of branching models for specific cases. I would be surprised if that would not become commonplace in the near future.
There are various other domains not considered in the case studies here where machine assistance may be useful. One is experiments employing artificial languages or visual stimuli, as used in psychology, experimental semiotics, cognitive science and linguistics (Kirby et al. 2008; Galantucci et al. 2012; Tamariz and Kirby 2015; Karjus et al. 2021). LLMs could be used to generate stimuli, visual AI models for any desired visual or artistic stimuli, and LLMs can be used to analyze any open-ended responses. LLMs can be used to build the codebase for the website or app used to run the experiment. These are all tasks typically shared within a research team, but allocating some to machines means for example a cognitive scientist no longer needs to act as a full-stack developer, web designer, artist, and analyst, all in one. Speaking of linguistics, one case study that did not make it into this contribution due to being too preliminary consisted of inferring typological categories such as dominant word order based on a small corpus of sentences from an "undocumented" (artificially generated) language. Initial results on GPT-4 were promising, with it being able to reason and infer linguistic categories about a novel "language" that it does not have in its training data.
While a number of domains were covered by the case studies, there were no experiments in areas of law, educational sciences or pedagogy. Like in the cases covered here, empirical data like interviews, observations, practice reports but also laws and regulations etc. could be analyzed in a MAMM framework. In an educational setting, LLMs may be used for assessment and other tasks (Baidoo-Anu and Owusu Ansah 2023; Kasneci et al. 2023). This however requires in turn assessing the performance and suitability of these tools, where the QMM or MAMM is likely applicable. Another scenario where LLMs could be used for annotation or classification is one where the content of the data is potentially harmful, toxic or triggering. Instead of subjecting a crowd-worker or research assistant to the task, it can now be allocated to a machine.
As discussed above, one framework that explicitly relies on (machine-assisted) quantification of qualitative data in the humanities is that of distant reading (Moretti 2013), which typically relies on interpreting word counts or latent topics. Naturally these representations are removed from the nuances of the content itself (the domain of 'close reading'). One critique of Moretti-style distant reading (Ascari 2014) states that its reliance on predefined genre labels and "abstract models that rest on old cultural prejudices is not the best way to come to grips with complexity." The MAMM presents a solution. Instead of operating with broad genre labels or abstract topic models, it is now possible to model texts as distributions or sequences (of theory-driven units) at any chosen level of granularity, while the machine component enables meaningfully processing volumes of text that would be unfeasible for human-only close reading. In that sense, distant reading can now be replaced with machine reading, which embodies the best of both worlds.
Data analysis in the form described here is however not the goal across all H&SS disciplines. For example, a semiotician or philosopher may be more interested in developing concepts, prescriptive frameworks, or discussing possible interpretations and reception of a text. If the research is purely qualitative in that manner, the MAMM framework would not be applicable (unless the design is actually pseudo-mixed, see Introduction). LLMs might still be useful as AI research assistants, for summarizing texts or filtering out contextually complex examples beyond what a keyword search would be capable of.
### Time and efficiency gains
Ziems et al. (2023) suggest that the resources saved from allocating some tasks to LLMs would be put to good use by training expert annotators (or research assistants). This is a good point: let machines do repetitive labor and humans more interesting and meaningful work. The time savings can be considerable. For example, the first case study on newsreels features a modest dataset of 12,707 synopses totaling about 281k words. Assuming a reading speed of 184 wpm (words per minute; average for Russian-language text; Trauzettel-Klosinski et al. 2012), merely reading through that would be over 19 hours of work, with annotation work likely taking as much again. At least a full work week in total. That is assuming the availability of a speaker of Russian who is knowledgeable of the historical Soviet context and able to interpret the various abbreviations and references in the text. Running it through the OpenAI API was a matter of leaving the script running in the background for an hour or so -- yielding results very close to what an expert human
with said qualifications would judge (as evident from the test set accuracy).
The English translation of "Les Misérables" used in the network inference example above is about 558k words, and contains a long list of major and minor characters. Reading through that would take over 40 hours (assuming the English average of 228 wpm), and taking meticulous notes of all pairs of interacting characters in each passage would likely double that. Again easily two weeks of work. Or a few minutes or hours on an LLM.
The Corpus of Historical American English (19-20th century; Davies 2010) is a commonly used resource in historical and computational linguistics (see references in the lexical semantic change case study). While NLP methods have been used to parse the entire corpus to infer e.g. lexical change, reading through its entire 400M words would take a human over 14 years (assuming 250 8h-workdays per year without a lunch break). No English scholar in their right mind would undertake this, so either small samples or aggregation via NLP methods is used. With instructable LLMs, reading, annotating or meaningfully analyzing every single sentence therein is entirely feasible.
One of the largest and most prominent exercises in distant reading is likely still the "Quantitative Analysis of Culture Using Millions of Digitized Books" by Michel et al. (2011). Even just the English segment of their corpus (361B words) would be 13k years of work to read through. While purporting to launch a field of "culturomics", their results were based not on "books" but rather counts of words and phrases aggregated across books. Given a similar dataset, processing it with an LLM in a MAMM framework would indeed take more than a few hours, but would not be impossible, while enabling asking more meaningful questions than word frequencies can provide.
## 5 Conclusions
Building on past mixed methods and linguistics research, this contribution proposed and evaluated a machine-assisted (quantitizing-type) mixed methods framework. Large language models were shown to be a flexible solution for the machine or artificial intelligence component, and were applied to 16 case studies characteristic of humanities and social science research topics. It was shown how both time-consuming human annotation and analytic labor, as well as complex computational pipelines, can be either augmented or substituted with zero-shot learning, without a significant loss in (or even potentially improving) annotation quality. The MAMM framework emphasizes the need for transparency and replicability of both the qualitative and quantitative component, which can be achieved by transparent research practices, rigorous statistical procedures, and following general good open science principles.
## Data and code availability
The data and code are available at
[https://github.com/andreskarjus/MachineAssistedMixedMethods](https://github.com/andreskarjus/MachineAssistedMixedMethods)
The prompts are also listed in the Appendix below.
## Acknowledgments
The author would like to thank Mila Oiva for collaboration on the newsreels topic classification example which became an extended case study in this contribution, providing expertise in interpreting the additional results and feedback; Christine Cuskley for the collaboration on the Twitter paper, one component of which was also used here as an expanded example; Priit Lätti for providing a version of the maritime wrecks dataset, Daniele Monticelli and Novella Tedesco for providing expert evaluation in the English-Italian translation task, Laur Kanger and Peeter Tinits for discussions and for providing the test set for the historical media text filtering task, Oleg Sobchuk and Artjoms Sela for providing a test set for the literary genre detection task, and Tanya Escudero for discussions that led to expanding the literature review. Thanks for useful discussions and feedback go to Vejune Zemaityte, Mikhail Tamm, and Mark Mets. The author is supported by the CUDAN ERA Chair project, funded through the European Union's Horizon 2020 research and innovation program (Grant No. 810961). |
2309.03605 | Virtual segmentation of a small contact HPGe detector: inference of hit
positions of single-site events via pulse shape analysis | Exploring hit positions of recorded events can help to understand and
suppress backgrounds in rare event searching experiments. In this study, we
virtually segment a small contact P-type high purity germanium detector (HPGe)
into two layers. Single-site events (SSEs) in each layer are selected by an
algorithm based on two pulse shape parameters: the charge pulse drift time
($T_{Q}$) and current pulse rise time ($T_{I}$). To determine the shapes and
volumes of the two layers, a Th-228 source is placed at top and side positions
to irradiate the detector. The double escape peak events from 2614.5 keV
$\gamma$-ray are selected as typical SSEs, their numbers in the two layers are
used to calculate the volumes and shapes of those layers. Considering the
statistical and systematic uncertainties, the inner layer volume is evaluated
to be 47.2\%$\pm$0.26(stat.)\%$\pm$0.22(sys.)\% of the total sensitive volume.
We extend our analysis for SSEs in 1400-2100 keV, the spectra of inner layer
events acquired from experimental data using the selection algorithm are in
good agreement with those from the simulation. For sources outside the HPGe
detector, the outer layer can act as a shielding for the inner layer. Selecting
the inner layer as the analysis volume can reduce the external background in the
signal region of Ge-76 neutrinoless double beta (0$\nu\beta\beta$) decay. We
use the Th-228 source to evaluate the background suppression power of the
virtual segmentation. After performing the single and multi-site event
discrimination, the event rate in the 0$\nu\beta\beta$ signal region can be
further suppressed by 12\% by selecting the inner layer as the analysis volume.
The virtual segmentation could be used to efficiently suppress surface
background like electrons from Ar-42/K-42 decay in 0$\nu\beta\beta$ experiments
using germanium detector immersed in liquid argon. | W. H. Dai, H. Ma, Z. Zeng, L. T. Yang, Q. Yue, J. P. Cheng | 2023-09-07T10:00:26Z | http://arxiv.org/abs/2309.03605v1 | Virtual segmentation of a small contact HPGe detector: inference of hit positions of single-site events via pulse shape analysis
###### Abstract
Exploring hit positions of recorded events can help to understand and suppress backgrounds in rare event searching experiments. In this study, we virtually segment a small contact P-type high purity germanium detector (HPGe) into two layers. Single-site events (SSEs) in each layer are selected by an algorithm based on two pulse shape parameters: the charge pulse drift time (\(\mathbf{T_{Q}}\)) and current pulse rise time (\(\mathbf{T_{I}}\)). To determine the shapes and volumes of the two layers, a Th-228 source is placed at top and side positions to irradiate the detector. The double escape peak events from 2614.5 keV \(\mathbf{\gamma}\)-ray are selected as typical SSEs, their numbers in the two layers are used to calculate the volumes and shapes of those layers. Considering the statistical and systematic uncertainties, the inner layer volume is evaluated to be 47.2%\(\pm\)0.26(stat.)%\(\pm\)0.22(sys.)% of the total sensitive volume. We extend our analysis for SSEs in 1400-2100 keV, the spectra of inner layer events acquired from experimental data using the selection algorithm are in good agreement with those from the simulation. For sources outside the HPGe detector, the outer layer can act as a shielding for the inner layer. Selecting the inner layer as the analysis volume can reduce the external background in the signal region of Ge-76 neutrinoless double beta (\(0\mathbf{\nu\beta\beta}\)) decay. We use the Th-228 source to evaluate the background suppression power of the virtual segmentation. After performing the single and multi-site event discrimination, the event rate in the \(0\mathbf{\nu\beta\beta}\) signal region can be further suppressed by 12% by selecting the inner layer as the analysis volume. The virtual segmentation could be used to efficiently suppress surface background like electrons from Ar-42/K-42 decay in \(0\mathbf{\nu\beta\beta}\) experiments using germanium detector immersed in liquid argon.
small contact HPGe, pulse shape analysis, detector segmentation
## 1 Introduction
Small contact high purity germanium (HPGe) detectors are widely used in searching for rare events from physics beyond Standard Model, such as the neutrinoless double beta (\(0\nu\beta\beta\)) decay and dark matter [4, 5, 6, 7]. Those searches need an extremely low background level in the signal region to achieve sufficient sensitivity. The discrimination of background and signal via pulse
shape analysis is a powerful background suppression technique and is widely used in HPGe-based experiments [8, 9, 10, 11].
The energy depositions from \(0\nu\beta\beta\) decay events and dark matter interactions are typically within about a millimeter and are regarded as single-site events (SSEs). Backgrounds can be single-site or multi-site events (MSEs), depending on their origin. Small contact HPGe detectors, such as point contact Ge (PCGe) and broad energy Ge (BEGe), have been demonstrated to have SSE and MSE discrimination capability utilizing pulse shape analysis [3, 9, 10, 11]. After the SSE/MSE discrimination, signals are still mixed with SSE-like backgrounds, such as single Compton scattering of incoming \(\gamma\)-rays or direct energy depositions from beta decay electrons penetrating the surface layer of the detector. Signals are expected to have a uniform distribution in the detector, while the backgrounds tend to be close to the detector surface. Therefore, inference of the SSE position can help to understand and suppress the SSE-like backgrounds.
Previous studies [12, 13, 14] have demonstrated that the charge collection time in a small contact HPGe detector depends on the energy deposition position. Past work [13] has shown that the rise time of the event pulse can be used to estimate the distance of energy deposition from the contact in a PCGe detector. Pulse shape simulation in [12] also showed that the signal shape depends on the interaction position.
This work explores the position discrimination power of a small contact \(p\)-type HPGe detector via pulse shape analysis. The detector is virtually segmented into two layers, and single-site events with hit position in the inner layer are identified. The shape and volume of the inner layer are modeled, determined, and validated in a series of Th-228 irradiation experiments. We also discuss the background suppression potential of this method towards possible application in future \(0\nu\beta\beta\) experiments.
## 2 Experimental setup
The detector used in this work is a small contact \(p\)-type HPGe detector produced by ORTEC. The detector crystal has a height of 42.6 mm and a diameter of 80.0 mm, and the thin \(p+\) contact is about 3.1 mm in diameter and is implemented in a 1 mm deep hole on the bottom surface of the crystal. The \(n+\) surface of the detector crystal, formed by the lithium diffusion, contains an inactive layer and reduces the sensitive mass of the detector. The thickness of the inactive layer is evaluated to be 0.87 mm in our previous work [15]. Subtracting the inactive layer, the total sensitive mass of the detector is 1.052 kg.
As shown in Fig.1, the data acquisition (DAQ) system is based on commercial NIM/VME modules and crates. The detector is operated under 4500 V bias voltage provided by a high voltage module. The output signal from the \(p+\) contact is fed into a resistance-capacitance (RC) preamplifier. The RC-preamplifier provides two identical
Figure 1: Schematic diagram of the DAQ system.
Figure 2: Experimental setup at CJPL.
output signals. One is loaded into a shaping amplifier with a gain factor of 10 and a shaping time of 6 \(\mu\)s. The output of the shaping amplifier and the other output of the RC-preamplifier are fed into a 14-bit 100 MHz flash analog-to-digital converter (FADC) for digitization. The digitized waveforms are recorded by the DAQ software on a PC platform.
A detector scanning device is built in China Jinping Underground Laboratory (CJPL) [16]. As shown in Fig.2, the detector and the liquid nitrogen (LN) Dewar are installed with the scanning device. A Th-228 source with an activity of 500 Bq is mounted on the source holder with a step motor controlling the source position.
## 3 Pulse processing and event discrimination
### Digital pulse processing
Typical pulses from the shaping amplifier and preamplifier are illustrated in Fig.3. After subtracting the baseline, the integration of the shaping amplifier pulse is used to estimate the event energy (as shown in Fig.3(a)). Energy calibration is performed using the measured Th-228 spectrum with characteristic \(\gamma\)-ray peaks from decays of radionuclides in the Th-228 decay chain.
The pulses from the preamplifier are used to estimate the time features of the event (as shown in Fig.3(b)). The charge drift time (\(T_{Q}\)) is defined as the time between the moments when the charge pulse reaches 0.2% and 10% of its maximum amplitude. The current pulse is extracted from the charge pulse by a moving average differential filter, and the current rise time (\(T_{I}\)) is the time between the moments when the current pulse reaches 0.2% and 20% of its maximum amplitude.
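For illustration, the extraction of \(T_{Q}\) and \(T_{I}\) from a digitized preamplifier waveform can be sketched as follows. This is a minimal example rather than the analysis code of this work: the sampling period follows the 100 MHz FADC described in Section 2, while the baseline window, the moving-average width, and all function names are assumptions made for the sketch.

```python
# Minimal sketch of the pulse time parameters; assumes `charge_pulse` is a
# numpy array sampled by the 100 MHz FADC with a baseline-only leading region.
import numpy as np

SAMPLE_NS = 10.0  # 100 MHz FADC -> 10 ns per sample

def crossing_time(pulse, fraction):
    """Time (ns) at which the pulse first exceeds `fraction` of its maximum."""
    threshold = fraction * np.max(pulse)
    return np.argmax(pulse >= threshold) * SAMPLE_NS

def pulse_times(charge_pulse, window=20):
    charge = charge_pulse - np.mean(charge_pulse[:100])   # baseline subtraction (assumed window)
    kernel = np.ones(window) / window                     # moving-average differential filter
    current = np.gradient(np.convolve(charge, kernel, mode="same"))
    t_q = crossing_time(charge, 0.10) - crossing_time(charge, 0.002)    # T_Q: 0.2% -> 10%
    t_i = crossing_time(current, 0.20) - crossing_time(current, 0.002)  # T_I: 0.2% -> 20%
    return t_q, t_i
```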
### Single and multi-site event discrimination
The single/multi-site event discriminator (A/E) is defined as the ratio of the maximum amplitude of the current pulse (A) to the reconstructed energy (E). It has been discussed in the literature [9, 11, 17, 18] that SSEs tend to have higher A/E values than MSEs in a small contact HPGe detector. Therefore, we apply a cut on A/E to select the SSEs. The acceptance region of the A/E cut is determined by the double escape peak (DEP) events from a measured Th-228 spectrum. DEP events are typical SSEs and their A/E distribution is fitted by a Gaussian function to determine the mean (\(\mu_{SSE}\)) and standard deviation (\(\sigma_{SSE}\)) of the A/E parameter for SSEs. As shown in Fig.4, the cut threshold is set to \(\mu_{SSE}-5\sigma_{SSE}\), leading to about 80% survival fraction of DEP events and 9% survival fraction of single escape peak events (typical MSEs).
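A minimal sketch of how the A/E acceptance threshold could be derived from the DEP sample is shown below; `norm.fit` performs an unbinned Gaussian fit and is used here only as a stand-in for the fit of the A/E distribution, and all names are illustrative.

```python
# Sketch of the A/E cut; `aoe_dep` is assumed to be an array of per-event A/E
# values for DEP events after the energy selection.
import numpy as np
from scipy.stats import norm

def aoe_cut_threshold(aoe_dep, n_sigma=5.0):
    mu_sse, sigma_sse = norm.fit(np.asarray(aoe_dep))   # Gaussian mean and width of SSE A/E
    return mu_sse - n_sigma * sigma_sse

def is_single_site(aoe_value, threshold):
    # SSEs tend to have higher A/E than MSEs, so events above the threshold are kept
    return aoe_value >= threshold
```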
Figure 3: (a) an example of shaping amplifier pulse, the blue region indicates the integral of the pulse after subtracting the baseline, and it is used as the energy estimator; (b) an example of smoothed preamplifier pulse and the extracted current pulse. Pulse time parameters \(T_{Q}\), \(T_{I}\), and parameter ”A” in the A/E discriminator are also illustrated. The current pulse is rescaled for demonstration.
Fig.5 shows typical Th-228 spectra before and after the A/E cut. Main characteristic peaks from the Th-228 source and radionuclides in the surrounding materials are labeled. The full-width-at-half-maximum (FWHM) of the double escape peak (1592.5 keV) before (after) the A/E cut is \(2.19\pm 0.05\) keV (\(2.18\pm 0.03\) keV). The FWHM of the 2614.5 keV peak before (after) the A/E cut is \(2.51\pm 0.01\) keV (\(2.46\pm 0.02\) keV). A slight improvement in the energy resolution is observed after the A/E cut.
### Linear and nonlinear event discrimination
The \(T_{Q}\) and \(T_{I}\) distribution of SSEs demonstrates two types of events: events gathered in a rodlike region in Fig.6(a) are referred to as linear events, and other events gathered in a cluster are referred to as nonlinear events. As shown in Fig.6, the charge drift time (\(T_{Q}\)) and a linearity index (\(L\)) are used to discriminate the linear and nonlinear events. The linearity index is defined as:
\[L=T_{I}-\left(k\times T_{Q}+b\right), \tag{1}\]
where fit parameters \(k\) and \(b\) are calculated via fitting \(T_{Q}\) and \(T_{I}\) of typical linear events with the function (\(T_{I}=k\times T_{Q}+b\)). First, initial values of fit parameters (\(k_{0}\) and \(b_{0}\)) are calculated by fitting events with \(T_{Q}\) and \(T_{I}\) below 500 ns. Then events with linearity \(L=T_{I}-\left(k_{0}\times T_{Q}+b_{0}\right)\) in [-50, 50] ns are fitted to give the final value of \(k\) and \(b\). As shown in Fig.6(b), the distribution of linearity index \(L\) is fitted with two Gaussian functions corresponding to linear and nonlinear events, respectively. The cut limit is set to (\(\mu_{L,linear}-3\sigma_{L,linear}\)), where \(\mu_{L,linear}\) and \(\sigma_{L,linear}\) are the mean and standard deviation of \(L\) distribution for linear events. The distribution of \(T_{Q}\) for nonlinear events selected by linearity index \(L\) is fitted with a Gaussian function, and the cut limit is set to (\(\mu_{T,linear}-3\sigma_{T,linear}\)), where \(\mu_{T,linear}\) and \(\sigma_{T,linear}\) are the mean and standard deviation of \(T_{Q}\) distribution for nonlinear events as shown in Fig.6(c). The red dashed line in Fig.6(a) shows the discrimination limit set by the linearity index \(L\) and the charge drift time \(T_{Q}\).
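The two-pass straight-line fit and the linearity index of Eq.(1) can be sketched as follows; `np.polyfit` stands in for the straight-line fit, the 500 ns pre-selection and the [-50, 50] ns refit window follow the text, and the final classification against the cut limits shown in Fig.6 is then a simple threshold comparison on \(L\) and \(T_{Q}\).

```python
# Minimal sketch of the linearity fit; `t_q` and `t_i` are per-event arrays (ns)
# of single-site DEP events.
import numpy as np

def fit_linearity(t_q, t_i):
    seed = (t_q < 500) & (t_i < 500)                 # events used for the initial fit
    k0, b0 = np.polyfit(t_q[seed], t_i[seed], 1)
    refit = np.abs(t_i - (k0 * t_q + b0)) < 50       # events with L in [-50, 50] ns
    k, b = np.polyfit(t_q[refit], t_i[refit], 1)
    return k, b

def linearity_index(t_q, t_i, k, b):
    """Eq.(1): L = T_I - (k * T_Q + b)."""
    return t_i - (k * t_q + b)
```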
## 4 Detector segmentation model
### Demonstration of spatial distribution of linear and nonlinear events via pulse shape simulation
We perform a pulse shape simulation (PSS) for the HPGe detector to demonstrate the spatial distribution of the linear and nonlinear events. The electric field and weighting potential field in the detector are calculated using the \(mjd\_fieldgen\) package [19], assuming a linear impurity profile
Figure 4: A/E distributions of DEP and SEP events in Th-228 calibration data. The dashed line is the A/E cut threshold (\(\mu_{SSE}-5\sigma_{SSE}\)).
Figure 5: Typical Th-228 spectra before and after the A/E cut. The characteristic peaks from decay daughters of Th-228 (Tl-208, Bi-212) and other radionuclides (K-40, and Bi-212) are labeled in the spectra. The double-escape peak (DEP) of Tl-208 2614.5 keV \(\gamma\)-ray is marked in red.
in the Z-direction with an impurity density of \(3.7\times 10^{9}\) cm\({}^{-3}\) and \(8.0\times 10^{9}\) cm\({}^{-3}\) at the top and bottom surfaces of the crystal. SSEs with 1 MeV energy deposition are placed at different positions in the crystal. The corresponding charge pulses are calculated via the SAGE-PSS package [20], and electronic noise extracted from measured pulses is added to them.
Fig.7 demonstrates the \(T_{Q}\) and \(T_{I}\) as a function of the interaction position. As shown in Fig.7(a) and (b), SSEs close to the \(p+\) contact have shorter \(T_{Q}\) and \(T_{I}\). With increasing distance to the contact, the \(T_{Q}\) and \(T_{I}\) of induced pulses increase simultaneously, for instance, for SSE-3 and SSE-4. These events are typical linear events in Fig.7(c). However, when SSEs are near the top and side surfaces of the detector, their \(T_{Q}\) and \(T_{I}\) are not sensitive to their positions. Those SSEs, such as SSE-1 and SSE-2, are typical nonlinear events. This can be explained by the Shockley-Ramo theory [21]: when SSEs deposit energy near the outer surface of the detector, the induced charge and
Figure 6: Discrimination of linear and nonlinear events. Data in the figure are from DEP events (1592.5\(\pm\)5 keV, after A/E cut) in a Th-228 calibration experiment (source placed at the center of detector top surface). (a) Distribution of \(T_{Q}\) and \(T_{I}\). The blue dashed line is the fitted linear function of \(T_{Q}\) and \(T_{I}\). Red dashed line is the cut limit for inner layer events; (b) Histogram of event linearity index \(L\), and the Gaussian fit of linear (blue line) and nonlinear (red line) events; (c) \(T_{Q}\) Histogram for nonlinear events selected by \(L\) cut in (b). The black dashed lines in (b) and (c) are the cut limit for inner layer events.
Figure 7: Pulse shape simulation for SSEs in different positions of the detector. (a) Charge drift time (\(T_{Q}\)) for SSE as a function of the interaction position; (b) Current rise time (\(T_{I}\)) for SSEs as a function of the interaction position; (c) Distribution of \(T_{Q}\) and \(T_{I}\) for pulses in (a) and (b), those events are gathered in two clusters with a linear and nonlinear relationship between \(T_{Q}\) and \(T_{I}\). Red crosses mark the positions of four selected SSEs.
current pulses will not exceed 0.2% of their maximum amplitude while the charge carriers drift in the weak electric and weighting potential field region near the surface. Therefore, the \(T_{Q}\) and \(T_{I}\) of those SSEs are not sensitive to the energy deposition position.
### Parameterized segmentation model
According to the pulse shape simulation, the linearity between \(T_{Q}\) and \(T_{I}\) of an SSE can be used to infer its hit position. We segment the detector into two layers according to the positions of linear and nonlinear SSEs. The boundary between the two layers is related to the electric and weighting potential fields of the detector. Due to the lack of precise knowledge of the impurity profile within the Ge crystal, we cannot rely on the PSS to calculate the shapes of the two layers and only take it as a reference. Therefore, we take an empirical approach and build a segmentation model with 14 parameters to describe the boundary.
As shown in Fig.8, the boundary of the inner layer is the linear connection of 8 spatial points. It is worth noting that the number of spatial points in the model is arbitrary, and it will be demonstrated later that the 8-point model is sufficient for this study. Table.1 lists the bounds for each model parameter. As the model only requires the two layers to be continuous, the first spatial point \((r_{1},z_{1})\) could be on the top surface or the central axis. To determine the values of the model parameters, we design and conduct a Th-228 scanning experiment.
## 5 Optimization of segmentation model parameters
### Th-228 source scanning experiment
A Th-228 source is used to perform a scan of the detector top and side surfaces at 19 different positions as shown in Fig.9. A background measurement is also conducted for the detector.
Events in the DEP region (1592.5\(\pm\)5 keV) are selected as SSE candidates. After removing MSEs by the A/E cut, the linear events in the remaining SSEs are selected using the method in Sec 3.3. The ratio of linear events from the Th-228 source (\(R_{L,DEP}\)) is then calculated by:
\begin{table}
\begin{tabular}{c c} \hline \hline Parameter & Parameter bound \\ \hline \((r_{1},z_{1})\) & \(r_{1}=0\), \(0<z_{1}<H\) \\ & or \(z_{1}=H\), \(0<r_{1}<R\) \\ \((r_{2},z_{2})\) & \(r_{1}\leq r_{2}\), \(z_{2}\leq z_{1}\) \\ \((r_{3},z_{3})\) & \(r_{2}\leq r_{3}\), \(z_{3}\leq z_{2}\) \\ \((r_{4},z_{4})\) & \(r_{3}\leq r_{4}\leq R\), \(z_{4}\leq z_{3}\) \\ \((r_{5},z_{5})\) & \(r_{5}\leq R\), \(z_{5}\leq z_{4}\) \\ \((r_{6},z_{6})\) & \(r_{6}\leq r_{5}\), \(z_{6}\leq z_{5}\) \\ \((r_{7},z_{7})\) & \(r_{7}\leq r_{6}\), \(z_{7}\leq z_{6}\) \\ \((r_{8},z_{8})\) & \(0\leq r_{8}\leq r_{7}\), \(z_{8}=0\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Bounds for segmentation model parameters, \(R\) and \(H\) are the radius and height of the Ge crystal.
Figure 8: Parameterized segmentation model of the detector, where \(H\) and \(R\) are the height and radius of the crystal. The top spatial point \((r_{1},z_{1})\) could be on the top surface (\(z_{1}=H\)) or on the central axis (\(r_{1}=0\)) of the crystal. The green shadow region is the inner layer in the segmentation model, and the gray shadow is the inactive layer in the \(n+\) surface.
\[R_{L,DEP}=\frac{N_{L,S}-N_{L,B}\cdot t_{S}/t_{B}}{N_{T,S}-N_{T,B}\cdot t_{S}/t_{B}}, \tag{2}\]
where \(N_{T,S}\) and \(N_{T,B}\) are the total numbers of selected single-site DEP events in the Th-228 and background measurements, respectively. \(N_{L,S}\) and \(N_{L,B}\) are the numbers of selected linear events. \(t_{S}\) and \(t_{B}\) are the live times of the source and background measurements. The uncertainty of \(R_{L,DEP}\) is calculated by propagating the Poisson uncertainties of event counts in the Th-228 and background measurements through Eq.(2). Fig.10 shows the linear event ratio of SSEs in the DEP region as a function of Th-228 source positions. The \(R_{L,DEP}\) decreased from 33.3% to 24.0% as the source moved from the top center to the edge of the detector. A change of about 2.9% in \(R_{L,DEP}\) is observed when moving the source along the detector side surface.
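A direct numerical transcription of Eq.(2) and of this uncertainty propagation is given below; it treats the four event counts as independent Poisson variables, which is an assumption of the sketch.

```python
# Sketch of Eq.(2): background-subtracted linear event ratio and its uncertainty.
import numpy as np

def linear_event_ratio(n_l_s, n_l_b, n_t_s, n_t_b, t_s, t_b):
    r = t_s / t_b
    num = n_l_s - n_l_b * r            # background-subtracted linear events
    den = n_t_s - n_t_b * r            # background-subtracted single-site DEP events
    ratio = num / den
    var = (n_l_s + r**2 * n_l_b) / den**2 + num**2 * (n_t_s + r**2 * n_t_b) / den**4
    return ratio, np.sqrt(var)
```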
### Spatial distribution of DEP events
As the linear events are located in the inner layer of the segmentation model, the linear event ratio \(R_{L,DEP}\) can be modeled by:
\[R_{L,DEP}=\iint M(r,z\mid\theta)F_{DEP}(r,z)\cdot\mathrm{d}r\mathrm{d}z, \tag{3}\]
\[M(r,z\mid\theta)=\begin{cases}1\ (r,z)\in\mathrm{inner\,layer}\\ 0\ (r,z)\in\mathrm{outer\,layer}\end{cases}, \tag{4}\]
where \(M(r,z\,|\,\theta)\) is the selection function for inner layer events based on the segmentation model, \(\theta\) represents the model parameters in Table.1, and \(F_{DEP}(r,z)\) is the spatial distribution of SSEs in the DEP region. The energy deposition of \(\gamma\)-rays emitted by the Th-228 source is simulated by Geant4 [22]. Energy depositions occurring in the inactive layer of the detector are not recorded in the simulation. The
Figure 11: \(\delta_{D}\) histogram for simulated DEP events with the Th-228 source is placed at the center of the top detector surface.
Figure 10: Ratio of the linear event in selected DEP events as a function of Th-228 source positions. Error bars indicate the 1\(\sigma\) uncertainty.
Figure 9: Schematic of Th-228 source positions in calibration experiments. The red points indicate the position of the Th-228 source. The red, blue, and green dashed boxes mark the selected measurements for sub-datasets in the uncertainty assessment. The Th-228 source is mounted on a source holder. The carbon fiber vacuum cryostat and the copper crystal holder are also shown.
single-site events are selected by the \(\delta_{D}\) parameter. \(\delta_{D}\) is the average distance from the energy deposition points to the charge center of the event:
\[\delta_{D}=\frac{1}{n}\sum_{i=0}^{n}\sqrt{(x_{i}-\hat{x})^{2}+(y_{i}-\hat{y})^{2 }+(z_{i}-\hat{z})^{2}}, \tag{5}\]
\[\hat{x}=\sum_{i=0}^{n}x_{i}\frac{E_{i}}{E_{tot}},\hat{y}=\sum_{i=0}^{n}y_{i} \frac{E_{i}}{E_{tot}},\hat{z}=\sum_{i=0}^{n}z_{i}\frac{E_{i}}{E_{tot}}, \tag{6}\]
where \(n\) is the number of steps in one event, \((x_{i},y_{i},z_{i})\) and \(E_{i}\) are the hit position and energy deposition of the i-th step. \((\hat{x},\hat{y},\hat{z})\) and \(E_{tot}\) are the charge center and total energy deposition of the event. Events with \(\delta_{D}<\delta_{D,SSE}\) are selected as SSEs, where \(\delta_{D,SSE}\) is determined by matching the survival fraction of DEP events in simulation with that of the A/E cut in the experiment. Fig.11 demonstrates a typical \(\delta_{D}\) distribution of simulated DEP events when the Th-228 source is at the top center of the detector. The charge center of the selected SSE is then used to simulate the spatial distribution \(F_{DEP}(r,z)\). Fig.12 shows the simulated \(F_{DEP}(r,z)\) for the Th-228 source at two different positions.
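For one simulated event, Eqs.(5) and (6) amount to the following short computation; the array names are illustrative and correspond to the per-step hit positions and energy depositions of the Geant4 output.

```python
# Sketch of the delta_D parameter for a simulated event.
import numpy as np

def delta_d(positions, energies):
    positions = np.asarray(positions, dtype=float)             # (n, 3) step positions
    energies = np.asarray(energies, dtype=float)               # per-step energy depositions
    center = np.average(positions, axis=0, weights=energies)   # charge center, Eq.(6)
    return float(np.mean(np.linalg.norm(positions - center, axis=1)))

def is_simulated_sse(positions, energies, delta_d_sse):
    # delta_d_sse is tuned so that the simulated DEP survival fraction matches
    # that of the experimental A/E cut
    return delta_d(positions, energies) < delta_d_sse
```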
### Optimization of model parameters
As shown in Fig.12, the position of the Th-228 source affects the spatial distribution of DEP events and therefore leads to different observed linear event ratios in Fig.10. Thus, we use a minimum-\(\chi^{2}\) method to calculate the model parameters (\(\theta\)), in which \(\chi^{2}\) is defined as:
\[\chi^{2}=\sum_{k=1}^{19}\frac{\left(R_{k,exp}-\iint M(r,z\mid\theta)F_{DEP,k}(r,z)\mathrm{d}r\mathrm{d}z\right)^{2}}{\sigma_{k}^{2}}, \tag{7}\]
where \(R_{k,exp}\) is the measured linear event ratio for the Th-228 source at position \(k\) (\(k\)=1,2,...19), \(\sigma_{k}\) is the corresponding uncertainty of \(R_{k,exp}\), and \(F_{DEP,k}(r,z)\) is the simulated spatial distribution of single-site DEP events for the Th-228 source at position \(k\). The minimization of \(\chi^{2}\) is implemented with a genetic algorithm using the Python-based package Geatpy [23]. Fig.13 shows the optimized results. The volume of the inner layer is 47.2% of the total sensitive volume of the detector. The linear event ratios calculated by Eq.3 using the optimized model parameters are shown in Fig.14. The fit result agrees well with the measurements; the \(p\)-\(value\) of the \(\chi^{2}\) fit is 0.701.
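The \(\chi^{2}\) of Eq.(7) can be sketched as below. The inner layer boundary is treated as a closed polygon in the \((r,z)\) plane and a point-in-polygon test replaces the integral, with the simulated charge centers of single-site DEP events standing in for \(F_{DEP,k}(r,z)\); the use of matplotlib's `Path` and of a generic global optimizer are assumptions of the sketch (the actual optimization uses Geatpy).

```python
# Sketch of the chi-square objective of Eq.(7).
import numpy as np
from matplotlib.path import Path

def chi_square(boundary_points, measured_ratios, ratio_errors, sim_rz_per_position):
    """boundary_points: (n, 2) array of (r_i, z_i) vertices of the inner layer."""
    polygon = Path(np.vstack([boundary_points, boundary_points[:1]]))   # closed boundary
    chi2 = 0.0
    for r_exp, sigma, rz in zip(measured_ratios, ratio_errors, sim_rz_per_position):
        inside = polygon.contains_points(rz)      # rz: (N_events, 2) simulated charge centers
        r_model = float(np.mean(inside))          # Monte Carlo estimate of the integral
        chi2 += (r_exp - r_model) ** 2 / sigma ** 2
    return chi2
```

The minimization over the boundary points, subject to the ordering constraints of Table 1, can then be delegated to any global optimizer.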
## 6 Uncertainty assessment and model validation
Uncertainties of the shape and volume of the inner layer in the optimized model mainly consist of three parts:
1. Uncertainty of the linear event ratio (\(R_{L,DEP}\)) propagated by the \(\chi^{2}\)-method is evaluated using
Figure 12: Spatial distribution of simulated SSEs in DEP region. (a) Th-228 source in the center of the top surface; (b) Th-228 source on the side of the detector. The labels of the color bar represent the distribution density (arbitrary unit).
a toy Monte Carlo method. 3000 Monte Carlo datasets are generated assuming a Gaussian distribution for the \(R_{L,DEP}\) with the mean and standard deviation equal to the measured value and uncertainty, respectively. Model parameters are recalculated for each dataset following the same analysis in Sec 5.3. The distribution of inner layer shapes and volumes for the 3000 samples are illustrated in Fig.15. The distribution of inner layer volume is fitted with a Gaussian function, and the standard deviation, \(\pm 0.26\%\), is adopted as the statistical uncertainty.
2. Systematic uncertainty due to the choice of dataset: we divide the measured data in Fig.10 into three sub-datasets. Sub-datasets I and II each consist of ten measurements (marked by red dashed boxes for sub-dataset I and blue dashed boxes for sub-dataset II in Fig.9). Sub-dataset III consists of six measurements (green dashed boxes in Fig.9). The fitting of model parameters is performed on each sub-dataset, and the largest difference in inner layer volume between the sub-datasets and the full dataset (Fig.16(a)), \(\pm 0.22\%\), is taken as a systematic uncertainty.
3. Systematic uncertainty due to the construction of the segmentation model: we reconstruct the segmentation model using 6 spatial points (10 free parameters) and 10 spatial points (18 free parameters) and calculate the model parameters using the full dataset. Fig.16(b) shows the optimized results for the reconstructed models. The overall shape and volume of the inner layer are similar in the three models, and the largest difference in inner layer volume is 0.02%, which is about 10 times smaller than the other two uncertainties and thereby negligible. This indicates the 8-point segmentation model is sufficient in this study.
Figure 16: (a) Optimized results using different datasets, full dataset (black line) consists of all measured data, sub-dataset I, II, III are selected from the full dataset. (b) Optimized results for three different models, the chi-square (\(\chi^{2}\)) and \(p\)-\(value\) are given to demonstrate the fit goodness of each model. The gray shadow regions in both figures are the inactive layer on the detector \(n+\) surface.
Figure 15: (a) Inner layer shapes of the 3000 Monte Carlo datasets. The green, yellow, and blue shadow bands are corresponding to 68%, 95%, and 99.7% quantiles, respectively. The gray shadow is the inactive layer on the \(n+\) surface. (b) Distribution of inner layer volumes. The red line is the fit of inner layer volumes using a Gaussian function, \(\mu\) and \(\sigma\) are the mean and standard deviation, respectively.
uncertainties in the simulation. In this case, the systematic uncertainty is taken as the discrepancy between linear event ratios corresponding to the innermost and outermost shapes of the 68% quantile of the inner layer (the green region in Fig.15(a)). Fig.17(b) is the comparison of measured and simulated spectra; it demonstrates that the \(\delta_{D}\) cut in the simulation is a good approximation for the A/E cut, and the spectra of inner layer events also show a good agreement between the simulation and measurement in the 1400-2100 keV energy region.
## 7 Background suppression performance of virtual segmentation
In the search for Ge-76 \(0\nu\beta\beta\) decay using HPGe detectors, backgrounds, mostly \(\gamma\)-rays and electrons from outside the detector, have to penetrate the outer layer of the detector to deposit their energy in the inner layer. Thus, the outer layer in the virtual segmentation could act as a shielding for the inner layer, and a lower background level of the inner layer may improve the detection sensitivity.
We use the Th-228 scanning data to evaluate the background suppression power of the virtual segmentation. The count rates in spectra are normalized to unit sensitive mass to include the mass loss due to the analysis volume selection. The masses of the detector are 1.052 kg and 0.496 kg for the total sensitive volume and the inner layer, respectively. Fig.18 demonstrates spectra before and after the A/E cut and inner layer event selection when the Th-228 source is placed on the side of the detector. First, the whole detector is selected as the analysis volume and the A/E cut is applied to remove multi-site events (gray and blue regions in Fig.18). Then the inner layer of the virtual segmentation is selected as the analysis volume; a further reduction in the event rate is shown in Fig.18 (red region). It is expected that the SSEs mostly come from the single Compton scattering of high energy \(\gamma\)-rays emitted from the source and are clustered near the surface of the detector. Therefore, the inner layer has a lower background level in the detector.
Fig.19 shows the event rate in the \(0\nu\beta\beta\) signal region (1900-2100 keV) as a function of the Th-228 source positions. The highest background suppression power is achieved when the Th-228 source is at the side of the detector. In this case, the A/E
Figure 17: Comparison of simulation and experiment for Th-228 source placed on the side of the detector. (a) The linear event ratio as a function of energy, The uncertainty band for simulation (the green shadow) consists of uncertainty from the inner layer shape (68% quantile region in Fig.15(a)) and statistical uncertainty in simulation. The normalized residuals are shown in the bottom figure, (b) Measured and simulated spectra in 1400-2100 keV region.
cut reduces the event rate by 62%, and the virtual segmentation yields a further reduction of 12% on the basis of the A/E cut.
In future \(0\nu\beta\beta\) experiments using small contact HPGe detectors, this method might be used to further suppress background in the signal region. This is especially relevant for experiments using a liquid argon (LAr) veto system in which the HPGe detectors are directly immersed in LAr, such as the GERDA [1], LEGEND [24], and CDEX-300\(\nu\) [3] experiments. The background from K-42 (daughter of cosmogenic Ar-42 in LAr) beta decay is mainly located near the surface of the detector and therefore might be suppressed if the inner layer is selected as the analysis volume. It should be noted that the balance between a lower background and the loss in detector sensitive mass should be considered in the search for the \(0\nu\beta\beta\) signal.
Furthermore, the discrepancy between the inner and outer layer SSE spectra could be used to infer the location of the background source. A more precise background model could be built by fitting the spectra of events in the inner and the outer layer simultaneously.
## 8 Summary
In this study, we develop a virtual segmentation model for a small contact HPGe detector and demonstrate its background suppression capability in the Ge-76 \(0\nu\beta\beta\) signal region. The HPGe detector is virtually segmented into two layers, and a selection algorithm based on charge pulse drift time (\(T_{Q}\)) and current rise time (\(T_{I}\)) is established to identify the position of the single-site event. The shape and volume of the inner layer in the segmentation model are determined using the DEP events in a series of Th-228 source calibration experiments. The volume of the inner layer is evaluated to be 47.2%\(\pm\)0.26(stat.)%\(\pm\)0.22(sys.)% of the total sensitive volume of the detector.
The background suppression power of the virtual segmentation in Ge-76 \(0\nu\beta\beta\) signal region is evaluated by the Th-228 scanning data. Choosing the inner layer as the analysis volume, a further 12% reduction of background is achieved when the Th-228 source is on the side of the detector. Other backgrounds in the \(0\nu\beta\beta\) signal region, especially those clustered on the surface of the detector, such as Ar-42 in future \(0\nu\beta\beta\) experiments, could also be reduced by the virtual segmentation.
The principle of the virtual segmentation can be extended to other small contact HPGe detectors, for instance, point-contact Ge (PCGe) and broad energy Ge (BEGe) detectors.
Figure 19: Event rate in \(0\nu\beta\beta\) signal region (1900-2100 keV) as a function of Th-228 source position. The left and right figures show the event rate for the Th-228 source placed on the top and side surface of the detector, respectively.
Figure 18: Measured spectra for the Th-228 source on the side surface of the detector. cpkkd represents counts per kg per keV per day, \(Q_{\beta\beta}\) is the energy of Ge-76 \(0\nu\beta\beta\) signal.
## Acknowledgments
This work was supported by the National Key Research and Development Program of China (Grant No. 2022YFA1604701) and the National Natural Science Foundation of China (Grants No. 12175112). We would like to thank CJPL and its staff for supporting this work. CJPL is jointly operated by Tsinghua University and Yalong River Hydropower Development Company.
|
2309.15067 | Logic Locking based Trojans: A Friend Turns Foe | Logic locking and hardware Trojans are two fields in hardware security that
have been mostly developed independently from each other. In this paper, we
identify the relationship between these two fields. We find that a common
structure that exists in many logic locking techniques has desirable properties
of hardware Trojans (HWT). We then construct a novel type of HWT, called
Trojans based on Logic Locking (TroLL), in a way that can evade
state-of-the-art ATPG-based HWT detection techniques. In an effort to detect
TroLL, we propose customization of existing state-of-the-art ATPG-based HWT
detection approaches as well as adapting the SAT-based attacks on logic locking
to HWT detection. In our experiments, we use random sampling as reference. It
is shown that the customized ATPG-based approaches are the best performing but
only offer limited improvement over random sampling. Moreover, their efficacy
also diminishes as TroLL's triggers become longer (i.e., have more bits
specified). We thereby highlight the need to find a scalable HWT detection
approach for TroLL. | Yuntao Liu, Aruna Jayasena, Prabhat Mishra, Ankur Srivastava | 2023-09-26T16:55:42Z | http://arxiv.org/abs/2309.15067v1 | # Logic Locking based Trojans: A Friend Turns Foe
###### Abstract
Logic locking and hardware Trojans are two fields in hardware security that have been mostly developed independently from each other. In this paper, we identify the relationship between these two fields. We find that a common structure that exists in many logic locking techniques has desirable properties of hardware Trojans (HWT). We then construct a novel type of HWT, called Trojans based on Logic Locking (TrollL), in a way that can evade state-of-the-art ATPG-based HWT detection techniques. In an effort to detect TroLL, we propose customization of existing state-of-the-art ATPG-based HWT detection approaches as well as adapting the SAT-based attacks on logic locking to HWT detection. In our experiments, we use random sampling as reference. It is shown that the customized ATPG-based approaches are the best performing but only offer limited improvement over random sampling. Moreover, their efficacy also diminishes as TroLL's triggers become longer (_i. e._ have more bits specified). We thereby highlight the need to find a scalable HWT detection approach for TroLL.
Logic Locking, Hardware Trojans, ATPG +
Footnote †: publicationid: pubid: 978-1-6654-3274-0/21/$31.00 © 2021 IEEE
## I Introduction
The fact that most chip designers outsource the production of their chips to off-shore foundries raises concerns about the privacy of the chip's intellectual property (IP) and the integrity of the fabrication process. There has been a significant amount of research in both topics. For IP protection, numerous design obfuscation techniques have been proposed to mitigate attacks such as counterfeiting and over production, among which logic locking is by far the most prominent and well-studied class of protection techniques [1]. Logic locking adds key inputs and key-controlled gates into the circuit to make the locked circuit's functionality key-dependent. As the correct key is not known to the untrusted foundry, neither is the correct functionality, and hence the privacy of the design is preserved. Pertaining to the integrity of fabrication, the term Hardware Trojans (HWT) is often used to describe stealthy malicious modifications in the design. Logic locking and HWT's have been studied mostly independently so far. Apart from some studies on how to use design obfuscation to prevent HWT insertion [2, 3, 4, 5] and how to compromise obfuscation with HWT's [6], little attention was paid to the relationship between logic locking and HWT's. In this work, we will discuss how to utilize logic locking techniques to construct novel HWT's, and how to convert attacks against design obfuscation to HWT detection techniques. The contribution of this work is as follows.
* We analyze a class of state-of-the-art logic locking techniques and highlight that their infrastructure can be viewed as a composition of a modification unit (MU) and a restore unit (RU).
* While state-of-the-art Trojans are triggered based on rare events, we propose a fundamentally different way of designing Trojans based on Logic Locking (TroLL) by inserting only the MU into the design (equivalent to dropping the RU from a locked design).
* We propose evolved versions of existing ATPG-based HWT detection approaches to account for TroLL's trigger selection strategy.
* We adapt the Boolean satisfiability (SAT)-based attack on logic locking to the detection of both TroLL and conventional HWT's.
* Experimental results demonstrate that TroLL is much more resilient to existing state-of-the-art ATPG-based HWT detection approaches including statistical test generation [7] and maximum clique sampling [8]. While they can detect nearly all conventional HWT's, their efficacy drops drastically for TroLL. In comparison, the evolved ATPG-based approaches and SAT-based detection perform better on TroLL without sacrificing the efficacy on conventional HWT's. However, the percentage of TroLL detected still drops drastically as the trigger length increases no matter which detection approach is used.
The rest of this paper is organized as follows. In Section II, we introduce the technical background of hardware Trojans and logic locking. Section III presents our analysis of state-of-the-art logic locking techniques and the construction of TroLL. The evolved ATPG-based detection approaches and the adaptation of SAT-based attacks on logic locking to HWT detection is formulated in Section IV. In Section V, we present the experiment details on TroLL and the results on detecting TroLL with approaches based on ATPG, SAT, and random sampling. Lastly, we conclude the paper in Section VI.
## II Background and Related Work
In this section, we provide relevant background and survey related efforts in three broad categories. First, we describe the working principle of hardware Trojans. Next, we survey existing test generation efforts for detection of hardware Trojans. Finally, we provide an overview of logic locking techniques.
### _Hardware Trojans_
Hardware Trojans (HWT) are stealthy malicious modifications to hardware designs. HWT's usually consist of two components, trigger and payload. The trigger is a condition that activates the HWT, and the payload is the effect of the HWT once activated. The trigger condition can be determined by the circuit's input and/or internal signals in the original design. The HWT payload can have various possible effects, including functionality corruption [9], information leakage [10, 11, 12], denial-of-service [13], bypass of security properties [14], etc. An illustration of an HWT-infested circuit is given in Fig. 1 where the relationship between the original design and the HWT's trigger and payload is shown.
HWT's can be inserted in almost any phase in the VLSI hardware supply chain, including in third-party IP cores, by a malicious CAD tool, by a rogue designer, by an untrusted fabrication facility, etc. [15, 16]. The HWT's inserted before the fabrication stage are present in the design files (_e.g._ RTL and/or gate-level netlists). Therefore, it is possible to use formal methods, such as logic equivalence checking, to tell whether an HWT exists [17, 18]. However, for HWT's inserted by the foundry, the netlist of the HWT-infested circuit is not available to the designer. Some researchers have proposed to use reverse engineering to obtain the layout of the HWT-suspicious chip [19, 20]. However, IC reverse engineering is increasingly expensive and error-prone as the technology node scales down [21], and there is no report of successful reverse engineering of any chip with a technology node below 14nm to the best of our knowledge. Hence, testing is still the most practical way to detect HWT's inserted by untrusted foundries. Besides, testing-based methods are also applicable to HWT's inserted by IP providers, CAD tools, rogue designers, etc. The state-of-the-art automatic test pattern generation (ATPG) approaches for HWT detection will be introduced in Section II-B.
### _ATPG-based HWT Detection_
Both combinational and sequential HWT triggering mechanisms have been proposed in the literature. However, since the designer likely has access to testing infrastructure that allows the circuit to be tested combinationally (_e.g._ scan-chain), a sequential HWT trigger can be broken down into a series of combinational ones. We hence focus on combinational HWT triggers in this work. State-of-the-art combinational HWT insertion methodology utilizes rare signals (_i.e._ an internal node's value that is functionally difficult to sensitize) as the trigger, ensuring that the HWT is only triggered in rare circumstances [22, 23]. Based on this property, many HWT detection methods have been developed based on ATPG principles. Existing approaches explored two complementary directions when dealing with test generation for activation of rare signals: 1) statistical test generation, and 2) directed test generation. A promising avenue for statistical test generation is to rely on \(N\)-detect principle [24] by activating each rare signal \(N\) times to increase the statistical likelihood of activating the unknown trigger in the HWT. MERO [7] tries to generate test vectors to activate the same rare signal \(N\) times by flipping input vector bits one at a time. Saha _et al._ improved the test generation performance using genetic algorithm and Boolean satisfiability [25]. Pan _et al._ improved the performance further by flipping bits using reinforcement learning [26].
While \(N\)-detect methods try to activate one rare signal at a time, Lyu _et al._ focused on activating as many rare nodes as possible using maximal clique sampling (TARMAC [8]). TARMAC first creates the satisfiability graph for all the rare signals. In this graph, each vertex stands for a rare signal, and there is an edge connecting two vertices if and only if there exists an input pattern that sensitizes the two rare signals simultaneously. Next, the maximal cliques of the satisfiability graph are computed. Finally, TARMAC generates tests to activate a randomly sampled set of maximal cliques. If any of the generated tests is able to activate the trigger, the HWT will be detected.
### _Logic Locking_
Logic locking has emerged as a protection mechanism against potential piracy and overbuilding threats in untrusted fabrication facilities. These techniques obfuscate the hardware by adding key inputs into the circuit without disclosing the correct key to the fab. Hence, the fab will not know the full functionality of the design. When the fabrication is done, the chip designer (or a trusted third party) will provide the key to the chip by connecting a piece of tamper-proof memory. This process is called _activation_. This way, only the authorized users will have access to an _activated chip_ which has the correct functionality.
There have been many attacks formulated against logic locking, among which the ones based on Boolean satisfiability theory, a.k.a. SAT-based attacks [27], offer both a mathematical guarantee of finding the correct key and strong practical performance. The flow of SAT-based attacks is demonstrated in Fig. 2. As demonstrated, a miter circuit is built. The miter contains two copies of the locked netlist that share the same input but are keyed separately. Their outputs are XOR'ed. Essentially, if the miter's output is \(TRUE\), the input is causing different outputs with the two keys. The SAT-based attacks are iterative. In each iteration, a Boolean satisfiability problem is solved to find an input pattern and two keys that satisfy the miter circuit. The input pattern is called the _distinguishing input (DI)_. The activated chip is then queried to get the correct output value. Then, clauses are added to the miter-based SAT formula so that all the wrong keys that cause an incorrect output for the DI are pruned out. A correct key will be found when the DIs found have pruned out all the wrong keys.
Fig. 1: Illustration of an HWT-infested Circuit
Many SAT resilient logic locking techniques have been proposed to thwart the attack. In this work, we will examine these techniques and summarize the structural similarities among them. We then show how these logic locking techniques can guide the construction of novel hardware Trojans.
## III Locking Inspired HWT Construction
In this section, we provide a brief overview of existing obfuscation art. We then explore how the properties of these techniques can be leveraged in order to construct difficult-to-detect HWT's by slightly modifying their logical topologies, maintaining their rigorous mathematical guarantees but retargeting them to HWT application. The intuition behind such conversion is that, for both locking and Trojan, error is injected into a circuit only when the circuit has a specific input pattern:
* For locking: The input pattern is among those that are corrupted by the given wrong key.
* For Trojans: The input pattern matches the trigger.
Because HWT's should be triggered only by very few input patterns to evade detection [7, 8], the logic locking schemes suitable for converting to HWT's should also corrupt very few input patterns given a wrong key. Such logic locking techniques do exist and they are mainly designed to thwart SAT-based attacks. These techniques include Anti-SAT [28], SARLock [29], stripped functionality logic locking (SFLL) [30], Robust Strong Anti-SAT [31], CASLock [32], etc. In this work, we first analyze the commonality among these locking approaches. Next, we present the HWT construction based on these locking algorithms.
### _Commonality among Logic Locking_
No matter how distinct these logic locking constructions seem to be, we find that they can all be decomposed into two functional units that interact together to inject error for a specific input pattern given a wrong key. We call them a _modification unit (MU)_ and a _restore unit (RU)_. Essentially, the MU modifies the circuit's functionality for some input patterns and the RU tries to restore the modified functionality. When the correct key is applied, the RU restores the input patterns modified by the MU, so the locked circuit produces the correct output for all input values. When the key is incorrect, however, the error injected by the MU will not be corrected by the RU. In this case, if the input's functionality is modified by the MU, its output will be corrupted. The number of input patterns modified by the MU should be very small in order for the logic locking approach to be resistant to SAT-based attacks [33]. The rarity of such input patterns makes them suitable for HWT triggers. We use SFLL, SARLock, and Anti-SAT as examples of SAT-resilient locking techniques and briefly review how the MU and RU interact in each of them.
#### III-A1 Stripped Functionality Logic Locking (SFLL)
Fig. 3(a) shows the block diagram of SFLL. It is composed of two parts: a functionality stripped circuit (FSC) and a restore unit (RU). The FSC is the original circuit with the functionality altered for a set of protected input patterns (PIP), denoted as \(\mathbf{P}\). The FSC's internal structure that modifies the functionality of the PIPs is the MU of SFLL. Notice that the RU of SFLL coincides with our general definition of RU. The structure of the RU in SFLL is a look-up table (LUT). If the circuit's input matches the LUT key, the LUT will produce a restore signal. If the LUT contains the correct key, the restore signal will reverse the corruption caused by the FSC. If the LUT contains an incorrect key, both the PIPs and the input patterns that correspond to the key will be corrupted.
#### III-A2 Anti-SAT
The structure of Anti-SAT is shown in Fig. 3(b). The MU and the RU have similar structures. For the MU, there is an array of XOR gates followed by an AND tree. Depending on \(\vec{K}_{1}\), there is only one input value of \(\vec{X}\) such that the MU will evaluate to logic 1. Let us call this value \(\vec{X}_{M}\). The RU's structure is very similar to the MU's, and the only difference is that the AND tree's output is inverted. Depending on \(\vec{K}_{2}\), there is only one input value of \(\vec{X}\) that will make the RU evaluate to logic 0. Let us call this value \(\vec{X}_{R}\). Corruption is injected into the circuit when both the MU and the RU evaluate to logic 1, _i.e._ when \(\vec{X}=\vec{X}_{M}\) and \(\vec{X}\neq\vec{X}_{R}\). Therefore, a correct key must be such that \(\vec{X}_{M}=\vec{X}_{R}\). This way, the RU will output logic 0 when the MU outputs logic 1 and prevent the original circuit from being corrupted.
#### III-A3 SARLock
SARLock also contains an MU and an RU, as shown in Fig. 3(c). Its MU is the same as the one in Anti-SAT: depending on the key, there is one input value that will let the MU evaluate to logic 1. The RU checks if the key input contains the correct key. If so, it will mask the MU's output and prevent it from corrupting the original circuit.
### _Advantages of Locking based Trojans_
In each of the above-mentioned logic locking techniques, the MU is capable of injecting error into the circuit, and the RU will prevent the error from being injected if the correct locking key is provided. We notice that the MU naturally offers properties desirable for the trigger of HWT's:
Fig. 2: The basic procedure of SAT-based attacks
* Corruption is injected for very few (or just one) input pattern in the exponential input space, which makes random sampling based detection very difficult.
* The corrupted input patterns need not have any correlation with the original netlist's structure, so that they can be chosen to avoid ATPG or rare signal based detection approaches such as [7] and [8].
* These trigger patterns are completely known to the attacker. Contrarily, enumerating triggers of conventional rare signal based Trojans is mathematically intractable in general because it is a satisfiability counting problem [34]. Hence, it is much easier for the attacker to control when to trigger the Trojan and avoid unintended triggering using TroLL.
### _Construction of TroLL_
These properties indicate that the MU's of logic locking can serve as ideal HWT trigger circuitry. Building upon this discovery, we present Trojans based on logic locking (TroLL), which employs the MU of logic locking to modify the functionality of the original circuit. We present a generalizable way to convert a logic locking technique to TroLL as follows:
1. Identify MU and RU in the locked netlist and remove the RU. Hard-code the RU's output value to the one that does not invert the output of the MU.
2. If the MU has a key input (such as Anti-SAT), hard-code the key such that the desired HWT trigger can cause the MU to corrupt the circuit.
Essentially, when building TroLL from SFLL, we only need to remove the RU and make sure that the PIP's represent the Trojan trigger patterns we want. For Anti-SAT and SARLock, we need to remove the RU and hard-code the MU keys to incorporate the triggers. E.g., for the Anti-SAT construction in Fig. 3(b), we need to remove the RU and fix its output at logic 1. For the MU, we fix \(\vec{K}_{1}\) to be the bitwise-inverted trigger pattern. A constant sweep is then performed to simplify the circuit. In this way, the key inputs of logic locking will all be removed and the TroLL-infested circuit has the same I/O pins as the original circuit. No matter which logic locking technique TroLL is made from, the functionality of TroLL will be identical. Besides, as each of the above steps is a strict subtraction from the logic locking infrastructure, TroLL's overhead will be much lower than that of logic locking. Notice that, although we describe a gate-level operation to build TroLL in the above example, TroLL can be incorporated at the RT or behavioral level using the two-step process as well.
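For intuition, the resulting behavior can be sketched at a purely functional level as follows; this is not a netlist-level construction, and the toy 8-bit circuit, the trigger value, and the payload mask are all illustrative.

```python
# Behavioral sketch of a TroLL-infested circuit built from an Anti-SAT-style MU.
def troll_infested(original, trigger, payload_mask):
    def infested(x):
        y = original(x)
        # The MU (XOR array + AND tree with K1 hard-coded to the inverted trigger)
        # fires exactly when x == trigger; with the RU removed, nothing masks it.
        if x == trigger:
            y ^= payload_mask
        return y
    return infested

# toy 8-bit "circuit" with trigger 0xB4 and a single corrupted output bit
toy = troll_infested(lambda x: (37 * x + 5) & 0xFF, trigger=0xB4, payload_mask=0x01)
assert toy(0x12) == (37 * 0x12 + 5) & 0xFF   # behaves normally off-trigger
assert toy(0xB4) != (37 * 0xB4 + 5) & 0xFF   # corrupted only on the trigger
```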
TroLL needs to evade HWT detection. As introduced in Section II-B, existing state-of-the-art HWT detection approaches find test patterns that sensitize rare signals in the original design. To evade these detection approaches, TroLL trigger patterns need to avoid sensitizing any rare nodes. To begin with, we use a random sampling approach to determine the rare value of each internal node, \(r_{i}\), and its associated probability, \(p_{i}\). Although an alternative to random sampling is the signal probability skew analysis [35], the complexity of such analysis often increases exponentially if the correlation between signals is to be accounted for [36]. Then we use Algorithm 1 to determine the trigger pattern for TroLL. Essentially, the algorithm finds an input pattern with the maximum probability threshold \(p_{max}\) such that no rare value below this probability will be realized by the trigger. Such a process is illustrated in Fig. 4. In the sample circuit, the rare values and their probabilities are annotated for each internal
Fig. 3: MU and RU in Logic Locking Constructions
node. A list of randomly generated input patterns is shown under the circuit diagram. The lowest-probability signal sensitized by each input pattern is highlighted in pink. Algorithm 1 will choose the input pattern that maximizes the lowest probability. In this example, the trigger pattern will be the one in the last row since it does not sensitize any rare value. TroLL triggers selected by this process will be immune to the existing rare value based detection approaches such as those introduced in Section II-B.
The fact that TroLL triggers do not sensitize any rare signal does not mean that TroLL can be triggered by high probability signals or can be easily detected by random sampling. On the contrary, TroLL is essentially creating a new rare node that only the trigger pattern can sensitize. Since the defender does not have the netlist of the HWT-infested circuit and can only base the detection on the original circuit, they do not have any information about the new node and hence cannot generate test patterns aimed at sensitizing the new node. Also notice that the triggers selected using Algorithm 1 have the full input length. This will likely cause high overheads. As we later demonstrate in Section V-B, practical resilience against HWT detection can be attained when only a subset of input bits are taken as the TroLL trigger.
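The selection criterion can be paraphrased by the following sketch, which is a simplified stand-in for Algorithm 1 rather than its pseudocode; `simulate` (returning the value of every internal node for a given input) and `rare` (mapping each node to its rare value and estimated probability) are assumed to be available from the profiling step, and all names are illustrative.

```python
# Sketch of the TroLL trigger selection: pick the candidate that maximizes the
# lowest probability among the rare values it sensitizes.
import random

def pick_troll_trigger(simulate, rare, input_bits, n_candidates=10000):
    best_pattern, best_score = None, -1.0
    for _ in range(n_candidates):
        x = random.getrandbits(input_bits)
        values = simulate(x)
        sensitized = [p for node, (r, p) in rare.items() if values[node] == r]
        score = min(sensitized) if sensitized else 1.0
        if score > best_score:
            best_pattern, best_score = x, score
    return best_pattern, best_score        # pattern realizing the threshold p_max
```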
## IV Detection Techniques for TroLL
In this section, we introduce a few novel approaches aimed at detecting TroLL more effectively. The first type of approach is based on the trigger selection process of TroLL: by avoiding any test pattern that sensitizes any rare node value, ATPG-based HWT detection mechanisms will generate test patterns that are more likely to match the trigger of TroLL. The second approach is based on the fact that TroLL originates from logic locking, and SAT-based attacks are the most formidable attacks on logic locking. Therefore, we can formulate a Trojan detection approach that emulates the SAT-based attack on logic locking.
### _Customizing ATPG-based HWT Detection Approaches for TroLL_
Given TroLL's trigger selection mechanism, we can customize existing ATPG-based HWT detection approaches to detect TroLL. TroLL's trigger selection process eliminates any input pattern that sensitizes any rare internal node value as described in Algorithm 1. The same principles can be applied to the test generation algorithms for HWT detection: instead of targeting the rare values, the ATPG algorithms can choose the test patterns that satisfy as many prevalent values as possible. Following the notations used in Algorithm 1: say that \(n\) internal nodes of a combinational circuit that implement Boolean functions \(G_{1}\ldots G_{n}\) have rare values \(r_{1}\ldots r_{n}\) whose probabilities are below a certain threshold \(p\) where \(0<p<0.5\). In other words, these \(n\) nodes have prevalent values \(\tilde{r_{1}}\ldots\tilde{r_{n}}\) that have probabilities above \(1-p\). While existing HWT detection algorithms aim to find test patterns \(X\) that satisfy as many \(G_{i}(X)=r_{i}\) as possible (\(i=1\ldots n\)), a TroLL-specific detection algorithm should instead find input patterns that satisfy \(G_{i}(X)=\tilde{r_{i}}\) for as many \(i\) as possible.
Given such a principle, it is surprisingly convenient to customize existing HWT detection approaches for TroLL. We can indeed run the same ATPG algorithms, such as statistical test generation [7] or maximal clique sampling [8], and target the same set of internal nodes. The only change is to invert the targeted Boolean values of these nodes. Statistical test generation (such as \(N\)-detect) can generate test vectors to activate each prevalent node value \(N\) times, whereas maximal clique sampling can build the satisfiability graph on the prevalent values instead of the rare values.
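A minimal sketch of the evolved statistical test generation is given below; it scores a pattern by the number of prevalent node values it satisfies and improves it by single-bit flips, which is a greedy simplification of the \(N\)-detect bookkeeping of [7]. The `simulate` and `prevalent` objects are assumed to be available, and all names are illustrative.

```python
# Sketch of evolved statistical test generation: target prevalent values.
import random

def evolved_test_pattern(simulate, prevalent, input_bits, iterations=2000):
    def score(pattern):
        values = simulate(pattern)
        return sum(values[node] == v for node, v in prevalent.items())

    x = random.getrandbits(input_bits)
    best = score(x)
    for _ in range(iterations):
        candidate = x ^ (1 << random.randrange(input_bits))   # flip one input bit
        s = score(candidate)
        if s >= best:                                         # greedy hill climbing
            x, best = candidate, s
    return x
```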
Because the defender does not know the type of HWT when the test patterns are generated, the test patterns should be able to detect conventional HWTs as well. Therefore, for each ATPG algorithm, we combine the test patterns that are generated to sensitize the rare values (for conventional Trojans) and those generated to avoid sensitizing rare values (for TroLL). We refer to such an approach as _Evolved Statistical Test Generation_ and _Evolved Maximal Clique Sampling_. In Section V-B, we will present the efficacy of these evolved HWT detection approaches.
### _Adapting SAT-based Attacks on Logic Locking for HWT Detection_
Attacks on logic locking try to find the correct key, whereas Trojan detection aims to find the trigger of HWT's. Since TroLL is based on logic locking, it is natural to associate logic locking attacks with the detection of TroLL. However, since the defender does not know which type of HWT is potentially inserted, the detection approaches must not be limited to TroLL but must be generalizable to any type of HWT. In this section, we present how to adapt the SAT-based attacks on logic locking to detecting HWT's. An SFLL-like auxiliary circuit will be constructed based on the HWT-suspicious circuit, where the Trojan's trigger and payload are represented by keys. Then, the SAT attack formulation is used to find a key that can represent the HWT. The HWT is detected when such a key
Fig. 4: Illustration of how to Choose TroLL Trigger using Algorithm 1 on a Sample Circuit
is found. In Section V, this SAT-based detection approach as well as the ATPG-based approaches will be used to evaluate the detectability of TroLL and conventional HWT's.
#### IV-B1 Construction of the Auxiliary Circuit
A defender has the netlist of the original circuit and the fabricated HWT-suspicious circuit. The netlist of the fabricated circuit is not available. In order to search for a trigger pattern, an SFLL-like auxiliary circuit that emulates an HWT-infested circuit is constructed. As shown in Fig. 5, the auxiliary circuit is built by adding a look-up table to emulate the trigger and payload of the HWT. The trigger key \(K_{T}\) is compared with the circuit input \(X\). When they are the same, the payload key \(K_{P}\) is bit-wise XOR'ed with the output \(Y\).
Note that SAT-based detection does not assume any prior knowledge about the potentially existing HWT, and the construction of the auxiliary circuit is independent of the actual trigger and payload of the HWT. The purpose of the auxiliary circuit is to emulate the trigger and payload of HWT's rather than being functionally equivalent to the HWT-suspicious circuit. Since only one trigger needs to be found to detect the HWT, we only need to have one entry in the LUT of the auxiliary circuit.
#### Iii-B2 Detection Flow
The flow of SAT-based detection is laid out in Fig. 6. Similar to the SAT-based attack against logic locking introduced in Section II-C, a miter circuit is built using two copies of the auxiliary circuit and their outputs are XOR'ed. Let \(F(\vec{X})\) be the Boolean function of the original circuit, \(F_{A}(\vec{X},\vec{K}_{T},\vec{K}_{P})\) be that of the auxiliary circuit, and \(H(\vec{X})\) be that of the HWT-suspicious circuit. In the first iteration, the following SAT formula is solved to obtain the distinguishing input (DI):
\[F_{A}(D\vec{I}_{1},\vec{K}_{Ta},\vec{K}_{P})\neq F_{A}(D\vec{I}_{1},\vec{K}_{ Tb},\vec{K}_{P}) \tag{1}\]
The subscript of \(DI\) stands for the iteration number. Then, both the original circuit and the HWT-suspicious circuit are queried with the DI. If the results are not equal, \(F(DI_{1})\neq H(DI_{1})\), then the HWT is detected and \(DI_{1}\) is an HWT trigger. If they are equal, then let \(O_{1}=H(DI_{1})\). In the second iteration, clauses are added to ensure that the new keys found should produce correct output for \(DI_{1}\) since it is not the trigger:
\[\begin{split} F_{A}(D\vec{I}_{2},\vec{K}_{Ta},\vec{K}_{P})& \neq F_{A}(D\vec{I}_{2},\vec{K}_{Tb},\vec{K}_{P})\\ \bigwedge& F_{A}(D\vec{I}_{1},\vec{K}_{Ta},\vec{K}_{ P})=F_{A}(D\vec{I}_{1},\vec{K}_{Tb},\vec{K}_{P})=O_{1}\end{split} \tag{2}\]
The added clause will exclude _any_ trigger key \(K_{T}\) that mistakes a non-trigger \(D\vec{I}_{1}\) as a trigger, which makes SAT-based detection potentially more efficient than purely testing-based detection approaches which only determine whether the test pattern is an HWT trigger or not.
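The iterative flow can be sketched at the functional level as below; a brute-force search over a 4-bit toy circuit stands in for the actual SAT solver and CNF encoding, and the original/auxiliary functions are the same hypothetical ones used earlier.

```python
import itertools

WIDTH = 4
INPUTS = list(range(2 ** WIDTH))

def original(x):                              # hypothetical original function F(X)
    return (x ^ (x >> 1)) & 0xF

def aux(x, k_t, k_p):                         # auxiliary circuit F_A(X, K_T, K_P)
    return original(x) ^ k_p if x == k_t else original(x)

def find_di(constraints, k_p):
    """Stand-in for the SAT query: find (DI, K_Ta, K_Tb) on which the two
    auxiliary-circuit copies disagree while every accumulated constraint holds."""
    for di, k_ta, k_tb in itertools.product(INPUTS, repeat=3):
        if aux(di, k_ta, k_p) == aux(di, k_tb, k_p):
            continue
        if all(aux(x, k_ta, k_p) == o and aux(x, k_tb, k_p) == o
               for x, o in constraints):
            return di
    return None

def sat_based_detection(suspicious, k_p=0b0001):
    constraints = []                          # (non-trigger DI, observed output) pairs
    while True:
        di = find_di(constraints, k_p)
        if di is None:
            return None                       # no representable trigger left: HWT not found
        if original(di) != suspicious(di):
            return di                         # early exit: DI triggers the HWT
        constraints.append((di, suspicious(di)))

troll = lambda x: aux(x, 0b1010, 0b0001)      # hypothetical TroLL with trigger 0b1010
print("detected trigger:", sat_based_detection(troll))
```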
The process of SAT-based detection has some key differences from the SAT attack on logic locking:
* The oracle used in the formulation is the HWT-suspicious circuit under detection, instead of an activated chip.
* An early exit condition is added. If the DI produces a different output on the HWT-suspicious circuit compared to the original circuit, the detection process will terminate because an HWT is detected.
* The same payload key is applied to both copies of the auxiliary circuits to ensure that the output difference of the two copies is caused by the trigger key.
* The correct key found by SAT attack on logic locking will make the locked circuit have the same functionality as the original circuit, whereas the SAT-based detection is not meant for replicating the exact functionality of the HWT-free circuit.
### _Summary_
In this section, we introduce two types of novel HWT detection techniques that have the potential to detect TroLL more effectively than existing approaches. The evolved ATPG-based detection aims at finding the trigger based on TroLL's trigger selection algorithm, whereas SAT-based detection is an effort to take advantage of TroLL's resemblance to logic locking. In the next section, we will examine these techniques alongside the existing ones to evaluate TroLL's ability to evade detection.
Fig. 5: Construction of the auxiliary circuit for SAT-based detection
Fig. 6: The SAT-based HWT Detection Flow
## V Experiments
In this section, we present details on TroLL implementation and evaluation. We also compare the detection approaches introduced in Section IV with existing state-of-the-art ATPG-based HWT detection approaches and random sampling on both TroLL and conventional HWT's.
### _Trojan Implementation and Overhead_
In this work, we implement both TroLL and conventional hardware Trojans, including rare node triggered Trojans and random node triggered Trojans. We use three benchmarks for the evaluation: DES, a 32-bit multiplier, and SHA-256, with a range of sizes as shown in Table I. For each benchmark, we use 100,000 random testing samples to analyze and determine the rare values and associated probability of each internal node. For rare node triggered Trojans, the triggers are selected directly based on this analysis. For TroLL, we choose trigger patterns using Algorithm 1 introduced in Section III-D. Notice that the length of these triggers is the same as the width of the circuit's input. When a shorter trigger length is needed, we choose a random subset of bits from the trigger patterns. For the HWT payload, we choose a subset of output pins to flip when the trigger condition is satisfied, and the payload is the same across all the HWT instances for the same benchmark. This avoids combinational loops in rare and random node triggered HWT's and ensures that the differences in overhead and detectability are only caused by trigger mechanisms.
This is because TroLL's trigger selection algorithm, as presented in Section III-D, intentionally avoids sensitizing any rare nodes within the original circuit. As both statistical test generation [7] and maximal clique sampling [8] ensure that each test pattern will sensitize some rare nodes in the circuit, they are unlikely to sensitize the triggers of TroLL.
In the middle row of Table II, we show the HWT detection results with the evolved ATPG-based approaches. Compared to the original ATPG-based approaches, the evolved ones are able to detect more TroLL-type HWTs. The improvement is most significant with trigger lengths between 12 and 20 bits. Thanks to the customization (flipping of targeted node values), the ATPG-based approaches are able to generate test patterns that fit the trigger criteria of TroLL, which is the main cause of the improvement.
SAT-based detection is implemented based on the code framework of SAT-based attacks on logic locking presented in [27]. We limit the time of each SAT-based detection run to 48 hours and a Trojan is considered as not detected if no trigger pattern is found within this time frame. In the bottom left division of Table II, we show the percentage of HWT detected by SAT for each benchmark and type of HWT.
The random sampling detection results are shown in the bottom right division of Table II. We should take the random sampling detection as the baseline case as it does not require any specialized algorithm. From Figure 8, it can be observed that the evolved ATPG-based approaches have higher efficacy on TroLL whereas their original versions perform worse than random sampling. This indicates that the customization of the ATPG-based approaches presented in Section IV-A is effective against TroLL. The SAT-based detection has overall similar efficacy compared to random sampling. This is expected because such an approach essentially converts a Trojan detection problem to a SAT attack problem on SFLL, a logic locking technique that essentially forces a SAT attacker to choose the distinguishing input pattern randomly in each iteration.
## VI Conclusion
In this paper, we present a novel type of Hardware Trojan based on logic locking, TroLL. TroLL is constructed by retaining the modification unit (MU) and removing the restore unit (RU) of state-of-the-art logic locking techniques. The trigger patterns of TroLL are selected in a way that avoids sensitizing the internal rare signals of the original circuit, thereby evading state-of-the-art ATPG-based detection schemes. In an attempt to formulate an effective detection approach against TroLL, we tried several different approaches, including evolving the ATPG-based approaches to target the internal nodes' prevalent values in addition to the rare values, and adapting the SAT-based attacks on logic locking to HWT detection. We also use random sampling as a reference. We found that the evolved ATPG-based approaches performed better than random sampling, but even these approaches' efficacy diminishes as TroLL's triggers get longer. Therefore, we have identified TroLL as a new threat to the integrity of hardware manufactured in untrusted fabrication facilities, and it is necessary to find a scalable detection approach against TroLL.
On a broader scale, this paper reminds us that even a design protection scheme (such as logic locking) can be a double-edged sword. Meanwhile, just like the SAT attack can be turned into an HWT detection scheme, we can examine other attacks against logic locking in the search for a more effective detection approach against TroLL.
|
2309.15701 | HyPoradise: An Open Baseline for Generative Speech Recognition with
Large Language Models | Advancements in deep neural networks have allowed automatic speech
recognition (ASR) systems to attain human parity on several publicly available
clean speech datasets. However, even state-of-the-art ASR systems experience
performance degradation when confronted with adverse conditions, as a
well-trained acoustic model is sensitive to variations in the speech domain,
e.g., background noise. Intuitively, humans address this issue by relying on
their linguistic knowledge: the meaning of ambiguous spoken terms is usually
inferred from contextual cues thereby reducing the dependency on the auditory
system. Inspired by this observation, we introduce the first open-source
benchmark to utilize external large language models (LLMs) for ASR error
correction, where N-best decoding hypotheses provide informative elements for
true transcription prediction. This approach is a paradigm shift from the
traditional language model rescoring strategy that can only select one
candidate hypothesis as the output transcription. The proposed benchmark
contains a novel dataset, HyPoradise (HP), encompassing more than 334,000 pairs
of N-best hypotheses and corresponding accurate transcriptions across prevalent
speech domains. Given this dataset, we examine three types of error correction
techniques based on LLMs with varying amounts of labeled
hypotheses-transcription pairs, which gains a significant word error rate (WER)
reduction. Experimental evidence demonstrates the proposed technique achieves a
breakthrough by surpassing the upper bound of traditional re-ranking based
methods. More surprisingly, LLM with reasonable prompt and its generative
capability can even correct those tokens that are missing in N-best list. We
make our results publicly accessible for reproducible pipelines with released
pre-trained models, thus providing a new evaluation paradigm for ASR error
correction with LLMs. | Chen Chen, Yuchen Hu, Chao-Han Huck Yang, Sabato Macro Siniscalchi, Pin-Yu Chen, Eng Siong Chng | 2023-09-27T14:44:10Z | http://arxiv.org/abs/2309.15701v2 | # HyPordise: An Open Baseline for Generative Speech Recognition with Large Language Models
###### Abstract
Advancements in deep neural networks have allowed automatic speech recognition (ASR) systems to attain human parity on several publicly available clean speech datasets. However, even state-of-the-art ASR systems experience performance degradation when confronted with adverse conditions, as a well-trained acoustic model is sensitive to variations in the speech domain, e.g., background noise. Intuitively, humans address this issue by relying on their linguistic knowledge: the meaning of ambiguous spoken terms is usually inferred from contextual cues thereby reducing the dependency on the auditory system. Inspired by this observation, we introduce the first open-source benchmark to utilize external large language models (LLMs) for ASR error correction, where N-best decoding hypotheses provide informative elements for true transcription prediction. This approach is a paradigm shift from the traditional language model rescoring strategy that can only select one candidate hypothesis as the output transcription. The proposed benchmark contains a novel dataset, "HyPordise" (HP), encompassing more than 334,000 pairs of N-best hypotheses and corresponding accurate transcriptions across prevalent speech domains. Given this dataset, we examine three types of error correction techniques based on LLMs with varying amounts of labeled hypotheses-transcription pairs, which gains a significant word error rate (WER) reduction. Experimental evidence demonstrates the proposed technique achieves a breakthrough by surpassing the upper bound of traditional re-ranking based methods. More surprisingly, LLM with reasonable prompt and its generative capability can even correct those tokens that are missing in N-best list. We make our results publicly accessible for reproducible pipelines with released pre-trained models, thus providing a new evaluation paradigm for ASR error correction with LLMs.
## 1 Introduction
Automatic speech recognition (ASR) has become increasingly important in modern society, as it enables efficient and accurate transcription of spoken languages. This capability facilitates access to information and enhances communication across various domains, including education [7], healthcare [50], and business [36]. Driven by the recent advances in deep learning, remarkable success has been achieved on several ASR tasks through end-to-end training techniques [28; 27; 9; 22; 30; 100; 15]. However, a major challenge of applying ASR in practical conditions lies in effectively handling variations in speech caused by different factors such as background noise [11], speaker accent [85], and speaking styles [82; 2]. These adverse factors are common and inevitable in speech signal, significantly affecting the accuracy of the recognition results [55].
Humans demonstrate remarkable robustness when faced with the above variations in acoustic environment, as the human recognition system does not only rely on acoustic cues - we usually speculate the ambiguous or distorted spoken terms based on speech context and our inherent linguistic knowledge. Similarly, current ASR system typically employs an independent language model (LM) for rescoring during the decoding process [83, 46, 43, 25]. As shown in Fig. 1, given N-best hypotheses generated by an ASR engine with beam search decoding, a trained language model (LM) can be used to re-score each utterance and select the one with the highest likelihood (referred to as the \(1^{st}\) utterance) as the output of the ASR; whereas, the other sentences (the \(2^{nd}\) - \(N^{th}\) utterances) are discarded. However, it is widely believed [68] that the N-best list contains useful information [87, 37, 56], as each hypothesis is an independent textual representation of the input speech. Consequently, discarded sentences might also carry correct tokens for accurately predicting the true transcription. To validate this belief, we have conducted experiments on the LibriSpeech dataset [66], counting the probabilities of two scenarios observed during LM rescoring: (i) the discarded utterances contain a better candidate with lower word error rate (WER), and (ii) the other discarded hypotheses can provide the right answer for the wrong tokens in \(1^{st}\) utterance. The statistical results of \(2^{nd}\sim 20^{th}\) utterances are shown in the left part of Fig. 1. Taking \(2^{nd}\) discarded utterance as example, it has a 14% probability of having a lower WER than the \(1^{st}\) utterance. Furthermore, given a wrong token in \(1^{st}\) utterance, there is a 34% probability of finding the correct token in the \(2^{nd}\) utterance.
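The two statistics in Fig. 1 can be estimated directly from decoded N-best lists. The sketch below, with a toy example standing in for the real LibriSpeech hypotheses and a simple position-wise token check in place of a full alignment, computes (i) how often the \(n\)-th hypothesis beats the 1-best in WER and (ii) how often a token that is wrong in the 1-best appears correctly in the \(n\)-th hypothesis.

```python
def edit_distance(ref, hyp):
    """Word-level Levenshtein distance via dynamic programming."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)]

def wer(ref, hyp):
    ref, hyp = ref.split(), hyp.split()
    return edit_distance(ref, hyp) / max(len(ref), 1)

# Toy data: (reference, N-best list) pairs standing in for real decoding output.
data = [
    ("he bought a new car", ["he brought a new car", "he bought a new car", "he bought new cars"]),
    ("the cat sat there",   ["the cat sat there",    "a cat sat there",      "the cat sat here"]),
]

n = 2  # examine the n-th hypothesis (1-indexed), i.e. the first discarded one
better = sum(wer(ref, hyps[n - 1]) < wer(ref, hyps[0]) for ref, hyps in data)
print(f"P({n}-th hypothesis has lower WER than the 1st) ~ {better / len(data):.2f}")

# (ii) For tokens that are wrong in the 1-best (position-wise), check whether the
# n-th hypothesis carries the correct token at the same position.
wrong, recovered = 0, 0
for ref, hyps in data:
    r, h1, hn = ref.split(), hyps[0].split(), hyps[n - 1].split()
    for i, tok in enumerate(r):
        if i < len(h1) and h1[i] != tok:
            wrong += 1
            recovered += int(i < len(hn) and hn[i] == tok)
print(f"P(correct token found in the {n}-th hypothesis) ~ {recovered / max(wrong, 1):.2f}")
```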
To better mine the information in N-best hypotheses, we propose the first publicly available **ASR generative error correction benchmark**, which directly predicts a true transcription rather than selecting a candidate from the N-best list. To put forth this benchmark, we introduce a novel dataset named _HyPoradise (HP)_, which comprises various open-source N-best hypotheses provided by state-of-the-art ASR systems and their paired true transcriptions. Considering real-life applications, the HP dataset covers various challenging speech domains, including scenarios with background noise, specific contexts, and speaker accents. Furthermore, in terms of resource availability, we define three settings to mimic the deployment of ASR systems in real-world scenarios: _(i) Zero-shot_ Learning. In this setting, only test set hypotheses are available for inference. This corresponds to applying a well-trained ASR model to new scenarios without any training data. _(ii) Few-shot_ Learning. A few in-domain hypotheses with true transcriptions are available for training. This setting aims to address domain-specific ASR tasks with a few manual annotations. _(iii) Fine-tuning_. A sufficient training set is available to learn the mapping between hypotheses and transcription.
To exploit the three aforementioned scenarios, we present multiple error correction techniques using large language models (LLMs), which have shown the outperforming ability of language generation and reasoning in recent studies [5, 107, 48, 84]. For _zero-shot_ and _few-shot_ settings, we design an in-context learning method without any parameter tuning, which directly performs error correction based on task prompt and in-domain demonstration. In the _fine-tuning_ scenario, we develop two sequence-to-sequence training solutions, H2T-_ft_ and H2T-_LoRA_, which adapt pre-trained LLMs to specific transcription domains. Experimental results show that all learning strategies can be beneficial to reduce the WER in different resource settings, providing potential solutions for alleviating the
Figure 1: The left part shows the pipeline to generate the N-best hypotheses using a vanilla ASR engine with beam search decoding. The right part counts the probabilities of case (i) and case (ii) on the test set of LibriSpeech dataset. It indicates the discarded information in \(2^{nd}\sim 20^{th}\) utterances. Green and red \(T_{i}\) in “Exp” respectively denote correct and wrong tokens compared with ground-truth.
negative impact of speech variation. Additionally, with reasonable prompt design, LLMs can even correct erroneous tokens that are missing from the N-best list. We will release the HP datasets, reproducible pipelines, and pre-trained models on Github 2 under the MIT licence.
Footnote 2: [https://github.com/Hypotheses-Paradise/Hypo2Trans](https://github.com/Hypotheses-Paradise/Hypo2Trans)
Our contribution can be summarized as follows:
* We propose the first open and reproducible benchmark to evaluate how LLMs can be utilized to enhance ASR results with N-best hypotheses, where a new dataset, HyPoradise 3, with more than 334K hypotheses-transcription pairs is collected from various ASR corpora in the most common speech domains. Footnote 3: Denoted as _Hypotheses Paradise_, inspired by "Icha Icha Paradise" from Naruto.
* We develop three ASR error correction techniques based on LLMs in different resource settings to directly predict the true transcription from the N-best hypotheses. Experimental results in the _fine-tuning_ setting show that our new approach can **surpass** a performance upper-bound (e.g., oracle WER from n-best list) of traditional re-ranking based methods.
* We introduce an evaluation paradigm of _generative error correction_ for ASR. The acoustic model generates word-piece elements in the hypotheses list; subsequently, LLMs predict accurate transcription utilizing linguistic knowledge and contextual information.
## 2 Related Work
### ASR Rescoring and Error Correction
In order to improve the linguistic acceptability of ASR results, LM rescoring has been widely employed and achieved stable performance gain for ASR systems [79; 62; 4]. Typically, an external LM is trained separately and utilized to re-score the N-best list of hypotheses generated by ASR decoding with beam search. Various approaches for LM integration have been proposed, such as shallow fusion [17; 104; 46; 83], deliberation [98; 32; 41; 40; 91; 39], component fusion [76], and cold fusion [81]. Some authors have used pre-trained LM models to replace trainable LMs [86; 74], and the log-likelihood of each hypothesis is computed using unidirectional models, e.g., GPT-2, or pseudo-log-likelihood using bidirectional models like BERT [21] and RoBERTa [59]. In ASR, LMs are also widely used for the error correction task in different languages [96; 29], leveraging only the 1-best hypothesis generated by the ASR model [53; 61; 106; 23; 109; 77]. Furthermore, more recent works [60; 52; 51] utilize a candidates list after decoding for error correction. Though Grammatical Error Correction (GEC) has been actively explored [20; 93; 100], ASR error correction is distinct with GER due to the arbitrariness of the spoken language [2], which requires the efforts from both speech and NLP communities [18].
### Large Language Models
More recently, there has been a surge of interest in Transformer-based LLMs [84; 70; 75; 107] in both academia and industry. By learning from massive amounts of text data, LLMs can capture linguistic patterns and semantic relationships, which have led to impressive performance for a wide range of natural language processing (NLP) tasks [5; 65; 95].
**In-context Learning**. Given specific task descriptions or pair-wise contextual information, LLMs show outstanding adaptability on downstream NLP tasks _without_ any parameter tuning [63; 64; 100]. Such a capability of task-specific inference is also known as in-context learning (ICL) [99], which utilize LLMs to generate text that is more coherent and relevant to the specific domain or task [44; 16; 49; 73; 8; 108]. Recently, task-activating Prompting (TAP) [100] is one of the most relevant works, employing the injection of input-output pairs of task-oriented contexts (e.g., initiating the question prompt from a broad domain to refine preceding contexts as shown in Figure 2) with the aim of enhancing the zero-shot and few-shot capabilities of frozen-pretrained LLMs for second-pass ASR. We further evaluate the TAP-based zero-shot and few-shot approaches with examples.
**Low-rank Approximation based Neural Adapter**. Tuning all LLM parameters for a given downstream task is usually not feasible due to memory constraints. Many researchers sought to mitigate that problem by either adapting only a few parameters or leveraging external trainable modules for
a new task [58; 33]. A pioneer work [1] showed that the learned over-parametrized models in fact reside on a low intrinsic dimension, consequently, a low-rank adaptation (LoRA) approach [38] was proposed to indirectly tune some dense layers by optimizing rank decomposition matrices of the dense layers. Due to its computational efficiency, LoRA adaptation has been rapidly adopted as a new paradigm for LLMs tuning, which was useful in various downstream tasks [105; 24; 42; 92].
## 3 Hypothesis Generation and Dataset Creation
We introduce the generation process of the HyPoradise dataset in this section. The ASR systems employed for N-best hypotheses generation are described in 3.1, and then we introduce the selected speech domains in 3.2. Finally, we provide statistics of the generated HP dataset in 3.3.
### ASR System
We employ two state-of-the-art ASR models, namely WavLM [14] and Whisper [69] for N-best hypotheses generation. Besides their remarkable performance and popularity, those models are representative in the deployment of an ASR because: (1) WavLM is a well-trained ASR model on LibriSpeech [66] but suffering from domain mismatch, and (2) Whisper is a universal ASR model but lacking domain specificity. More details about those two ASR models are described below:
**WavLM**: We utilize the ESPnet toolkit [94] along with the pre-trained model from HuggingFace to deploy our WavLM-based ASR system. The WavLM architecture consists of two blocks: the front-end, and the ASR model (433 million parameters in total). The front-end consists of 24 Transformer-based [88] encoder layers and is pre-trained using a combination of LibriLight [45] (60k hours of data), Gigaspeech [12] (10k hours of data), and VoxPopuli [90] (24k hours of data). Front-end features are fed into the ASR back-end for fine-tuning. The back-end consists of 12 Conformer-based [30] encoder layers, and 6 Transformer-based decoder layers. The fine-tuning process is performed on 960-hour LibriSpeech data. Additionally, the WavLM decoding recipe incorporates an external LM rescoring option, where the external LM adopts Transformer architecture with 16 encoder layers and is trained using the text of LibriSpeech 960 hours data and extra LM training data from the web.
**Whisper**: We employ the Whisper-Large model developed by OpenAI to generate hypotheses, without in-domain language model rescoring. The used configuration consists of an encoder-decoder Transformer architecture with 1,550 million parameters, which is trained on 680,000 hours of multilingual-weakly labeled speech data collected from the web.
Leveraging these two pre-trained ASR models, we have employed the beam search algorithm during decoding and generated N-best lists of sentence hypotheses for each input waveform. For both WavLM and Whisper, the default beam size was set to 60. After removing repeated utterances, we select the top-5 utterances with the highest probabilities as the N-best list, as they carry sufficient elements to accurately predict the transcription. Subsequent experiments confirm this belief by calculating the upper-bound WER using the 5-best hypotheses list. To build the HP dataset, we carry out this decoding strategy on multiple popular ASR datasets (please see Section 3.2) and generate paired data consisting of a 5-best hypotheses list and 1 ground-truth transcription. The pre-processing and generation code are also released for integrating new ASR corpora into HP. All the links of relevant resources are presented in the Appendix.
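A minimal sketch of N-best generation with a HuggingFace Whisper checkpoint is shown below. The checkpoint name, dummy audio, and small beam size are illustrative assumptions; the actual recipe described above uses a beam size of 60 followed by de-duplication and top-5 selection.

```python
import numpy as np
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")

audio = np.zeros(16000, dtype=np.float32)        # stand-in for a real 16 kHz utterance
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    sequences = model.generate(
        inputs.input_features,
        num_beams=5,                             # beam search decoding
        num_return_sequences=5,                  # keep an N-best list, not just the 1-best
        max_new_tokens=128,
    )

nbest = processor.batch_decode(sequences, skip_special_tokens=True)
nbest = list(dict.fromkeys(nbest))               # drop repeated utterances
print(nbest)
```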
### Selected Speech Corpora
For corpora selection, our goal is to cover common scenarios of ASR task, e.g., noisy background and speaker accent. Consequently, we collect and modify the following corpora with evident domain characteristics to compose the HP dataset.
**LibriSpeech**[66]: LibriSpeech is a public corpus of read speech from audiobooks, including 1,000 hours of speech data with diverse speakers, genders, and accents. For generating HP training data, we exclude some simple cases from its _train-960_ split that show WER result of 0, resulting in 88,200 training utterances. We use the entire _test-clean_ and _test-other_ splits for HP test data generation.
**CHiME-4**[89]: CHiME-4 is a dataset for far-field speech recognition. It includes real and simulated noisy recordings in four noisy environments, _i.e._, bus, cafe, pedestrian area, and street junction. We
use its _train_ (with 8,738 utterances) and _test-real_ (with 1,320 utterances) splits to generate HP training and test data. The four different noises in _test-real_ split are also evaluated separately in Table 3.
**WSJ**[67]: The Wall Street Journal (WSJ) is a widely-used benchmark for speech recognition. It includes read speech from speakers in a controlled environment, with a focus on business news and financial data. We use its _train-si284_ split (with 37,514 utterances) to generate HP training set. The _dev93_ (with 503 utterances) and _eval92_ (with 333 utterances) are applied to build test sets.
**SwitchBoard**[26]: The SwitchBoard corpus is a telephone speech dataset collected from conversations between pairs of speakers. It focuses on North American English and involves over 2.4k conversations from approximately 200 speakers. We randomly select 36,539 samples from its _train_ split to generate HP training set, as well as 2,000 utterances from the _eval2000_ split for HP test set.
**CommonVoice**[3]: CommonVoice 5.1 is a freely-available dataset for speech recognition. It contains speech recordings from diverse speakers in over 60 languages. To generate HP dataset, we randomly select 51,758 samples from its _train-en_ split with accent labels, _i.e._, African, Australian, Indian, and Singaporean, where training set contains 49,758 samples and test set contains 2,000 samples.
**Tedlium-3**[35]: Tedlium-3 is a dataset of speech recorded from TED Talks in multiple languages. It contains a diverse range of background noise, speaker accents, speech topics, etc. Considering its large size, we randomly select 50,000 samples from its _train_ split for HP dataset generation, where training set contains 47,500 samples and test set contains 2,500 samples.
**LRS2**[19]: Lip Reading Sentences 2 (LRS2) is a large-scale publicly available labeled audio-visual dataset, consisting of 224 hours of video clips from BBC programs. We randomly select 42,940 samples from its _train_ split as training set, and the remaining 2,259 samples are used for test set.
**ATIS**[34]: Airline Travel Information System (ATIS) is a dataset comprising spoken queries for air travel information, such as flight times, prices, and availability. It contains around 5,000 to 5,400 utterances, which are recorded from around 500 to 550 speakers.
**CORAAL**[47]: The Corpus of Regional African American Language (CORAAL) is the first public corpus of AAL data. It includes audio recordings along with the time-aligned orthographic transcription from over 150 sociolinguistic interviews. To generate HP dataset, we select 1,728 samples as training set and 100 samples as test set.
### HyPoradise (HP) Dataset Statistics
After performing beam search decoding on the selected speech datasets introduced in Section 3.2, we collected more than 334K pairs of hypotheses list and transcription to form the HP dataset, including training and test sets. The statistics for the HP dataset are given in Table 1, which shows the number of pairs and average utterance length in various domains and splits. We would release our generated datasets and kindly call for more hypotheses-transcription pairs toward sustainable community efforts.

| Source | Domain | Training Set | # Pairs | Length | Test Set | # Pairs | Length |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LibriSpeech | Audiobooks | _train-960_ | 88,200 | 33.7 | _test-clean_ | 2,620 | 20.1 |
| LibriSpeech | Audiobooks | _train-960_ | 88,200 | 33.7 | _test-other_ | 2,939 | 17.8 |
| CHiME-4 | Noise | _train_ | 8,738 | 17.0 | _test-real_ | 1,320 | 16.4 |
| WSJ | Business news | _train-si284_ | 37,514 | 17.5 | _dev93_ | 503 | 16.7 |
| WSJ | Business news | _train-si284_ | 37,514 | 17.5 | _eval92_ | 333 | – |
| SwitchBoard | Telephone | _train_ | 36,539 | 11.8 | _eval2000_ | 2,000 | 11.8 |
| CommonVoice | Accented English | _train-accent_ | 49,758 | 10.5 | _test-accent_ | 2,000 | 10.5 |
| Tedlium-3 | TED talk | _train_ | 47,500 | 12.6 | _test_ | 2,500 | 12.6 |
| LRS2 | BBC audio | _train_ | 42,940 | 7.6 | _test_ | 2,259 | 7.6 |
| ATIS | Airline info. | _train_ | 3,964 | 12.4 | _test_ | 809 | 11.3 |
| CORAAL | Interview | _train_ | 1,728 | 24.2 | _test_ | 100 | 24.0 |
| Total | | _train_ | 316,881 | 18.1 | _test_ | 17,383 | 14.1 |

Table 1: HP dataset statistics in terms of the number of hypotheses-transcription pairs and average utterance length in various domains.
## 4 ASR Error Correction from Hypotheses to Transcription
We hereby introduce a hypotheses-to-transcription (H2T) training scheme utilizing the collected HP dataset to enhance ASR performance with LLM integration. With limited labeled data, in-context learning [100] is employed to form task-specific prompts and in-domain demonstrations: Linguistic knowledge in LLM is exploited without parameter tuning. Furthermore, we present two trainable methods fine-tuning (_ft_) and H2T-_LoRA_ to learn the hypotheses-to-transcription mapping when a sufficient amount of labeled data is available.
### Hypotheses-to-Transcription (H2T) Training
In addition to in-context learning, we introduce two parameter-tunable methods to learn hypotheses-to-transcription mapping in a sequence-to-sequence manner: H2T-_ft_ and H2T-_LoRA_.
**H2T-_ft_** denotes fine-tuning all parameters of a neural model with labeled data of each HP domain. Specifically, we introduce a method similar to N-best T5, which utilizes the other hypotheses to improve the 1-best hypothesis as shown in Fig. 3. To constrain the decoding space, we add a new criterion \(\mathcal{L}_{ft}=\sum_{i=1}^{N}\alpha_{i}\log P(x^{(i)}|x,\theta)\), where \(x^{(i)}\) is the \(i\)-th hypothesis in the N-best list. This term encourages the correction model to preferentially consider tokens from the N-best hypotheses list, preventing arbitrary modifications in the huge decoding space. \(\alpha_{i}\) is a hyper-parameter for the \(i\)-th hypothesis that decreases with the rank assigned by the acoustic model.
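A rough sketch of this extra criterion is given below: each hypothesis is scored under a seq2seq model and weighted by a decreasing \(\alpha_{i}\), alongside the usual loss on the ground truth. The checkpoint name, the way hypotheses are concatenated into the source, the \(\alpha_{i}\) schedule, and the 0.1 weight are all illustrative assumptions rather than the paper's exact configuration.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")            # small stand-in for T5-large
model = T5ForConditionalGeneration.from_pretrained("t5-small")

nbest = ["he brought a new car", "he bought a new car", "he bought new cars"]
truth = "he bought a new car"
alphas = [0.5 ** i for i in range(len(nbest))]           # decreasing with ASR rank

source = tok(" ".join(nbest), return_tensors="pt")       # concatenated hypotheses as input

def nll(target: str) -> torch.Tensor:
    """Mean token-level negative log-likelihood of `target` given the N-best input."""
    labels = tok(target, return_tensors="pt").input_ids
    return model(input_ids=source.input_ids,
                 attention_mask=source.attention_mask,
                 labels=labels).loss

loss_truth = nll(truth)                                  # standard fine-tuning objective
loss_nbest = sum(a * nll(h) for a, h in zip(alphas, nbest))
loss = loss_truth + 0.1 * loss_nbest                     # keep corrections near the N-best vocabulary
loss.backward()
```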
**H2T-_LoRA_** avoids tuning the whole set of parameters of a pre-trained model by inserting a neural module with a small number of extra trainable parameters to approximate the full parameter updates, allowing for efficient learning of the H2T mapping without affecting the pre-trained parameters of the LLM. H2T-_LoRA_ introduces trainable low-rank decomposition matrices into the LLM's existing layers, enabling the model to adapt to new data while keeping the original LLM fixed to retain the previous knowledge. Specifically, LoRA performs a reparameterization of each model layer expressed as a matrix multiplication by injecting low-rank decomposition matrices (Fig. 3 (b)). As a result, the representations generated by the LLM are not distorted due to task-specific tuning, while the adapter module acquires the capability to predict the true transcription from the N-best hypotheses.

Figure 3: (a) Structure of H2T-_ft_. (b) Reparametrization in H2T-_LoRA_. Solid boxes denote modules that are fixed during tuning while dashed boxes denote trainable ones. Blue color denotes that the weights have been pre-trained on another dataset.

Figure 2: A scalable evaluation of Task-Activating Prompting [100] (TAP) based in-context learning. The demonstration in the blue box is drawn from the training set, which is optional for the LLM's input.
Benefiting from efficient training, we can employ a large-scale language model in the H2T-_LoRA_ method, which is expected to understand the task description and capture correlation in the N-best list. Meanwhile, instead of adding an extra training objective in H2T-_ft_, we constrain the decoding space of H2T-_LoRA_ by adding requirement in task description.
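A minimal PyTorch sketch of the LoRA reparametrization used by H2T-_LoRA_ is shown below: a frozen pre-trained weight is augmented with a trainable low-rank update \(BA\), so only \(r\times(d_{in}+d_{out})\) extra parameters are learned per adapted layer. The rank, scaling, and layer size are illustrative choices, not the exact adapter configuration of this work.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update (B @ A)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # keep pre-trained weights frozen
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
y = layer(torch.randn(2, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(y.shape, "trainable params:", trainable)
```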
## 5 Experimental Results
### Language Models Configurations
**T5** (0.75B\(\sim\)3B): T5 family [72] is a set of encoder-decoder models pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format. T5 works well on a variety of tasks out-of-the-box by prepending a different prefix to the input corresponding to each task, e.g., for machine translation or text summarization. In this paper, we select T5-_large_ (0.75B) as the correction model in H2T-_ft_ method.
**LLaMA** (7B\(\sim\)65B): Proposed by Meta AI, LLaMA [84] is a collection of foundation language models ranging from 7B, 13B, 30B, and 65B parameters. It is trained on publicly available datasets exclusively, and shows remarkable efficiency on NLP benchmarks. We select LLaMA-13B for LoRA adaptation in H2T-_LoRA_ method as one best setup under ablations.
**GPT-3.5** (175B): Proposed by OpenAI, GPT-3.5-turbo is one of the most advanced large language models, which powers the popular ChatGPT. It has been optimized from the GPT-3 [5] for chat purposes but works well for traditional completions tasks as well. We utilize GPT-3.5-turbo in task-activated in-context learning [100], which conduct _zero-shot_ and _few-shot_ learning experiments with designed task prompt.
### Training and Evaluation
For _few-shot_ settings, the specific task prompts, along with the LLM's responses from task-activated ICL prompting [100], are provided in the Appendix (page 20). For _fine-tuning_ setting, the detailed configuration of H2T-_ft_ and H2T-_LoRA_ are also explained in Appendix. Furthermore, we release some of the pre-trained correction models to allow interested readers to reproduce our results.
We report WER results as the evaluation metric for all methods. Additionally, we report two oracle WERs for comparison, which are 1) the n-best oracle \(o_{nb}\): the WER of the "best candidate" in the N-best hypotheses list, and 2) the compositional oracle \(o_{cp}\): the achievable WER using "all tokens" in the N-best hypotheses list. The \(o_{nb}\) can be viewed as the upper-bound performance of re-ranking based methods, while \(o_{cp}\) denotes the upper bound of correction using only elements that occur in the list.
| Test Set | Baseline | LM\(_{rank}\) | H2T-_ft_ (T5) | H2T-_ft_ (LLaMA) | H2T-_LoRA_ (T5) | H2T-_LoRA_ (LLaMA) | \(o_{nb}\) | \(o_{cp}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| WSJ | 4.5 | 4.3 | 4.0 | 3.8 | 2.7 (−40.0%) | **2.2** (−51.1%) | 4.1 | 1.2 |
| ATIS | 8.3 | 6.9 | 2.7 | 3.4 | **1.7** (−79.5%) | 1.9 (−77.1%) | 5.2 | 1.1 |
| CHiME-4 | 11.1 | 11.0 | 7.9 | 8.2 | 7.0 (−36.9%) | **6.6** (−40.5%) | 9.1 | 2.8 |
| Tedlium-3 | 8.5 | 8.0 | 6.6 | 5.2 | 7.4 (−12.9%) | **4.6** (−45.9%) | 3.0 | 0.7 |
| CV-_accent_ | 14.8 | 16.0 | 12.9 | 15.5 | 11.0 (−25.7%) | **11.0** (−25.7%) | 11.4 | 7.9 |
| SwitchBoard | 15.7 | 15.4 | 15.9 | 18.4 | 14.9 (−5.1%) | **14.1** (−10.2%) | 12.6 | 4.2 |
| LRS2 | 10.1 | 9.6 | 9.5 | 10.2 | **6.6** (−34.7%) | 8.8 (−12.9%) | 6.9 | 2.6 |
| CORAAL | 21.4 | 21.4 | 23.1 | 22.9 | 20.9 (−2.3%) | **19.2** (−10.3%) | 21.8 | 10.7 |
Table 2: WER (%) results of H2T-_ft_ and H2T-_LoRA_ in _fine-tuning_ setting. "\(o_{nb}\)" and "\(o_{cp}\)" respectively denote n-best oracle and compositional oracle that are defined in 5.2.
### Results of H2T-_ft_ and H2T-_LoRA_
We first report the WER results for H2T-_ft_ and H2T-_LoRA_ in the _fine-tuning_ setting, where the training set of HP is available to learn the H2T mapping. Whisper is employed as the acoustic model for hypotheses generation, and a vanilla language model \(LM_{rank}\) is trained using the in-domain transcriptions of the training set and then used to re-rank the hypotheses according to perplexity. From Table 2, we observe that 1) correction techniques achieve significant performance gains in specific scenarios, where H2T-_LoRA_ reduces the relative WER by 77.1% on ATIS and 51.1% on WSJ. 2) The WER performances on CHiME-4 and CV-_accent_ demonstrate that the proposed correction methods improve the robustness to background noise and speaker accent. Additionally, H2T-_LoRA_ on these two datasets surpasses the upper bound of re-ranking based methods, i.e., \(o_{nb}\). 3) In general, H2T-_LoRA_ usually generates better WER results than H2T-_ft_, as the low-rank adapter allows LLMs to keep pre-trained knowledge and avoid the over-fitting problem.
**Limitation and Failure Studies.** We notice that an over-fitting phenomenon exists in our correction techniques, especially in H2T-_ft_ where all parameters are tunable. Furthermore, the mean and variance of the utterance length can potentially influence the WER result, since the H2T-_ft_ results on CORAAL (long-form speech) and SwitchBoard (large variance in length) both fail to enhance ASR performance. On LibriSpeech, when the WER is low (1.8% by WavLM), there is less room to correct recognition errors with the proposed framework. The experimental results and representative failure cases can be found in Appendix Table 6 and Table 7. Given the evidence of ample room for further performance improvement, our proposal thus serves as an appropriate benchmark to assess the contribution of current and future LLMs to ASR.
In the _zero-shot_ and _few-shot_ settings, Whisper is employed as the acoustic model, and GPT-3.5 serves as the LLM for correction. We mainly consider common domain shifts of application: specific scenario, common background noise, and speaker accent, where 5-best hypotheses are selected as context input. From Table 3, we can observe that: (1) Without any in-domain data, the LLM can benefit ASR results based on the hypotheses list. This performance gain mainly relies on the linguistic knowledge of the LLM and the task-activating [100] descriptions (e.g., chains of task hints) in the pipeline. (2) A few in-domain pairs effectively enhance the performance gain in terms of WER. From the final output of the reasoning process, we find that the LLM attempts to summarize the regularity from the demonstration and then apply it to the given test example. (3) Leveraging its vast knowledge base, the LLM can even correct missing tokens that are absent from the hypotheses list based on context information.
To illustrate the third observation, we conduct a case study on WSJ-_dev93_ in Table 4. According to the ground-truth transcription, two errors (shown in red) are included in the \(1^{st}\) hypothesis, where "petro chemical" is wrongly recognized as two tokens, perhaps due to the speaking style of the speaker. The LLM corrects this error since "petrochemical" can be found in the \(2^{nd}\) hypothesis. However, "Sinopec" is unseen during ASR training, leading it to be recognized as spurious tokens such as "xinepec" in the hypotheses. In this case, the LLM shows human-like correction - it successfully infers the correct token based on the pronunciation of "xinepec", as well as the context of "China's petrochemical". In fact, Sinopec is a petrochemical-related Chinese company.
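For concreteness, the sketch below shows one way such a correction prompt can be assembled from a 5-best list with an optional in-domain demonstration; the instruction wording and example strings are illustrative placeholders, not the exact TAP prompts of this paper (those are listed in its Appendix).

```python
def build_prompt(nbest, demo=None):
    """Assemble an in-context error-correction prompt from an N-best list."""
    lines = ["Below are the 5-best ASR hypotheses of one utterance.",
             "Report the most likely true transcription, reusing tokens from the "
             "hypotheses whenever possible."]
    if demo is not None:                          # few-shot: one in-domain demonstration
        demo_hyps, demo_truth = demo
        lines.append("Example hypotheses: " + " / ".join(demo_hyps))
        lines.append("Example transcription: " + demo_truth)
    lines += [f"{i + 1}. {h}" for i, h in enumerate(nbest)]
    lines.append("Transcription:")
    return "\n".join(lines)

# Hypothetical hypotheses loosely modelled on the case study above.
nbest = ["china's xinepec got the contract",
         "china's petrochemical got the contract",
         "chinas xinepec got the contract",
         "china's xinepec got the contact",
         "china's sinopec got the contract"]
print(build_prompt(nbest))
```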
### Additional Discussion
**Effect on Spoken Language Intent Detection.** We examine the effect of error correction on a downstream task of spoken intent detection [80] (SID). To this end, we reproduce a BERT-based SID model [13] and respectively feed it the 1-best utterance and the utterance corrected by H2T-_LoRA_ for comparison. The ablation results on the ATIS dataset are reported in the Appendix, which shows that our correction technique can also benefit the SID task in terms of detection accuracy: LLM correction based on N-best hypotheses effectively enhances the downstream SID result, achieving comparable accuracy to using the ground-truth transcription (97.4% _v.s._ 97.9%).
**Zero-shot Prompting Results.** We finally report an initial prompting evaluation on CHiME-4 in the _zero-shot_ setting. Considering the task difficulty, T5 and LLaMA are employed for hypothesis correction. For comparison, we also provide the correction results using a far smaller GPT-2 (1.5B) with a 5-gram LM baseline trained on in-domain transcriptions. We used LLaMA 13B to perform these zero-shot error correction tasks. Using the test set extracted from Whisper, we observed that the zero-shot method did not yield improved results on CHiME-4 (11.5 \(\pm\) 0.5%) and CV-accent (14.9% \(\pm\) 1.5%). This zero-shot pipeline performed less stably on the other test sets discussed in Table 2, which we consider a failure case with a standard deviation exceeding an absolute value of 10% in terms of WER. For T5-based error correction, we noticed that the method also failed to perform zero-shot error correction using the 0.75B model.
**Future work**. We find that LLMs potentially perceive acoustic information during pre-training, as they tend to perform error correction using tokens with similar pronunciation. Therefore, our first future work is including more acoustic information in HP dataset, such as token-level confidence provided by ASR engine. Furthermore, considering different data amount of each domain, more parameter-efficient training methods besides low-rank adaptation should be discussed for LLMs tuning [54], e.g., model reprogramming [102; 31], prompting [10] and cross-modal adaptation [97; 101; 71].
## 6 Conclusion
To explore the benefits of speech-language co-learning, this work introduces a new ASR benchmark that utilizes LLMs for transcription prediction from N-best hypotheses. Our benchmark contains a new HP dataset consisting of more than 334K hypotheses-transcription pairs that are collected from 9 different public ASR corpora. In _few-shot_ settings, we demonstrate that LLMs with in-context learning can serve as a plug-and-play back end to effectively alleviate the domain shift of ASR. In the _fine-tuning_ setting, our proposed error correction technique based on LLMs achieves better WER performance than the upper bound of re-ranking based methods, which provides a new paradigm for applying ASR in some challenging conditions, such as background noise and speaker accent. We believe our benchmark and findings provide new and unique insights into LLM-enhanced ASR.
2301.13552 | Minimal Left-Right Symmetric Model with $A_4$ modular symmetry | In this paper, we have realized the left-right symmetric model with modular
symmetry. We have used $\Gamma$(3) modular group which is isomorphic to
non-abelian discrete symmetry group $A_4$. The advantage of using modular
symmetry is the non-requirement for the use of extra particles called
'flavons'. In this model, the Yukawa couplings are expressed in terms of
modular forms $(Y_1,Y_2,Y_3)$. In this work, we have studied minimal Left-Right
Symmetric Model for both type-I and type-II dominances. Here, we have
calculated the values for the Yukawa couplings and then plotted it against the
sum of the neutrino masses. The results obtained are well within the
experimental limits for the desired values of sum of neutrino masses. We have
also briefly analyzed the effects of the implications of modular symmetry on
neutrinoless double beta decay with the new physics contributions within
Left-Right Symmetric Model. | Ankita Kakoti, Bichitra Bijay Boruah, Mrinal Kumar Das | 2023-01-31T11:04:01Z | http://arxiv.org/abs/2301.13552v1 | # Minimal Left-Right Symmetric Model with \(A_{4}\) modular symmetry
###### Abstract
In this paper, we have realized the left-right symmetric model with modular symmetry. We have used \(\Gamma(3)\) modular group which is isomorphic to non-abelian discrete symmetry group \(A_{4}\). The advantage of using modular symmetry is the non-requirement for the use of extra particles called 'flavons'. In this model, the Yukawa couplings are expressed in terms of modular forms \((Y_{1},Y_{2},Y_{3})\). In this work, we have studied minimal Left-Right Symmetric Model for both type-I and type-II dominances. Here, we have calculated the values for the Yukawa couplings and then plotted it against the sum of the neutrino masses. The results obtained are well within the experimental limits for the desired values of sum of neutrino masses. We have also briefly analyzed the effects of the implications of modular symmetry on neutrinoless double beta decay with the new physics contributions within Left-Right Symmetric Model.
Introduction
Despite the huge and continued success of the Standard Model (SM) of particle physics, it leaves some of the puzzles unanswered like the existence of neutrino masses, baryon asymmetry of the universe, existence of dark matter etc. The discovery of neutrino oscillation by Sudbury neutrino observatory and Super-Kamiokande experiments was a milestone discovery in the area of neutrino physics. The experiments like MINOS [1], T2K [2], Daya-Bay [3], Double-Chooz [4], RENO [5] etc. provided evidence on the neutrinos being massive which is one of the most compelling revelation that we need to go beyond Standard Model. However inspite of the huge achievements in determining the neutrino oscillation parameters in solar, atmospheric, reactor and accelerator neutrino experiments, many questions related to neutrino still remain unsolved. Among these lies the question regarding the absolute mass scale of neutrinos, exact nature of the particle (Dirac or Majorana), hierarchical pattern of the mass spectrum (Normal or Inverted) and leptonic CP violation. The absolute mass scale of the neutrinos is not yet known. However experiments like Planck has given an upper bound on the sum of the light neutrino masses to be \(\Sigma|m_{\nu_{i}}|<0.23eV\) in 2012 [6] and recently the bound has been constrained to \(\Sigma|m_{\nu_{i}}|<0.11eV\)[7]. The most successful data pertaining to neutrino oscillation parameters is found in the \(3\sigma\) global fit data [8] as shown in table (1).
| Parameters | Normal Ordering | Inverted Ordering |
| --- | --- | --- |
| \(\Delta m_{21}^{2}\) (\(10^{-5}\,eV^{2}\)) | 6.82 \(\rightarrow\) 8.04 | 6.82 \(\rightarrow\) 8.04 |
| \(\Delta m_{3l}^{2}\) (\(10^{-3}\,eV^{2}\)) | 2.435 \(\rightarrow\) 2.598 | \(-2.581\rightarrow-2.414\) |
| \(\sin^{2}\theta_{12}\) | 0.264 \(\rightarrow\) 0.343 | 0.269 \(\rightarrow\) 0.343 |
| \(\sin^{2}\theta_{23}\) | 0.415 \(\rightarrow\) 0.616 | 0.419 \(\rightarrow\) 0.617 |
| \(\sin^{2}\theta_{13}\) | 0.02032 \(\rightarrow\) 0.02410 | 0.02052 \(\rightarrow\) 0.02428 |

Table 1: Global fit \(3\sigma\) values for neutrino oscillation parameters.
We have used the definition,
\[\Delta m^{2}_{3l}=\Delta m^{2}_{31};\Delta m^{2}_{31}>0;NO \tag{1.1}\]
\[\Delta m^{2}_{3l}=\Delta m^{2}_{32};\Delta m^{2}_{32}<0;IO \tag{1.2}\]
The simplest way to look for neutrino masses is by the seesaw mechanism. The mechanism may be of type I [9], [10],type II [11], [12],type III [13] and Inverse Seesaw [14]. These are extensions of the SM where we incorporate extra particles like right-handed fermions,scalar fermion triplets, gauge singlet neutral fermions etc. The BSM physics also sheds light upon the phenomena like baryon asymmetry of the universe (BAU) [15], Lepton Number Violation (LNV) [16], Lepton Flavor violation (LFV) [17], existence of dark matter [18], [19] etc. A BSM framework which has been successful in explaining the first three of the phenomenologies is the Left- Right Symmetric Model (LRSM) [20; 21; 22; 23; 24], an extension of the SM corresponding to the addition of \(SU(2)_{R}\) group into the theory. The gauge group of LRSM is \(SU(3)_{C}\otimes SU(2)_{R}\otimes SU(2)_{L}\otimes U(1)_{B-L}\). The type I and type II seesaw masses appear naturally in the model. The right-handed neutrinos are an essential part of the model, which acquires Majorana mass when \(SU(2)_{R}\) symmetry is broken. LRSM provides a natural framework to understand the spontaneous breaking of parity and origin of small neutrino masses by seesaw mechanism [25].
Another concerning aspect is the ambiguity regarding the nature of neutrinos, which is not predicted by the SM of particle physics: whether neutrinos are Dirac or Majorana fermions. This problem is directly connected to the issue of lepton number conservation. One of the processes of fundamental importance which arises in almost any extension of the SM is Neutrinoless Double Beta Decay (NDBD) [26], [27], which, when verified, can assure that neutrinos are Majorana fermions. NDBD is a slow, radiative process that transforms a nuclide of atomic number Z into its isobar with atomic number Z+2 [28],
\[N(A,Z)\to N(A,Z+2)+e^{-}+e^{-} \tag{1.3}\]
The main aim in the search for NDBD (\(0\nu\beta\beta\)) is the measurement of the effective Majorana neutrino mass, which is a combination of the neutrino mass eigenstates and neutrino mixing matrix terms [28]. However, no experimental evidence of the decay has been found to date. In addition to the determination of the effective mass, from the half-life of the decay [29], combined with sufficient knowledge of the nuclear matrix elements (NME), we can set a constraint on the neutrino masses. Experiments like KamLAND-Zen [30] and GERDA [31], which use Xenon-136 and Germanium-76 respectively, have improved the lower bound on the half-life of the decay process. Currently, KamLAND-Zen imposes the best lower limit on the half-life as \(T_{1/2}^{0\nu}>1.07\times 10^{26}\) yr at 90 % CL and the corresponding upper limit on the effective Majorana mass in the range (0.061-0.165) eV. There are several contributions in LRSM that appear due to additional RH current interactions, giving rise to sizeable LFV rates for TeV-scale RH neutrinos that are accessible in current experiments. It has been found that the most significant constraints have been provided by the decays \(\mu\to 3e\) and \(\mu\rightarrow\gamma e\). In the Standard Model, these LFV decays are suppressed by the tiny neutrino masses. No experiment has so far observed any flavor violating processes involving charged leptons. However, many experiments are currently going on to set strong limits on the most relevant LFV observables that will constrain the parameter space of many new models. The best bound on the branching ratio for LFV decays of the form \(\mu\rightarrow\gamma e\) comes from the MEG experiment and is set at \(BR(\mu\rightarrow\gamma e)<4.2\times 10^{-13}\). In the case of the decay \(\mu\to 3e\), the bound is set by the SINDRUM experiment at \(BR(\mu\to 3e)<1.0\times 10^{-12}\).
As mentioned, LRSM is an important theory that incorporates the above-mentioned phenomenologies, i.e., the phenomenologies related to neutrinos. There are many works where the authors make use of discrete symmetry groups like \(A_{4}\) [32], \(S_{4}\) [33], \(Z_{2}\) etc. [34] to analyze the problem of the flavor structure of fermions and to study various related phenomenologies. In our work, we have used \(A_{4}\) modular symmetry to study neutrino masses and mixings and hence study Neutrinoless Double Beta Decay within the model. The advantage of using modular symmetry over discrete flavor symmetries is that the study of the model using symmetries can be done without the introduction of extra particles called 'flavons'. Hence the model is minimal.
However, in this work we have not done a very detailed analysis of the above mentioned phenomenologies, but only realized the left-right symmetric model with the help of \(A_{4}\) modular symmetry and studied the variations of new physics contributions of neutrinoless double beta decay within LRSM with the range of values for Yukawa couplings, which in our model is expressed as modular forms. In section (II), we have given a detailed explanation of the left-right symmetric model, the associated Lagrangian and the mass terms. We begin section (III) by introducing
modular symmetry and then in section (IV), we incorporate modular symmetry into LRSM and determine the associated mass matrices. In section (V), we present a very brief discussion of neutrinoless double beta decay and its associated contributions and their relations with the modular forms. In section (VI), the numerical analysis and results of this work have been discussed, and the last section presents the conclusion of the present work.
## II Minimal Left-Right Symmetric Model
The Left-Right Symmetric Model (LRSM) was first introduced around 1974 by Pati and Salam. Rabindra N. Mohapatra and Goran Senjanovic were also some of the pioneers of this very elegant theory. LRSM is an extension of the Standard Model of particle physics, the gauge group being \(SU(3)_{C}\otimes SU(2)_{R}\otimes SU(2)_{L}\otimes U(1)_{B-L}\), which has been studied by several groups since the 1970's [25], [21; 22; 23; 24]. The usual type-I and type-II seesaw neutrino masses arise naturally in the model. The seesaw scale is identified by the breaking of \(SU(2)_{R}\). Some other problems are also addressed in LRSM, like parity violation, CP violation in weak interactions, massive neutrinos, hierarchy problems, etc. LRSM removes the disparity between the left- and right-handed fields by considering the RH fields to be doublets under the additional \(SU(2)_{R}\), keeping the right-sector couplings the same as the left ones by left-right symmetry. In this model, the electric charge is given by \(Q=T_{3L}+T_{3R}+\frac{B-L}{2}\), where \(T_{3L}\) and \(T_{3R}\) are the generators of \(SU(2)_{L}\) and \(SU(2)_{R}\) respectively, and \(B-L\) refers to baryon number minus lepton number. The particle content of the model along with the respective charge assignments is given in Table III. The matrix representation for the scalar sector is given by,
\[\phi=\begin{pmatrix}\phi_{1}^{0}&\phi_{1}^{+}\\ \phi_{2}^{-}&\phi_{2}^{0}\end{pmatrix} \tag{2.1}\]
\[\Delta_{L,R}=\begin{pmatrix}\frac{\delta_{L,R}^{+}}{\sqrt{2}}&\delta_{L,R}^{++}\\ \delta_{L,R}^{0}&-\frac{\delta_{L,R}^{+}}{\sqrt{2}}\end{pmatrix} \tag{2.2}\]
In order for the fermions to attain mass, a Yukawa Lagrangian is necessary which couples to the bidoublet \(\phi\). The Yukawa Lagrangian incorporating the bidoublet is given by,
\[\mathcal{L}_{\mathcal{D}}=\overline{l_{iL}}(Y_{ij}^{l}\phi+\widetilde{Y_{ij}^{l}}\widetilde{\phi})l_{jR}+\overline{Q_{iL}}(Y_{ij}^{q}\phi+\widetilde{Y_{ij}^{q}}\widetilde{\phi})Q_{jR}+h.c. \tag{2.3}\]
where, \(l_{L}\) and \(l_{R}\) are the left-handed and right-handed lepton fields, \(Q_{L}\) and \(Q_{R}\) are the left-handed and right-handed quark fields. \(Y^{l}\) being the Yukawa coupling corresponding to leptons and \(Y^{q}\) being the Yukawa coupling for the quarks. The Yukawa Lagrangian incorporating the scalar triplets which play a role in providing Majorana mass to the neutrinos is given by,
\[{\cal L_{M}}=f_{L,ij}{\Psi_{L,i}}^{T}Ci\sigma_{2}\Delta_{L}\Psi_{L,j}+f_{R,ij}{ \Psi_{R,i}}^{T}Ci\sigma_{2}\Delta_{R}\Psi_{R,j}+h.c \tag{2.4}\]
\(f_{L}\) and \(f_{R}\) are the Majorana Yukawa couplings and are equal, subject to discrete left-right symmetry. The scalar potential of LRSM is a combination of interaction terms among the scalar fields, and after spontaneous symmetry breaking the scalars attain the VEVs,
\[<\Delta_{L,R}>=\frac{1}{\sqrt{2}}\begin{pmatrix}0&0\\ v_{L,R}&0\end{pmatrix} \tag{2.5}\]
\[<\phi>=\begin{pmatrix}k&0\\ 0&e^{i\theta}k^{\prime}\end{pmatrix} \tag{2.6}\]
The magnitudes of the VEVs obey the relation \(|v_{L}|^{2}<|k^{2}+{k^{\prime}}^{2}|<|v_{R}|^{2}\). The breaking of the LRSM gauge group takes place in two steps: the LRSM gauge group is first broken down to the Standard Model gauge group by the VEV of the scalar triplet \(\Delta_{R}\), and the Standard Model gauge group is then broken down to the electromagnetic gauge group \(U(1)_{em}\) by the VEV of the bidoublet together with a tiny VEV of the scalar triplet \(\Delta_{L}\).
The Dirac mass terms for the leptons come from the Yukawa Lagrangian, which for the charged leptons and the neutrinos are given by,
\[M_{l}=\frac{1}{\sqrt{2}}(k^{\prime}Y_{l}+k\tilde{Y_{l}}) \tag{2.7}\]
\[M_{D}=\frac{1}{\sqrt{2}}(kY_{l}+k^{\prime}\tilde{Y_{l}}) \tag{2.8}\]
The light neutrino mass after spontaneous symmetry breaking (SSB), generated within a type (I+II) seesaw can be written as,
\[M_{\nu}=M_{\nu}{}^{I}+M_{\nu}{}^{II}, \tag{2.9}\]
\[M_{\nu}=M_{D}M_{RR}{}^{-1}M_{D}{}^{T}+M_{LL} \tag{2.10}\]
where,
\[M_{LL}=\sqrt{2}v_{L}f_{L} \tag{11}\]
and,
\[M_{RR}=\sqrt{2}v_{R}f_{R} \tag{12}\]
The first and second terms in equation (2.10) correspond to the type-I and type-II seesaw masses respectively. Interestingly, in the context of LRSM both terms can be equally dominant, or either of the two can dominate under certain conditions [35; 36]; this is demonstrated in Appendix A. In LRSM, moreover, both the type-I and type-II mass terms can be expressed in terms of the heavy right-handed Majorana mass matrix, so equation (2.10) becomes
\[M_{\nu}=M_{D}M_{RR}^{-1}M_{D}^{T}+\gamma\Bigg{(}\frac{M_{W}}{v_{R}}\Bigg{)}^{2 }M_{RR} \tag{13}\]
where, \(\gamma\) is a dimensionless parameter which is a function of various couplings, appearing in the VEV of the triplet Higgs \(\Delta_{L}\), i.e., \(v_{L}=\gamma(\frac{v^{2}}{v_{R}})\) and here, \(v=\sqrt{k^{2}+k^{\prime 2}}\), and
\[\gamma=\frac{\beta_{1}kk^{\prime}+\beta_{2}k^{2}+\beta_{3}k^{\prime 2}}{(2 \rho_{1}-\rho_{3})(k^{2}+k^{\prime 2})} \tag{14}\]
In our model, the dimensionless parameter \(\gamma\) has been fine tuned to \(\gamma\approx 10^{-6}\) and \(v_{R}\) is of the order of \(TeV\).
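To illustrate the interplay of the two seesaw terms numerically, the short Python sketch below combines a Dirac matrix and a heavy right-handed Majorana matrix into the light neutrino mass matrix of equation (13). This snippet is added for illustration only; the numerical values of \(v_{R}\), \(\gamma\), \(M_{W}\) and the matrix entries are placeholder assumptions, not the values fitted in this work.

```python
import numpy as np

# Placeholder inputs (assumptions for illustration only).
v_R   = 5.0e12      # SU(2)_R breaking VEV in eV (a few TeV)
M_W   = 80.379e9    # W boson mass in eV
gamma = 1.0e-6      # fine-tuned dimensionless parameter of eq. (14)

# Hypothetical Dirac and right-handed Majorana mass matrices (eV).
M_D  = 1.0e5 * np.array([[ 2.0, -0.3, -0.1],
                         [-0.1, -1.0,  0.6],
                         [-0.3,  0.6, -1.0]])
M_RR = v_R * np.array([[ 2.0, -0.3, -0.1],
                       [-0.3,  0.6, -1.0],
                       [-0.1, -1.0,  0.6]])

# Type-I + type-II light neutrino mass matrix, eq. (13).
M_nu_I  = M_D @ np.linalg.inv(M_RR) @ M_D.T
M_nu_II = gamma * (M_W / v_R) ** 2 * M_RR
M_nu    = M_nu_I + M_nu_II

print("type-I block (eV):\n", M_nu_I)
print("type-II block (eV):\n", M_nu_II)
print("light neutrino mass matrix (eV):\n", M_nu)
```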
## III Modular symmetry
Modular symmetry has gained much importance in model building [37], [38], because it minimizes the need for the extra particles called 'flavons' when a model is analyzed with respect to a particular symmetry group. An element \(q\) of the modular group acts on a complex variable \(\tau\), belonging to the upper half of the complex plane, as [38][39]
\[q\tau=\frac{a\tau+b}{c\tau+d} \tag{15}\]
where \(a,b,c,d\) are integers and \(ad-bc=1\), Im\(\tau\)\(>\)0.
The modular group is isomorphic to the projective special linear group PSL(2,Z) = SL(2,Z)/\(Z_{2}\) where, SL(2,Z) is the special linear group of integer \(2\times 2\) matrices having determinant unity and \(Z_{2}=(I,-I)\) is the centre, \(I\) being the identity element. The modular group can be represented in terms of two generators \(S\) and \(T\) which satisfies \(S^{2}=(ST)^{3}=I\). \(S\) and \(T\) satisfies the following matrix representations:
\[S=\begin{pmatrix}0&1\\ -1&0\end{pmatrix} \tag{3.2}\]
\[T=\begin{pmatrix}1&1\\ 0&1\end{pmatrix} \tag{3.3}\]
corresponding to the transformations,
\[S:\tau\rightarrow-\frac{1}{\tau};T:\tau\rightarrow\tau+1 \tag{3.4}\]
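As a quick cross-check of the generator algebra quoted above, the following Python snippet (added for illustration) verifies that \(S^{2}\) and \((ST)^{3}\) reduce to the identity up to the overall sign that is quotiented out in PSL(2,Z), and applies the corresponding Möbius maps to a sample point \(\tau\) in the upper half-plane.

```python
import numpy as np

S = np.array([[0, 1], [-1, 0]])
T = np.array([[1, 1], [0, 1]])
I = np.eye(2, dtype=int)

# S^2 = -I and (ST)^3 = I; -I is identified with I in PSL(2,Z).
assert np.array_equal(S @ S, -I)
assert np.array_equal(np.linalg.matrix_power(S @ T, 3), I)

def moebius(g, tau):
    """Apply the transformation tau -> (a*tau + b)/(c*tau + d)."""
    (a, b), (c, d) = g
    return (a * tau + b) / (c * tau + d)

tau = 0.3 + 1.1j  # sample point in the upper half-plane (assumed value)
print("S:", moebius(S, tau), "vs -1/tau:", -1 / tau)
print("T:", moebius(T, tau), "vs tau+1 :", tau + 1)
```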
Finite modular groups (N \(\leq\) 5) are isomorphic to non-abelian discrete groups, for example \(\Gamma(2)\approx S_{3}\), \(\Gamma(3)\approx A_{4}\), \(\Gamma(4)\approx S_{4}\). When modular symmetry is used, the Yukawa couplings can be expressed in terms of modular forms, and the number of independent modular forms depends upon the level and weight. For a modular form of level N and weight 2k, the table below lists the number of modular forms and the non-abelian discrete symmetry group to which \(\Gamma(N)\) is isomorphic [39].
\begin{table}
\begin{tabular}{|c|c|c|} \hline N & No. of modular forms & \(\Gamma(N)\) \\ \hline
2 & k + 1 & \(S_{3}\) \\ \hline
3 & 2k + 1 & \(A_{4}\) \\ \hline
4 & 4k + 1 & \(S_{4}\) \\ \hline
5 & 10k + 1 & \(A_{5}\) \\ \hline
6 & 12k & \\ \hline
7 & 28k - 2 & \\ \hline \end{tabular}
\end{table}
Table II: No. of modular forms corresponding to modular weight 2k.
In our work, we will be using modular form of level 3, that is, \(\Gamma(3)\) which is isomorphic to \(A_{4}\) discrete symmetry group. The weight of the modular form is taken to be 2, and hence it will have three modular forms \((Y_{1},Y_{2},Y_{3})\) which can be expressed as expansions of q given by,
\[Y_{1}=1+12q+36q^{2}+12q^{3}+84q^{4}+72q^{5}+36q^{6}+96q^{7}+180q^{8}+12q^{9}+216 q^{10} \tag{3.5}\]
\[Y_{2}=-6q^{1/3}(1+7q+8q^{2}+18q^{3}+14q^{4}+31q^{5}+20q^{6}+36q^{7}+31q^{8}+56q ^{9}) \tag{3.6}\]
\[Y_{3}=-18q^{2/3}(1+2q+5q^{2}+4q^{3}+8q^{4}+6q^{5}+14q^{6}+8q^{7}+14q^{8}+10q^{9}) \tag{3.7}\]
where, \(q=\exp(2\pi i\tau)\).
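For readers who wish to reproduce the numerical values of the couplings, the following Python sketch evaluates the truncated \(q\)-expansions of equations (3.5)-(3.7) at a chosen modulus. The sample value of \(\tau\) is an arbitrary assumption used only for illustration; it is not the fitted modulus of our analysis.

```python
import numpy as np

def yukawa_modular_forms(tau):
    """Truncated q-expansions of the level-3, weight-2 forms (Y1, Y2, Y3)."""
    q = np.exp(2j * np.pi * tau)
    c1 = [1, 12, 36, 12, 84, 72, 36, 96, 180, 12, 216]
    c2 = [1, 7, 8, 18, 14, 31, 20, 36, 31, 56]
    c3 = [1, 2, 5, 4, 8, 6, 14, 8, 14, 10]
    Y1 = sum(c * q**n for n, c in enumerate(c1))
    Y2 = -6 * q**(1 / 3) * sum(c * q**n for n, c in enumerate(c2))
    Y3 = -18 * q**(2 / 3) * sum(c * q**n for n, c in enumerate(c3))
    return Y1, Y2, Y3

# Sample modulus in the upper half-plane (illustrative assumption).
tau = 0.25 + 1.2j
for name, Y in zip(("Y1", "Y2", "Y3"), yukawa_modular_forms(tau)):
    print(name, "=", Y, "  |Y| =", abs(Y))
```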
## IV Minimal LRSM with \(A_{4}\) modular symmetry
In particle physics, symmetries have always played a crucial role. The realization of LRSM with the help of discrete flavor symmetries has been carried out in earlier works [40], [41]. In our work we incorporate \(A_{4}\) modular symmetry into LRSM; the advantage over ordinary flavor symmetry is the minimal use of extra particles (flavons), which keeps the model minimal. The model contains the usual particle content of LRSM [42]. The lepton doublets transform as triplets under \(A_{4}\), while the bidoublet and the scalar triplets transform as singlets under \(A_{4}\)[43]. Since we consider modular symmetry, we also assign modular weights to the particles, keeping in mind that the matter multiplets of the model can carry negative modular weights but the modular forms cannot; the weights are assigned such that the sum of modular weights in each term of the Lagrangian is zero. The modular weight of each particle is shown in table (III). With reference to the Yukawa Lagrangian of LRSM in equations (2.3) and (2.4), the Yukawa Lagrangian of our \(A_{4}\) modular symmetric LRSM for the fermionic sector, with the Yukawa couplings written as modular forms \(Y\), is given by
\[\mathcal{L_{Y}}=\overline{l_{L}}\phi l_{R}Y+\overline{l_{L}}\tilde{\phi}l_{R}Y +\overline{Q_{L}}\phi Q_{R}Y+\overline{Q_{L}}\tilde{\phi}Q_{R}Y+{l_{R}}^{T}Ci \tau_{2}\Delta_{R}l_{R}Y+{l_{L}}^{T}Ci\tau_{2}\Delta_{L}l_{L}Y \tag{4.1}\]
The Yukawa couplings \(Y=(Y_{1},Y_{2},Y_{3})\) are expressed as modular forms of level 3.
In our work, we are concerned with the mass of the neutrinos and as such, using \(A_{4}\) modular symmetry and using the multiplication rules for \(A_{4}\) group, we construct the Dirac and Majorana mass matrices as given below. The Dirac mass matrix is given by,
\[M_{D}=v\begin{pmatrix}2Y_{1}&-Y_{3}&-Y_{2}\\ -Y_{2}&-Y_{1}&2Y_{3}\\ -Y_{3}&2Y_{2}&-Y_{1}\end{pmatrix} \tag{4.2}\]
where, \(v\) is considered to be the VEV for the Higgs bidoublet.
The right-handed Majorana mass matrix is given by,
\[M_{R}=v_{R}\begin{pmatrix}2Y_{1}&-Y_{3}&-Y_{2}\\ -Y_{3}&2Y_{2}&-Y_{1}\\ -Y_{2}&-Y_{1}&2Y_{3}\end{pmatrix} \tag{4.3}\]
\begin{table}
\begin{tabular}{|c|c|} \hline & Y (modular forms) \\ \hline \(A_{4}\) & 3 \\ \hline \(k_{I}\) & 2 \\ \hline \end{tabular}
\end{table}
Table 4: Charge assignment and modular weight for the corresponding modular Yukawa form for the model.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Gauge group & \(Q_{L}\) & \(Q_{R}\) & \(l_{L}\) & \(l_{R}\) & \(\phi\) & \(\Delta_{L}\) & \(\Delta_{R}\) \\ \hline \(SU(3)_{C}\) & 3 & 3 & 1 & 1 & 1 & 1 & 1 \\ \hline \(SU(2)_{L}\) & 2 & 1 & 2 & 1 & 2 & 3 & 1 \\ \hline \(SU(2)_{R}\) & 1 & 2 & 1 & 2 & 2 & 1 & 3 \\ \hline \(U(1)_{B-L}\) & 1/3 & 1/3 & -1 & -1 & 0 & 2 & 2 \\ \hline \(A_{4}\) & 3 & 3 & 3 & 3 & 1 & 1 & 1 \\ \hline \(k_{I}\) & 0 & -2 & 0 & -2 & 0 & -2 & 2 \\ \hline \end{tabular}
\end{table}
Table 3: Charge assignments for the particle content of the model.
where \(v_{R}\) is the VEV of the scalar triplet \(\Delta_{R}\). The Majorana mass matrix of our model is symmetric, as it should be. Under these assumptions for the modular symmetric LRSM and in the basis we have chosen, the charged lepton mass matrix is also diagonal.
The type-I seesaw mass is then given by,
\[M_{\nu_{I}}=M_{D}.{M_{R}}^{-1}.{M_{D}}^{T} \tag{4.4}\]
and, the type-II seesaw mass is given by,
\[M_{\nu_{II}}=M_{LL} \tag{4.5}\]
As mentioned above, in LRSM type-II seesaw mass can also be expressed in terms of the right-handed mass \(M_{R}\) as,
\[M_{\nu_{II}}=\gamma{\left(\frac{M_{W}}{v_{R}}\right)}^{2}M_{R} \tag{4.6}\]
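The construction of the mass matrices from the \(A_{4}\) multiplication rules can be prototyped in a few lines of Python, as sketched below. The numerical inputs \((Y_{1},Y_{2},Y_{3})\), \(v\), \(v_{R}\), \(\gamma\) and \(M_{W}\) are placeholder assumptions chosen only to demonstrate equations (4.2)-(4.6); the sketch also checks that the Majorana matrix is symmetric.

```python
import numpy as np

def dirac_matrix(Y, v):
    """Dirac mass matrix of eq. (4.2) built from the A4 triplet product rules."""
    Y1, Y2, Y3 = Y
    return v * np.array([[2 * Y1, -Y3, -Y2],
                         [-Y2, -Y1, 2 * Y3],
                         [-Y3, 2 * Y2, -Y1]])

def majorana_matrix(Y, v_R):
    """Right-handed Majorana mass matrix of eq. (4.3)."""
    Y1, Y2, Y3 = Y
    return v_R * np.array([[2 * Y1, -Y3, -Y2],
                           [-Y3, 2 * Y2, -Y1],
                           [-Y2, -Y1, 2 * Y3]])

# Placeholder numerical inputs (assumptions for illustration), in eV.
Y = (1.0e-7 + 2.0e-8j, -3.0e-7 + 1.0e-8j, 2.0e-7 - 5.0e-8j)
v, v_R, gamma, M_W = 174.0e9, 5.0e12, 1.0e-6, 80.379e9

M_D = dirac_matrix(Y, v)
M_R = majorana_matrix(Y, v_R)
assert np.allclose(M_R, M_R.T)   # Majorana matrix must be symmetric

M_nu_typeI  = M_D @ np.linalg.inv(M_R) @ M_D.T       # eq. (4.4)
M_nu_typeII = gamma * (M_W / v_R) ** 2 * M_R          # eq. (4.6)
print(M_nu_typeI, M_nu_typeII, sep="\n")
```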
### Type-I dominance
In LRSM, the type-I seesaw mass dominates when the vev of the left-handed triplet is taken to be negligibly small and hence the type-II term is absent. In such a case the lightest neutrino mass can be given in terms of the type-I seesaw mass term given by,
\[M_{\nu}=M_{D}{M_{R}}^{-1}{M_{D}}^{T} \tag{4.7}\]
and the heavy right-handed Majorana mass term can be given as,
\[M_{R}=f_{R}v_{R} \tag{4.8}\]
where, \(f_{R}\) is the right-handed Majorana Yukawa coupling.
In the approximation \(k^{\prime}<<k\), denoting by \(y_{D}\) the Yukawa coupling \(Y^{l}\) associated with the neutrino masses and by \(y_{L}\) the coupling \(\widetilde{Y^{l}}\) associated with the charged fermion masses, and assuming \(y_{D}k>>y_{L}k^{\prime}\), the type-I mass term can be written as [44],
\[M_{\nu}=\frac{k^{2}}{v_{R}}y_{D}f_{R}^{-1}y_{D}^{T} \tag{4.9}\]
Let \(U_{R}\) be the unitary matrix that diagonalizes \(M_{R}\); since the VEV \(v_{R}\) is a constant, the same matrix also diagonalizes the coupling matrix \(f_{R}\). Taking \(f_{R}=f_{L}=f\), we have
\[f=U_{R}f^{dia}U_{R}^{T} \tag{4.10}\]
Taking the inverse on both sides and using the property of a unitary matrix (\(U_{R}^{-1}=U_{R}^{T}\)), we get
\[f^{-1}=U_{R}^{T}(f^{dia})^{-1}U_{R} \tag{4.11}\]
Therefore, we get
\[M_{\nu}=\frac{k^{2}}{v_{R}}y_{D}U_{R}^{T}(f^{dia})^{-1}U_{R}y_{D}^{T} \tag{4.12}\]
Multiplying both sides of the equation with \(U_{R}^{T}\) from the right and with \(U_{R}\) from left, we finally arrive at the following equation,
\[U_{R}M_{\nu}U_{R}^{T}=(M_{\nu})^{dia} \tag{4.13}\]
where we have used \(U_{R}y_{D}U_{R}^{T}=y_{D}\). Thus the unitary matrix that diagonalizes \(M_{R}\) also diagonalizes the light neutrino mass matrix. In this case, if \(m_{i}\) denotes a light neutrino mass and \(M_{i}\) the corresponding heavy neutrino mass, they are related as
\[m_{i}\propto\frac{1}{M_{i}} \tag{4.14}\]
For our model, the Yukawa couplings are modular forms expressed as expansions of \(q\), and the mass matrices are expressed in terms of the modular forms \((Y_{1},Y_{2},Y_{3})\). So, the light neutrino mass matrix, \(M_{\nu}\) for type-I dominance is given by the equation (4.7). As already stated in equations (4.2) and (4.3), the Dirac and Majorana mass matrices are determined by the application of multiplication rules for the \(A_{4}\) group. So, for type-I dominance, our light neutrino mass matrix will be given by,
\[M_{\nu}=\frac{v^{2}}{v_{R}}\begin{pmatrix}2Y_{1}&-Y_{2}&-Y_{3}\\ -Y_{2}&2Y_{3}&-Y_{1}\\ -Y_{3}&-Y_{1}&2Y_{2}\end{pmatrix} \tag{4.15}\]
As mentioned previously, \(v_{R}\) is of the order of \(TeV\) and \(v\) of the order of \(GeV\). We computed the sum of the neutrino masses for type-I dominance and checked the consistency of the model by plotting it against the Yukawa couplings; the results lie within the experimental bounds.
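A minimal Python sketch of this computation is given below. It builds the type-I mass matrix of equation (4.15) for placeholder values of \((Y_{1},Y_{2},Y_{3})\), \(v\) and \(v_{R}\) (assumptions for illustration), and extracts the physical masses as singular values, which is appropriate for a complex symmetric Majorana matrix.

```python
import numpy as np

# Light neutrino mass matrix for type-I dominance, eq. (4.15);
# (Y1, Y2, Y3), v and v_R below are placeholder assumptions in eV.
Y1, Y2, Y3 = 1.0e-7 + 2.0e-8j, -3.0e-7 + 1.0e-8j, 2.0e-7 - 5.0e-8j
v, v_R = 174.0e9, 5.0e12

M_nu = (v**2 / v_R) * np.array([[2 * Y1, -Y2, -Y3],
                                [-Y2, 2 * Y3, -Y1],
                                [-Y3, -Y1, 2 * Y2]])

# For a complex symmetric Majorana matrix the physical masses are the
# singular values (Takagi factorization); their sum is the quantity
# plotted against the Yukawa couplings in Figs. 1-3.
masses = np.linalg.svd(M_nu, compute_uv=False)
print("m_i (eV):", masses)
print("sum of neutrino masses (eV):", masses.sum())
```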
Figure 1: Variation of \(|Y_{1}|\) with sum of neutrino masses.
Figure 3: Variation of \(|Y_{3}|\) with sum of neutrino masses.
Figure 2: Variation of \(|Y_{2}|\) with sum of neutrino masses.
### Type-II dominance
The type-II seesaw mass in LRSM dominates when the Dirac term connecting the right-handed and left-handed sectors is negligible compared to the type-II term [44]. In that case, the light neutrino mass \(m_{\nu}\) is given by the type-II seesaw mass term, i.e.,
\[M_{\nu_{L}}=f_{L}v_{L} \tag{4.16}\]
And the heavy mass matrix is given by,
\[M_{R}=f_{R}v_{R} \tag{4.17}\]
Again, let \(U_{L}\) and \(U_{R}\) diagonalize \(M_{\nu_{L}}\) and \(M_{R}\) respectively; for the reason mentioned above the same matrices also diagonalize \(f_{L}\) and \(f_{R}\), and since in our model \(f_{L}=f_{R}\), we can take \(U_{L}=U_{R}\). In this case we arrive at the important result that
\[m_{i}\propto M_{i} \tag{4.18}\]
Now using modular symmetry the light neutrino mass matrix for type-II dominance in our model is given by,
\[m_{\nu}=v_{L}\begin{pmatrix}2Y_{1}&-Y_{3}&-Y_{2}\\ -Y_{3}&2Y_{2}&-Y_{1}\\ -Y_{2}&-Y_{1}&2Y_{3}\end{pmatrix} \tag{4.19}\]
where \(v_{L}\) is the VEV of the left-handed scalar triplet, taken to be of the order of \(eV\). The sum of the neutrino masses is computed for type-II dominance and plotted against the Yukawa couplings, as shown below.
Figure 4: Variation of \(|Y_{1}|\) with sum of neutrino masses.
## V Neutrinoless double beta decay (\(0\nu\beta\beta\)) in minimal LRSM
Neutrinoless double beta decay is a lepton number violating process, which if proven to exist will directly imply the Majorana nature of neutrinos.
\[N(A,Z)\to N(A,Z+2)+e^{-}+e^{-} \tag{5.1}\]
A considerable amount of work on NDBD within this model has already been carried out by many groups [21],[28; 45; 46; 47; 48; 49; 50]. In LRSM [51], there are several contributions to NDBD in addition to the standard contribution via light Majorana neutrino exchange, owing to the presence of several additional heavy scalar, vector
Figure 5: Variation of \(|Y_{2}|\) with sum of neutrino masses.
Figure 6: Variation of \(|Y_{3}|\) with sum of neutrino masses.
and fermionic fields [52, 53, 54, 55]. The various contributions to the NDBD transition rate in LRSM are discussed as follows:
* Standard Model contribution to NDBD where the intermediate particles are the \(W_{L}\) bosons and light neutrinos, the process in which the amplitude depends upon the leptonic mixing matrix elements and light neutrino masses.
* Heavy right-handed neutrino contribution in which the mediator particles are the \(W_{L}\) bosons and the amplitude depends upon the mixing between light and heavy neutrinos as well as the mass of the heavy neutrino.
* Light neutrino contribution to NDBD where the intermediate particles are \(W_{R}\) bosons and the amplitude depends upon the mixing between light and heavy neutrinos as well as mass of the right-handed gauge boson \(W_{R}\).
* Heavy right-handed neutrino contribution where the mediator particles are the \(W_{R}\) bosons. The amplitude of this process is dependent on the elements of the right handed leptonic mixing matrix and mass of the right-handed gauge boson, \(W_{R}\) as well as the mass of the heavy right handed Majorana neutrino.
* Light neutrino contribution from the Feynman diagram mediated by both \(W_{L}\) and \(W_{R}\), and the amplitude of the process depends upon the mixing between light and heavy neutrinos, leptonic mixing matrix elements, light neutrino masses and the mass of the gauge bosons, \(W_{L}\) and \(W_{R}\).
* Heavy neutrino contribution from the Feynman diagram mediated by both \(W_{L}\) and \(W_{R}\), and the amplitude of the process depends upon the right handed leptonic mixing matrix elements, mixing between the light and heavy neutrinos, also the mass of the gauge bosons, \(W_{L}\) and \(W_{R}\) and the mass of the heavy right handed neutrino.
* Scalar triplet contribution (\(\Delta_{L}\)) in which the mediator particles are \(W_{L}\) bosons, and the amplitude for the process depends upon the masses of the \(W_{L}\) bosons, left-handed triplet Higgs, as well as their coupling to leptons.
* Right-handed scalar triplet contribution (\(\Delta_{R}\)) contribution to NDBD in which the mediator particles are \(W_{R}\) bosons, and the amplitude for the process depends upon the masses of the \(W_{R}\) bosons, right-handed triplet Higgs, \(\Delta_{R}\) as well as their coupling to leptons.
In our work, which incorporates \(A_{4}\) modular symmetry into LRSM, we consider three of the above contributions: the standard light neutrino contribution and the two new physics contributions mediated by \(W_{R}^{-}\) and \(\Delta_{R}\) respectively. For simplicity, an assumption is made on the mass scales of the heavy particles,
\[M_{R}\approx M_{W_{R}}\approx M_{\Delta_{L}{}^{++}}\approx M_{\Delta_{R}{}^{++ }}\approx TeV\]
Under these assumptions, the amplitude of the light-heavy mixing contribution, which is proportional to \(\frac{m_{D}{}^{2}}{M_{R}}\), remains very small: since \(m_{\nu}\approx\frac{m_{D}{}^{2}}{M_{R}}\approx(0.01-0.1)eV\) and \(m_{D}\approx(10^{5}-10^{6})eV\), it follows that \(\frac{m_{D}}{M_{R}}\approx(10^{-7}-10^{-6})\). Thus, in our model, we ignore the contributions involving light-heavy neutrino mixing.
When NDBD is done in the framework of LRSM, the standard light neutrino contribution is given by,
\[m_{\nu}^{eff}=\sum_{i}U_{Li}^{2}m_{i} \tag{5.2}\]
where, \(U_{Li}\) are the elements of the first row of the neutrino mixing matrix \(U_{PMNS}\), in which the elements are dependent on known mixing angles \(\theta_{13}\), \(\theta_{12}\) and the Majorana phases \(\kappa\) and \(\eta\). The \(U_{PMNS}\) matrix is given by,
\[U_{PMNS}=\begin{pmatrix}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta}\\ -c_{23}s_{12}-s_{23}s_{13}c_{12}e^{i\delta}&c_{23}c_{12}-s_{23}s_{12}s_{13}e^{i\delta}&s_{23}c_{13}\\ s_{23}s_{12}-c_{23}s_{13}c_{12}e^{i\delta}&-s_{23}c_{12}-c_{23}s_{13}s_{12}e^{i\delta}&c_{23}c_{13}\end{pmatrix}P \tag{5.3}\]
where, \(P=diag(1,e^{i\kappa},e^{i\eta})\). So the effective mass can be parametrized in terms of the elements of the diagonalizing matrix and the eigenvalues as,
\[m_{\nu}^{eff}=m_{1}c_{12}^{2}c_{13}^{2}+m_{2}s_{12}^{2}c_{13}^{2}e^{2i\kappa}+m_{3}s_{13}^{2}e^{2i\eta}. \tag{5.4}\]
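For completeness, the parametrized effective mass of equation (5.4) can be evaluated with a few lines of Python, as sketched below. The mixing angles, Majorana phases and mass spectrum used here are illustrative assumptions, not the scan values of our numerical analysis.

```python
import numpy as np

def effective_mass(m, theta12, theta13, kappa, eta):
    """|m_eff| of eq. (5.4) for light neutrino masses m = (m1, m2, m3) in eV."""
    c12, s12 = np.cos(theta12), np.sin(theta12)
    c13, s13 = np.cos(theta13), np.sin(theta13)
    m1, m2, m3 = m
    m_eff = (m1 * c12**2 * c13**2
             + m2 * s12**2 * c13**2 * np.exp(2j * kappa)
             + m3 * s13**2 * np.exp(2j * eta))
    return abs(m_eff)

# Illustrative inputs: best-fit-like angles and an assumed normal-hierarchy spectrum.
theta12, theta13 = np.radians(33.4), np.radians(8.6)
m = (0.001, 0.0087, 0.0504)   # eV
print("|m_eff| (eV):", effective_mass(m, theta12, theta13, kappa=0.0, eta=np.pi / 4))
```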
## VI Numerical analysis and results
In our present work, we have modified the left-right symmetric model by incorporating \(A_{4}\) modular symmetry for both type-I and type-II dominance. Since we use modular symmetry, the Yukawa couplings are expressed as expansions in \(q\), as shown in equations (3.5), (3.6) and (3.7). In our model, the value of \(q\) is found to be of the order of \(10^{-1}\). The absolute value of the modulus should, however, be greater than 1, where the modulus is written as

\[\tau=Re(\tau)+i\,Im(\tau) \tag{6.1}\]

The Yukawa couplings are plotted against the sum of the neutrino masses, and the resulting ranges of the sum of neutrino masses for both cases are given below.
### Standard Light Neutrino Contribution to \(0\nu\beta\beta\)
As mentioned above, in the standard light neutrino contribution to \(0\nu\beta\beta\), the intermediate particles are the \(W_{L}\) bosons and light neutrino. The effective mass for the contribution is given by equation (5.2). Simplifying for the respective elements of \(U_{Li}\) and \(m_{i}\), the value of the effective mass is obtained in terms of the modular forms \((Y_{1},Y_{2},Y_{3})\) as,
\[m_{\nu}^{eff}=m_{1}^{eff}+m_{2}^{eff}+m_{3}^{eff} \tag{6.2}\]
where,
\[m_{1}^{eff}=\frac{\nu(Y_{2}-Y_{3})^{2}(Y_{1}+Y_{2}+Y_{3})}{\nu_{R}(Y_{1}-Y_{3} )^{2}}\]
\[m_{2}^{eff}=\frac{\nu^{2}(Y_{1}-Y_{2})^{2}(Y_{1}+Y_{2}+Y_{3}-\sqrt{3}\sqrt{3Y_{ 1}^{2}-2Y_{1}Y_{2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2}})}{2\nu_{R}(Y _{1}-Y_{3})^{2}}\]
\[m_{3}^{eff}=\frac{\nu^{2}(Y_{1}+Y_{2}+Y_{3}-\sqrt{3}\sqrt{3Y_{1}^{2}-2Y_{1}Y_{ 2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2}})}{2\nu_{R}}\]
for type-I dominance, and the plots are shown as,
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(\sum m_{\nu}\) & Normal Hierarchy & Inverted hierarchy \\ \hline \(Type-I(min)\) & 0.000980556 & 0.000437758 \\ \(Type-I(max)\) & 0.177296 & 0.186377 \\ \hline \(Type-II(min)\) & 0.000219304 & 0.000035 \\ \(Type-II(max)\) & 0.0200981 & 0.0203081 \\ \hline \end{tabular}
\end{table}
Table 7: Range of values for sum of neutrino masses for type-I and type-II dominances for both normal and inverted hierarchy.
Figure 8: Variation of \(|Y_{2}|\) with effective neutrino mass for standard light neutrino contribution.
Figure 7: Variation of \(|Y_{1}|\) with effective neutrino mass for standard light neutrino contribution.
Figure 9: Variation of \(|Y_{3}|\) with effective neutrino mass for standard light neutrino contribution.
For type-II dominance, we have
\[m_{1}^{eff}=-\frac{\nu_{L}(-Y_{2}+Y_{3})(Y_{1}+Y_{2}+Y_{3})}{Y_{1}-Y_{2}}\]
\[m_{2}^{eff}=-\frac{\nu_{L}(Y_{1}-Y_{3})(Y_{1}+Y_{2}+Y_{3}-\sqrt{3}\sqrt{3Y_{1}^{ 2}-2Y_{1}Y_{2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2}})}{2(Y_{1}-Y_{2})}\]
\[m_{3}^{eff}=\frac{\nu_{L}(Y_{1}-Y_{3})(Y_{1}+Y_{2}+Y_{3}+\sqrt{3}\sqrt{3Y_{1}^{ 2}-2Y_{1}Y_{2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2}})}{2}\]
### Heavy Right-Handed Neutrino contribution to \(0\nu\beta\beta\)
In our work, we have considered contributions of heavy right-handed neutrino and scalar Higgs triplet to NDBD. The effective mass for heavy right-handed neutrino is given by,
\[m_{R}^{eff}=p^{2}\Bigg{(}\frac{M_{W_{L}}^{4}}{M_{W_{R}}^{4}}\Bigg{)}\sum_{i}\Bigg{(}\frac{U_{Rei}^{*^{2}}}{M_{i}}\Bigg{)} \tag{6.3}\]
where \(p^{2}\) is the typical momentum exchange of the process. Since TeV-scale LRSM plays an important role in neutrinoless double beta decay (\(0\nu\beta\beta\)), we take \(M_{W_{R}}=10TeV\), \(M_{W_{L}}=80GeV\), \(M_{\Delta_{R}}\approx 3TeV\), and the heavy right-handed neutrino mass is found to be at the \(TeV\) scale. The allowed value of p lies in the range \((100-200)MeV\), so we take \(p\approx 180MeV\). Thus, we get,
\[p^{2}\Bigg{(}\frac{M_{W_{L}}^{4}}{M_{W_{R}}^{4}}\Bigg{)}=10^{10}eV^{2} \tag{6.4}\]
where, \(U_{Rei}\) refers to the first row elements of the diagonalizing matrix of the heavy Majorana mass matrix and \(M_{i}\) are its eigenvalues. The effective mass corresponding to the heavy right-handed neutrino can be expressed in terms of the modular forms as,
\[m_{eff}^{R}=10^{10}(m_{eff}^{R_{1}}+m_{eff}^{R_{2}}+m_{eff}^{R_{3}}) \tag{6.5}\]
where,
\[m_{eff}^{R_{1}}=\frac{2}{\nu_{R}(Y_{1}+Y_{2}+Y_{3}+\sqrt{3}\sqrt{3Y_{1}^{2}-2Y_ {1}Y_{2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2}})}\]
Figure 12: Variation of \(|Y_{3}|\) with effective neutrino mass for standard light neutrino contribution.
\[m_{eff}^{R_{2}}=\frac{2(Y_{1}^{*}-Y_{3}^{*})^{2}}{\nu_{R}(Y_{1}+Y_{2}+Y_{3}-\sqrt{3 }\sqrt{3Y_{1}^{2}-2Y_{1}Y_{2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2})}(Y _{1}^{*}-Y_{2}^{*})^{2}}\]
\[m_{eff}^{R_{3}}=\frac{(-Y_{2}^{*}+Y_{3}^{*})^{2}}{\nu_{R}(Y_{1}+Y_{2}+Y_{3})(Y _{1}^{*}-Y_{2}^{*})^{2}}\]
The total effective mass is also calculated for the standard light and right-handed heavy neutrino contribution, given by,
\[|m_{\nu}^{eff^{total}}|=|m_{\nu}^{eff}+m_{eff}^{R}| \tag{6.6}\]
which can be obtained in terms of the modular forms as a summation of the above mentioned terms.
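A rough Python sketch of how the heavy right-handed contribution and the total effective mass of equations (6.3)-(6.6) can be assembled is shown below. The heavy mass matrix, the prefactor and the stand-in value for the light contribution are assumptions for illustration, and a plain eigendecomposition is used in place of a proper Takagi factorization for simplicity.

```python
import numpy as np

def heavy_effective_mass(M_R, prefactor):
    """m_eff^R of eq. (6.3): prefactor * sum_i U*_Rei^2 / M_i.

    A plain eigendecomposition stands in for the Takagi factorization of the
    complex symmetric heavy mass matrix; adequate for a rough illustration.
    """
    M_i, U_R = np.linalg.eig(M_R)
    U_Re = U_R[0, :]               # first-row (electron) elements
    return prefactor * np.sum(np.conj(U_Re) ** 2 / M_i)

# Placeholder heavy Majorana matrix (entries in eV) and the prefactor of eq. (6.4).
Y1, Y2, Y3 = 1.0e-7 + 2.0e-8j, -3.0e-7 + 1.0e-8j, 2.0e-7 - 5.0e-8j
v_R = 5.0e12
M_R = v_R * np.array([[2 * Y1, -Y3, -Y2],
                      [-Y3, 2 * Y2, -Y1],
                      [-Y2, -Y1, 2 * Y3]])
prefactor = 1.0e10                 # eV^2, value quoted in the text

m_eff_light = 0.01                 # eV, stand-in for the result of eq. (6.2)
m_eff_heavy = heavy_effective_mass(M_R, prefactor)
print("|m_eff^total| (eV):", abs(m_eff_light + m_eff_heavy))   # eq. (6.6)
```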
Figure 14: Variation of \(|Y_{2}|\) with total effective neutrino mass.
Figure 13: Variation of \(|Y_{1}|\) with total effective neutrino mass.
The plots above are for type-I dominance.
### Scalar Triplet contribution to \(0\nu\beta\beta\)
The magnitude of the \(\Delta_{R}\) contribution is controlled by the factor \(\frac{M_{i}}{M_{\Delta_{R}}}\)[44]. The scalar triplet contribution is usually not included in the total contribution under the assumption \(\frac{M_{i}}{M_{\Delta_{R}}}<0.1\). However, for some of the mixing parameters in a large part of the parameter space the ratio \(\frac{M_{i}}{M_{\Delta_{R}}}\) may be larger, in which case it must be included in the total contribution. The impact of this contribution is studied here in the limit \(M_{\Delta_{R}}\approx M_{heaviest}\).
The effective mass for scalar triplet contribution is given as,
\[|m_{\Delta}^{eff}|=\Bigg{|}p^{2}\frac{M_{W_{L}}^{4}}{M_{W_{R}}^{4}}\sum_{i}\frac{U_{Rei}^{2}M_{i}}{M_{\Delta_{R}}^{2}}\Bigg{|} \tag{6.7}\]
The value of the mass for the right-handed scalar triplet is taken as, \(M_{\Delta_{R}}=3TeV\). So, the value of the coefficient results as,
\[p^{2}\frac{M_{W_{L}}^{4}}{M_{W_{R}}^{4}}\frac{1}{M_{\Delta_{R}}^{2}}=\frac{10 ^{10}}{9\times 10^{24}} \tag{6.8}\]
In terms of modular forms, the effective scalar mass can be expressed as,
\[m_{eff}^{\Delta_{R}}=m_{eff_{1}}^{\Delta_{R}}+m_{eff_{2}}^{\Delta_{R}}+m_{eff_ {3}}^{\Delta_{R}} \tag{6.9}\]
where,
\[m_{eff_{1}}^{\Delta_{R}}=\frac{\nu_{R}(Y_{2}+Y_{3})^{2}(Y_{1}+Y_{2}+Y_{3})}{(Y_{1}-Y_{2})^{2}}\]
Figure 18: Variation of \(|Y_{3}|\) with total effective neutrino mass.
\[m_{eff_{2}}^{\Delta_{R}}=\frac{\nu_{R}(Y_{1}-Y_{3})^{2}(Y_{1}+Y_{2}+Y_{3}-\sqrt{3} \sqrt{3Y_{1}^{2}-2Y_{1}Y_{2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2}})}{2 (Y_{1}-Y_{2})^{2}}\]
\[m_{eff_{3}}^{\Delta_{R}}=\frac{\nu_{R}(Y_{1}+Y_{2}+Y_{3}+\sqrt{3}\sqrt{3Y_{1}^{ 2}-2Y_{1}Y_{2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2}})}{2}\]
The plots are shown as under.
## VII Conclusion
The discovery of neutrino oscillations opened the gateway to physics beyond the Standard Model. In this paper, we have realized LRSM with the help of modular \(A_{4}\) symmetry for both type-I and type-II dominance. Using modular symmetry offers the advantage of requiring no extra particles called 'flavons'. The Yukawa couplings are represented as modular forms expressed as expansions in \(q\), and the values of the couplings \((Y_{1},Y_{2},Y_{3})\) are calculated using 'Mathematica'. The mass matrices are then determined using the multiplication rules for the \(A_{4}\) group stated in the Appendix. The Majorana mass matrix is symmetric and, in the chosen basis, the charged lepton mass matrix is diagonal. We have expressed the light neutrino and heavy right-handed neutrino mass matrices in terms of the modular forms. We have also briefly studied the contributions to \(0\nu\beta\beta\) in LRSM: the effective masses corresponding to the standard light neutrino, right-handed and scalar triplet contributions are determined in terms of \((Y_{1},Y_{2},Y_{3})\), and the effective mass for each contribution is plotted against the Yukawa couplings. To summarize our work, some results are stated below.
* The absolute value of the modulus was found to lie in the range 1.073 to 1.197, greater than unity, as desired.
* The Yukawa couplings, expressed in terms of modular forms, range from \(10^{-9}\) to \(10^{-6}\).
* The sum of the neutrino masses for type-I dominance ranges from the order of \(10^{-4}\) to \(10^{-1}\) for both normal and inverted hierarchy.
Figure 21: Variation of \(|Y_{3}|\) with effective neutrino mass for scalar triplet contribution.
* The sum of the neutrino masses for type-II dominance ranges from the order of \(10^{-4}\) to \(10^{-2}\) for both normal and inverted hierarchy.
The effective masses for the \(0\nu\beta\beta\) contributions are calculated and, using their expressions in terms of the modular forms, plotted against the three Yukawa couplings; the values of the effective mass for each contribution are well within the experimental bounds. This supports our claim that building the model with modular symmetry is advantageous compared to flavor symmetries: no extra particles are introduced, the analysis relies on the calculated and computed values of the model parameters, and the results are satisfactory. We can therefore state that the Left-Right Symmetric Model can be constructed with modular symmetry while satisfying the experimental bounds on the relevant parameters.
## VIII Appendix A
Let us consider the Higgs potential of our model that has quadratic and quartic coupling terms given by [36],
\[V_{\phi,\Delta_{L},\Delta_{R}}=-\mu_{ij}^{2}Tr[\phi_{i}^{\dagger}\phi_{j}]+\lambda_{ijkl}Tr[\phi_{i}^{\dagger}\phi_{j}]Tr[\phi_{k}^{\dagger}\phi_{l}]+\lambda_{ijkl}^{{}^{\prime}}Tr[\phi_{i}^{\dagger}\phi_{j}\phi_{k}^{\dagger}\phi_{l}]-\mu_{ij}^{2}Tr[\Delta_{L}^{\dagger}\Delta_{L}+\Delta_{R}^{\dagger}\Delta_{R}]+\] \[\rho_{1}[(Tr[\Delta_{L}^{\dagger}\Delta_{L}])^{2}+(Tr[\Delta_{R}^{\dagger}\Delta_{R}])^{2}]+\rho_{2}(Tr[\Delta_{L}^{\dagger}\Delta_{L}\Delta_{L}^{\dagger}\Delta_{L}]+Tr[\Delta_{R}^{\dagger}\Delta_{R}\Delta_{R}^{\dagger}\Delta_{R}])+\rho_{3}Tr[\Delta_{L}^{\dagger}\Delta_{L}\Delta_{R}^{\dagger}\Delta_{R}]+\] \[\alpha_{ij}Tr[\phi_{i}^{\dagger}\phi_{j}](Tr[\Delta_{L}^{\dagger}\Delta_{L}]+Tr[\Delta_{R}^{\dagger}\Delta_{R}])+\beta_{ij}(Tr[\Delta_{L}^{\dagger}\Delta_{L}\phi_{i}\phi_{j}^{\dagger}]+Tr[\Delta_{R}^{\dagger}\Delta_{R}\phi_{i}\phi_{j}^{\dagger}])+\gamma_{ij}(Tr[\Delta_{L}^{\dagger}\phi_{i}\Delta_{R}\phi_{j}^{\dagger}]+h.c) \tag{10}\]
where i, j, k, l run from 1 to 2, with \(\phi_{1}=\phi\) and \(\phi_{2}=\tilde{\phi}\). As mentioned above, after SSB the scalar sector acquires VEVs; substituting the respective VEVs and evaluating the traces, the potential simplifies to
\[V=-\mu^{2}(v_{L}^{2}+v_{R}^{2})+\frac{\rho}{4}(v_{L}^{4}+v_{R}^{4})+\frac{\rho^{\prime}}{2}v_{L}^{2}v_{R}^{2}+\frac{\alpha}{2}(v_{L}^{2}+v_{R}^{2})k^{2}+\gamma v_{L}v_{R}k^{2} \tag{11}\]
where, we have used the approximation \(k^{\prime}<<k\), and \(\rho^{\prime}=2\rho_{3}\). Our minimization conditions are, \(\frac{\delta V}{\delta v_{L}}=\frac{\delta V}{\delta v_{R}}=\frac{\delta V}{ \delta k}=\frac{\delta V}{\delta k^{\prime}}=0\)
Therefore, we get,
\[\frac{\delta V}{\delta v_{L}}=-2\mu^{2}v_{L}+\rho v_{L}^{3}+\rho^{\prime}v_{L}v_{R}^{2}+\gamma v_{R}k^{2} \tag{8.3}\]
Here, it is evident that the Majorana mass of the left-handed neutrino \(M_{LL}\) is dependent on the vev \(v_{L}\) as already defined above. Again, we have
\[\frac{\delta V}{\delta v_{R}}=-2\mu^{2}v_{R}+\rho v_{R}^{3}+\rho^{\prime}v_{R}v_{L}^{2}+\gamma v_{L}k^{2} \tag{8.4}\]
So the right-handed Majorana mass \(M_{RR}\) depends on the VEV \(v_{R}\). Similar calculations show that the Dirac mass term \(M_{D}\) can be expressed in terms of the VEV of the Higgs bidoublet, as defined previously.
We now determine a relation between the VEVs of the scalars. Using the minimization conditions and simplifying the resulting equations, we arrive at
\[v_{L}v_{R}=\frac{\gamma}{\xi}k^{2} \tag{8.5}\]
where, \(\xi=\rho-\rho^{\prime}\).
The neutrino mass in LRSM is given by the sum of the type-I and type-II terms, as mentioned above. In the approximation \(k^{\prime}<<k\), denoting by \(y_{D}\) the Yukawa coupling \(Y^{l}\) associated with the neutrino masses and by \(y_{L}\) the coupling \(\widetilde{Y^{l}}\) associated with the charged fermion masses, and assuming \(y_{D}k>>y_{L}k^{\prime}\), we can write
\[M_{\nu}=\frac{k^{2}}{v_{R}}y_{D}f_{R}^{-1}y_{D}^{T}+f_{L}v_{L} \tag{8.6}\]
Since, due to left-right symmetry, we can take \(f_{L}=f_{R}=f\), the above equation can be written as
\[M_{\nu}=\frac{k^{2}}{v_{R}}y_{D}f^{-1}y_{D}^{T}+fv_{L} \tag{8.7}\]
So, from this equation we can come to a relation given by,
\[M_{\nu}=(f\frac{\gamma}{\xi}+y_{D}f^{-1}y_{D}^{T})\frac{k^{2}}{v_{R}} \tag{8.8}\]
Here, we can consider two situations, namely
* If \(f(\frac{\gamma}{\xi})<<y_{D}f^{-1}y_{D}^{T}\), the light neutrino mass is given by the type-I term \(M_{D}M_{RR}^{-1}M_{D}^{T}\). That is, type-I dominates and the light neutrino mass arises from the suppression by the heavy \(\nu_{R}\) mass.
* If \(f(\frac{\gamma}{\xi})>>y_{D}f^{-1}y_{D}^{T}\), the light neutrino mass is given by the type-II term \(fv_{L}\). That is, type-II dominates and the light neutrino mass is due to the tiny value of \(v_{L}\).
## IX Appendix B
### Properties of \(A_{4}\) group
\(A_{4}\) is a non-abelian discrete symmetry group representing the even permutations of four objects. It has four irreducible representations: three singlets \((1,1^{\prime},1^{\prime\prime})\) and one triplet 3 (\(3_{A}\) denotes the anti-symmetric part and \(3_{S}\) the symmetric part). Products of the singlets and triplets are given by,
\[1\otimes 1=1\]
\[1^{\prime}\otimes 1^{\prime}=1^{\prime\prime}\]
\[1^{\prime}\otimes 1^{\prime\prime}=1\]
\[1^{\prime\prime}\otimes 1^{\prime\prime}=1^{\prime}\]
\[3\otimes 3=1\oplus 1^{\prime}\oplus 1^{\prime\prime}\oplus 3_{A}\oplus 3_{S}\]
If we have two triplets under \(A_{4}\) say, \((a_{1},a_{2},a_{3})\) and \((b_{1},b_{2},b_{3})\), then their multiplication rules are given by,
\[1\approx a_{1}b_{1}+a_{2}b_{3}+a_{3}b_{2}\]
\[1^{\prime}\approx a_{3}b_{3}+a_{1}b_{2}+a_{2}b_{1}\]
\[1^{\prime\prime}\approx a_{2}b_{2}+a_{1}b_{3}+a_{3}b_{1}\] \[3_{S}\approx\begin{pmatrix}2a_{1}b_{1}-a_{2}b_{3}-a_{3}b_{2}\\ 2a_{3}b_{3}-a_{1}b_{2}-a_{2}b_{1}\\ 2a_{2}b_{2}-a_{1}b_{3}-a_{3}b_{1}\end{pmatrix}\] \[3_{A}\approx\begin{pmatrix}a_{2}b_{3}-a_{3}b_{2}\\ a_{1}b_{2}-a_{2}b_{1}\\ a_{3}b_{1}-a_{1}b_{3}\end{pmatrix}\]
|
2309.07305 | SHIELD: Secure Haplotype Imputation Employing Local Differential Privacy | We introduce Secure Haplotype Imputation Employing Local Differential privacy
(SHIELD), a program for accurately estimating the genotype of target samples at
markers that are not directly assayed by array-based genotyping platforms while
preserving the privacy of donors to public reference panels. At the core of
SHIELD is the Li-Stephens model of genetic recombination, according to which
genomic information is comprised of mosaics of ancestral haplotype fragments
that coalesce via a Markov random field. We use the standard forward-backward
algorithm for inferring the ancestral haplotypes of target genomes, and hence
the most likely genotype at unobserved sites, using a reference panel of
template haplotypes whose privacy is guaranteed by the randomized response
technique from differential privacy. | Marc Harary | 2023-09-13T20:51:11Z | http://arxiv.org/abs/2309.07305v1 | # SHIELD: Secure Haplotype Imputation Employing Local Differential Privacy
###### Abstract
We introduce Secure Haplotype Imputation Employing Local Differential privacy (SHIELD), a program for accurately estimating the genotype of target samples at markers that are not directly assayed by array-based genotyping platforms while preserving the privacy of donors to public reference panels. At the core of SHIELD is the Li-Stephens model of genetic recombination, according to which genomic information is comprised of mosaics of ancestral haplotype fragments that coalesce via a Markov random field. We use the standard forward-backward algorithm for inferring the ancestral haplotypes of target genomes--and hence the most likely genotype at unobserved sites--using a reference panel of template haplotypes whose privacy is guaranteed by the randomized response technique from differential privacy.
## 1 Introduction
In the context of biomedical analyses of large patient cohorts, whole-genome sequencing still remains prohibitively expensive for existing high-throughput technology. On the other hand, array-based genotyping platforms provide a more efficient method of collecting data for large-scale studies of human disease, albeit at the expense of the statistical power of genome-wide association (GWA) studies that intend to fine-map causal variants or facilitate meta-analyses [1, 2, 3, 4, 5].
One solution is genotype imputation, a preliminary stage in many GWA studies that consists of inferring the genotype for a given target genome at loci that have not been directly assayed, essentially expanding the dimensionality of the original dataset [2, 5, 6, 7, 8, 9, 10]. Employing a reference panel of donated haplotypes sequenced via higher-quality technology and at a far denser set of variants, imputation algorithms like MaCH [7], Minimac [8], BEAGLE [9], PLINK [10], fastPHASE [11], and IMPUTE [2] have been demonstrated to reliably augment both the coverage and statistical power of GWA analyses and hence become an essential component of many clinical studies [6].
Further to this end, public databases like the UK biobank (UKB) [12], All of Us research program [13], Haplotype Reference Consortium [14], and 1,000 Genomes Project (1KG) [15] have been made available to facilitate genomic research in part by offering standardized and readily accessible reference panels [16]. In cases where running imputation algorithms using large reference panels is impractical on local hardware or the direct access to the biobank data is prohibited, public web services like the Michigan Impute Server [17] are often established to answer queries to clients submitting target haplotypes for imputation.
Unfortunately, as part of a growing literature on privacy concerns in genomic research, it has also been documented that coordinated attacks on the part of cryptographic adversaries are capable of compromising the privacy of research subjects that donate to public reference panels [18, 19, 20, 21]. For example, attackers have been able to exploit ancestral data [22] or other personally identifying information [23] to reconstruct reference genomes. An urgent challenge is therefore to develop a suite of imputation algorithms that can simultaneously facilitate high-utility, statistically reliable GWA studies while protecting the privacy of contributors to reference haplotype panels [18, 24, 25].
One solution is the technique of differential privacy, which has rapidly become the "gold-standard" for statistical queries by being able to provide both robust privacy guarantees for participants in studies and meaningful results for researchers in commercial and scientific settings [26, 27, 28]. At the crux of the technique is a rigorous mathematical formalization of privacy that quantifies the extent to which adding pseudorandom noise to the results of computations can protect the anonymity of members of a database [29].
The following work introduces Secure Haplotype Imputation Employing Local Differential privacy (SHIELD), a program that employs the Li-Stephens model of genetic recombination [5, 30] to impute missing haplotype variants in target genomes while incorporating differential privacy techniques to protect reference panel donors. Specifically, SHIELD proceeds in two stages: (i) initial input perturbation to guarantee local differential privacy [31] via randomized response [32, 33] and (ii) fitting a hidden Markov model [34] to each subsequent client query via the forward-backward algorithm [35]. In an experiment that closely simulates a real-world use case for haplotype imputation, we show that SHIELD is able to obtain state-of-the-art imputation accuracy while providing mathematically formalized privacy guarantees.
## 2 Results
### Overview
The setting for which SHIELD is intended consists of a client user uploading target genomes to a public imputation server [18]. In the standard imputation workflow, contributors to a biobank upload their sequenced genomic data to a central, publicly available server, where the data are then collated to create a haplotype reference panel to pass as an argument to an imputation algorithm [12, 14, 15]. Subsequently, client researchers may then upload target genomes as part of a clinical study to the server, where the targets are imputed using the private haplotype reference panel and, most often, an algorithm based in hidden Markov models [2, 7, 8, 9, 10, 34] and the forward-backward algorithm [35]. At no point in the workflow is the haplotype reference panel directly visible to client researchers submitting jobs to the server. However, while the privacy of the contributors to the reference panel may appear guaranteed, it has been demonstrated that adversarial attacks employing carefully coordinated queries to the server can divulge the sequences of reference haplotypes [18].
To this end, SHIELD modifies the imputation workflow by leveraging local [31] differential privacy [26, 27, 28, 32, 36]. Haplotype data can be represented as a bitstring in which a 1 at the \(i\)th position in the sequence indicates that the haplotype possesses the minor allele at the \(i\)th site and a 0 the major allele [8]. Prior to submission to the central imputation server, pseudorandom noise is added to the two bitstrings denoting each individual's pair of haplotypes via randomized response, a technique from differential privacy that simply consists of flipping a random subset of the bits from 0 to 1 and vice versa [32, 33]. The likelihood that a given bit in the haplotype bitstring is flipped varies as a function of a parameter \(\varepsilon\)--called the privacy budget [29]--such that lower values of \(\varepsilon\) entail a higher probability that any bit is flipped and therefore a higher degree of privacy. The tradeoff, however, is that lower privacy budgets incur a greater expense to imputation accuracy, rendering it a hyperparameter that the database curator must carefully adjust to strike an acceptable balance between donor privacy and client utility. Once all perturbed haplotypes are collected at the central server, imputation is subsequently performed using the modified haplotypes as a reference panel.
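To make the privacy-utility trade-off concrete, the following Python sketch (illustrative only, not part of the SHIELD implementation) tabulates the per-site flip probability \(1/(1+e^{\varepsilon})\) implied by randomized response for several typical privacy budgets.

```python
import numpy as np

def flip_probability(eps):
    """Probability that randomized response flips a single haplotype bit."""
    return 1.0 / (1.0 + np.exp(eps))

for eps in (0.01, 0.1, 1.0, 5.0, 10.0):
    print(f"eps = {eps:5.2f}  ->  P(flip) = {flip_probability(eps):.4f}")
```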
Privacy is guaranteed by the fact that no contributor's data will, on average, be unmodified when input to the imputation algorithm invoked by client researchers. In this way, no adversary could be certain that the results that they obtain from an attack accurately reflect the true reference panel. These privacy guarantees are also local; even if an adversary were to access the reference panel directly rather than through coordinated queries, the data obtained would again not perfectly reflect any individual's true genome [36].
### State-of-the-art imputation accuracy
To evaluate SHIELD's performance on a realistic simulation of an imputation query, we performed an ablation study on the 1KG Phase 3 [15] dataset. We withheld 100 genomes (equivalent to 200 haplotypes) from the reference panel to impute via the remaining 2,404 samples. The first 10,000 single-nucleotide polymorphisms (SNPs) were extracted from 1KG; the remaining were discarded to render run times more tractable. To
simulate an array-based assay of the 200 target haplotypes, we ablated all sites except those included in the Illumina Human1M-Duo v3.0 DNA Analysis BeadChip manifest, the intersection of which with the first 10,000 sites in the 1KG data consisted of a total of 253 sites for an _a priori_ coverage of 2.53%.
To quantify accuracy, we summed the imputed dosages for each pair of haplotypes to compute a final genotype dosage for each sample, then computed the coefficient of determination (\(R^{2}\)) between the genotype dosages and the ground-truth exome data. Because sites vary massively by minor allele frequency (MAF), the loci were divided into three bins corresponding to MAFs of \((0\%,0.5\%)\), \([0.5\%,5\%)\), and \([5\%,50\%]\). Respectively, these bins contained 5,943, 2,157, and 1,900 variants in the reference set. Accuracy was assessed, by bin, both to compare the performance of SHIELD to that of Minimac3 [8] and to characterize the effect of the privacy budget on our method's accuracy.
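A simplified Python sketch of this evaluation procedure is given below; the array shapes, bin edges and toy data are assumptions used only to illustrate how dosage accuracy can be scored per MAF bin.

```python
import numpy as np

def r2_by_maf_bin(true_geno, dosage, maf,
                  bins=((0.0, 0.005), (0.005, 0.05), (0.05, 0.5))):
    """Squared Pearson correlation between imputed dosages and truth per MAF bin.

    true_geno, dosage: arrays of shape (n_sites, n_samples); maf: shape (n_sites,).
    """
    scores = {}
    for lo, hi in bins:
        mask = (maf > lo) & (maf <= hi)
        if not mask.any():
            continue
        r = np.corrcoef(true_geno[mask].ravel(), dosage[mask].ravel())[0, 1]
        scores[(lo, hi)] = r ** 2
    return scores

# Toy data with the assumed shapes (1,000 sites, 100 samples).
rng = np.random.default_rng(0)
truth = rng.integers(0, 3, size=(1000, 100)).astype(float)
dosage = truth + rng.normal(0.0, 0.3, size=truth.shape)
maf = rng.uniform(0.0, 0.5, size=1000)
print(r2_by_maf_bin(truth, dosage, maf))
```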
Our analyses show nearly identical performance between SHIELD and Minimac3 when no input perturbation is applied, with the former obtaining scores of 0.571, 0.784, and 0.902, respectively, on the three bins enumerated above and the latter scores of 0.584, 0.787, and 0.901 (Figure 2). SHIELD's accuracy was reevaluated at various values of our privacy budget along the interval \([0.01,10]\), reflecting the typical range of values that \(\varepsilon\) is assigned in many differentially private algorithms [26]. Expectedly, accuracy exhibits a negative association with \(\varepsilon\). At an upper bound of \(\varepsilon=10\), SHIELD performs nearly identically to Minimac3
Figure 1: Overview of the SHIELD pipeline, with the key algorithms in orange. Noise is added once to the reference data (purple) via Perturb, then collated and stored on the server to guarantee local DP (modified bits in bold). The client (green) then calls Impute on the server with the target haplotype (missing sites denoted \(\varnothing\)) and the reference panel as arguments.
Figure 2: A. The reference haplotype matrix corresponding to the first 128 SNPs on chromosome 20 and 200 haplotypes in 1KG. Empty squares represent the presence of the major allele, yellow of the minor. B. The same haplotype matrix perturbed by SHIELD.
(0.564, 0.784, 0.901; Figure 3), while performance degrades significantly at \(\varepsilon=0.01\) (0.014, 0.038, 0.218; Figure 3).
### Impact on Markov parameters
As noted above, the parameters for the Markov random field [34] modeling genomic recombination [30], namely the mutation and recombination rates, were computed on the unperturbed data by Minimac3 [8]. The rationale was that the noise added to the reference panel mimicked the behavior of extremely rapid genomic recombination, causing Minimac3's expectation-maximization procedure to dramatically overestimate the recombination rates (5.93 \(\times 10^{-3}\) vs. 4.84\(\times 10^{-4}\)) and, conversely, to underestimate the mutation rates (Figure 3B). These atypical rates exerted a decidedly negative impact on imputation accuracy, with performance decreasing by 35.5%, 16.1%, and 5.46% for each of the three bins, respectively, when the rates were computed on the reference panel perturbed at \(\varepsilon=5.0\). In sum, it is clearly superior to estimate population parameters _a priori_, although, notably, doing so on the reference panel itself is not differentially private and may leak information.
### Impact on compression rates
An additional feature of haplotype imputation introduced by Minimac3 was the M3VCF format for genomic data, which both substantially decreases total file size over the traditional VCF format and enables the state-space reduction technique that further improves imputation runtime [8]. The key insight enabling the format is the observation that, due to identity-by-descent [5], most haplotypes share identical \(k\)-mers of genomic material at intervals of contiguous loci despite being unique overall. In other words, given an arbitrary interval along the genome, the number of unique \(k\)-mers collectively exhibited by the reference panel is almost always smaller than the total number of reference haplotypes _per se_. Therefore, it is possible to implement a compression scheme in which the genome is partitioned into intervals and only the unique \(k\)-mer strings are retained, substantially compressing the original reference panel [8].
An unfortunate consequence of local differential privacy via randomized response is that, on average, random noise will destroy the exact equality between haplotypes substrings. From the perspective of a compression algorithm attempting to identify the set of unique \(k\)-mers along a given interval, an apparently larger number of unique fragments will exist, rendering M3VCF-style compression will less efficient. As an illustration, we partitioned the genomic data into mutually exclusive, exhaustive blocks of uniform size ranging from 2 to 500. We then computed the data compression ratio when M3VCF-style state-space reduction was applied at each block size by dividing the total \(5.008\times 10^{8}\) bits in the uncompressed panel by the number of bits following compression and plotted the ratio against block size (Figure 3C). Input perturbation resulted in compression rates up to an order of magnitude smaller.
Figure 3: A. Comparison between the accuracy by MAF of imputed dosages for targets withheld from 1KG for both SHIELD (non-differentially private) and Minimac3. B. SHIELD’s accuracy by MAF versus privacy budget.
## 3 Discussion
In this work, we develop Secure Haplotype Imputation Employing Local Differential privacy (SHIELD), a program for performing genomic imputation with strong privacy guarantees for reference haplotypes via the randomized response technique [33]. Analysis shows that SHIELD is able to obtain state-of-the-art accuracy in realistic experimental settings at typical privacy budgets.
We note that the strong performance of SHIELD parallels the effectiveness of RAPPOR [37], a differentially private algorithm for mining strings in commercial contexts that is also based on randomized response. Unlike SHIELD, however, RAPPOR is not intended for data that is inherently binary; rather, arbitrary alphanumeric strings are hashed onto Bloom filters [38] that are subsequently perturbed. The fact that haplotype data intrinsically consist of bitstrings makes randomized response particularly convenient in a genomic context.
But despite the strong performance exhibited in the experiments above, it should be acknowledged that the privacy guarantees made by our program are limited to individual variants. In other words, for a given privacy budget \(\varepsilon\)[26, 27, 28], SHIELD can provably ensure protection for each sample's genotype at any one site, but not across the entire genome _per se_. Certain adversarial attacks are therefore still feasible with SHIELD even though accurate reconstruction of reference haplotypes is not [19, 20, 21, 22, 23]. Whole-genome privacy would instead require dividing \(\varepsilon\) across each site (see [27] for a discussion on composition in differential privacy), which is prohibitively difficult for datasets containing tens of thousands of variants. On the other hand, such divisions may be possible if a fairly limited segment of the genome is to be imputed. Future research into genomic privacy may investigate these scenarios or alternative differentially private mechanisms.
A second limitation of our program is its dependence on accurate _a priori_ estimates of population pa
Figure 4: A. SHIELD’s imputed dosage accuracy \(\left(R^{2}\right)\) by MAF using parameters derived from both the original and perturbed reference panels. B. The mean recombination and error rates using the original and perturbed parameters. C. M3VCF-like compression ratio versus haplotype block size on the original and perturbed parameters.
rameters [5, 8, 30], which are non-trivial to compute while still enforcing local differential privacy. Subsequent work may inquire into the feasibility of computing population parameters _a posteriori_ by performing some manner of statistical correction.
Nevertheless, the capacity for basic differentially private mechanisms to easily provide meaningful results is highly promising for the prospect of privacy in practical genomic research.
## 4 Methods
The SHIELD algorithm consists of two subroutines, Perturb and Impute, that are described below. The former is called once on a reference panel \(\mathbf{X}\) to produce a locally [31] differentially private [26, 27, 28, 29] reference panel \(\mathbf{\tilde{X}}\) that is stored on the imputation server, whereas the latter is then called by the client for each subsequent query haplotype \(\mathbf{z}\) using \(\mathbf{\tilde{X}}\) as the reference panel.
### Differential Privacy and Randomized Response
We derive the privacy guarantees of SHIELD from the notion of differential privacy [26, 27, 28, 29]. Preliminarily, we develop the notion of _neighboring datasets_. Given a universe of datasets \(\mathcal{X}\), we say that two datasets \(x,y\in\mathcal{X}\) are neighbors if and only if they differ by at most one individual sample. We will also call a randomized algorithm \(\mathcal{M}:\mathcal{X}\rightarrow\mathcal{F}\), where \(\mathcal{F}\) is an arbitrary probability space, a _mechanism_. We then say that a mechanism \(\mathcal{M}:\mathcal{X}\rightarrow\mathcal{F}\) satisfies \(\left(\epsilon,\delta\right)\)-differential privacy if and only if for all \(\mathcal{S}\subseteq\mathcal{F}\) and for all \(x,y\in\mathcal{X}\) such that \(x\) are \(y\) are neighboring, we have
\[P\left(\mathcal{M}\left(x\right)\in\mathcal{S}\right)\leq\exp\left(\varepsilon \right)P\left(\mathcal{M}\left(y\right)\in\mathcal{S}\right)+\delta. \tag{1}\]
Among the most common techniques in differential privacy, randomized response [32; 33] satisfies \(\epsilon\)-differential privacy for binary attributes. The randomized response scheme on a binary attribute \(X\) is a mechanism \(\mathcal{M}_{rr}:\left\{0,1\right\}\rightarrow\left\{0,1\right\}\) characterized by a \(2\times 2\) distortion matrix
\[\mathbf{P}=\begin{pmatrix}p_{00}&p_{01}\\ p_{10}&p_{11}\end{pmatrix}, \tag{2}\]
where \(p_{uv}=P\left(\mathcal{M}_{rr}(x_{i})=u|x_{i}=v\right)\quad(u,v)\in\left\{0,1\right\}\). It can be shown [32] that the highest-utility value for \(\mathbf{P}\) is
\[\mathbf{P}=\begin{pmatrix}\frac{e^{\varepsilon}}{1+e^{\varepsilon}}&\frac{1}{ 1+e^{\varepsilon}}\\ \frac{1}{1+e^{\varepsilon}}&\frac{e^{\varepsilon}}{1+e^{\varepsilon}}\end{pmatrix}. \tag{3}\]
Fixing the number of samples in our reference panel \(n\) and the number of sites \(m\), we denote the universe of possible reference panels \(\mathcal{X}=\left\{0,1\right\}^{m\times n}\). Because haplotypes are vector-valued, applying the notion of neighboring datasets is non-trivial. For our purposes, we will say that two reference panels \(\mathbf{X},\mathbf{X}^{\prime}\in\mathcal{X}\) are neighboring if and only if their Hamming distance is less than or equal to \(1\). In other words, we consider \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\) neighbors if and only if \(X_{i,j}\neq X_{i,j}^{\prime}\) for a single marker \(i\) and a single individual \(j\) as opposed to a whole-genome interpretation of neighboring datasets in which \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\) may differ by an entire row.
It then follows that by applying the randomized response mechanism \(\mathcal{M}_{rr}\) to each entry in a reference panel matrix \(\mathbf{X}\), we may store a perturbed copy \(\mathbf{\tilde{X}}\) of the original reference panel that satisfies entry-wise \(\varepsilon\)-differential privacy. The perturbation step of SHIELD then consists of the procedure Perturb. We note that we use the symbol \(\ell\) to denote a pseudorandom sample and \(\text{Bern}\left(\vartheta\right)\) to denote a Bernoulli distribution with parameter \(\vartheta\).
A convenient property of differential privacy is _post-processing_[26]. If \(\mathcal{M}:\mathcal{X}\rightarrow\mathcal{F}\) is an \(\left(\varepsilon,\delta\right)\)-differentially private randomized algorithm and \(f:\mathcal{F}\rightarrow\mathcal{F}^{\prime}\) is an arbitrary mapping, then \(f\circ\mathcal{M}:\mathcal{X}\rightarrow\mathcal{F}^{\prime}\) is \(\left(\varepsilon,\delta\right)\)-differentially private. We set \(\mathcal{M}=\mathcal{M}_{rr}\) and define \(f\) such that \(f\left(\mathbf{\tilde{X}}\right)=\textsc{Impute}\left(\mathbf{z},\mathbf{ \tilde{X}},\boldsymbol{\mu},\boldsymbol{\rho}\right)\) for some fixed values \(\mathbf{z}\), \(\boldsymbol{\mu}\), and \(\boldsymbol{\rho}\) (see below on the meaning of these parameters). Then by post-processing, it follows that each call to Impute on the perturbed reference panel \(\mathbf{\tilde{X}}\) will satisfy \(\varepsilon\)-differential privacy. In other words, once \(\mathbf{\tilde{X}}\) has been collected on the imputation server and perturbed so as to satisfy local differential privacy, an unlimited number of queries are able to be made by an algorithmic adversary without divulging any one haplotype's value at any one site with a high degree of certainty.
```
procedure Perturb(\(\mathbf{X},\varepsilon\))
    \(\mathbf{\tilde{X}}\leftarrow\) empty \(m\times n\) matrix
    for \(i=1,2,\ldots,m\) do
        for \(j=1,2,\ldots,n\) do
            \(c\stackrel{{\ell}}{{\leftarrow}}\mathrm{Bern}\left(\frac{e^{\varepsilon}}{1+e^{\varepsilon}}\right)\)
            if \(c=1\) then
                \(\tilde{X}_{i,j}\leftarrow X_{i,j}\)
            else
                \(\tilde{X}_{i,j}\leftarrow\neg X_{i,j}\)
            end if
        end for
    end for
    return \(\mathbf{\tilde{X}}\)
end procedure
```
**Algorithm 1** Applies randomized response mechanism to reference panel.
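For concreteness, the following is a minimal NumPy sketch of this perturbation step. The function name `perturb`, the vectorized formulation, and the toy panel are our own illustration under the stated randomized-response scheme, not the SHIELD implementation itself.

```python
import numpy as np

def perturb(X, eps, rng=None):
    """Entry-wise randomized response on a binary reference panel X.

    Each entry is kept with probability e^eps / (1 + e^eps) and flipped
    otherwise, i.e. the distortion matrix of Eq. (3) is applied per entry.
    """
    rng = np.random.default_rng() if rng is None else rng
    p_keep = np.exp(eps) / (1.0 + np.exp(eps))
    keep = rng.random(X.shape) < p_keep      # c ~ Bern(e^eps / (1 + e^eps))
    return np.where(keep, X, 1 - X)          # keep X[i, j] or flip it

# toy usage: a 4-marker x 3-haplotype panel with eps = 3
X = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 0, 1],
              [1, 1, 0]])
X_tilde = perturb(X, eps=3.0)
```

Because the perturbation acts independently on every entry, the whole matrix can be flipped in a single vectorized pass, which is how the sketch above treats it.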
### HMM-based genotype imputation
We will also use the following notation:
* \(0\), \(1\), and \(\varnothing\): the minor allele, major allele, and constant denoting an unobserved site to be imputed;
* \(n\) and \(m\): the number of reference samples and reference markers;
* \([n]\): the set of reference haplotypes, represented as the index set;
* \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{m}]^{\intercal} \in\left\{0,1\right\}^{m\times n}\): the reference panel haplotype sequences, equivalent to a real (and, more specifically, binary) matrix;
* \((z_{k})_{k=1}^{m}\in\left\{0,1,\varnothing\right\}^{m}\): the sequence corresponding to the observed target haplotype that, because it may include the missing site letter, is _not_, strictly speaking, a real vector;
* \(\left(\hat{z}_{k}\right)_{k=1}^{m}\equiv\mathbf{\hat{z}}\in\left[0,1\right]^{m}\): the sequence of imputed haplotype dosages, equivalent to a real vector;
* \(\mathbf{y}\in[n]^{m}\): the site-wise identities of the reference haplotypes from which \(\mathbf{z}\) is descended;
* \(\boldsymbol{\rho}\in\left[0,1\right]^{m}\): the recombination rates [5, 30] such that \(\rho_{i}=P(y_{i+1}=j_{2}|y_{i}=j_{1})\) for \(j_{2}\neq j_{1}\), meaning that \(\boldsymbol{\rho}\) is equivalent to a real vector (we simply let \(\rho_{m}=0\) as a dummy value);
* \(\boldsymbol{\mu}\in\left[0,1\right]^{m}\): the mutation rates [5, 30] such that \(\mu_{i}=P(z_{i}\neq X_{i,j}|y_{i}=j)\), meaning that \(\boldsymbol{\mu}\) is equivalent to a real vector;
* \(\mathbf{M}=[\mathbf{m}_{1},\mathbf{m}_{2},\ldots,\mathbf{m}_{m}]^{\intercal} \in\left[0,1\right]^{m\times n}\): the emission probabilities in matrix form such that \[M_{i,j}=\begin{cases}1-\mu_{i}&\text{if}\quad z_{i}=X_{i,j}\\ \mu_{i}&\text{if}\quad z_{i}=\varnothing\quad\text{or}\quad z_{i}\neq X_{i,j} \end{cases};\] (4)
* \(\boldsymbol{\Gamma}=[\boldsymbol{\gamma}_{1},\boldsymbol{\gamma}_{2},\ldots, \boldsymbol{\gamma}_{m}]^{\intercal}\in\left[0,1\right]^{m\times n}\): the posterior probabilities for haplotype identity for all sites in matrix form such that \(\Gamma_{i,j}=P\left(y_{i}=j\,|\,(z_{k})_{k=1}^{m}\right)\);
* \(\mathbf{A}=[\boldsymbol{\alpha}_{1},\boldsymbol{\alpha}_{2},\ldots,\boldsymbol{\alpha}_{m}]^{\intercal}\in\left[0,1\right]^{m\times n}\): the forward probabilities [35] for all sites in matrix form such that \(A_{i,j}=P(y_{i}=j|\left(z_{k}\right)_{k=1}^{i})\);
* \(\mathbf{B}=[\boldsymbol{\beta}_{1},\boldsymbol{\beta}_{2},\ldots,\boldsymbol{\beta}_{m}]^{\intercal}\in\left[0,1\right]^{m\times n}\): the backward probabilities [35] for all sites in matrix form such that \(B_{i,j}=P\left(y_{i}=j,(z_{k})_{k=i+1}^{m}\right)\);
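To illustrate how these quantities interact, the following is a schematic NumPy sketch of a Li-Stephens-style forward-backward pass that produces posteriors \(\boldsymbol{\Gamma}\) and imputed dosages \(\mathbf{\hat{z}}\) from \(\mathbf{z}\), \(\mathbf{X}\), \(\boldsymbol{\mu}\), and \(\boldsymbol{\rho}\). The uniform initial distribution, the per-site normalization, and the probability \(1-(n-1)\rho_{i}\) of staying on the same haplotype are our own simplifying assumptions and need not coincide with the Impute procedure referenced above.

```python
import numpy as np

def impute_dosages(z, X, mu, rho):
    """Schematic forward-backward imputation (not the Impute procedure itself).

    z   : length-m array, 0/1 at observed sites and np.nan at sites to impute
    X   : (m, n) binary reference panel
    mu  : length-m mutation rates, rho : length-m recombination rates
    """
    m, n = X.shape
    obs = ~np.isnan(z)
    match = X == np.where(obs, z, 0)[:, None]
    M = np.where(obs[:, None] & match, 1 - mu[:, None], mu[:, None])  # Eq. (4)

    A = np.zeros((m, n))                       # forward probabilities
    A[0] = M[0] / M[0].sum()
    for i in range(m - 1):
        trans = (1 - (n - 1) * rho[i]) * A[i] + rho[i] * (A[i].sum() - A[i])
        A[i + 1] = M[i + 1] * trans
        A[i + 1] /= A[i + 1].sum()

    B = np.ones((m, n)) / n                    # backward probabilities
    for i in range(m - 2, -1, -1):
        nxt = M[i + 1] * B[i + 1]
        B[i] = (1 - (n - 1) * rho[i]) * nxt + rho[i] * (nxt.sum() - nxt)
        B[i] /= B[i].sum()

    G = A * B                                  # posteriors Gamma, up to scale
    G /= G.sum(axis=1, keepdims=True)
    return (G * X).sum(axis=1)                 # imputed dosages z_hat

# toy usage on a 4-site, 3-haplotype panel with one missing site
X = np.array([[0, 1, 1], [1, 0, 1], [0, 0, 1], [1, 1, 0]])
z = np.array([0.0, 1.0, np.nan, 1.0])
z_hat = impute_dosages(z, X, mu=np.full(4, 0.01), rho=np.full(4, 0.05))
```

Because the emission row of a missing site is constant across haplotypes, such a site contributes no information to the hidden path, and its dosage is determined entirely by the neighboring observed markers.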
2306.17793 | Screw and Lie Group Theory in Multibody Dynamics -- Recursive Algorithms
and Equations of Motion of Tree-Topology Systems | Screw and Lie group theory allows for user-friendly modeling of multibody
systems (MBS) while at the same they give rise to computationally efficient
recursive algorithms. The inherent frame invariance of such formulations allows
for use of arbitrary reference frames within the kinematics modeling (rather
than obeying modeling conventions such as the Denavit-Hartenberg convention)
and to avoid introduction of joint frames. The computational efficiency is owed
to a representation of twists, accelerations, and wrenches that minimizes the
computational effort. This can be directly carried over to dynamics
formulations. In this paper recursive $O\left( n\right) $ Newton-Euler
algorithms are derived for the four most frequently used representations of
twists, and their specific features are discussed. These formulations are
related to the corresponding algorithms that were presented in the literature.
The MBS motion equations are derived in closed form using the Lie group
formulation. One are the so-called 'Euler-Jourdain' or 'projection' equations,
of which Kane's equations are a special case, and the other are the Lagrange
equations. The recursive kinematics formulations are readily extended to higher
orders in order to compute derivatives of the motions equations. To this end,
recursive formulations for the acceleration and jerk are derived. It is briefly
discussed how this can be employed for derivation of the linearized motion
equations and their time derivatives. The geometric modeling allows for direct
application of Lie group integration methods, which is briefly discussed. | Andreas Mueller | 2023-06-30T16:48:25Z | http://arxiv.org/abs/2306.17793v1 | Screw and Lie Group Theory in Multibody Dynamics Recursive Algorithms and Equations of Motion of Tree-Topology Systems
###### Abstract
Screw and Lie group theory allows for user-friendly modeling of multibody systems (MBS), and at the same time gives rise to computationally efficient recursive algorithms. The inherent frame invariance of such formulations allows for the use of arbitrary reference frames within the kinematics modeling (rather than obeying modeling conventions such as the Denavit-Hartenberg convention) and avoids the introduction of joint frames. The computational efficiency is owed to a representation of twists, accelerations, and wrenches that minimizes the computational effort. This can be directly carried over to dynamics formulations. In this paper recursive \(O\left(n\right)\) Newton-Euler algorithms are derived for the four most frequently used representations of twists, and their specific features are discussed. These formulations are related to the corresponding algorithms that were presented in the literature. The MBS motion equations are derived in closed form using the Lie group formulation. One form is the so-called 'Euler-Jourdain' or 'projection' equations, of which Kane's equations are a special case, and the other is the Lagrange equations. The recursive kinematics formulations are readily extended to higher orders in order to compute derivatives of the motion equations. To this end, recursive formulations for the acceleration and jerk are derived. It is briefly discussed how this can be employed for the derivation of the linearized motion equations and their time derivatives. The geometric modeling allows for direct application of Lie group integration methods, which is briefly discussed.
Keywords: Multibody system dynamics · relative coordinates · recursive algorithms · O(n) · screws · Lie groups · Newton-Euler equations · Lagrange equations · Kane's equations · Euler-Jourdain equations · projection equations · Lie group integration · linearization
## 1 Introduction
The core task in computational multibody system (MBS) dynamics is to either construct the equations of motion (EOM) explicitly, that can be written for an unconstrained tree-topology MBS in the form
\[\mathbf{M}\left(\mathbf{q}\right)\ddot{\mathbf{q}}+\mathbf{C}\left(\dot{\mathbf{ q}},\mathbf{q}\right)\dot{\mathbf{q}}=\mathbf{Q}\left(\dot{\mathbf{q}},\mathbf{q},t \right), \tag{1}\]
in a way that is easy to pursue, or to evaluate them for given \(\left(\ddot{\mathbf{q}},\dot{\mathbf{q}},\mathbf{q}\right)\) and \(t\), respectively to solve them, in a computationally efficient way for \(\mathbf{q}\left(t\right)\). In continuation of [62] the aim of this paper is to present established \(O\left(n\right)\) formulations in a common geometric setting and to show that this setting allows for a flexible and user-friendly MBS modeling.
Screw and Lie group theory provides a geometric framework that allows for achieving optimal computational performance and at the same time allows for an intuitive and flexible modeling. In particular, it gives rise to a formulation of the MBS kinematics that does not involve body-fixed joint frames. The kinematics modeling is indeed reflected in the formulation used to evaluate the EOM. A central concept is the representation of velocities (twists) as screws. Four different variants were recalled in [62]. In this paper their application to dynamics modeling is reviewed. A well-known approach, which exploits the fact that rigid body twists are screws, is the so-called'spatial
vector' formulation introduced in [27; 30], respectively the so-called 'spatial operator algebra' that was formalized in [75]. The latter is the basis for the \(O\left(n\right)\) forward dynamics algorithms introduced in [31; 38; 39; 45; 74; 76]. The fundamental operation underlying these formulations is the frame transformation of screws, i.e. twists and wrenches. The fact that the latter can be expressed in terms of compact matrix operations gave rise to a matrix formulation for the MBS kinematics and dynamics [5; 43; 44; 85] using screw algebra. While these formulations merely make use of the algebraic properties of screws (e.g. velocities, accelerations, wrenches), several algorithms for generating the EOM of MBS with tree topology were reported that also exploit the fact that finite rigid body motions constitute the Lie group \(SE\left(3\right)\) whose Lie algebra \(se\left(3\right)\) is isomorphic to the algebra of screws [16; 33; 34; 24; 25]. The central relation is the _product of exponentials_ (POE) introduced in [16]. The important feature of such a geometric Lie group formulation is the frame invariance, which makes it independent of any modeling convention like Denavit-Hartenberg. This allows for direct processing of CAD data, and gives further rise to numerically advantageous Lie group time integration methods. Yet there is no established Lie group algorithm for the generation and evaluation, respectively, of the EOM that takes full advantage of the freedom to choose different motion representations enabled by the frame invariance.
This paper is organized as follows. Recursive relations for the acceleration and jerk, and thus for the time derivatives of the Jacobians, are first derived in section 2. The Newton-Euler equations for the four different representations of twists introduced in [62] are then recalled in section 3. The corresponding recursive \(O\left(n\right)\) inverse dynamics algorithms for evaluating the EOM are presented in section 4. The body-fixed algorithm is similar to that in [2; 7; 31; 35; 36; 45; 46; 69; 70; 73; 72; 78], the hybrid formulation to that in [1; 6; 38; 39; 75; 76], and the spatial formulation to that in [30]. Two versions of the EOM in closed form are presented in section 5. In section 5.1 the 'Euler-Jourdain' or 'projection' equations [15; 86] are presented that, together with the screw formulation of MBS kinematics, allow for an efficient MBS modeling in terms of readily available geometric data. In section 5.2 a closed form of the Lagrangian EOM is presented using the Lie group approach. It should be noticed that the presented formulations allow for modeling MBS without introduction of joint frames, while applying the recursive kinematics and dynamics algorithm that is deemed best suited. The significance of the Lie group formulation for the linearization of the EOM as well as the determination of derivatives of the EOM w.r.t. geometric design parameters and time derivatives is discussed in section 6. Finally in section 7 the application of Lie group integration methods is briefly discussed. The kinematic relations that were presented in [62] are summarized in appendix A. The basic Lie group background can be found in [48; 77; 65].
## 2 Acceleration, Jerk, and Partial Derivatives of Jacobian
Besides the compact description of finite and instantaneous motions of a system of articulated bodies, a prominent feature of the screw theoretical approach is that it allows for expressing the partial derivatives explicitly in terms of geometric objects. Moreover, the analytic formulation of the kinematics using the POE gives rise to compact expressions for higher derivatives of the instantaneous joint screws, i.e. of the Jacobian, which may be relevant for sensitivity analysis and linearization of motion equations. In this section results for the acceleration and jerk of a kinematic chain are presented for the body-fixed, spatial, and hybrid representation. The corresponding relations for the mixed representation are readily found from either one of these using the relations in table 3 of [62].
### Body-Fixed Representation
Starting from (101) the body-fixed acceleration is \(\dot{\mathbf{V}}_{i}^{\mathrm{b}}=\mathbf{J}_{i}^{\mathrm{b}}\ddot{\mathbf{q}}+\dot{\mathbf{J}}_{i}^{\mathrm{b}}\dot{\mathbf{q}}\), and explicitly in terms of the body-fixed instantaneous screw coordinates
\[\dot{\mathbf{V}}_{i}^{\mathrm{b}}=\sum_{j\leq i}\mathbf{J}_{i,j}^{\mathrm{b}} \ddot{q}_{j}+\sum_{j\leq i}\sum_{k\leq i}\frac{\partial}{\partial q_{k}}\mathbf{ J}_{i,j}^{\mathrm{b}}\dot{q}_{j}\dot{q}_{k}. \tag{2}\]
Using the matrix form of (103) the partial derivatives of the instantaneous screw coordinates are
\[\frac{\partial}{\partial q_{k}}\widehat{\mathbf{J}}_{i,j}^{\mathrm{b}}=\frac{\partial}{\partial q_{k}}(\mathbf{C}_{i}^{-1}\mathbf{C}_{j})\mathbf{A}_{j}^{-1}\widehat{\mathbf{Y}}_{j}\mathbf{A}_{j}\mathbf{C}_{j}^{-1}\mathbf{C}_{i}+\mathbf{C}_{i}^{-1}\mathbf{C}_{j}\mathbf{A}_{j}^{-1}\widehat{\mathbf{Y}}_{j}\mathbf{A}_{j}\frac{\partial}{\partial q_{k}}(\mathbf{C}_{j}^{-1}\mathbf{C}_{i}). \tag{2}\]
This can be evaluated with help of the POE formula (93) as
\[\frac{\partial}{\partial q_{k}}(\mathbf{C}_{i}^{-1}\mathbf{C}_{j}) =\frac{\partial}{\partial q_{k}}(\mathbf{A}_{i}^{-1}\exp(- \widehat{\mathbf{Y}}_{i}q_{i})\cdots\exp(-\widehat{\mathbf{Y}}_{j+1}q_{j+1}) \mathbf{A}_{j}))\] \[=-\mathbf{A}_{i}^{-1}\exp(-\widehat{\mathbf{Y}}_{i}q_{i})\cdots \exp(-\widehat{\mathbf{Y}}_{k+1}q_{k+1})\widehat{\mathbf{Y}}_{k}\exp(- \widehat{\mathbf{Y}}_{k}q_{k})\cdots\exp(-\widehat{\mathbf{Y}}_{j+1}q_{j+1}) \mathbf{A}_{j}\] \[=-\mathbf{C}_{i}^{-1}\mathbf{C}_{k}\mathbf{A}_{k}^{-1}\widehat{ \mathbf{Y}}_{k}\mathbf{A}_{k}\mathbf{C}_{k}^{-1}\mathbf{C}_{j}=-\mathbf{C}_{i }^{-1}\mathbf{C}_{k}\mathbf{A}_{k}^{-1}\widehat{\mathbf{Y}}_{k}\mathbf{A}_{k} \mathbf{C}_{k}^{-1}\mathbf{C}_{i}\mathbf{C}_{i}^{-1}\mathbf{C}_{j}\] \[=-\widehat{\mathbf{J}}_{i,k}^{\mathrm{b}}\mathbf{C}_{i}^{-1} \mathbf{C}_{j},\ j\leq k\leq i, \tag{3}\]
and in the same way it follows that
\[\frac{\partial}{\partial q_{k}}(\mathbf{C}_{j}^{-1}\mathbf{C}_{i})=\mathbf{C}_{j}^{-1}\mathbf{C}_{i}\widehat{\mathbf{J}}_{i,k}^{\mathrm{b}},\ j\leq k\leq i. \tag{4}\]
Inserted into (2) yields \(\frac{\partial}{\partial q_{k}}\widehat{\mathbf{J}}_{i,j}^{\mathrm{b}}= \widehat{\mathbf{J}}_{i,j}^{\mathrm{b}}\widehat{\mathbf{J}}_{i,k}^{\mathrm{b}} -\widehat{\mathbf{J}}_{i,k}^{\mathrm{b}}\widehat{\mathbf{J}}_{i,j}^{\mathrm{b}}\), and noting (113), the final expression is
\[\frac{\partial\mathbf{J}_{i,j}^{\mathrm{b}}}{\partial q_{k}}=[\mathbf{J}_{i,j }^{\mathrm{b}},\mathbf{J}_{i,k}^{\mathrm{b}}],\ j<k\leq i. \tag{5}\]
Hence the partial derivative of the instantaneous joint screw \(\mathbf{J}_{i,j}^{\mathrm{b}}\) w.r.t. to \(q_{k}\) is simply the screw product (114) of \(\mathbf{J}_{i,j}^{\mathrm{b}}\) and \(\mathbf{J}_{i,k}^{\mathrm{b}}\). The final expression for the acceleration attains a very compact form
\[\dot{\mathbf{V}}_{i}^{\mathrm{b}}=\sum_{j\leq i}\mathbf{J}_{i,j}^{\mathrm{b}} \ddot{q}_{j}+\sum_{j<k\leq i}[\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,k}^ {\mathrm{b}}]\dot{q}_{j}\dot{q}_{k}. \tag{6}\]
Indeed the same result would be obtained using (103) in terms of \(\mathbf{Y}_{i}\). This expression has been derived, using different notations, for instance in [16; 65; 69; 51].
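As a small illustration of (6), the following sketch evaluates the body-fixed acceleration of body \(i\) directly from the columns \(\mathbf{J}_{i,j}^{\mathrm{b}}\) of its Jacobian via the screw product. Twists are stored as 6-vectors with the angular part first, \((\boldsymbol{\omega},\mathbf{v})\); the helper names are our own and the snippet is meant only as a sketch of the formula.

```python
import numpy as np

def lie_bracket(X1, X2):
    """Screw product [X1, X2] of two twists X = (omega, v) in R^6."""
    w1, v1 = X1[:3], X1[3:]
    w2, v2 = X2[:3], X2[3:]
    return np.concatenate([np.cross(w1, w2),
                           np.cross(v1, w2) + np.cross(w1, v2)])

def body_fixed_acceleration(J, qd, qdd):
    """Eq. (6): Vdot_i = sum_j J_ij qdd_j + sum_{j<k} [J_ij, J_ik] qd_j qd_k.

    J : list of the instantaneous joint screws J^b_{i,1}, ..., J^b_{i,i}
    """
    Vdot = np.zeros(6)
    for j in range(len(J)):
        Vdot += J[j] * qdd[j]
        for k in range(j + 1, len(J)):
            Vdot += lie_bracket(J[j], J[k]) * qd[j] * qd[k]
    return Vdot
```

The same bracket also delivers the partial derivatives (5), so once the instantaneous joint screws are available, the derivative of the Jacobian comes essentially for free.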
The equations (6) can be summarized for all bodies \(i=1,\ldots,n\) using the system twist (111) and system Jacobian (112). To this end, the derivative (5) is rewritten as
\[\frac{\partial\mathbf{J}_{i,j}^{\mathrm{b}}}{\partial q_{k}}=[\mathbf{J}_{i,j}^ {\mathrm{b}},\mathbf{J}_{i,k}^{\mathrm{b}}]=\mathbf{Ad}_{\mathbf{C}_{i,k}}[ \mathbf{J}_{k,j}^{\mathrm{b}},{}^{k}\mathbf{X}_{k}]=-\mathbf{Ad}_{\mathbf{C}_{ i,k}}\mathbf{ad}_{\mathbf{A}_{k}\mathbf{X}_{k}}\mathbf{J}_{k,j}^{\mathrm{b}},\ j<k\leq i \tag{7}\]
so that
\[\dot{\mathbf{J}}_{i,j}^{\mathrm{b}}=\sum_{j<k\leq i}[\mathbf{J}_{i,j}^{ \mathrm{b}},\mathbf{J}_{i,k}^{\mathrm{b}}]\dot{q}_{k}=-\sum_{j<k\leq i}\mathbf{ Ad}_{\mathbf{C}_{i,k}}\mathbf{ad}_{\mathbf{A}_{k}\mathbf{X}_{k}}\mathbf{J}_{k,j}^{ \mathrm{b}}\dot{q}_{k}.\]
Noticing that \(\mathbf{ad}_{{}^{k}\mathbf{X}_{k}}\mathbf{J}_{k,k}^{\mathrm{b}}=\mathbf{0}\) the time derivative of the body-fixed system Jacobian factors as
\[\dot{\mathbf{J}}^{\mathrm{b}}\left(\mathbf{q},\dot{\mathbf{q}}\right)=-\mathbf{ A}^{\mathrm{b}}\left(\mathbf{q}\right)\mathbf{a}^{\mathrm{b}}\left(\dot{\mathbf{q}} \right)\mathbf{A}^{\mathrm{b}}\left(\mathbf{q}\right)\mathbf{X}^{\mathrm{b}}=- \mathbf{A}^{\mathrm{b}}\left(\mathbf{q}\right)\mathbf{a}^{\mathrm{b}}\left(\dot{ \mathbf{q}}\right)\mathbf{J}^{\mathrm{b}}\left(\mathbf{q}\right) \tag{8}\]
with \(\mathbf{A}^{\mathrm{b}}\) defined in (24) of [62] and with
\[\mathbf{a}^{\mathrm{b}}\left(\dot{\mathbf{q}}\right):=\mathrm{diag}\ (\dot{q}_{1}\mathbf{ad}_{{}^{1}\mathbf{X}_{1}},\ldots,\dot{q}_{n}\mathbf{ad}_{{}^{n}\mathbf{X}_{n}}). \tag{9}\]
Hence the system acceleration is given in compact matrix form as
\[\dot{\mathbf{V}}^{\mathrm{b}}=\mathbf{J}^{\mathrm{b}}\ddot{\mathbf{q}}-\mathbf{A} ^{\mathrm{b}}\mathbf{a}^{\mathrm{b}}\mathbf{J}^{\mathrm{b}}\dot{\mathbf{q}}= \mathbf{J}^{\mathrm{b}}\ddot{\mathbf{q}}-\mathbf{A}^{\mathrm{b}}\mathbf{a}^{ \mathrm{b}}\mathbf{V}^{\mathrm{b}}. \tag{10}\]
Remark 1 (Overall inverse kinematics solution): The relation (10) gives rise to a solution of the inverse kinematics problem on acceleration level, i.e. the generalized accelerations for given configurations, twists, and accelerations of the bodies. The unique solution is
\[\ddot{\mathbf{q}}=((\mathbf{X}^{\mathrm{b}})^{T}\mathbf{X}^{\mathrm{b}})^{-1}( \mathbf{X}^{\mathrm{b}})^{T}((\mathbf{I}-\mathsf{D}^{\mathrm{b}})\dot{\mathbf{ V}}^{\mathrm{b}}+\mathsf{a}^{\mathrm{b}}\mathbf{V}^{\mathrm{b}}) \tag{11}\]
which is indeed the time derivative of (26) in [62]. In components this gives the acceleration of the individual joints as \(\ddot{q}_{i}={}^{i}\mathbf{X}_{i}^{T}(\dot{\mathbf{V}}_{i}^{\mathrm{b}}-\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\dot{\mathbf{V}}_{i-1}^{\mathrm{b}}+\dot{q}_{i}[{}^{i}\mathbf{X}_{i},\mathbf{V}_{i}^{\mathrm{b}}])/\left\|{}^{i}\mathbf{X}_{i}\right\|^{2}\).
A further time derivative of the twist yields the jerk of a body, which requires a further partial derivative of the Jacobian. Starting from (5), and using the Jacobi identity (116) and the bilinearity \(\frac{\partial}{\partial q_{k}}[\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,k}^{\mathrm{b}}]=[\frac{\partial}{\partial q_{k}}\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,k}^{\mathrm{b}}]+[\mathbf{J}_{i,j}^{\mathrm{b}},\frac{\partial}{\partial q_{k}}\mathbf{J}_{i,k}^{\mathrm{b}}]\), the non-zero second partial derivative is found as
\[\frac{\partial^{2}\mathbf{J}_{i,j}^{\mathrm{b}}}{\partial q_{k}\partial q_{r }}=\left\{\begin{array}{l}[[\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,k}^{ \mathrm{b}}],\mathbf{J}_{i,r}^{\mathrm{b}}],j<k\leq r\leq i\\ [[\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,r}^{\mathrm{b}}],\mathbf{J}_{i, k}^{\mathrm{b}}],j<r<k\leq i\end{array}\right.. \tag{12}\]
This gives rise to an explicit form for the body-fixed jerk
\[\ddot{\mathbf{V}}_{i}^{\mathrm{b}}=\sum_{j\leq i}\mathbf{J}_{i,j}^{\mathrm{b}}\dddot{q}_{j}+2\sum_{j<k\leq i}[\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,k}^{\mathrm{b}}]\ddot{q}_{j}\dot{q}_{k}+\sum_{j<k\leq i}[\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,k}^{\mathrm{b}}]\dot{q}_{j}\ddot{q}_{k}+\sum_{j<k\leq i}\sum_{r\leq i}\frac{\partial^{2}\mathbf{J}_{i,j}^{\mathrm{b}}}{\partial q_{k}\partial q_{r}}\dot{q}_{j}\dot{q}_{k}\dot{q}_{r} \tag{13}\]

where the second partial derivatives of the instantaneous joint screws are given by (12).

### Spatial Representation
In matrix form the overall spatial acceleration can be summarized as
\[\dot{\mathbf{V}}^{\mathrm{s}}=\mathrm{J}^{\mathrm{s}}\ddot{\mathbf{q}}+\mathsf{Lb}^ {\mathrm{s}}\mathrm{diag}\;(\mathbf{J}^{\mathrm{s}}_{1},\ldots,\mathbf{J}^{ \mathrm{s}}_{n})\dot{\mathbf{q}} \tag{18}\]
with
\[\mathsf{b}^{\mathrm{s}}\left(\mathbf{V}^{\mathrm{s}}\right):=\mathrm{diag}\;( \mathbf{ad}_{\mathbf{V}^{\mathrm{s}}_{1}},\ldots,\mathbf{ad}_{\mathbf{V}^{ \mathrm{s}}_{n}}) \tag{19}\]
and \(\mathsf{L}\) being the lower triangular block identity matrix. A solution for \(\ddot{\mathbf{q}}\) similar to (10) exists.
The second partial derivative of the spatial Jacobian is
\[\frac{\partial^{2}\mathbf{J}^{\mathrm{s}}_{i}}{\partial q_{k}q_{j}}=\left\{ \begin{array}{ll}[\mathbf{J}^{\mathrm{s}}_{k},[\mathbf{J}^{\mathrm{s}}_{j}, \mathbf{J}^{\mathrm{s}}_{i}]],&k<j<i\\ [\mathbf{J}^{\mathrm{s}}_{j},[\mathbf{J}^{\mathrm{s}}_{k},\mathbf{J}^{ \mathrm{s}}_{i}]],&j\leq k<i\end{array}\right.. \tag{20}\]
Therewith the spatial representation of the jerk of body \(i\) is found as
\[\ddot{\mathbf{V}}^{\mathrm{s}}_{i} =\sum_{j\leq i}\left(\mathbf{J}^{\mathrm{s}}_{j}\dddot{q}_{j}+2[ \mathbf{V}^{\mathrm{s}}_{j},\mathbf{J}^{\mathrm{s}}_{j}]\ddot{q}_{j}+\sum_{k \leq j}[\mathbf{J}^{\mathrm{s}}_{k}\ddot{q}_{k},\mathbf{J}^{\mathrm{s}}_{j}] \dot{q}^{j}+[\mathbf{V}^{\mathrm{s}}_{j-1}+\mathbf{V}^{\mathrm{s}}_{j}- \mathbf{V}^{\mathrm{s}}_{i},[\mathbf{V}^{\mathrm{s}}_{j},\mathbf{J}^{ \mathrm{s}}_{j}]]\dot{q}_{j}\right) \tag{21}\] \[=\sum_{j\leq i}\left(\mathbf{J}^{\mathrm{s}}_{j}\dddot{q}_{j}+[ [\mathbf{V}^{\mathrm{s}}_{j},\mathbf{J}^{\mathrm{s}}_{j}],\mathbf{V}^{ \mathrm{s}}_{i}-2\mathbf{V}^{\mathrm{s}}_{j}]\dot{q}_{j}+[\sum_{k\leq j} \mathbf{J}^{\mathrm{s}}_{k}\ddot{q}_{k}+[\mathbf{V}^{\mathrm{s}}_{j},\mathbf{J }^{\mathrm{s}}_{j}]\dot{q}_{j},\mathbf{J}^{\mathrm{s}}_{j}]\dot{q}_{j}+2[ \mathbf{V}^{\mathrm{s}}_{j},\mathbf{J}^{\mathrm{s}}_{j}]\ddot{q}_{j}\right) \tag{22}\]
The instantaneous joint screws (104), and thus their derivatives (15) and (20), are independent of a particular body. The closed form of the \(\nu\)-th order partial derivative has been reported in [55]
\[\frac{\partial^{\nu}\mathbf{J}^{\mathrm{s}}_{i}}{\partial q_{\alpha _{1}}\partial q_{\alpha_{2}}\ldots\partial q_{\alpha_{\nu}}} =[\mathbf{J}^{\mathrm{s}}_{\beta_{\nu}},[\mathbf{J}^{\mathrm{s}}_ {\beta_{\nu-1}},[\mathbf{J}^{\mathrm{s}}_{\beta_{\nu-2}},\ldots[\mathbf{J}^{ \mathrm{s}}_{\beta_{1}},\mathbf{J}^{\mathrm{s}}_{i}]\ldots]]],\;\beta_{\nu} \leq\beta_{\nu-1}\leq\cdots\leq\beta_{1}<i \tag{23}\] \[=\mathbf{ad}_{\mathbf{J}^{\mathrm{s}}_{\beta_{\nu}}}\mathbf{ad}_{ \mathbf{J}^{\mathrm{s}}_{\beta_{\nu-1}}}\mathbf{ad}_{\mathbf{J}^{\mathrm{s}}_ {\beta_{\nu-2}}}\cdots\mathbf{ad}_{\mathbf{J}^{\mathrm{s}}_{\beta_{1}}}\mathbf {J}^{\mathrm{s}}_{i},\;\beta_{\nu}\leq\beta_{\nu-1}\leq\cdots\leq\beta_{1}<i\] \[=[\mathbf{J}^{\mathrm{s}}_{\beta_{\nu}},\frac{\partial^{\nu-1} \mathbf{J}^{\mathrm{s}}_{i}}{\partial q_{\beta_{1}}\partial q_{\beta_{2}} \cdots\partial q_{\beta_{\nu-1}}}],\;\beta_{\nu}\leq\beta_{\nu-1}<i\]
where again \(\beta_{\nu}\leq\beta_{\nu-1}\leq\cdots\leq\beta_{1}\) is the ordered sequence of the indices \(\alpha_{1},\ldots,\alpha_{\nu}\). The last form in (23) allows for a recursive determination. Moreover, a recursive formulation for the time derivative of spatial twists has been reported in [58]. Together with the very concise form (16) this makes the spatial representation computationally very attractive.
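The recursive structure of (23) is straightforward to implement: each additional differentiation simply wraps one more screw product around the previous result. A minimal sketch (our own naming, twists as \((\boldsymbol{\omega},\mathbf{v})\) 6-vectors, 1-based joint indices) reads:

```python
import numpy as np

def lie_bracket(X1, X2):
    """Screw product [X1, X2] of two twists X = (omega, v) in R^6."""
    w1, v1 = X1[:3], X1[3:]
    w2, v2 = X2[:3], X2[3:]
    return np.concatenate([np.cross(w1, w2),
                           np.cross(v1, w2) + np.cross(w1, v2)])

def partial_spatial_jacobian(J_s, i, alphas):
    """Eq. (23): partial derivative of J^s_i w.r.t. q_alpha_1, ..., q_alpha_nu.

    J_s    : list of the spatial joint screws J^s_1, ..., J^s_n (6-vectors)
    alphas : differentiation indices (1-based); the result vanishes if any
             index is >= i, since J^s_i depends only on q_1, ..., q_{i-1}.
    """
    betas = sorted(alphas, reverse=True)   # beta_1 (largest index) comes first
    if betas and betas[0] >= i:
        return np.zeros(6)
    D = J_s[i - 1]
    for b in betas:                        # innermost bracket uses beta_1
        D = lie_bracket(J_s[b - 1], D)
    return D
```

Since all mixed partial derivatives reduce to nested brackets of the same joint screws, higher-order sensitivities reuse the lower-order results without additional frame transformations.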
### Hybrid Form
The results in section 2.1 can be carried over to the hybrid twist making use of the relation (106). As in (118), denote with \(\overset{v}{\mathbf{J}}^{\mathrm{h}}_{i,k}\) and \(\overset{\omega}{\mathbf{J}}^{\mathrm{h}}_{i,k}\) the screw coordinate vectors comprising respectively the linear and angular part of the column of the hybrid Jacobian so that \(\mathbf{J}^{\mathrm{h}}_{i,k}=\overset{\omega}{\mathbf{J}}^{\mathrm{h}}_{i,k} +\overset{v}{\mathbf{J}}^{\mathrm{h}}_{i,k}\). Then
\[\frac{\partial\mathbf{J}^{\mathrm{h}}_{i,j}}{\partial q_{k}} =\frac{\partial\mathbf{Ad}_{\mathbf{R}_{i}}}{\partial q_{k}} \mathbf{J}^{\mathrm{b}}_{i,j}+\mathbf{Ad}_{\mathbf{R}_{i}}\frac{\partial \mathbf{J}^{\mathrm{b}}_{i,j}}{\partial q_{k}}=\mathbf{ad}_{\mathbf{J}^{ \mathrm{s}}_{i,k}}\mathbf{Ad}_{\mathbf{R}_{i}}\mathbf{J}^{\mathrm{b}}_{i,j}+ \mathbf{Ad}_{\mathbf{R}_{i}}[\mathbf{J}^{\mathrm{b}}_{i,j},\mathbf{J}^{ \mathrm{b}}_{i,k}]\] \[=[\overset{\omega}{\mathbf{J}}^{\mathrm{h}}_{i,k},\mathbf{J}^{ \mathrm{h}}_{i,j}]+[\mathbf{J}^{\mathrm{h}}_{i,j},\mathbf{J}^{\mathrm{h}}_{i,k} ]=[\mathbf{J}^{\mathrm{h}}_{i,j},\mathbf{J}^{\mathrm{h}}_{i,k}-\overset{ \omega}{\mathbf{J}}^{\mathrm{h}}_{i,k}] \tag{24}\]
and thus
\[\frac{\partial\mathbf{J}^{\mathrm{h}}_{i,j}}{\partial q_{k}} {=}[\mathbf{J}^{\mathrm{h}}_{i,j},\overset{v}{\mathbf{J}}^{ \mathrm{h}}_{i,k}]=-\mathbf{ad}_{\overset{v}{\mathbf{J}}^{\mathrm{h}}_{i,k}} \mathbf{J}^{\mathrm{h}}_{ij},\;\;j\leq k\leq i. \tag{25}\]
The similarity to (5) is apparent. The difference is that the convective term due to the angular motion is missing, which is why only \(\overset{v}{\mathbf{J}}\) appears. The time derivative of the hybrid Jacobian can thus be expressed as
\[\dot{\mathbf{J}}^{\mathrm{h}}_{i,j}=\sum_{j\leq k\leq i}[\mathbf{J}^{\mathrm{h}}_{i,j},\overset{v}{\mathbf{J}}^{\mathrm{h}}_{i,k}]\dot{q}_{k}=[\mathbf{J}^{\mathrm{h}}_{i,j},\Delta\overset{v}{\mathbf{V}}^{\mathrm{h}}_{j-1,i}] \tag{26}\]
where \(\Delta\mathbf{V}_{j-1,i}^{\mathrm{h}}:=\mathbf{V}_{i}^{\mathrm{h}}-\mathbf{Ad}_{ \mathbf{r}_{i,j-1}}\mathbf{V}_{j-1}^{\mathrm{h}}\) is the relative hybrid twist of body \(j-1\) and \(i\) as observed in the BFR on body \(i\). A simpler relation is obtained by directly differentiating (105)
\[\dot{\mathbf{J}}_{i,j}^{\mathrm{h}} =(\mathbf{ad}_{\dot{\mathbf{r}}_{i,j}}+\mathbf{Ad}_{\mathbf{r}_{ i,j-1}}\mathbf{ad}_{\mathbf{\omega}_{j}^{\ast}})^{0}\mathbf{X}_{j}^{j} \tag{27}\] \[=\mathbf{Ad}_{\mathbf{r}_{i,j-1}}(\mathbf{ad}_{\dot{\mathbf{d}}_ {i,j}}+\mathbf{ad}_{\mathbf{\omega}_{j}^{\ast}})^{0}\mathbf{X}_{j}^{j}=\mathbf{Ad} _{\mathbf{r}_{i,j-1}}(\mathbf{ad}_{\mathbf{V}_{j}^{\ast}}-\mathbf{ad}_{\dot{ \mathbf{r}}_{i}})^{0}\mathbf{X}_{j}^{j}.\]
This yields the following explicit expressions for the hybrid acceleration
\[\dot{\mathbf{V}}_{i}^{\mathrm{h}}= \sum_{j\leq i}\mathbf{J}_{i,j}^{\mathrm{h}}\ddot{q}_{j}+\sum_{j\leq k\leq i}\big{[}\mathbf{J}_{i,j}^{\mathrm{h}},\overset{v}{\mathbf{J}}_{i,k}^{\mathrm{h}}\big{]}\dot{q}_{j}\dot{q}_{k}=\sum_{j\leq i}(\mathbf{J}_{i,j}^{\mathrm{h}}\ddot{q}_{j}+[\mathbf{J}_{i,j}^{\mathrm{h}},\Delta\overset{v}{\mathbf{V}}_{j-1,i}^{\mathrm{h}}]\dot{q}_{j}) \tag{28}\] \[= \sum_{j\leq i}(\mathbf{J}_{i,j}^{\mathrm{h}}\ddot{q}_{j}+(\mathbf{ad}_{\mathbf{r}_{i,j}}+\mathbf{Ad}_{\mathbf{r}_{i,j}}\mathbf{ad}_{\mathbf{\omega}_{j}^{\ast}})^{0}\mathbf{X}_{j}^{j}\dot{q}_{j}). \tag{29}\]
For the second derivative it is simplest to start from (27), and a straightforward calculation yields
\[\ddot{\mathbf{J}}_{i,j}^{\mathrm{h}}=\left(\mathbf{ad}_{\dot{\mathbf{r}}_{i,j }}+2\mathbf{ad}_{\mathbf{r}_{i,j}}\mathbf{ad}_{\mathbf{\omega}_{j}^{\ast}}+ \mathbf{Ad}_{\mathbf{r}_{i,j}}(\mathbf{ad}_{\mathbf{\omega}_{j}^{\ast}}+\mathbf{ ad}_{\mathbf{\omega}_{j}^{\ast}}\mathbf{ad}_{\mathbf{\omega}_{j}^{\ast}})\right)^{0} \mathbf{X}_{j}^{j}. \tag{30}\]
The jerk in hybrid representation can thus be written as
\[\ddot{\mathbf{V}}_{i}^{\mathrm{h}}= \sum_{j\leq i}\left(\mathbf{J}_{i,j}^{\mathrm{h}}\dddot{\mathbf{r} }_{j}+2\mathbf{ad}_{\mathbf{r}_{i,j}}\ddot{q}_{j}+\left(\mathbf{ad}_{\dot{ \mathbf{r}}_{i,j}}+2\mathbf{ad}_{\mathbf{r}_{i,j}}\mathbf{ad}_{\mathbf{\omega}_{j}^ {\ast}}\right)\!\dot{q}_{j}\right. \tag{31}\] \[\qquad+\left.\mathbf{Ad}_{\mathbf{r}_{i,j}}\big{(}2\mathbf{ad}_{ \mathbf{\omega}_{j}^{\ast}}\ddot{q}_{j}+\mathbf{ad}_{\mathbf{\omega}_{j}^{\ast}}+ \mathbf{ad}_{\mathbf{\omega}_{j}^{\ast}}\mathbf{ad}_{\mathbf{\omega}_{j}^{\ast}}\big{)} \dot{q}_{j}\big{)}^{0}\mathbf{X}_{j}^{j}\right). \tag{32}\]
These are the core relations in the so-called 'spatial vector' formulation (i.e. using the hybrid representation of twists) [38; 39; 31; 45; 74; 76]. In this context the Lie bracket, or screw product, (114) has been termed the 'spatial cross product' [28; 30].
### Mixed Representation
With (100), employing the results for the mixed representation, yields
\[\dot{\mathbf{J}}_{ij}^{\mathrm{m}}=\left(\begin{array}{cc}\mathbf{R}_{i}^{T} &\mathbf{0}\\ \mathbf{0}&\mathbf{I}\end{array}\right)\dot{\mathbf{J}}_{ij}^{\mathrm{h}},\ \ \dot{\mathbf{V}}_{i}^{\mathrm{m}}=\left(\begin{array}{cc}\mathbf{R}_{i}^{T}& \mathbf{0}\\ \mathbf{0}&\mathbf{I}\end{array}\right)\dot{\mathbf{V}}_{i}^{\mathrm{h}},\ \ \ddot{\mathbf{V}}_{i}^{\mathrm{m}}=\left( \begin{array}{cc}\mathbf{R}_{i}^{T}&\mathbf{0}\\ \mathbf{0}&\mathbf{I}\end{array}\right)\ddot{\mathbf{V}}_{i}^{\mathrm{h}}. \tag{33}\]
## 3 Newton-Euler Equations in Various Representations
### Spatial Representation
Consider a rigid body with body-fixed BFR \(\mathcal{F}_{\mathrm{b}}=\{\Omega;\vec{e}_{\mathrm{b},1},\vec{e}_{\mathrm{b},2},\vec{e}_{\mathrm{b},3}\}\) located at an arbitrary point \(\Omega\). Denote the inertia matrix w.r.t. this BFR with \(\mathbf{M}^{\mathrm{b}}\), ref. (40). The configuration of the BFR \(\mathcal{F}_{\mathrm{b}}\) is described by \(\mathbf{C}=(\mathbf{R},\mathbf{r})\). The spatial inertia matrix expressed in the IFR is then
\[\mathbf{M}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{C}}^{-T}\mathbf{M}^{\mathrm{b}} \mathbf{Ad}_{\mathbf{C}}^{-1}. \tag{34}\]
The spatial canonical momentum co-screw \(\mathbf{\Pi}^{\mathrm{s}}=\left(\mathbf{L}^{\mathrm{s}},\mathbf{P}^{\mathrm{s}} \right)^{T}\in se^{\ast}\left(3\right)\), conjugate to the spatial twist, is thus
\[\mathbf{\Pi}^{\mathrm{s}}=\mathbf{M}^{\mathrm{s}}\mathbf{V}^{\mathrm{s}}= \mathbf{Ad}_{\mathbf{C}}^{-T}\mathbf{M}^{\mathrm{b}}\mathbf{Ad}_{\mathbf{C}}^{ -1}\mathbf{V}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{C}}^{-T}\mathbf{\Pi}^{ \mathrm{b}}. \tag{35}\]
The momentum balance yields the Newton-Euler (NE) equations in spatial representation, which attains the simple form
\[\dot{\mathbf{\Pi}}^{\mathrm{s}}=\mathbf{W}^{\mathrm{s}} \tag{36}\]
where \(\mathbf{W}^{\mathrm{s}}=\left(\mathbf{t}^{\mathrm{s}},\mathbf{f}^{\mathrm{s}} \right)^{T}\) is the applied wrench, with spatial torque \(\mathbf{t}^{\mathrm{s}}\equiv{}^{0}\mathbf{t}^{0}\) and force \(\mathbf{f}^{\mathrm{s}}\equiv{}^{0}\mathbf{f}\), both measured and resolved in the IFR. The momentum balance equation (36) is the simplest form
possible, which is achieved by using the spatial representation of twist, wrench, and momentum. Firstly, it does not involve any vectorial operation, e.g. cross products. Secondly, it is also numerically advantageous: any numerical discretization of the ODE (36) easily preserves the spatial momentum in the absence of external wrenches. This has been discussed already by Borri in [13]. In this context the spatial formulation is called the fixed pole equation. In a recent paper [32] the advantages of this form are exploited for geometrically exact modeling of beams.
The explicit and compact form in terms of the spatial twist is found, introducing (35) and using
\[\dot{\mathbf{M}}^{\mathrm{s}}=-\mathbf{ad}_{\mathbf{V}\cdot}^{T}\mathbf{M}^{ \mathrm{s}}-\mathbf{M}^{\mathrm{s}}\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}} \tag{37}\]
along with \(\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}\mathbf{V}^{\mathrm{s}}=\mathbf{0}\), as
\[\boxed{\mathbf{W}^{\mathrm{s}}=\mathbf{M}^{\mathrm{s}}\dot{\mathbf{V}}^{ \mathrm{s}}-\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}^{T}\mathbf{M}^{\mathrm{s}} \mathbf{V}^{\mathrm{s}}.} \tag{38}\]
Remark 2: Writing (38) as \(\mathbf{W}^{\mathrm{s}}=\mathbf{M}^{\mathrm{s}}\dot{\mathbf{V}}^{\mathrm{s}}+ \mathbf{C}^{\mathrm{s}}\mathbf{V}^{\mathrm{s}}\) (with \(\mathbf{C}^{\mathrm{s}}:=-\mathbf{ad}_{\mathbf{V}\cdot}^{T}\mathbf{M}^{ \mathrm{s}}\)) shows that \(\dot{\mathbf{M}}^{\mathrm{s}}-2\mathbf{C}^{\mathrm{s}}=\mathbf{ad}_{\mathbf{ V}\cdot}^{T}\mathbf{M}^{\mathrm{s}}-\mathbf{M}^{\mathrm{s}}\mathbf{ad}_{ \mathbf{V}^{\mathrm{s}}}\) is skew symmetric. This property is called the skew symmetry of the motion equations [65].
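A direct transcription of (38) is straightforward once the \(\mathbf{ad}\) operator is available as a \(6\times 6\) matrix. The following sketch uses our own helper names and the \((\boldsymbol{\omega},\mathbf{v})\) ordering of twist coordinates; it is an illustration of the formula, not a reference implementation.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix with hat(w) @ x == np.cross(w, x)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def ad(V):
    """6x6 matrix of ad_V for a twist V = (omega, v)."""
    w, v = V[:3], V[3:]
    A = np.zeros((6, 6))
    A[:3, :3] = hat(w)
    A[3:, :3] = hat(v)
    A[3:, 3:] = hat(w)
    return A

def spatial_wrench(M_s, V_s, Vdot_s):
    """Eq. (38): W^s = M^s Vdot^s - ad_{V^s}^T M^s V^s."""
    return M_s @ Vdot_s - ad(V_s).T @ (M_s @ V_s)
```

Setting the returned wrench to zero and integrating the momentum \(\mathbf{\Pi}^{\mathrm{s}}=\mathbf{M}^{\mathrm{s}}\mathbf{V}^{\mathrm{s}}\) directly recovers the momentum-conserving form (36).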
### Body-fixed Representation
Let \(\mathcal{F}_{\mathrm{c}}=\{C;\vec{e}_{\mathrm{c},1},\vec{e}_{\mathrm{c},2}, \vec{e}_{\mathrm{c},3}\}\) be a body-fixed frame located at the COM. Its configuration is described by \(C_{\mathrm{c}}=\left(\mathbf{R}_{\mathrm{c}},\mathbf{r}_{\mathrm{c}}\right)\). The body-fixed twist of the COM frame is denoted with \(\widetilde{\boldsymbol{\omega}}_{\mathrm{c}}^{\mathrm{b}}=\mathbf{R}_{ \mathrm{c}}^{T}\dot{\mathbf{R}}_{\mathrm{c}},\mathbf{v}_{\mathrm{c}}^{\mathrm{ b}}=\mathbf{R}_{\mathrm{c}}^{T}\dot{\mathbf{r}}_{\mathrm{c}}\). The inertia matrix w.r.t. this COM frame is denoted
\[\mathbf{M}_{\mathrm{c}}^{\mathrm{b}}=\left(\begin{array}{cc}\mathbf{\Theta} _{\mathrm{c}}&\mathbf{0}\\ \mathbf{0}&m\mathbf{I}\end{array}\right) \tag{39}\]
with the body mass \(m\) and the inertia tensor \(\mathbf{\Theta}_{\mathrm{c}}\) expressed in the body-fixed COM frame \(\mathcal{F}_{\mathrm{c}}\). Let \(\mathbf{S}_{\mathrm{bc}}=\left(\mathbf{R}_{\mathrm{bc}},{}^{\mathrm{b}}\mathbf{d}_{\mathrm{bc}}\right)\in SE\left(3\right)\) be the transformation from the COM frame \(\mathcal{F}_{\mathrm{c}}\) to the BFR \(\mathcal{F}_{\mathrm{b}}\). Here \({}^{\mathrm{b}}\mathbf{d}_{\mathrm{bc}}\) is the position vector from the BFR to the COM resolved in the BFR. Then the configuration of \(\mathcal{F}_{\mathrm{c}}\) is given in terms of that of the BFR by \(\mathbf{C}_{\mathrm{c}}=\mathbf{C}\mathbf{S}_{\mathrm{bc}}\). The inertia matrix w.r.t. the general BFR \(\mathcal{F}_{\mathrm{b}}\) is
\[\mathbf{M}^{\mathrm{b}} =\mathbf{Ad}_{\mathbf{S}_{\mathrm{bc}}}^{-T}\mathbf{M}_{\mathrm{ c}}^{\mathrm{b}}\mathbf{Ad}_{\mathbf{S}_{\mathrm{bc}}}^{-1}\] \[=\left(\begin{array}{cc}\mathbf{\Theta}_{\mathrm{b}}&m^{ \mathrm{b}}\widetilde{\boldsymbol{d}}_{\mathrm{bc}}\\ -m^{\mathrm{b}}\widetilde{\boldsymbol{d}}_{\mathrm{bc}}&m\mathbf{I}\end{array}\right) \tag{40}\]
with \(\mathbf{\Theta}_{\mathrm{b}}=\mathbf{R}_{\mathrm{bc}}\mathbf{\Theta}_{ \mathrm{c}}\mathbf{R}_{\mathrm{bc}}^{T}-m\widetilde{\boldsymbol{d}}_{\mathrm{ bc}}^{2}\) (which is the parallel axes theorem).
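As a numerical illustration of (40), the following snippet assembles the \(6\times 6\) body-fixed mass matrix from the mass, the COM inertia tensor, and the pose of the COM frame relative to the BFR. The helper names and the toy values are our own; the \((\boldsymbol{\omega},\mathbf{v})\) block ordering matches (39).

```python
import numpy as np

def hat(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def body_mass_matrix(m, Theta_c, R_bc, d_bc):
    """Eq. (40): body-fixed mass matrix w.r.t. an arbitrary BFR."""
    d = hat(d_bc)
    Theta_b = R_bc @ Theta_c @ R_bc.T - m * d @ d   # parallel axis theorem
    M = np.zeros((6, 6))
    M[:3, :3] = Theta_b
    M[:3, 3:] = m * d
    M[3:, :3] = -m * d
    M[3:, 3:] = m * np.eye(3)
    return M

# example: 2 kg body whose COM is shifted by 0.1 m along the BFR x-axis
M_b = body_mass_matrix(2.0, np.diag([0.01, 0.02, 0.02]),
                       np.eye(3), np.array([0.1, 0.0, 0.0]))
```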
The momentum co-screw represented in the body-fixed RFR \(\mathcal{F}_{\mathrm{b}}\) is \(\mathbf{\Pi}^{\mathrm{b}}=\mathbf{M}^{\mathrm{b}}\mathbf{V}^{\mathrm{b}}\). The frame transformation of (38) to the BFR \(\mathcal{F}_{\mathrm{b}}\) yields the body-fixed momentum balance represented in \(\mathcal{F}_{\mathrm{b}}\) in the concise form
\[\boxed{\mathbf{W}^{\mathrm{b}}=\dot{\mathbf{\Pi}}^{\mathrm{b}}-\mathbf{ad}_{ \mathbf{V}^{\mathrm{b}}}^{T}\mathbf{\Pi}^{\mathrm{b}}} \tag{41}\]
with the applied wrench \(\mathbf{W}^{\mathrm{b}}=\left(\mathbf{t}^{\mathrm{b}},\mathbf{f}^{\mathrm{b}} \right)^{T}\) in body-fixed representation. The equations (41) are formally identical to the spatial equations (38). Written separately, this yields the NE equations expressed in an arbitrary body-fixed BFR
\[\mathbf{\Theta}_{\mathrm{b}}\dot{\boldsymbol{\omega}}^{\mathrm{b}} +\widetilde{\boldsymbol{\omega}}^{\mathrm{b}}\mathbf{\Theta}_{\mathrm{b}} \boldsymbol{\omega}^{\mathrm{b}}-m^{\mathrm{b}}\left(\mathbf{v}^{\mathrm{b}}+ \widetilde{\boldsymbol{\omega}}^{\mathrm{b}}\mathbf{v}^{\mathrm{b}}\right) \widetilde{\boldsymbol{d}}_{\mathrm{bc}} =\mathbf{t}^{\mathrm{b}} \tag{42}\] \[m\big{(}\dot{\mathbf{v}}^{\mathrm{b}}+\widetilde{\boldsymbol{ \omega}}^{\mathrm{b}}\mathbf{v}^{\mathrm{b}}+(\widetilde{\boldsymbol{\omega}}^{ \mathrm{b}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{b}}\widetilde{\boldsymbol{ \omega}}^{\mathrm{b}})^{\mathrm{b}}\mathbf{d}_{\mathrm{bc}}\big{)} =\mathbf{f}^{\mathrm{b}}. \tag{43}\]
When using the COM frame as special case, the momentum represented in the body-fixed COM frame is \(\mathbf{\Pi}_{\mathrm{c}}^{\mathrm{b}}=\mathbf{M}_{\mathrm{c}}^{\mathrm{b}} \mathbf{V}_{\mathrm{c}}^{\mathrm{b}}\), and the momentum balance yields
\[\mathbf{W}_{\mathrm{c}}^{\mathrm{b}}=\mathbf{M}_{\mathrm{c}}^{\mathrm{b}} \mathbf{V}_{\mathrm{c}}^{\mathrm{b}}-\mathbf{ad}_{\boldsymbol{\omega}_{\mathrm{ c}}^{\mathrm{b}}}^{T}\mathbf{M}_{\mathrm{c}}^{\mathrm{b}}\mathbf{V}_{\mathrm{c}}^{ \mathrm{b}}. \tag{44}\]
Written in components, this yields the NE equations represented in the COM frame
\[\mathbf{\Theta}_{\mathrm{c}}\dot{\boldsymbol{\omega}}_{\mathrm{ c}}^{\mathrm{b}}+\widetilde{\boldsymbol{\omega}}_{\mathrm{c}}^{\mathrm{b}} \mathbf{\Theta}_{\mathrm{c}}\mathbf{\omega}_{\mathrm{c}}^{\mathrm{b}} =\mathbf{t}_{\mathrm{c}}^{\mathrm{b}} \tag{45}\] \[m\big{(}\dot{\mathbf{v}}_{\mathrm{c}}^{\mathrm{b}}+\widetilde{ \boldsymbol{\omega}}_{\mathrm{c}}^{\mathrm{b}}\mathbf{v}_{\mathrm{c}}^{ \mathrm{b}}\big{)} =\mathbf{f}_{\mathrm{c}}^{\mathrm{b}}. \tag{46}\]
Noticeably the angular and translational momentum equations are coupled even though the COM is used as reference. This is due to using body-fixed twists.
### Hybrid Form
The hybrid twist \(\mathbf{V}_{\mathrm{c}}^{\mathrm{h}}=\left(\boldsymbol{\omega}^{\mathrm{s}}, \dot{\mathbf{r}}_{\mathrm{c}}\right)^{T}\) of the COM frame is related to the body-fixed twist by \(\mathbf{Ad}_{\mathbf{R}_{\mathrm{c}}}^{-1}\mathbf{V}_{\mathrm{c}}^{\mathrm{b}}\), see (98), where \(\mathbf{R}_{\mathrm{c}}\) is the absolute rotation matrix of \(\mathcal{F}_{\mathrm{c}}\) in \(C_{\mathrm{c}}\). The hybrid momentum screw is thus \(\mathbf{\Pi}_{\mathrm{c}}^{\mathrm{h}}=\mathbf{M}_{\mathrm{c}}^{\mathrm{h}} \mathbf{V}_{\mathrm{c}}^{\mathrm{h}}\), where the hybrid representation of the inertia matrix is
\[\mathbf{M}_{\mathrm{c}}^{\mathrm{h}}=\mathbf{Ad}_{\mathbf{R}_{ \mathrm{c}}}^{-T}\mathbf{M}_{\mathrm{c}}^{\mathrm{b}}\mathbf{Ad}_{\mathbf{R}_{ \mathrm{c}}}^{-1}=\left(\begin{array}{cc}\mathbf{\Theta}_{\mathrm{c}}^{ \mathrm{h}}&\mathbf{0}\\ \mathbf{0}&m\mathbf{I}\end{array}\right),\ \ \mathbf{\Theta}_{\mathrm{c}}^{ \mathrm{h}}=\mathbf{R}_{\mathrm{c}}\mathbf{\Theta}_{\mathrm{c}}\mathbf{R}_{ \mathrm{c}}^{T}. \tag{47}\]
The hybrid momentum balance w.r.t. the COM follows from \(\dot{\mathbf{\Pi}}_{\mathrm{c}}^{\mathrm{h}}=\mathbf{W}_{\mathrm{c}}^{\mathrm{h}}\). Using \(\dot{\mathbf{M}}_{\mathrm{c}}^{\mathrm{h}}=-\mathbf{ad}_{\boldsymbol{\omega}^{ \mathrm{s}}}^{T}\mathbf{M}_{\mathrm{c}}^{\mathrm{h}}-\mathbf{M}_{\mathrm{c}}^{ \mathrm{h}}\mathbf{ad}_{\boldsymbol{\omega}^{\mathrm{s}}}\) yields
\[\mathbf{W}_{\mathrm{c}}^{\mathrm{h}}=\mathbf{M}_{\mathrm{c}}^{\mathrm{h}} \dot{\mathbf{V}}_{\mathrm{c}}^{\mathrm{h}}+\mathbf{ad}_{\boldsymbol{\omega}^{ \mathrm{s}}}\mathbf{M}_{\mathrm{c}}^{\mathrm{h}}\widetilde{\mathbf{V}}_{ \mathrm{c}}^{\mathrm{h}} \tag{48}\]
with \(\dot{\mathbf{V}}_{\mathrm{c}}^{\mathrm{h}}=\left(\boldsymbol{\omega}^{\mathrm{s} },\mathbf{0}\right)^{T}\) (notice \(-\mathbf{ad}_{\boldsymbol{\omega}^{\mathrm{s}}}^{T}=\mathbf{ad}_{\boldsymbol{ \omega}^{\mathrm{s}}}\)). Writing (48) separately for the angular and linear momentum balance
\[\mathbf{\Theta}_{\mathrm{c}}^{\mathrm{h}}\ddot{\boldsymbol{\omega }}^{\mathrm{s}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{s}}\mathbf{\Theta}_{ \mathrm{c}}^{\mathrm{h}}\boldsymbol{\omega}^{\mathrm{s}} =\mathbf{t}_{\mathrm{c}}^{\mathrm{h}} \tag{49}\] \[m\dot{\mathbf{r}}_{\mathrm{c}} =\mathbf{f}_{\mathrm{c}}^{\mathrm{h}} \tag{50}\]
shows that the hybrid NE equations w.r.t. the COM are indeed decoupled. Here \(\mathbf{W}_{\mathrm{c}}^{\mathrm{h}}=(\mathbf{t}_{\mathrm{c}}^{\mathrm{h}}, \mathbf{f}_{\mathrm{c}}^{\mathrm{h}})^{T}\) denotes the hybrid wrench measured in the COM frame and resolved in the IFR.
Now consider the arbitrary body-fixed BFR \(\mathcal{F}_{\mathrm{b}}\) with configuration \(C=(\mathbf{R},\mathbf{r})\). The hybrid twist \(\mathbf{V}^{\mathrm{h}}=\left(\boldsymbol{\omega}^{\mathrm{s}},\dot{\mathbf{r }}\right)^{T}\) measured at this RFR is \(\mathbf{V}^{\mathrm{h}}=\mathbf{Ad}_{\mathbf{d}_{\mathrm{bc}}}\mathbf{V}_{ \mathrm{c}}^{\mathrm{h}}\), with the displacement vector \(\mathbf{d}_{\mathrm{bc}}\) from BFR to COM resolved in the IFR. The hybrid mass matrix w.r.t. to the BFR \(\mathcal{F}_{\mathrm{b}}\) is found as
\[\mathbf{M}^{\mathrm{h}}=\mathbf{Ad}_{\mathbf{d}_{\mathrm{bc}}}^{-T}\mathbf{M}_{ \mathrm{c}}^{\mathrm{h}}\mathbf{Ad}_{\mathbf{d}_{\mathrm{bc}}}^{-1}=\left( \begin{array}{cc}\mathbf{\Theta}^{\mathrm{h}}&m\widetilde{\mathbf{d}}_{ \mathrm{bc}}\\ -m\widetilde{\mathbf{d}}_{\mathrm{bc}}&m\mathbf{I}\end{array}\right),\ \ \mathbf{\Theta}^{\mathrm{h}}=\mathbf{\Theta}_{\mathrm{c}}^{\mathrm{h}}-m \widetilde{\mathbf{d}}_{\mathrm{bc}}^{2}. \tag{51}\]
The momentum balance in hybrid representation w.r.t. an arbitrary BFR
\[\dot{\mathbf{\Pi}}^{\mathrm{h}}=\mathbf{W}^{\mathrm{h}} \tag{52}\]
is found, using \(\dot{\mathbf{Ad}}_{\mathbf{d}_{\mathrm{bc}}}^{-1}=-\mathbf{ad}_{\mathbf{d}_{ \mathrm{bc}}}=\mathbf{Ad}_{\mathbf{d}_{\mathrm{bc}}}^{-1}\mathbf{ad}_{ \boldsymbol{\omega}^{\mathrm{s}}}-\mathbf{ad}_{\boldsymbol{\omega}^{\mathrm{s}}} \mathbf{Ad}_{\mathbf{d}_{\mathrm{bc}}}^{-1}\) to evaluate (52), as
\[\boxed{\mathbf{W}^{\mathrm{h}}=\mathbf{M}^{\mathrm{h}}\dot{\mathbf{V}}^{\mathrm{h}} +\mathbf{ad}_{\boldsymbol{\omega}^{\mathrm{s}}}\mathbf{M}^{\mathrm{h}}\dot{ \mathbf{V}}^{\mathrm{h}}.} \tag{53}\]
Separating the angular and translational part results in
\[\mathbf{\Theta}^{\mathrm{h}}\dot{\boldsymbol{\omega}}^{\mathrm{s}}+ \widetilde{\boldsymbol{\omega}}^{\mathrm{s}}\mathbf{\Theta}^{\mathrm{h}} \boldsymbol{\omega}^{\mathrm{s}}+m\widetilde{\mathbf{d}}_{\mathrm{bc}}\ddot{ \mathbf{r}} =\mathbf{t}^{\mathrm{h}} \tag{54}\] \[m(\ddot{\mathbf{r}}+(\dot{\widetilde{\boldsymbol{\omega}}}^{\mathrm{ s}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{s}}\widetilde{\boldsymbol{\omega}}^{ \mathrm{s}})\mathbf{d}_{\mathrm{bc}}) =\mathbf{f}^{\mathrm{h}}. \tag{55}\]
These are simpler than the body-fixed equations (42) and (43). Finally notice that \(\mathbf{f}^{\mathrm{h}}=\mathbf{f}^{\mathrm{s}}\).
### Mixed Form
The mixed twists \(\mathbf{V}^{\mathrm{m}}=\big{(}\boldsymbol{\omega}^{\mathrm{b}},\dot{\mathbf{r}} \big{)}^{T}\) consists of the body-fixed angular velocity \(\boldsymbol{\omega}^{\mathrm{b}}\), i.e. measured and resolved in the BFR \(\mathcal{F}_{\mathrm{b}}\), and the translational velocity \(\dot{\mathbf{r}}\) measured at the BFR \(\mathcal{F}_{\mathrm{b}}\) and resolved in the IFR. The NE equations for the mixed representation w.r.t. a general BFR are directly found by combining (42) and (55), with \(\ddot{\mathbf{r}}=\dot{\mathbf{v}}^{\mathrm{b}}+\widetilde{\boldsymbol{ \omega}}^{\mathrm{b}}\mathbf{v}^{\mathrm{b}}\),
\[\boldsymbol{\Theta}^{\mathrm{b}}\dot{\boldsymbol{\omega}}^{ \mathrm{b}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{b}}\boldsymbol{\Theta}^{ \mathrm{b}}\boldsymbol{\omega}^{\mathrm{b}}+m^{\mathrm{b}}\widetilde{\mathbf{ d}}_{\mathrm{bc}}\mathbf{R}^{T}\ddot{\mathbf{r}} = \mathbf{t}^{\mathrm{b}} \tag{56}\] \[m(\ddot{\mathbf{r}}+\mathbf{R}(\dot{\widetilde{\boldsymbol{ \omega}}}^{\mathrm{b}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{b}}\widetilde{ \boldsymbol{\omega}}^{\mathrm{b}})^{\mathrm{b}}\mathbf{d}_{\mathrm{bc}}) = \mathbf{f}^{\mathrm{h}}. \tag{57}\]
If a COM frame is used, combining (45) and (50) yields
\[\boldsymbol{\Theta}_{c}\dot{\boldsymbol{\omega}}^{\mathrm{b}}_{ \mathrm{c}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{b}}_{\mathrm{c}} \boldsymbol{\Theta}_{\mathrm{c}}\boldsymbol{\omega}^{\mathrm{b}}_{\mathrm{c}} = \mathbf{t}^{\mathrm{b}}_{\mathrm{c}} \tag{58}\] \[m\ddot{\mathbf{r}}_{\mathrm{c}} = \mathbf{f}^{\mathrm{h}}_{\mathrm{c}}.\]
### Arbitrary Representation
The NE equations of body \(i\) represented in an arbitrary frame \(\mathcal{F}_{j}\) are obtained by a frame transformation of the spatial momentum balance (36) as
\[\mathbf{Ad}^{T}_{\mathbf{C}_{j}}\dot{\mathbf{\Pi}}^{\mathrm{s}}_{i}=\mathbf{ Ad}^{T}_{\mathbf{C}_{j}}\mathbf{W}^{\mathrm{s}}_{i}. \tag{59}\]
The spatial twist in terms of the twist of body \(i\) represented in \(\mathcal{F}_{j}\) is \(\mathbf{V}^{\mathrm{s}}_{i}=\mathbf{Ad}_{\mathbf{C}_{j}}{}^{j}\mathbf{V}_{i}\). Using \(\dot{\mathbf{V}}^{\mathrm{s}}_{i}=\mathbf{Ad}_{\mathbf{C}_{j}}{}^{j}\dot{ \mathbf{V}}_{i}+\mathbf{ad}_{\mathbf{V}_{j}}{}^{j}\mathbf{V}^{\mathrm{s}}_{i}\), (38) yields
\[{}^{j}\mathbf{M}_{i}\big{(}{}^{j}\dot{\mathbf{V}}_{i}+\mathbf{ad}_{{}^{j}\mathbf{V}_{j}}{}^{j}\mathbf{V}_{i}\big{)}-\mathbf{ad}^{T}_{{}^{j}\mathbf{V}_{i}}{}^{j}\mathbf{M}_{i}{}^{j}\mathbf{V}_{i}={}^{j}\mathbf{W}_{i} \tag{60}\]
with the inertia matrix of body \(i\) represented in frame \(j\)
\[{}^{j}\mathbf{M}_{i}:=\mathbf{Ad}^{T}_{\mathbf{C}_{j}}\mathbf{M}^{\mathrm{s} }_{i}\mathbf{Ad}_{\mathbf{C}_{j}}. \tag{61}\]
The spatial and body-fixed representations are special cases with \(i=j\).
Even more generally, the NE equations can be resolved in yet another frame \(\mathcal{F}_{k}\). This is achieved by transforming the momentum balance (36) as
\[\mathbf{Ad}^{T}_{\mathbf{R}_{j,k}}\mathbf{Ad}^{T}_{\mathbf{C}_{j}}\dot{\mathbf{ \Pi}}^{\mathrm{s}}_{i}=\mathbf{Ad}^{T}_{\mathbf{R}_{j,k}}\mathbf{Ad}^{T}_{ \mathbf{C}_{j}}\mathbf{W}^{\mathrm{s}}_{i} \tag{62}\]
where \(\mathbf{R}_{k,j}\) is the rotation matrix from \(\mathcal{F}_{j}\) to \(\mathcal{F}_{k}\). The final equations follow from (60) and the relation \({}^{j}\dot{\mathbf{V}}^{j}_{i}=\mathbf{Ad}_{\mathbf{R}_{j,k}}{}^{k}\dot{\mathbf{V}}^{j}_{i}+\mathbf{ad}_{{}^{j}\widetilde{\mathbf{V}}^{j}_{k}}\mathbf{Ad}_{\mathbf{R}_{j,k}}{}^{k}\mathbf{V}^{j}_{i}\) as
\[{}^{k}\mathbf{M}^{j}_{i}\big{(}{}^{k}\dot{\mathbf{V}}^{j}_{i}+\big{(}\mathbf{ad}_{{}^{k}\mathbf{V}^{j}_{j}}+\mathbf{ad}_{{}^{k}\widetilde{\mathbf{V}}^{j}_{k}}\big{)}{}^{k}\mathbf{V}^{j}_{i}\big{)}-\mathbf{ad}^{T}_{{}^{k}\mathbf{V}^{j}_{i}}{}^{k}\mathbf{M}^{j}_{i}{}^{k}\mathbf{V}^{j}_{i}={}^{k}\mathbf{W}^{j}_{i} \tag{63}\]
with the mass matrix of body \(i\) measured at frame \(\mathcal{F}_{j}\) and resolved in frame \(\mathcal{F}_{k}\)
\[{}^{k}\mathbf{M}^{j}_{i}:=\mathbf{Ad}^{T}_{\mathbf{R}_{k,j}}{}^{j}\mathbf{M}_{ i}\mathbf{Ad}_{\mathbf{R}_{k,j}}=\mathbf{Ad}^{T}_{\mathbf{R}_{k,j}}\mathbf{Ad}^{T}_{ \mathbf{C}_{j}}\mathbf{M}^{\mathrm{s}}_{i}\mathbf{Ad}_{\mathbf{C}_{j}}\mathbf{ Ad}_{\mathbf{R}_{k,j}}. \tag{64}\]
The spatial and body-fixed representations are special cases with \(i=j=k\), and the hybrid representation with \(i=j\) and \(k=0\). An alternative form of the NE equations in arbitrary reference frames was presented in [9].
## 4 Recursive Evaluation of the Motion Equations for a Kinematic Chain
The model-based control of complex MBS as well as the computational MBS dynamics rely on efficient recursive inverse and forward dynamics algorithms. A recursive Newton-Euler method for tree-topology MBS was presented in an abstract, i.e. coordinate-free, approach in [47]. However, the various recursive methods using different representations give rise to algorithmically equivalent methods but with different computational costs. In the following, the various inverse dynamics algorithms are presented and their computational effort is estimated. A detailed analysis as well as the forward dynamics algorithms are beyond the scope of this paper. The presented discussion is nevertheless indicative also for the corresponding forward dynamics algorithms. Some results on the forward kinematics complexity can be found in [67; 81; 87]. This depends on the actual implementation, however. A comparative study is still due, and shall be part of further research. The inverse dynamics consists in evaluating the motion equations for given joint coordinates \(\mathbf{q}\), joint rates \(\dot{\mathbf{q}}\), accelerations \(\ddot{\mathbf{q}}\), and applied wrenches \(\mathbf{W}_{i}^{\mathrm{app}}\), and hence to determine the joint forces \(\mathbf{Q}=(Q_{1},\ldots Q_{n})\). The starting point of recursive algorithms for rigid body MBS are the NE equations of the individual bodies. The MBS dynamics is indeed governed by the Lagrange equations. Consequently, summarizing the recursive steps yields the Lagrangian motion equations in closed form. This will be shown in the following.
It is assumed for simplicity that the inertia properties, i.e. the mass matrices \(\mathbf{M}_{i}^{\mathrm{b}}\), are expressed in the body-fixed BFR of body \(i\) determining its configuration, rather than introducing a second frame.
### Body-fixed Representation
_Forward Kinematics Recursion_ Given the joint variables \(\mathbf{q}\), the configurations of the \(n\) bodies are determined recursively by (92) or (93), and the twists by (108). Then also the accelerations are found recursively. The expression \(\mathbf{C}_{i-1,i}\left(q_{i}\right)=\mathbf{B}_{i}\exp({}^{i}\mathbf{X}_{i}q_{i})\) for the relative configuration yields \(\dot{\mathbf{Ad}}_{\mathbf{C}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{b}}=[\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{b}},{}^{i}\mathbf{X}_{i}\dot{q}_{i}]\), and hence
\[\dot{\mathbf{V}}_{i}^{\mathrm{b}} =\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\dot{\mathbf{V}}_{i-1}^{\mathrm{b}}+[\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{b}},{}^{i}\mathbf{X}_{i}\dot{q}_{i}]+{}^{i}\mathbf{X}_{i}\ddot{q}_{i} \tag{65a}\] \[=\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\dot{\mathbf{V}}_{i-1}^{\mathrm{b}}+[\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{b}},\mathbf{V}_{i}^{\mathrm{b}}]+{}^{i}\mathbf{X}_{i}\ddot{q}_{i}\] (65b) \[=\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\dot{\mathbf{V}}_{i-1}^{\mathrm{b}}+[\mathbf{V}_{i}^{\mathrm{b}},{}^{i}\mathbf{X}_{i}\dot{q}_{i}]+{}^{i}\mathbf{X}_{i}\ddot{q}_{i}. \tag{65c}\]
where (65b) and (65c) follow by replacing either argument in the Lie bracket using (108).
Remark 3: Notice that solving (108) for \(\dot{q}_{i}\) leads to the result in remark 9 of [62]. Solving (65c) for \(\ddot{q}_{i}\) yields (11). Using (65b) the latter can be expressed as \(\ddot{q}_{i}={}^{i}\mathbf{X}_{i}^{T}\left(\dot{\mathbf{V}}_{i}^{\mathrm{b}}-\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\dot{\mathbf{V}}_{i-1}^{\mathrm{b}}+[\mathbf{V}_{i}^{\mathrm{b}},\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{b}}]\right)/\left\|{}^{i}\mathbf{X}_{i}\right\|^{2}\).
_Recursive Newton-Euler Algorithm_ Once the configurations, twists, and accelerations of the bodies are computed with the forward kinematics recursion, the Newton-Euler equations (41) for each individual body can be evaluated by an inverse dynamics backward recursion. The momentum balance of body \(i\) then yields the resulting body-fixed wrench \(\mathbf{W}_{i}^{\mathrm{b}}\) acting on the body due to generalized joint forces and constraint reaction forces. Projecting the resultant wrench onto the screw axis \({}^{i}\mathbf{X}_{i}\) of joint \(i\) yields the generalized force \(Q_{i}\). Summarizing the forward and backward recursions yields the following recursive algorithm:
Forward Kinematics
* Input: \(\mathbf{q},\dot{\mathbf{q}},\ddot{\mathbf{q}}\)
* For \(i=1,\ldots,n\) \[\mathbf{C}_{i} =\mathbf{C}_{i-1}\mathbf{B}_{i}\exp({}^{i}\mathbf{X}_{i}q_{i})= \exp(\mathbf{Y}_{1}q_{1})\cdot\ldots\cdot\exp(\mathbf{Y}_{i}q_{i})\mathbf{A}_{i}\] (66a) \[\mathbf{V}_{i}^{\mathrm{b}} =\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{b}}+{} ^{i}\mathbf{X}_{i}\dot{q}_{i}\] (66b) \[\dot{\mathbf{V}}_{i}^{\mathrm{b}} =\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\dot{\mathbf{V}}_{i-1}^{\mathrm{b }}-\dot{q}_{i}\mathbf{ad}\cdot_{\mathbf{X}_{i}}\mathbf{V}_{i}^{\mathrm{b}}+{} ^{i}\mathbf{X}_{i}\ddot{q}_{i}\] (66c)
* Output: \(\mathbf{C}_{i},\mathbf{V}_{i}^{\mathrm{b}},\dot{\mathbf{V}}_{i}^{\mathrm{b}}\)
Inverse Dynamics
* Input: \(\mathbf{C}_{i},\mathbf{V}_{i}^{\mathrm{b}},\dot{\mathbf{V}}_{i}^{\mathrm{b}}, \mathbf{W}_{i}^{\mathrm{b,app}}\)
* For \(i=n-1,\ldots,1\) \[\mathbf{W}_{i}^{\mathrm{b}} =\mathbf{Ad}_{\mathbf{C}_{i+1,i}}^{T}\mathbf{W}_{i+1}^{\mathrm{b}}+\mathbf{M}_{i}^{\mathrm{b}}\dot{\mathbf{V}}_{i}^{\mathrm{b}}-\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{b}}}^{T}\mathbf{M}_{i}^{\mathrm{b}}\mathbf{V}_{i}^{\mathrm{b}}+\mathbf{W}_{i}^{\mathrm{b,app}}\] (67a) \[Q_{i} ={}^{i}\mathbf{X}_{i}^{T}\mathbf{W}_{i}^{\mathrm{b}}\] (67b)
* Output: \(\mathbf{Q}\)
The joint reaction wrench is omitted in (67a) since this is reciprocal to the joint screw, and does not contribute to (67b). Notice that, with (94), the body-fixed \({}^{i}\mathbf{X}_{i}\) as well as the spatial representation \(\mathbf{Y}_{i}\) of joint screw coordinates can be used. This form of the recursive body-fixed NE equations, using Lie group notation, has been reported in several publications [69; 73; 72; 51].
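For illustration, the recursions (66)-(67) translate almost line by line into code. The following self-contained NumPy sketch uses our own helper names and conventions (twist coordinates ordered \((\boldsymbol{\omega},\mathbf{v})\), a fixed base with \(\mathbf{V}_{0}^{\mathrm{b}}=\dot{\mathbf{V}}_{0}^{\mathrm{b}}=\mathbf{0}\), gravity accounted for through the applied wrenches \(\mathbf{W}_{i}^{\mathrm{b,app}}\)); it is a sketch of the algorithm rather than a reference implementation.

```python
import numpy as np

def hat(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def Ad(C):
    """6x6 adjoint matrix of a homogeneous transform C = [[R, r], [0, 1]]."""
    R, r = C[:3, :3], C[:3, 3]
    A = np.zeros((6, 6))
    A[:3, :3] = R
    A[3:, :3] = hat(r) @ R
    A[3:, 3:] = R
    return A

def ad(V):
    w, v = V[:3], V[3:]
    A = np.zeros((6, 6))
    A[:3, :3] = hat(w)
    A[3:, :3] = hat(v)
    A[3:, 3:] = hat(w)
    return A

def exp_se3(X, q):
    """SE(3) exponential of the joint screw X = (omega, v) scaled by q."""
    w, v = X[:3] * q, X[3:] * q
    th, W = np.linalg.norm(w), hat(w)
    if th < 1e-12:
        R, G = np.eye(3), np.eye(3)
    else:
        R = np.eye(3) + np.sin(th) / th * W + (1 - np.cos(th)) / th**2 * W @ W
        G = np.eye(3) + (1 - np.cos(th)) / th**2 * W \
            + (th - np.sin(th)) / th**3 * W @ W
    C = np.eye(4)
    C[:3, :3], C[:3, 3] = R, G @ v
    return C

def inverse_dynamics(B, X, M, q, qd, qdd, W_app):
    """Recursive body-fixed Newton-Euler inverse dynamics, eqs. (66)-(67).

    B     : constant 4x4 transforms B_i (configuration of joint i at q_i = 0)
    X     : joint screws ^iX_i in body-fixed coordinates
    M     : 6x6 body-fixed mass matrices M^b_i
    W_app : applied wrenches W^{b,app}_i (gravity is assumed to be included)
    """
    n = len(X)
    V, Vd, AdC = [np.zeros(6)], [np.zeros(6)], []
    for i in range(n):                                    # forward recursion
        C_rel = B[i] @ exp_se3(X[i], q[i])                # C_{i-1,i}, eq. (66a)
        AdC.append(Ad(np.linalg.inv(C_rel)))              # Ad_{C_{i,i-1}}
        Vi = AdC[i] @ V[-1] + X[i] * qd[i]                # eq. (66b)
        Vdi = AdC[i] @ Vd[-1] - qd[i] * ad(X[i]) @ Vi + X[i] * qdd[i]  # (66c)
        V.append(Vi)
        Vd.append(Vdi)
    Q, W = np.zeros(n), np.zeros(6)
    for i in range(n - 1, -1, -1):                        # backward recursion
        W_child = AdC[i + 1].T @ W if i < n - 1 else np.zeros(6)
        W = (W_child + M[i] @ Vd[i + 1]
             - ad(V[i + 1]).T @ (M[i] @ V[i + 1]) + W_app[i])   # eq. (67a)
        Q[i] = X[i] @ W                                   # eq. (67b)
    return Q
```

Only the constant transforms \(\mathbf{B}_{i}\), the joint screws \({}^{i}\mathbf{X}_{i}\), and the body-fixed mass matrices enter, so no additional joint frames beyond the chosen BFRs are needed, in line with the modeling philosophy emphasized above.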
_Computational Effort_ For the kinematic chain comprising \(n\) bodies connected by \(n\) 1-DOF joints, in total, the twist recursion (66b) and acceleration recursion (66c) each requires \(n-1\) frame transformations. The acceleration recursion (66c) further requires \(n-1\) Lie brackets. The second argument of the Lie bracket can be reused from (66b). Hence the twist and acceleration recursion need \(2\left(n-1\right)\) frame transformations and \(n-1\) Lie brackets. The backward recursion (67a) needs \(n-1\) frame transformations and \(n\) Lie brackets. In total, the NE algorithm needs \(3\left(n-1\right)\) frame transformations and \(2n-1\) Lie brackets. The evaluation of the Lie bracket in (66c) can be simplified using (65b) since the screw vector \({}^{i}\mathbf{X}_{i}\) expressed in RFR is sparse and often only contains one non-zero entry.
_Remark on Forward Dynamics_ Using the body-fixed representation, a recursive forward dynamics algorithm, making explicit use of Lie group concepts, was presented in [69; 70; 73; 72; 78]. The kinematic forward recursion together with the factorization in section 3.1.3 of [62] was used to derive \(O\left(n\right)\) forward dynamics algorithms in [31; 45], where the Lie group concept is regarded as spatial operator algebra. Other \(O\left(n\right)\) forward dynamics algorithms were presented in [2; 7; 35; 36]. The inverse dynamics formulation was also presented in [30; 46] in the context of screw theory.
### Spatial Representation
_Forward Kinematics_ Recursing the spatial twist in terms of the spatial Jacobian, the expressions (104) lead immediately to
\[\mathbf{V}_{i}^{\mathrm{s}}=\mathbf{V}_{i-1}^{\mathrm{s}}+\mathbf{J}_{i}^{ \mathrm{s}}\dot{q}_{i}. \tag{68}\]
The recursive determination of spatial accelerations thus only requires the time derivative (16) of the spatial Jacobian, so that
\[\dot{\mathbf{V}}_{i}^{\mathrm{s}} =\dot{\mathbf{V}}_{i-1}^{\mathrm{s}}+\mathbf{ad}_{\mathbf{V}_{i}^ {\mathrm{s}}}\mathbf{J}_{i}^{\mathrm{s}}\dot{q}_{i}+\mathbf{J}_{i}^{\mathrm{s} }\ddot{q}_{i} \tag{69}\] \[=\dot{\mathbf{V}}_{i-1}^{\mathrm{s}}+\mathbf{ad}_{\mathbf{V}_{i-1}^ {\mathrm{s}}}\mathbf{V}_{i}^{\mathrm{s}}+\mathbf{J}_{i}^{\mathrm{s}}\ddot{q}_ {i}.\]
The second form in (69) follows by inserting (68). This is the generalization of Euler's theorem, for the derivative of vectors resolved in moving frames, to screw coordinate vectors. Therefore the \(\mathbf{ad}\) operator is occasionally called the 'spatial cross product'.
_Recursive Newton-Euler Algorithm_ The momentum balance expressed with the spatial NE equations (38) together with (68) leads to the following algorithm:
Forward Kinematics

* Input: \(\mathbf{q},\dot{\mathbf{q}},\ddot{\mathbf{q}}\)
* For \(i=1,\ldots,n\) \[\mathbf{C}_{i} =\mathbf{C}_{i-1}\mathbf{B}_{i}\exp({}^{i}\mathbf{X}_{i}q_{i})=\exp(\mathbf{Y}_{1}q_{1})\cdot\ldots\cdot\exp(\mathbf{Y}_{i}q_{i})\mathbf{A}_{i}\] (70a) \[\mathbf{J}_{i}^{\text{s}} =\mathbf{Ad}_{\mathbf{C}_{i}}{}^{i}\mathbf{X}_{i}=\mathbf{Ad}_{\mathbf{C}_{i}\mathbf{A}_{i}^{-1}}\mathbf{Y}_{i}=\mathbf{Ad}_{\mathbf{C}_{j}\mathbf{S}_{j,j}}{}^{j-1}\mathbf{Z}_{j}\] (70b) \[\mathbf{V}_{i}^{\text{s}} =\mathbf{V}_{i-1}^{\text{s}}+\mathbf{J}_{i}^{\text{s}}\dot{q}_{i}\] (70c) \[\dot{\mathbf{V}}_{i}^{\text{s}} =\dot{\mathbf{V}}_{i-1}^{\text{s}}+\mathbf{J}_{i}^{\text{s}}\ddot{q}_{i}+\mathbf{ad}_{\mathbf{V}_{i-1}^{\text{s}}}\mathbf{V}_{i}^{\text{s}}\] (70d)
* Output: \(\mathbf{C}_{i},\mathbf{V}_{i}^{\text{s}},\dot{\mathbf{V}}_{i}^{\text{s}},\mathbf{J}_{i}^{\text{s}}\)

Inverse Dynamics

* Input: \(\mathbf{C}_{i},\mathbf{V}_{i}^{\text{s}},\dot{\mathbf{V}}_{i}^{\text{s}},\mathbf{J}_{i}^{\text{s}},\mathbf{W}_{i}^{\text{s,app}}\)
* For \(i=n-1,\ldots,1\) \[\mathbf{M}_{i}^{\text{s}} =\mathbf{Ad}_{\mathbf{C}_{i}}^{-T}\mathbf{M}_{i}^{\text{b}}\mathbf{Ad}_{\mathbf{C}_{i}}^{-1}\] (71a) \[\mathbf{W}_{i}^{\text{s}} =\mathbf{W}_{i+1}^{\text{s}}+\mathbf{M}_{i}^{\text{s}}\dot{\mathbf{V}}_{i}^{\text{s}}-\mathbf{ad}_{\mathbf{V}_{i}^{\text{s}}}^{T}\mathbf{M}_{i}^{\text{s}}\mathbf{V}_{i}^{\text{s}}+\mathbf{W}_{i}^{\text{s,app}}\] (71b) \[Q_{i} =(\mathbf{J}_{i}^{\text{s}})^{T}\mathbf{W}_{i}^{\text{s}}\] (71c)
* Output: \(\mathbf{Q}\)

_Computational Effort_ In contrast to (66b), once the instantaneous screws (70b) and the spatial mass matrix (34) are computed, the recursions (70c), (70d), and (71b) do not require frame transformations of twists. Instead the spatial mass matrix is transformed, according to (71a), which is the frame transformation of a second-order tensor. Overall the spatial algorithm needs \(n\) frame transformations of screw coordinates, \(n\) frame transformations of a second-order tensor, and \(2n-1\) Lie brackets. Comparing body-fixed and spatial formulation, it must be noticed that the frame transformation of the second-order inertia tensor has the same complexity as two screw coordinate transformations (if just implemented in the form (34)), and hence the computational complexity of both would be equivalent. This fact is to be expected since body-fixed and spatial representations are related by frame transformations. Nevertheless the spatial version has some interesting features that shall be emphasized:
1. The NE equations (38) form a non-linear first-order ODE system on \(SE\left(3\right)\times se\left(3\right)\). Since a spatial reference is used, the momentum conservation of a rigid body can simply be written as \(\dot{\mathbf{\Pi}}_{i}^{\text{s}}=\mathbf{0}\), where \(\mathbf{\Pi}_{i}^{\text{s}}\in se^{*}\left(3\right)\) is the momentum co-screw. Using the spatial momentum balance (36) has potentially two advantages. Firstly, (36) is a linear ODE in \(\mathbf{\Pi}\) on the phase space \(SE\left(3\right)\times se^{*}\left(3\right)\). This implies that a numerical integration scheme can easily preserve the momentum, as pointed out in [13]. Secondly, \(O\left(n\right)\) formulations using canonical momenta have been shown to be computationally advantageous. An \(O\left(n\right)\) forward dynamics algorithm based on the canonical Hamilton equations was presented in [66] that uses the hybrid form. It was shown to require fewer numerical operations than \(O\left(n\right)\) algorithms based on the NE equations. It is also known that \(O\left(n\right)\) algorithms based on the spatial representation can be computationally more efficient than those based on body-fixed or hybrid representations [30]. A further reduction of computational costs shall be expected from an algorithm using spatial momenta.
2. It is interesting to notice that the hybrid as well as the spatial twists appear in the recursive \(O\left(n\right)\) forward dynamics formulation in [6], where the former is called 'Cartesian velocity' and the latter 'velocity state'. In this formulation the spatial twist plays a central role, and it was already remarked that the recursive relation (70c) for spatial twists is simpler than that for hybrid twists, (75c) below.
3. If a _purely kinematic analysis_ is envisaged the forward recursion (70b)-(70d) is more efficient than the body-fixed and the hybrid version (see next section) [67] (disregarding possibly necessary transformations of the results to local reference frames). As pointed out in section 2.2 this advantage is retained for the higher-order kinematics (jerk, jounce, etc.) [55].
_Remark on Forward Dynamics_ The spatial formulation is rarely used for dynamics. Featherstone [27; 30] derived a forward dynamics \(O\left(n\right)\) algorithm. It was concluded that this requires the lowest computational effort compared to other methods. But this does not take into account the necessary transformations of twists and wrenches to local reference frames. Moreover, it was shown in [81] that the \(O\left(n\right)\) forward dynamics algorithm in body-fixed representation, using the body-fixed joint screw coordinates \({}^{i}\mathbf{X}_{i}\) and an RFR at the joint axis, can be implemented in such a way that it requires less computational effort than the spatial version. The key is that when the BFR \(\mathcal{F}_{i}\) is located at and aligned with the axis of joint \(i\), then \({}^{i}\mathbf{X}_{i}\) becomes sparse. From a user's perspective this is, however, a restrictive assumption.
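To make the structure of the spatial recursion explicit, the following minimal Python sketch implements the forward sweep (70c), (70d) and the backward sweep (71b), (71c). It assumes that the instantaneous spatial joint screws \(\mathbf{J}_{i}^{\mathrm{s}}\) and the spatial inertia matrices \(\mathbf{M}_{i}^{\mathrm{s}}\) have already been evaluated for the current configuration via (70a), (70b) and (71a); the sign convention for applied wrenches follows (71b), and all names are illustrative.

```python
import numpy as np

def skew(a):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -a[2], a[1]],
                     [a[2], 0, -a[0]],
                     [-a[1], a[0], 0]])

def ad(X):
    """Screw-product matrix (115) of a twist X = (omega, v)."""
    A = np.zeros((6, 6))
    A[:3, :3] = skew(X[:3])
    A[3:, 3:] = skew(X[:3])
    A[3:, :3] = skew(X[3:])
    return A

def spatial_newton_euler(Js, Ms, Wapp, qd, qdd):
    """Inverse dynamics with spatial twists, eqs. (70c), (70d), (71b), (71c).
    Js: list of spatial joint screws J_i^s (6-vectors), Ms: list of spatial
    inertia matrices M_i^s (6x6), Wapp: list of applied wrenches W_i^{s,app},
    qd, qdd: joint velocities and accelerations."""
    n = len(Js)
    V = [np.zeros(6)] * (n + 1)    # V[0]: twist of the (resting) base
    Vd = [np.zeros(6)] * (n + 1)   # gravity could be modeled as a base acceleration
    # forward recursion (70c), (70d)
    for i in range(1, n + 1):
        V[i] = V[i-1] + Js[i-1] * qd[i-1]
        Vd[i] = Vd[i-1] + Js[i-1] * qdd[i-1] + ad(V[i-1]) @ V[i]
    # backward recursion (71b), (71c), starting with W_{n+1} = 0
    Q = np.zeros(n)
    W_next = np.zeros(6)
    for i in range(n, 0, -1):
        W = W_next + Ms[i-1] @ Vd[i] - ad(V[i]).T @ Ms[i-1] @ V[i] + Wapp[i-1]
        Q[i-1] = Js[i-1] @ W
        W_next = W
    return Q
```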
### Hybrid Form
_Forward Kinematics Recursion_ The hybrid twist is determined recursively by (110) with \({}^{0}\mathbf{X}_{i}^{i}=\mathbf{Ad}_{\mathbf{R}_{i}}{}^{i}\mathbf{X}_{i}\). For the acceleration recursion note that \(\dot{\mathbf{Ad}}_{\mathbf{r}_{i,i-1}}=\mathbf{ad}_{\dot{\mathbf{r}}_{i,i-1}}=\mathbf{ad}_{\dot{\mathbf{r}}_{i-1}}-\mathbf{ad}_{\dot{\mathbf{r}}_{i}}\) since \(\dot{\mathbf{r}}_{i,i-1}=\dot{\mathbf{r}}_{i-1}-\dot{\mathbf{r}}_{i}\). This yields
\[\dot{\mathbf{V}}_{i}^{\text{h}}=\mathbf{Ad}_{\mathbf{r}_{i,i-1}}\dot{\mathbf{V}}_{i-1}^{\text{h}}+{}^{0}\mathbf{X}_{i}^{i}\ddot{q}_{i}+\mathbf{ad}_{\dot{\mathbf{r}}_{i,i-1}}\mathbf{V}_{i-1}^{\text{h}}+\mathbf{ad}_{\boldsymbol{\omega}_{i}^{\text{s}}}{}^{0}\mathbf{X}_{i}^{i}\dot{q}_{i}. \tag{72}\]
Taking into account that \(\mathbf{ad}_{\dot{\mathbf{r}}_{i}}\left(\mathbf{V}_{i}^{\text{h}}-{}^{0}\mathbf{X}_{i}^{i}\dot{q}_{i}\right)=\mathbf{ad}_{\dot{\mathbf{r}}_{i}}\mathbf{V}_{i-1}^{\text{h}}\) (because there is no angular part in \(\mathbf{ad}_{\dot{\mathbf{r}}_{i}}\)), and \(\mathbf{ad}_{\dot{\mathbf{r}}_{i}}+\mathbf{ad}_{\boldsymbol{\omega}_{i}^{\text{s}}}=\mathbf{ad}_{\mathbf{V}_{i}^{\text{h}}}\), this can be transformed to
\[\dot{\mathbf{V}}_{i}^{\text{h}}=\mathbf{Ad_{r}}_{i,i-1}\dot{\mathbf{V}}_{i-1} ^{\text{h}}+{}^{0}\mathbf{X}_{i}^{i}\ddot{q}_{i}+\mathbf{ad_{\dot{r}_{i-1}}} \mathbf{V}_{i-1}^{\text{h}}-\mathbf{ad_{\dot{r}_{i}}}\mathbf{V}_{i}^{\text{h} }+\mathbf{ad_{\mathbf{V}_{i}^{\text{h}}}}{}^{0}\mathbf{X}_{i}^{i}\dot{q}_{i}. \tag{73}\]
Another form follows by solving (110) for \(\mathbf{V}_{i-1}^{\text{h}}\) and inserting this into (72), while noting that \(\mathbf{ad_{\dot{r}_{i,i-1}}}\mathbf{Ad_{r}}_{i,i-1}^{-1}=\mathbf{ad_{\dot{r}_{ i,i-1}}}\), as
\[\dot{\mathbf{V}}_{i}^{\text{h}}=\mathbf{Ad_{r}}_{i,i-1}\dot{\mathbf{V}}_{i-1}^ {\text{h}}+{}^{0}\mathbf{X}_{i}^{i}\ddot{q}_{i}+\mathbf{ad_{\dot{r}_{i,i-1}}} \mathbf{V}_{i}^{\text{h}}+(\mathbf{ad_{\dot{V}_{i}^{\text{h}}}}-\mathbf{ad_{ \dot{r}_{i-1}}}){}^{0}\mathbf{X}_{i}^{i}\dot{q}_{i}. \tag{74}\]
Comparing the three recursive relations (72), (73), and (74) for the hybrid acceleration from a computational perspective, (72) is the most efficient.
_Recursive Newton-Euler Algorithm_ With the hybrid Newton-Euler equations (53) the recursive NE algorithm is as follows:
Forward Kinematics
* Input: \(\mathbf{q},\dot{\mathbf{q}},\ddot{\mathbf{q}}\)
* For \(i=1,\ldots,n\) \[\mathbf{C}_{i} =\mathbf{C}_{i-1}(\mathbf{q})\mathbf{B}_{i}\exp({}^{i}\mathbf{X}_{i}q_{i})=\exp(\mathbf{Y}_{1}q_{1})\cdot\ldots\cdot\exp(\mathbf{Y}_{i}q_{i})\mathbf{A}_{i}\] (75a) \[{}^{0}\mathbf{X}_{i}^{i} =\mathbf{Ad}_{\mathbf{R}_{i}}{}^{i}\mathbf{X}_{i}=\mathbf{Ad}_{-\mathbf{r}_{i}}\mathbf{Y}_{i}=\mathbf{Ad}_{\mathbf{R}_{i}}\mathbf{Ad}_{\mathbf{S}_{i,i}}{}^{i-1}\mathbf{Z}_{i}\] (75b) \[\mathbf{V}_{i}^{\text{h}} =\mathbf{Ad}_{\mathbf{r}_{i,i-1}}\mathbf{V}_{i-1}^{\text{h}}+{}^{0}\mathbf{X}_{i}^{i}\dot{q}_{i}\] (75c) \[\dot{\mathbf{V}}_{i}^{\text{h}} =\mathbf{Ad}_{\mathbf{r}_{i,i-1}}\dot{\mathbf{V}}_{i-1}^{\text{h}}+\mathbf{ad}_{\dot{\mathbf{r}}_{i,i-1}}\mathbf{V}_{i-1}^{\text{h}}+\mathbf{ad}_{\boldsymbol{\omega}_{i}^{\text{s}}}{}^{0}\mathbf{X}_{i}^{i}\dot{q}_{i}+{}^{0}\mathbf{X}_{i}^{i}\ddot{q}_{i}\] (75d)
* Output: \(\mathbf{C}_{i},\mathbf{V}_{i}^{\text{h}},\dot{\mathbf{V}}_{i}^{\text{h}},{}^{0} \mathbf{X}_{i}^{i}\)
Inverse Dynamics
* Input: \(\mathbf{C}_{i},\mathbf{V}_{i}^{\text{h}},\dot{\mathbf{V}}_{i}^{\text{h}},{}^{0} \mathbf{X}_{i}^{i},\mathbf{W}_{i}^{\text{h,app}}\)
* For \(i=n,\ldots,1\), with \(\mathbf{W}_{n+1}^{\mathrm{h}}:=\mathbf{0}\), \[\mathbf{M}_{i}^{\mathrm{h}} =\mathbf{Ad}_{\mathbf{R}_{i}}^{-T}\mathbf{M}_{i}^{\mathrm{b}}\mathbf{Ad}_{\mathbf{R}_{i}}^{-1}\] (76a) \[\mathbf{W}_{i}^{\mathrm{h}} =\mathbf{Ad}_{\mathbf{r}_{i+1,i}}^{T}\mathbf{W}_{i+1}^{\mathrm{h}}+\mathbf{M}_{i}^{\mathrm{h}}\dot{\mathbf{V}}_{i}^{\mathrm{h}}+\mathbf{ad}_{\boldsymbol{\omega}_{i}^{\mathrm{s}}}\mathbf{M}_{i}^{\mathrm{h}}\grave{\mathbf{V}}_{i}^{\mathrm{h}}+\mathbf{W}_{i}^{\mathrm{h,app}}\] (76b) \[\mathbf{Q}_{i} =({}^{0}\mathbf{X}_{i}^{i})^{T}\mathbf{W}_{i}^{\mathrm{h}}\] (76c)
* Output: \(\mathbf{Q}\)
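A corresponding minimal sketch of the hybrid forward recursion (75c), (75d) is given below. It only changes the reference point with \(\mathbf{Ad}_{\mathbf{r}_{i,i-1}}\) and therefore needs nothing but the position vectors of the BFR origins resolved in the IFR; the current hybrid joint screws \({}^{0}\mathbf{X}_{i}^{i}\) are assumed given, and all names are illustrative.

```python
import numpy as np

def skew(a):
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])

def Ad_r(r):
    """Change of reference point by r, eq. (96): Ad_r = [[I, 0], [r~, I]]."""
    A = np.eye(6)
    A[3:, :3] = skew(r)
    return A

def hybrid_twist_recursion(X0, r, rd, qd, qdd):
    """Hybrid twist/acceleration recursion (75c), (75d).
    X0[i-1]: current hybrid joint screw 0X_i^i (6-vector); r[i], rd[i]:
    position and velocity of the BFR origin of body i in the IFR (index 0 = base)."""
    n = len(X0)
    V, Vd = [np.zeros(6)], [np.zeros(6)]
    for i in range(1, n + 1):
        d = r[i-1] - r[i]       # r_{i,i-1}
        dd = rd[i-1] - rd[i]    # its time derivative
        Vi = Ad_r(d) @ V[i-1] + X0[i-1] * qd[i-1]                    # (75c)
        ad_w = np.block([[skew(Vi[:3]), np.zeros((3, 3))],
                         [np.zeros((3, 3)), skew(Vi[:3])]])          # ad_{omega_i^s}
        ad_d = np.zeros((6, 6)); ad_d[3:, :3] = skew(dd)             # ad_{rdot_{i,i-1}}
        Vdi = (Ad_r(d) @ Vd[i-1] + ad_d @ V[i-1]
               + (ad_w @ X0[i-1]) * qd[i-1] + X0[i-1] * qdd[i-1])    # (75d)
        V.append(Vi); Vd.append(Vdi)
    return V, Vd
```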
_Computational Effort_ The hybrid representation is a compromise between using twists and wrenches measured in body-fixed frames (as for the body-fixed representation, where twists and wrenches are measured at the RFR origin) and those resolved in the IFR (as for the spatial representation, where twists and wrenches are measured at the IFR origin). It has therefore been used extensively for \(O\left(n\right)\) inverse and forward dynamics algorithms. The essential difference between the forward recursion for kinematic evaluation in body-fixed and hybrid formulation is that the body-fixed recursion (66a-66c) requires frame transformations of screws involving rotations and translations whereas the hybrid recursion (75a-75d) only requires the change of reference point using position vectors resolved in the IFR. The attitude transformation only appears in (75b) and in the computation of the hybrid inertia matrix (76a). In total the forward kinematics needs \(n\) rotational and \(2n-2\) translational transformations. Further, (75d) needs \(n-1\) cross products of the form \(\mathbf{ad}_{\dot{\mathbf{r}}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{h}}=\left(\mathbf{0},(\dot{\mathbf{r}}_{i-1}-\dot{\mathbf{r}}_{i})\times\omega_{i-1}^{\mathrm{s}}\right)^{T}\) and \(n\) Lie brackets \(\mathbf{ad}_{\omega_{i}^{\mathrm{s}}}{}^{0}\mathbf{X}_{i}^{i}\). The inverse dynamics needs the \(n\) rotational transformations (76a) of the second-order inertia tensor, \(n-1\) translational transformations of wrenches and \(n\) Lie brackets with \(\omega_{i}^{\mathrm{s}}\) in (76b). In total the hybrid NE algorithm needs \(3n-3\) translational and \(n\) rotational transformations of screw coordinates, \(n\) rotational transformations of the inertia tensor, and \(3n-1\) Lie brackets. Although the number of operations is equivalent to the body-fixed version, the particular form of transformations is computationally very simple, motivating its extensive use in \(O\left(n\right)\) forward dynamics algorithms. Moreover, the hybrid NE equations are commonly expressed in a body-fixed BFR at the COM, so that the hybrid NE equations simplify to (48) and (49),(50), respectively.
Instead of transforming the joint screws \({}^{i}\mathbf{X}_{i}\) or \(\mathbf{Y}_{i}\) in the reference configuration, the instantaneous hybrid joint screws can be determined using the defining expression (36) in [62] with the current \(\mathbf{b}_{j,i}\) and \(\mathbf{e}_{j}\).
_Remark on Forward Dynamics_ The above inverse dynamics formulation was presented in [38; 39; 75; 76] together with \(O\left(n\right)\) forward dynamics algorithms. An \(O\left(n\right)\) forward dynamics method was presented in [1; 6]. These algorithms are deemed efficient taking into account that the computation results do not have to be transformed to the body-fixed reference points of interest, as in the case of the spatial version. An \(O\left(n\right)\) forward dynamics algorithm was developed in [66] using the canonical Hamilton equations in hybrid representation, i.e. the momentum balance (52) in terms of the conjugate momenta \(\mathbf{\Pi}_{i}^{\mathrm{h}}\), rather than the NE equations. It was concluded that its performance is comparable to that of Featherstone's method [30] in terms of spatial twists.
### Choice of Body-Fixed Reference Frames
The Lie group formulation involves geometric and inertia properties that are readily available, e.g. from CAD data.
In [62] and in the preceding sections two approaches to the description of MBS geometry (with and without body-fixed joint frames) and three versions for representing velocities and accelerations (body-fixed, spatial, hybrid) were presented, of which each has its merits. The description of the geometry is independent from the representation of twists. For instance, the geometry could be described in terms of joint screws expressed in the IFR while the kinematics and dynamics is modeled using body-fixed twists. This allows to take advantage of the low-complexity hybrid or spatial recursive NE equations while still having the freedom to use or avoid body-fixed joint frames.
The standard approach to model an MBS is to introduce 1.) an IFR, 2.) body-fixed BFRs, and 3.) body-fixed JFRs. The latter is avoided using spatial joint screws \(\mathbf{Y}_{i}\), as already presented. It still remains to introduce body-fixed BFRs kinematically representing the bodies. However, even the _explicit_ definition of RFRs can be avoided by properly placing them. Their location is usually dictated by the definition of the inertia tensors, and it is customary to relate the inertia data to the COM. If instead the body-fixed BFRs are assigned such that they coincide in the reference configuration (\(\mathbf{q}=\mathbf{0}\)) with the IFR, then no reference configurations of bodies need to be determined (\(\mathbf{A}_{i}=\mathbf{I}\)). This normally means that the RFR is outside the physical extension of a body. That is, the inertia properties of all bodies are determined in the assembly reference configuration w.r.t. the global IFR. In other words they are deduced from the design drawing (corresponding to \(\mathbf{q}=\mathbf{0}\)) relative to a single construction frame. This can be exploited when using CAD systems. The required kinematic data then reduces to the direction and position vectors, \(\mathbf{e}_{i}\) and \(\mathbf{y}_{i}\), in order to compute \(\mathbf{Y}_{i}\) in (94). As a side effect, \(\mathbf{Y}_{i}={}^{i}\mathbf{X}_{i}\). This is an important result, and applies to any of the discussed twist representations, since the representation of twists has nothing to do with the geometry description. Moreover, then the POE (93) and the Jacobian (103), (104), and thus (106), in terms of spatial screw coordinates simplify. The only computational drawback is that the hybrid Newton and Euler equations are not decoupled since the spatial IFR, to which the inertia data is related, is unlikely to coincide with the COM of the bodies in the reference configuration. Details can be found in [54].
## 5 Motion Equations in Closed Form
### Euler-Jourdain Equations
The body-fixed NE equations for the individual bodies within the MBS are
\[\mathbf{M}_{i}^{\mathrm{b}}\mathbf{\dot{V}}_{i}^{\mathrm{b}}-\mathbf{ad}_{ \mathbf{V}_{i}}^{T}\mathbf{M}_{i}^{\mathrm{b}}\mathbf{V}_{i}^{\mathrm{b}}- \mathbf{W}_{i}^{\mathrm{b,app}}-\mathbf{W}_{i}^{\mathrm{b,c}}=\mathbf{0} \tag{77}\]
where \(\mathbf{W}_{i}^{\mathrm{b,c}}\) is the constraint reaction wrench of joint \(i\), and \(\mathbf{W}_{i}^{\mathrm{b,app}}\) represents the total wrench applied to body \(i\) including the applied wrench in joint \(i\). Jourdain's principle of virtual power, using the admissible variation \(\delta\mathbf{V}^{\mathrm{b}}=\mathrm{J}^{\mathrm{b}}\delta\dot{\mathbf{q}}\) of the system twist, and noting that \(\delta\mathbf{V}^{\mathrm{b}}\) are reciprocal to the constraint wrenches (see appendix B.2), yields the system of \(n\) motion equations
\[\left(\mathrm{J}^{\mathrm{b}}\right)^{T}\left(\begin{array}{c}\mathbf{M}_{1}^{\mathrm{b}}\dot{\mathbf{V}}_{1}^{\mathrm{b}}-\mathbf{ad}_{\mathbf{V}_{1}^{\mathrm{b}}}^{T}\mathbf{M}_{1}^{\mathrm{b}}\mathbf{V}_{1}^{\mathrm{b}}-\mathbf{W}_{1}^{\mathrm{b,app}}\\ \vdots\\ \mathbf{M}_{n}^{\mathrm{b}}\dot{\mathbf{V}}_{n}^{\mathrm{b}}-\mathbf{ad}_{\mathbf{V}_{n}^{\mathrm{b}}}^{T}\mathbf{M}_{n}^{\mathrm{b}}\mathbf{V}_{n}^{\mathrm{b}}-\mathbf{W}_{n}^{\mathrm{b,app}}\end{array}\right)=\mathbf{0}. \tag{78}\]
This form allows for a concise and computationally efficient construction of the motion equations. The point of departure are the NE equations of the individual bodies. The body-fixed system Jacobian (112) is determined by the (constant) joint screw coordinates in \(\mathsf{X}^{\mathrm{b}}\) and the screw transformations encoded in \(\mathsf{A}^{\mathrm{b}}\). The same applies to the other representations. The accelerations are determined by (6), respectively (10). Explicit evaluation of (78) leads to the recursive algorithm in section 4.1. Inserting the twists and accelerations in (78) yields the equations (1) that determine the MBS dynamics on the tangent bundle \(T\mathbb{V}^{n}\) with state vector \(\left(\mathbf{q},\dot{\mathbf{q}}\right)\in T\mathbb{V}^{n}\). Alternatively, combining (78) with (111) yields a system of \(n+6n\) ODEs in the state variables \(\left(\mathbf{q},\mathbf{V}^{\mathrm{b}}\right)\in\mathbb{V}^{n}\times se\left( 3\right)^{n}\) that govern the dynamics on the state space \(\mathbb{V}^{n}\times se\left(3\right)^{n}\). The advantage of this formulation is that it is an ODE system of first order, and that the system has block triangular structure. Yet another interesting formulation follows with the NE equations (36) in terms of the
conjugate momenta in spatial representation
\[\left(\mathsf{J}^{\mathrm{s}}\right)^{T}\left(\begin{array}{c}\dot{\vec{\Pi}}_ {1}^{\mathrm{s}}-\vec{\mathbf{W}}_{1}^{\mathrm{s,app}}\\ \vdots\\ \dot{\vec{\Pi}}_{n}^{\mathrm{s}}-\vec{\mathbf{W}}_{n}^{\mathrm{s,app}}\end{array} \right)=\vec{\mathbf{0}} \tag{79}\]
This is a system of \(n+6n\) first order ODEs in the phase space \(\left(\vec{\mathbf{q}},\vec{\mathbf{\Pi}}^{\mathrm{s}}\right)\in\mathbb{V}^{n }\times se^{*}\left(3\right)^{n}\). The system (79) can be solved for the \(\dot{\vec{\Pi}}_{i}^{\mathrm{s}}\) and \(\dot{\vec{\mathbf{q}}}_{i}\) noting the block triangular structure of \(\mathsf{J}^{\mathrm{s}}\)[62]. From a numerical point of view the momentum formulation in phase space shall allow for momentum preserving integration schemes.
Various versions of (78) have been published. Using the hybrid representation of twists, basically the same equations were reported in [4]. There the system Jacobian is called the 'natural orthogonal complement' motivated by the fact that the columns of \(\mathsf{J}^{\mathrm{b}}\) are orthogonal to the vectorial representations of constraint wrenches (although the former are screws while the latter are co-screws). In classical vector notation they were reported in [40; 49] and [15]. In [40] the equations (78) are called Euler-Jourdain equations. In [49], emphasizing on the recursive evaluation of the body Jacobian the instantaneous body-fixed joint screws \(\vec{\mathbf{J}}_{i}^{\mathrm{b}}\) are called 'kinematic basic functions' as they are the intrinsic objects in MBS kinematics. In [15] the equations (78) are called 'projection equations' since the NE equations of the individual bodies are restricted to the feasible motion (although \(\mathsf{J}^{\mathrm{b}}\) is not a projector). The equations (78) in body-fixed representation are equivalent to Kane's equations where \(\vec{\mathbf{J}}_{i}^{\mathrm{b}}\) are called 'partial velocities' [41]. The instantaneous joint screw coordinates, i.e. the columns \(\vec{\mathbf{J}}_{i}^{\mathrm{b}}\) of the geometric Jacobian, were also called 'kinematic influence coefficients' and their partial derivatives (5) the'second-order kinematic influence coefficients' [8; 84].
It should be finally remarked that due to the block triangular form of \(\mathsf{J}^{\mathrm{b}}\), solving (78), and using the inversion of \(\vec{\mathbf{A}}^{\mathrm{b}}\) (ref. (25) in [62]), leads immediately to an \(O\left(n\right)\) forward dynamics algorithm. This is the common starting point for deriving forward dynamics algorithms that applies to any twist representation.
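A minimal sketch of how (78) is assembled in practice: the body-wise Newton-Euler residuals are projected with the body-fixed Jacobians of the individual bodies. The per-body Jacobians, inertias, twists and accelerations are assumed to be already evaluated for the current state, and the applied wrenches are taken to include all impressed forces, as in (78); all names are illustrative.

```python
import numpy as np

def skew(a):
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])

def ad(X):
    A = np.zeros((6, 6))
    A[:3, :3] = skew(X[:3]); A[3:, 3:] = skew(X[:3]); A[3:, :3] = skew(X[3:])
    return A

def euler_jourdain(Jb, Mb, Vb, Vbd, Wapp):
    """Project the body-wise NE terms as in (78).
    Jb[i]: 6 x n body-fixed Jacobian of body i, Mb[i]: 6 x 6 body-fixed inertia,
    Vb[i], Vbd[i]: body-fixed twist and acceleration, Wapp[i]: applied wrench."""
    n = Jb[0].shape[1]
    Q = np.zeros(n)
    for Ji, Mi, Vi, Vdi, Wi in zip(Jb, Mb, Vb, Vbd, Wapp):
        Q += Ji.T @ (Mi @ Vdi - ad(Vi).T @ Mi @ Vi - Wi)
    # Q vanishes along a dynamically consistent trajectory; otherwise it is the
    # residual of generalized forces that must be supplied by the joint drives.
    return Q
```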
### Lagrange Equations
The MBS motion equations can be derived as the Lagrange equations in terms of generalized coordinates. For simplicity, potential forces are omitted so that the Lagrangian is simply the kinetic energy. Then the equations attain the form
\[\frac{d}{dt}\left(\frac{\partial T}{\partial\dot{\vec{\mathbf{q}}}}\right)^{ T}-\left(\frac{\partial T}{\partial\vec{\mathbf{q}}}\right)^{T}=\vec{ \mathbf{M}}\left(\vec{\mathbf{q}}\right)\ddot{\vec{\mathbf{q}}}+\vec{\mathbf{C }}\left(\dot{\vec{\mathbf{q}}},\vec{\mathbf{q}}\right)\dot{\vec{\mathbf{q}}}= \vec{\mathbf{Q}}\left(\dot{\vec{\mathbf{q}}},\vec{\mathbf{q}},t\right) \tag{80}\]
with generalized mass matrix \(\vec{\mathbf{M}}\), and \(\vec{\mathbf{C}}\left(\dot{\vec{\mathbf{q}}},\vec{\mathbf{q}}\right)\dot{ \vec{\mathbf{q}}}\) representing Coriolis and centrifugal forces. The vector \(\vec{\mathbf{Q}}\) stands for all other generalized forces, including potential, dissipative, and applied forces. Using body-fixed twists, the kinetic energy of body \(i\) is \(T_{i}=\frac{1}{2}(\vec{\mathbf{V}}_{i}^{\mathrm{b}})^{T}\vec{\mathbf{M}}_{i}^{ \mathrm{b}}\vec{\mathbf{V}}_{i}^{\mathrm{b}}\). The kinetic energy of the MBS is \(T\left(\dot{\vec{\mathbf{q}}},\vec{\mathbf{q}}\right)=\sum_{i}T_{i}=\frac{1}{ 2}(\vec{\mathbf{V}}^{\mathrm{b}})^{T}\vec{\mathbf{M}}^{\mathrm{b}}\vec{ \mathbf{v}}^{\mathrm{b}}=\frac{1}{2}\vec{\mathbf{q}}^{T}\vec{\mathbf{M}}\vec {\mathbf{q}}\) with the generalized mass matrix
\[\left|\vec{\mathbf{M}}\left(\vec{\mathbf{q}}\right)=(\mathsf{J}^{\mathrm{b}}) ^{T}\vec{\mathbf{M}}^{\mathrm{b}}\mathsf{J}^{\mathrm{b}}\right| \tag{81}\]
and \(\vec{\mathbf{M}}^{\mathrm{b}}:=\mathrm{diag}\left(\vec{\mathbf{M}}_{1}^{ \mathrm{b}},\dots,\vec{\mathbf{M}}_{n}^{\mathrm{b}}\right)\). The conjugate momentum vector is thus \(\left(\frac{\partial T}{\partial\vec{\mathbf{q}}}\right)^{T}=(\mathsf{J}^{ \mathrm{b}})^{T}\vec{\mathbf{M}}^{\mathrm{b}}\vec{\mathbf{V}}^{\mathrm{b}}\). Its time derivative is with (10) given as \(\frac{d}{dt}\left(\frac{\partial T}{\partial\vec{\mathbf{q}}}\right)^{T}= \vec{\mathbf{M}}\left(\vec{\mathbf{q}}\right)\ddot{\vec{\mathbf{q}}}-(\mathsf{ J}^{\mathrm{b}})^{T}\left((\vec{\mathbf{M}}^{\mathrm{b}}\vec{\mathbf{A}}^{\mathrm{b}} \vec{\mathbf{a}})^{T}+\vec{\mathbf{M}}^{\mathrm{b}}\vec{\mathbf{A}}^{\mathrm{b }}\vec{\mathbf{a}}^{\mathrm{b}}\vec{\mathbf{a}}\right)\mathsf{J}^{\mathrm{b}}\dot {\vec{\mathbf{q}}}\), and \(\vec{\mathbf{a}}^{\mathrm{b}}\) defined in (9). From (7) follows \(\left(\frac{\partial T}{\partial\vec{\mathbf{q}}}\right)^{T}=(\vec{\mathbf{M}}^{ \mathrm{b}}\vec{\mathbf{A}}^{\mathrm{b}}\vec{\mathbf{b}}^{\mathrm{b}}\vec{ \mathbf{v}}^{\mathrm{b}})^{T}\mathsf{J}^{\mathrm{b}}\dot{\vec{\mathbf{q}}}\), with
\[\mathsf{b}^{\mathrm{b}}\left(\vec{\mathbf{V}}^{\mathrm{b}}\right):=\mathrm{ diag}\ (\vec{\mathbf{a}}\vec{\mathbf{d}}\vec{\mathbf{v}}_{1}^{\mathrm{b}},\dots,\vec{\mathbf{a}} \vec{\mathbf{d}}\vec{\mathbf{v}}_{n}^{\mathrm{b}}). \tag{82}\]
This admits to identify the generalized mass matrix (81) and the matrix
\[\mathbf{C}\left(\mathbf{q},\dot{\mathbf{q}}\right) =-(\mathbf{J}^{\mathrm{b}})^{T}\left((\mathbf{M}^{\mathrm{b}} \mathbf{A}^{\mathrm{b}}\mathbf{a}^{\mathrm{b}})^{T}+\mathbf{M}^{\mathrm{b}} \mathbf{A}^{\mathrm{b}}\mathbf{a}^{\mathrm{b}}\right)\mathbf{J}^{\mathrm{b}}-( \mathbf{M}^{\mathrm{b}}\mathbf{A}^{\mathrm{b}}\mathbf{b}^{\mathrm{b}}\mathbf{ Y}^{\mathrm{b}})^{T}\mathbf{J}^{\mathrm{b}}\] \[=-(\mathbf{a}^{\mathrm{b}}\mathbf{J}^{\mathrm{b}}+\mathbf{b}^{ \mathrm{b}}\mathbf{X}^{\mathrm{b}})^{T}(\mathbf{A}^{\mathrm{b}})^{T}\mathbf{M} ^{\mathrm{b}}\mathbf{J}^{\mathrm{b}}-(\mathbf{J}^{\mathrm{b}})^{T}\mathbf{M}^ {\mathrm{b}}\mathbf{A}^{\mathrm{b}}\mathbf{a}^{\mathrm{b}}\mathbf{J}^{\mathrm{ b}}. \tag{83}\]
The first term on the right hand side in (83) can be simplified so that
\[\boxed{\mathbf{C}\left(\mathbf{q},\dot{\mathbf{q}}\right)}\quad=-(\mathbf{J }^{\mathrm{b}})^{T}(\mathbf{M}^{\mathrm{b}}\mathbf{A}^{\mathrm{b}}\mathbf{a}^ {\mathrm{b}}+(\mathbf{b}^{\mathrm{b}})^{T}\mathbf{M}^{\mathrm{b}})\mathbf{J}^ {\mathrm{b}}. \tag{84}\]
The concise expressions (81) and (84) allow for construction of the Lagrange equations in closed form. Similar expressions can be derived using the spatial and hybrid representation of twists.
For analytic investigations of the MBS dynamics it may be useful to write the Lagrange equations in components as
\[\sum_{j=1}^{n}M_{ij}\left(\mathbf{q}\right)\ddot{q}_{j}+\sum_{j,k=1}^{n}\Gamma _{ijk}\left(\mathbf{q}\right)\dot{q}_{j}\dot{q}_{k}=Q_{i}\left(\mathbf{q}, \dot{\mathbf{q}},t\right) \tag{85}\]
where the Christoffel symbols of the first kind are defined as \(\Gamma_{ijk}=\frac{1}{2}\left(\frac{\partial M_{ik}}{\partial q_{j}}+\frac{\partial M_{ij}}{\partial q_{k}}-\frac{\partial M_{jk}}{\partial q_{i}}\right)=\Gamma_{ikj}\). The recursive relations (5) give rise to the closed form expressions
\[\Gamma_{ijk} =\frac{1}{2}\sum_{l=k}^{n}\left((\mathbf{J}_{l,k}^{\mathrm{b}}) ^{T}\mathbf{M}_{l}\mathbf{a}\mathbf{d}_{\mathbf{J}_{l,i}^{\mathrm{b}}} \mathbf{J}_{l,j}^{\mathrm{b}}+(\mathbf{J}_{lj}^{\mathrm{b}})^{T}\mathbf{M}_{ l}\mathbf{a}\mathbf{d}_{\mathbf{J}_{l,i}^{\mathrm{b}}}\mathbf{J}_{l,k}^{ \mathrm{b}}+(\mathbf{J}_{l,i}^{\mathrm{b}})^{T}\mathbf{M}_{l}\mathbf{a} \mathbf{d}_{\mathbf{J}_{l,s}^{\mathrm{b}}}\mathbf{J}_{l,r}^{\mathrm{b}}\right) \tag{86}\] \[\text{with }i<j\leq k\text{ or }j\leq i<k,\ r=\max\left(i,j \right),s=\min\left(i,j\right).\]
This expression for the Christoffel symbols in Lie group notation was reported in [17; 51], and already in [49] in tensor notation. This expression simplifies when Binet's inertia tensor \(\boldsymbol{\vartheta}_{i}=\frac{1}{2}\mathrm{tr}\left(\boldsymbol{\Theta}_{i} \right)\mathbf{I}-\boldsymbol{\Theta}_{i}\) is used in the mass matrix \(\mathbf{M}_{i}^{\mathrm{b}}\). Then (39) is replaced by \(\mathbf{\tilde{M}}_{ic}^{\mathrm{b}}=\text{diag}\left(\boldsymbol{\vartheta}_{ i},m_{i}\mathbf{I}\right)\), and (40) by \(\mathbf{\tilde{M}}_{i}^{\mathrm{b}}=\mathbf{A}\mathbf{d}_{\mathbf{S}_{\mathrm{ bc}}}^{-T}\mathbf{\tilde{M}}_{ic}^{\mathrm{b}}\mathbf{A}\mathbf{d}_{\mathbf{S}_{ \mathrm{bc}}}^{-1}\). This leads to
\[\Gamma_{ijk} =\frac{1}{2}\sum_{l=k}^{n}(\mathbf{J}_{l,j}^{\mathrm{b}})^{T} \mathbf{\tilde{M}}_{l}\mathbf{a}\mathbf{d}_{\mathbf{\tilde{J}}_{l,k}^{\mathrm{ b}}}\mathbf{J}_{l,i}^{\mathrm{b}} \tag{87}\] \[\text{with }i<j\leq k\text{ or }j\leq i<k.\]
The equations (86) were presented in [51; 70; 72; 69; 73], and (87) in [51]. Prior to these publications the equations (86) and (87) have been reported in [49; 50] using tensor notation rather than Lie group notation. Another publication that should be mentioned is [20] where the Lagrange equations were derived using similar algebraic operations.
The above closed forms of EOM are derived using body-fixed twists. The potential benefit of using spatial or hybrid twists remains to be explored.
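The component form (85), (86) can be checked numerically with any routine returning the generalized mass matrix, e.g. via (81). The sketch below uses the well-known planar 2R arm as an illustrative stand-in for \(\mathbf{M}(\mathbf{q})\) and obtains the Christoffel symbols of the first kind from central differences of \(\mathbf{M}\); all parameters and states are arbitrary test data.

```python
import numpy as np

def mass_matrix(q, m=(1.0, 1.0), l1=1.0, lc=(0.5, 0.5), I=(0.1, 0.1)):
    """Generalized mass matrix of a planar 2R arm, used as a stand-in for (81)."""
    c2 = np.cos(q[1])
    m11 = I[0] + I[1] + m[0]*lc[0]**2 + m[1]*(l1**2 + lc[1]**2 + 2*l1*lc[1]*c2)
    m12 = I[1] + m[1]*(lc[1]**2 + l1*lc[1]*c2)
    m22 = I[1] + m[1]*lc[1]**2
    return np.array([[m11, m12], [m12, m22]])

def christoffel(q, h=1e-6):
    """Gamma_ijk = 1/2 (dM_ik/dq_j + dM_ij/dq_k - dM_jk/dq_i), central differences."""
    n = len(q)
    dM = np.zeros((n, n, n))            # dM[j] = dM/dq_j
    for j in range(n):
        e = np.zeros(n); e[j] = h
        dM[j] = (mass_matrix(q + e) - mass_matrix(q - e)) / (2*h)
    G = np.zeros((n, n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                G[i, j, k] = 0.5*(dM[j][i, k] + dM[k][i, j] - dM[i][j, k])
    return G

q, qd = np.array([0.3, -0.7]), np.array([0.5, 1.2])
G = christoffel(q)
coriolis = np.einsum('ijk,j,k->i', G, qd, qd)   # sum_jk Gamma_ijk qd_j qd_k in (85)
print(coriolis)
```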
## 6 Derivatives of Motion Equations
In various contexts the information about the sensitivity of the MBS kinematics and dynamics is required, either w.r.t. joint angles, geometric parameters, or dynamic parameters. Whereas it is known that the EOM of a rigid body MBS attain a form that is linear in the dynamic parameters, they depend non-linearly on the generalized coordinates and the geometry. The POE formulation provides a means to determine the sensitivity w.r.t. kinematic parameters.
### Sensitivity of Motion Equations
Gradients w.r.t. generalized coordinates are required for the linearization of the EOM (as basis for stability analysis and controller design) as well as for optimal control of MBS. Since the second-order and higher derivatives (12) of the body-fixed Jacobian, (20) of the spatial Jacobian, and (25) of the hybrid Jacobian are given as algebraic closed form expressions in terms of screw products, the linearized EOM can be evaluated recursively as well as expressed in closed-form. Using the Lie group notation this was reported in [78].
The same results were already presented in [50] using tensor notation. Comparing the two formulations reveals once more that the matrix Lie group formulation provides a level of abstraction leading to compact expressions. A closed form for the partial derivatives of the inverse mass matrix has been reported in [52], which is required for investigating the controllability of MBS. Using the body-fixed representation of twists, recursive \(O\left(n\right)\) algorithms were reported in [36; 3].
### Geometric Sensitivity
Optimizing the design of an MBS requires information about the sensitivity w.r.t. to geometric parameters. A recursive algorithm was reported in [36] and its parallel implementation in [3] where the partial derivatives are computed on a case by case basis. The Lie group formulation gives rise to a general closed-form expression. To this end, the POE formula (92) is extended as follows.
The geometry of the two bodies \(i\) and \(i-1\) connected by joint \(i\) is encoded in the constant part \(\mathbf{S}_{i,i}\) and \(\mathbf{S}_{i-1,i}\) in (91), respectively in \(\mathbf{B}_{i}\) in the formulation in (92). These are frame transformations, and can hence be parameterized in terms of screw coordinates. If \(\mathbf{B}_{i}\) depends on \(\lambda\leq 6\) geometric parameters, it is expressed as \(\mathbf{B}_{i}\left(\pi_{i}\right)=\mathbf{B}_{i0}\exp(\mathbf{U}_{i1}\pi_{i 1})\cdot\ldots\exp(\mathbf{U}_{i\lambda}\pi_{i\lambda})\). The screw coordinates \(\mathbf{U}_{i1}\) and corresponding parameters \(\pi_{i1}\) account for the considered variations from the nominal geometry, represented by \(\mathbf{B}_{i0}\in SE\left(3\right)\). The relative configuration due to joint \(i\) and the geometric variations is thus \(\mathbf{C}_{i-1,i}\left(\mathbf{q},\pi_{i}\right)=\mathbf{B}_{i}\left(\pi_{i} \right)\exp({}^{i}\mathbf{X}_{i}q_{i})\). The key observation is that partial derivatives of \(\mathbf{B}_{i}\left(\pi_{i}\right)\) are available in closed form, as for the joint screw coordinates. Hence also the sensitivity w.r.t. the MBS geometry can be expressed in closed form [53]. This fact has been applied to robot calibration [22; 23] where the POE accounts for geometric imperfections to be identified.
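To illustrate that these parameter derivatives are indeed available in closed form, the following sketch compares a finite-difference derivative of \(\mathbf{C}(\pi)=\mathbf{B}_{0}\exp(\mathbf{U}\pi)\exp(\mathbf{X}q)\) with the closed-form left-trivialized derivative screw \(\mathbf{Ad}_{\exp(-q\mathbf{X})}\mathbf{U}\). A single variation parameter is used for brevity; the screws and numerical values are arbitrary illustrative data.

```python
import numpy as np

def skew(a):
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])

def exp_se3(X):
    """Closed-form exponential of a screw X = (omega, v) in R^6 -> SE(3)."""
    w, v = X[:3], X[3:]
    th, W = np.linalg.norm(w), skew(X[:3])
    if th < 1e-12:
        R, A = np.eye(3), np.eye(3)
    else:
        R = np.eye(3) + np.sin(th)/th*W + (1 - np.cos(th))/th**2 * W @ W
        A = np.eye(3) + (1 - np.cos(th))/th**2 * W + (th - np.sin(th))/th**3 * W @ W
    C = np.eye(4); C[:3, :3], C[:3, 3] = R, A @ v
    return C

def Ad(C):
    """Adjoint matrix (95) of a configuration C = (R, r)."""
    R, r = C[:3, :3], C[:3, 3]
    A = np.zeros((6, 6))
    A[:3, :3] = R; A[3:, 3:] = R; A[3:, :3] = skew(r) @ R
    return A

def vee(H):
    """6-vector of a (numerically) se(3)-valued 4x4 matrix."""
    return np.array([H[2, 1], H[0, 2], H[1, 0], H[0, 3], H[1, 3], H[2, 3]])

B0 = exp_se3(np.array([0.1, -0.2, 0.3, 0.4, 0.0, -0.1]))  # nominal geometry (illustrative)
U = np.array([0.0, 0.0, 1.0, 0.2, 0.0, 0.0])              # geometric variation screw
X = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.3])              # joint screw
q, pi0, eps = 0.7, 0.05, 1e-6

C = lambda p: B0 @ exp_se3(U * p) @ exp_se3(X * q)

# left-trivialized sensitivity C^{-1} dC/dpi, numerically ...
num = vee(np.linalg.inv(C(pi0)) @ (C(pi0 + eps) - C(pi0 - eps)) / (2 * eps))
# ... and in closed form: Ad_{exp(-q X)} U
closed = Ad(exp_se3(-q * X)) @ U
assert np.allclose(num, closed, atol=1e-6)
print(closed)
```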
### Time Derivatives of the EOM
The design of feedback-linearizing flatness-based controllers for robotic manipulators that are modeled as rigid body MBS actuated by elastic actuators (so-called series elastic actuators) requires the time derivatives of the inverse dynamics solution \(\mathbf{Q}\left(t\right)\) [26; 68]. That is, the first and second time derivatives of the EOM are necessary. Extensions of the classical recursive Newton-Euler inverse dynamics algorithms in body-fixed representation were presented in [19]. As can be expected, the relations are very complicated. Using the presented Lie group formulation of the inverse dynamics algorithms gives rise to a rather compact and thus fail-safe algorithm. This was presented in [63] for the body-fixed and hybrid versions.
## 7 Geometric Integration
This paper focussed on the MBS modeling in terms of relative (joint) coordinates. Alternatively, the MBS kinematics can be described in terms of absolute coordinates.
One of the issues that is being addressed when modeling MBS in terms of absolute coordinates is the _kinematic reconstruction_, i.e. the determination of the motion of a rigid body, represented by \(\mathbf{C}\left(t\right)\), from its velocity field \(\mathbf{V}\left(t\right)\). This amounts to solving one of the equations (see appendix A2 in [62])
\[\mathbf{\widehat{V}}^{\mathrm{b}}=\mathbf{C}^{-1}\dot{\mathbf{C}},\qquad \mathbf{\widehat{V}}^{\mathrm{s}}=\dot{\mathbf{C}}\mathbf{C}^{-1} \tag{88}\]
together with the NE (41) or (38), respectively. Classically, the orientation is parameterized with three parameters. The problem encountered is that there is no singularity-free global parameterization of rotations with three parameters. Instead of local parameters (position and rotation angles) the absolute configurations of the rigid bodies within the MBS can be represented by \(\mathbf{C}\left(t\right)\). Then a numerical integration step from time \(t_{k-1}\) to \(t_{k}=t_{k-1}+h\) shall determine the incremental configuration update \(\Delta\mathbf{C}_{k}=\mathbf{C}_{k-1}^{-1}\mathbf{C}_{k}\) with \(\mathbf{C}_{k}=\mathbf{C}\left(t_{k}\right)\) and \(\mathbf{C}_{k-1}=\mathbf{C}\left(t_{k-1}\right)\). The equations (88) are ODEs on the Lie group \(SE\left(3\right)\). These can be replaced by ODEs on the Lie algebra \(se\left(3\right)\). The motion increment from \(t_{k-1}\) to \(t_{k}\) is parameterized as \(\Delta\mathbf{C}\left(t\right)=\exp\mathbf{X}\left(t\right)\) with an algorithmic instantaneous screw coordinate vector \(\mathbf{X}\). Then (88) are equivalent to the ODEs on the Lie algebra
\[\mathbf{V}^{\mathrm{s}}=\mathbf{dexp}_{\mathbf{X}}\dot{\mathbf{X}},\ \ \ \ \mathbf{V}^{\mathrm{b}}=\mathbf{dexp}_{-\mathbf{X}}\dot{\mathbf{X}} \tag{89}\]
where \(\mathbf{dexp}_{\mathbf{X}}:se\left(3\right)\to se\left(3\right)\) is the right-trivialized differential of the \(\exp\) mapping on \(SE\left(3\right)\)[13; 60; 61; 71]. This is the basic idea of the class of Munthe-Kaas integration schemes [21; 37; 64]. This scheme has been adapted to MBS in absolute coordinates [82]. The advantage of these integration methods is that no global parameterization is necessary since the numerical integration is pursued in terms of the incremental parameters \(\mathbf{X}\). The ODEs (89) can be solved with any vector space integration scheme (originally the Munthe-Kaas scheme uses a Runge-Kutta method) with initial value \(\mathbf{X}\left(t_{k-1}\right)=\mathbf{0}\).
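The simplest member of this family is a first-order Lie-Euler update of (88): over one step the twist is frozen, the increment is the closed-form exponential on \(SE(3)\), and no local parameterization of the rotation is needed. A minimal sketch (the constant-twist test motion at the end is only for illustration):

```python
import numpy as np

def skew(a):
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])

def exp_se3(X):
    """Closed-form exponential of a screw X = (omega, v) in R^6 -> SE(3)."""
    w, v = X[:3], X[3:]
    th, W = np.linalg.norm(w), skew(X[:3])
    if th < 1e-12:
        R, A = np.eye(3), np.eye(3)
    else:
        R = np.eye(3) + np.sin(th)/th * W + (1 - np.cos(th))/th**2 * W @ W
        A = (np.eye(3) + (1 - np.cos(th))/th**2 * W
             + (th - np.sin(th))/th**3 * W @ W)
    C = np.eye(4)
    C[:3, :3], C[:3, 3] = R, A @ v
    return C

def lie_euler_step(C, Vb, h):
    """One step of C' = C exp(h Vb^) for the body-fixed twist Vb.
    (For the spatial twist the update is C' = exp(h Vs^) C, cf. (88).)"""
    return C @ exp_se3(h * np.asarray(Vb))

# illustrative use: constant zero-pitch screw about the z-axis through (1, 0, 0)
C = np.eye(4)
Vb = np.array([0, 0, 1.0, 0, -1.0, 0])   # omega = e_z, v = p x omega with p = (1, 0, 0)
for _ in range(100):
    C = lie_euler_step(C, Vb, 0.01)
print(np.round(C, 3))
```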
Recently the geometric integration concepts were incorporated in the generalized \(\alpha\) method [42; 18] for MBS described in absolute coordinates. In this case the representation of proper rigid body motions is crucial as discussed in [60; 59], which is frequently incorrectly represented by \(SO\left(3\right)\times\mathbb{R}^{3}\). Also momentum preserving schemes were proposed [83]. It should be mentioned that the concept of geometric integration schemes on \(SE\left(3\right)\) can be transferred to the kinematics of flexible bodies undergoing large deformations described as Cosserat continua. In this context the spatial description (referred to as fixed pole formulation) has proven to be beneficial [32]. Recent results on Lie group modeling of beams can be found in [79; 80].
## 8 Conclusions and Outlook
The computational effort of recursive \(O\left(n\right)\) algorithms, but also of the formalisms for evaluating the EOM in closed form, depends on the representation of rigid body motions and of the motions of technical joints. Since the geometry of finite rigid body and relative motions is described by the Lie group \(SE\left(3\right)\), and that of instantaneous motions be the screw algebra \(se\left(3\right)\), Lie group theory provides the geometric framework. As already shown in [62], Lie group formulations for the MBS kinematics give rise to compact recursive formulations in terms of relative coordinates. In this paper the corresponding recursive NE algorithms were presented and related to the various \(O\left(n\right)\) algorithms scattered in the literature. This allows for a comparative investigation of their efficiency in conjunction with the modeling procedure. For instance, whereas most \(O\left(n\right)\) algorithms used the hybrid representation, the spatial representation, as used by Featherstone [30] and Bottasso [13] (where it is called fixed point formulation), is receiving increased attention since it gives easily rise to structure preserving integration schemes [11; 12; 13; 32]. A conclusive investigation will be the subject of future research. Future research will also focus on combining the \(O\left(n\right)\) forward dynamics algorithm by Featherstone [30], based on NE equations using spatial representations with Naudet's algorithm [66] based on Hamilton's canonical equations in hybrid representation. The use of the spatial momentum balance shall allow for momentum preserving integration of the EOM and at the same time to reduce the number of frame transformations. A further important research topic is the derivation of structure preserving Lie group integration schemes for which the spatial formulation of EOM will be formulation of choice.
## Appendix A Summary of Basic Kinematic Relations
As prerequisite the kinematic relations derived in [62] are summarized. Denote with
\[\mathbf{C}_{i}=\left(\begin{array}{cc}\mathbf{R}_{i}&\mathbf{r}_{i}\\ \mathbf{0}&1\end{array}\right)\in SE\left(3\right) \tag{90}\]
the _absolute configuration_ of body \(i\) w.r.t. the inertial frame (IFR) \(\mathcal{F}_{0}\). This is alternatively denoted with \(C_{i}=(\mathbf{R}_{i},\mathbf{r}_{i})\). The _relative configuration_ of body \(i\) relative to body \(i-1\) is given as
\[\mathbf{C}_{i-1,i}\left(q_{i}\right)=\mathbf{S}_{i-1,i}\exp({}^{i-1}\mathbf{Z} _{i}q_{i})\mathbf{S}_{i,i}^{-1}=\mathbf{B}_{i}\exp({}^{i}\mathbf{X}_{i}q_{i}) \tag{91}\]
where \(\mathbf{B}_{i}:=\mathbf{S}_{i-1,i}\mathbf{S}_{i,i}^{-1}=\mathbf{C}_{i-1,i} \left(0\right)\) is the reference configuration of body \(i\) w.r.t. body \(i-1\), i.e. for \(q_{i}=0\), and \({}^{i-1}\mathbf{Z}_{i}\in\mathbb{R}^{6}\) is the screw coordinate vector of joint \(i\) represented in the joint frame (JFR) \(\mathcal{J}_{i-1,i}\) on body \(i-1\). Successive relative configurations can be combined to
\[\mathbf{C}_{i}\left(\mathbf{q}\right) = \mathbf{B}_{1}\exp({}^{1}\mathbf{X}_{1}q_{1})\cdot\mathbf{B}_{2} \exp({}^{2}\mathbf{X}_{2}q_{2})\cdot\ldots\cdot\mathbf{B}_{i}\exp({}^{i} \mathbf{X}_{i}q_{i}) \tag{92}\] \[= \exp(\mathbf{Y}_{1}q_{1})\cdot\exp(\mathbf{Y}_{2}q_{2})\cdot \ldots\cdot\exp(\mathbf{Y}_{i}q_{i})\mathbf{A}_{i} \tag{93}\]
where \({}^{i}\mathbf{X}_{i}\in\mathbb{R}^{6}\) is the screw coordinate vector of joint \(i\) represented in the joint frame fixed at body \(i\), \(\mathbf{Y}_{i}\in\mathbb{R}^{6}\) is the joint screw coordinate vector in spatial representation (measured and resolved in IFR) for the reference configuration \(\mathbf{q}=\mathbf{0}\), and \(\mathbf{A}_{i}=\mathbf{C}_{i}\left(\mathbf{0}\right)\) is the reference configuration of body \(i\). The two representations of joint screw coordinates are related by
\[\mathbf{Y}_{i}=\mathbf{Ad}_{\mathbf{A}_{i}}{}^{i}\mathbf{X}_{i},\ \ ^{i}\mathbf{X}_{i}=\mathbf{Ad}_{\mathbf{S}_{i,i}}{}^{i-1}\mathbf{Z}_{i} \tag{94}\]
where, in vector representation of screws, the adjoined transformation \(\mathbf{Ad}\) corresponding to \(\mathbf{C}\in SE\left(3\right)\) is given by the matrix
\[\mathbf{Ad}_{\mathbf{C}}=\left(\begin{array}{cc}\mathbf{R}&\mathbf{0}\\ \widetilde{\mathbf{r}}\mathbf{R}&\mathbf{R}\end{array}\right). \tag{95}\]
For sake of simplicity, the following notations are used
\[\mathbf{Ad}_{\mathbf{R}}=\left(\begin{array}{cc}\mathbf{R}&\mathbf{0}\\ \mathbf{0}&\mathbf{R}\end{array}\right),\ \text{for}\ \ C=\left(\mathbf{R}, \mathbf{0}\right)\ \ \ \ \ \ \ \ \mathbf{Ad}_{\mathbf{r}}=\left(\begin{array}{cc}\mathbf{I}&\mathbf{0}\\ \widetilde{\mathbf{r}}&\mathbf{I}\end{array}\right),\ \text{for}\ \ C=\left(\mathbf{I}, \mathbf{r}\right) \tag{96}\]
so that \(\mathbf{Ad}_{\mathbf{C}}=\mathbf{Ad}_{\mathbf{r}}\mathbf{Ad}_{\mathbf{R}}\).
The twist of body \(i\) in _body-fixed_ representation \(\mathbf{V}_{i}^{\text{b}}=\left(\boldsymbol{\omega}_{i}^{\text{b}},\mathbf{v}_{ i}^{\text{b}}\right)^{T}\) and in _spatial_ representation \(\mathbf{V}_{i}^{\text{s}}=\left(\boldsymbol{\omega}_{i}^{\text{s}},\mathbf{v}_{ i}^{\text{s}}\right)^{T}\) is defined by
\[\mathbf{\widehat{V}}_{i}^{\text{b}}=\left(\begin{array}{cc}\widetilde{ \boldsymbol{\omega}}_{i}^{\text{b}}&\mathbf{v}_{i}^{\text{b}}\\ \mathbf{0}&0\end{array}\right)=\mathbf{C}_{i}^{-1}\dot{\mathbf{C}}_{i},\ \mathbf{\widehat{V}}_{i}^{\text{s}}=\left(\begin{array}{cc} \widetilde{\boldsymbol{\omega}}_{i}^{\text{s}}&\mathbf{v}_{i}^{\text{s}}\\ \mathbf{0}&0\end{array}\right)=\dot{\mathbf{C}}_{i}\mathbf{C}_{i}^{-1}. \tag{97}\]
Here \(\mathbf{v}_{i}^{\text{b}}=\mathbf{R}_{i}^{T}\dot{\mathbf{r}}_{i}\) the body-fixed translational velocity, i.e. the velocity of the origin of the body-fixed reference frame (RFR) \(\mathcal{F}_{i}\) of body \(i\) measured in the IFR \(\mathcal{F}_{0}\) and resolved in \(\mathcal{F}_{i}\), whereas \(\mathbf{v}_{i}^{\text{s}}=\dot{\mathbf{r}}_{i}+\mathbf{r}_{i}\times\boldsymbol{ \omega}_{i}^{\text{s}}\) is the spatial translational velocity, i.e. the velocity of the point of the body that is momentarily passing through the origin of the IFR \(\mathcal{F}_{0}\) resolved in the IFR. The body-fixed and spatial angular velocity, \(\boldsymbol{\omega}_{i}^{\text{b}}\) and \(\boldsymbol{\omega}_{i}^{\text{s}}\), is defined by \(\widetilde{\boldsymbol{\omega}}_{i}^{\text{b}}=\mathbf{R}_{i}^{T}\dot{\mathbf{ R}}_{i}\) and \(\widetilde{\boldsymbol{\omega}}_{i}^{\text{s}}=\dot{\mathbf{R}}_{i}\mathbf{R}_{i}^{T}\), respectively. The _hybrid twist_ is defined as \(\mathbf{V}_{i}^{\text{h}}=\left(\boldsymbol{\omega}_{i}^{\text{s}},\dot{ \mathbf{r}}_{i}\right)^{T}\), and finally the _mixed twist_ as \(\mathbf{V}_{i}^{\text{m}}=\left(\boldsymbol{\omega}_{i}^{\text{b}},\dot{ \mathbf{r}}_{i}\right)^{T}\). The four representations are related as follows
\[\mathbf{V}_{i}^{\text{h}} = \left(\begin{array}{cc}\mathbf{R}_{i}&\mathbf{0}\\ \mathbf{0}&\mathbf{R}_{i}\end{array}\right)\mathbf{V}_{i}^{\text{b}}=\mathbf{Ad} _{\mathbf{R}_{i}}\mathbf{V}_{i}^{\text{b}} \tag{98}\] \[\mathbf{V}_{i}^{\text{s}} = \mathbf{Ad}_{\mathbf{C}_{i}}\mathbf{V}_{i}^{\text{b}}=\mathbf{Ad} _{\mathbf{C}_{i}}\mathbf{Ad}_{\mathbf{R}_{i}}^{-1}\mathbf{V}_{i}^{\text{h}}= \mathbf{Ad}_{\mathbf{r}_{i}}\mathbf{V}_{i}^{\text{h}}\] (99) \[\mathbf{V}_{i}^{\text{m}} = \left(\begin{array}{cc}\mathbf{I}&\mathbf{0}\\ \mathbf{0}&\mathbf{R}_{i}\end{array}\right)\mathbf{V}_{i}^{\text{b}}=\left( \begin{array}{cc}\mathbf{R}_{i}^{T}&\mathbf{0}\\ \mathbf{0}&\mathbf{I}\end{array}\right)\mathbf{V}_{i}^{\text{h}}=\left( \begin{array}{cc}\mathbf{R}_{i}^{T}&\mathbf{0}\\ -\ddot{\mathbf{r}}_{i}&\mathbf{I}\end{array}\right)\mathbf{V}_{i}^{\text{s}}. \tag{100}\]
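The relations (95)-(100) are straightforward to exercise numerically. The sketch below builds the adjoint matrices (95), (96), verifies \(\mathbf{Ad}_{\mathbf{C}}=\mathbf{Ad}_{\mathbf{r}}\mathbf{Ad}_{\mathbf{R}}\), and converts a body-fixed twist into its hybrid, spatial and mixed representations; the configuration and the twist are arbitrary test data.

```python
import numpy as np

def skew(a):
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])

def Ad(R, r):
    """Adjoint matrix (95) of C = (R, r): [[R, 0], [r~ R, R]]."""
    A = np.zeros((6, 6))
    A[:3, :3] = R; A[3:, 3:] = R; A[3:, :3] = skew(r) @ R
    return A

# arbitrary configuration: rotation about z by 0.4 rad, position r
c, s = np.cos(0.4), np.sin(0.4)
R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
r = np.array([0.3, -0.2, 0.5])

Ad_R = Ad(R, np.zeros(3))   # rotation only, (96)
Ad_r = Ad(np.eye(3), r)     # change of reference point, (96)
assert np.allclose(Ad(R, r), Ad_r @ Ad_R)          # Ad_C = Ad_r Ad_R

Vb = np.array([0.1, -0.2, 0.3, 1.0, 0.5, -0.4])    # body-fixed twist (omega, v)
Vh = Ad_R @ Vb                                     # hybrid twist, (98)
Vs = Ad_r @ Vh                                     # spatial twist, (99)
Vm = np.concatenate([Vb[:3], Vh[3:]])              # mixed twist (omega^b, rdot), (100)
assert np.allclose(Vs, Ad(R, r) @ Vb)              # (99): V^s = Ad_C V^b
print(Vh, Vs, Vm)
```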
The twist of body \(i\) within a kinematic chain is determined in terms of the generalized velocities \(\dot{\bf q}\) as
\[{\bf V}_{i}^{\rm b} =\sum_{j\leq i}{\bf J}_{i,j}^{\rm b}\dot{q}_{j}={\rm J}_{i}^{\rm b}\dot{\bf q}, \qquad {\bf V}_{i}^{\rm s}=\sum_{j\leq i}{\bf J}_{j}^{\rm s}\dot{q}_{j}={\rm J}_{i}^{\rm s}\dot{\bf q} \tag{101}\] \[{\bf V}_{i}^{\rm h} =\sum_{j\leq i}{\bf J}_{i,j}^{\rm h}\dot{q}_{j}={\rm J}_{i}^{\rm h}\dot{\bf q}, \qquad {\bf V}_{i}^{\rm m}=\sum_{j\leq i}{\bf J}_{i,j}^{\rm m}\dot{q}_{j}={\rm J}_{i}^{\rm m}\dot{\bf q} \tag{102}\]
with the Jacobian \({\rm J}_{i}^{\rm b}\) in body-fixed, \({\rm J}_{i}^{\rm s}\) in spatial, \({\rm J}_{i}^{\rm h}\) in hybrid, and \({\rm J}_{i}^{\rm m}\) in mixed representation. The \(i\)th column of the Jacobian is respectively given by
\[{\bf J}_{i,j}^{\rm b} ={\bf Ad}_{{\bf C}_{i,j}}{}^{j}{\bf X}_{j}={\bf Ad}_{{\bf C}_{i, j}{\bf A}_{j}^{-1}}{\bf Y}_{j} \tag{103}\] \[{\bf J}_{j}^{\rm s} ={\bf Ad}_{{\bf C}_{j}}{}^{j}{\bf X}_{j}={\bf Ad}_{{\bf C}_{j}{ \bf A}_{j}^{-1}}{\bf Y}_{j}\] (104) \[{\bf J}_{i,j}^{\rm h} ={\bf Ad}_{{\bf r}_{i,j}}{}^{0}{\bf X}_{j}^{j}, {\rm for}\ j\leq i. \tag{105}\]
These are the instantaneous joint screw coordinates in body-fixed, spatial, and hybrid representation. The Jacobians are related as
\[{\bf J}_{j}^{\rm s}={\bf Ad}_{{\bf r}_{i}}{\bf J}_{i,j}^{\rm h}={\bf Ad}_{{\bf C}_{i}}{\bf J}_{i,j}^{\rm b},\ \ {\bf J}_{i,j}^{\rm h}={\bf Ad}_{{\bf R}_{i}}{\bf J}_{i,j}^{\rm b}. \tag{106}\]
The representations of joint screw coordinates are related by (94) and by
\[{\bf Y}_{j}={\bf Ad}_{{\bf r}_{i}}{}^{0}{\bf X}_{j}^{j},\ \ \ {}^{0}{\bf X}_{j}^{j}={\bf Ad}_{{\bf R}_{j}}{}^{j}{\bf X}_{j} \tag{107}\]
where \({\bf r}_{i}\) is the current position of body \(i\) in \({\bf C}_{i}\). The twists admit the recursive expressions
\[{\bf V}_{i}^{\rm b} ={\bf Ad}_{{\bf C}_{i,i-1}}{\bf V}_{i-1}^{\rm b}+{}^{i}{\bf X}_{i}\dot{q}_{i} \tag{108}\] \[{\bf V}_{i}^{\rm s} ={\bf V}_{i-1}^{\rm s}+{\bf J}_{i}^{\rm s}\dot{q}_{i}\] (109) \[{\bf V}_{i}^{\rm h} ={\bf Ad}_{{\bf r}_{i,i-1}}{\bf V}_{i-1}^{\rm h}+{}^{0}{\bf X}_{i}^{i}\dot{q}_{i}. \tag{110}\]
Summarizing the twists of all bodies in \({\rm V}^{\rm b},{\sf V}^{\rm s},{\sf V}^{\rm h}\in\mathbb{R}^{6n}\), respectively, admits the expressions
\[{\sf V}^{\rm b}={\rm J}^{\rm b}\dot{\bf q},\ \ {\sf V}^{\rm s}={\rm J}^{\rm s} \dot{\bf q},\ \ {\sf V}^{\rm h}={\rm J}^{\rm h}\dot{\bf q} \tag{111}\]
in terms of the system Jacobians that admit the factorizations [62]
\[{\rm J}^{\rm b}={\sf A}^{\rm b}{\sf X}^{\rm b},\ \ {\sf J}^{\rm s}={\sf A}^{\rm s }{\sf Y}^{\rm s}={\sf A}^{\rm sb}{\sf X}^{\rm b},\ \ {\sf J}^{\rm h}={\sf A}^{\rm h}{\sf X}^{\rm h}. \tag{112}\]
This provides a compact description of the overall MBS kinematics. The explicit relations for the inverse of the matrices \({\sf A}\) is the starting point for deriving recursive forward dynamics \(O\left(n\right)\) algorithms.
## Appendix B Rigid Body Motions and the Lie Group \(\boldsymbol{SE\left(3\right)}\)
For an introduction to screws and to the motion Lie group \(SE\left(3\right)\) the reader is referred to the text books [5; 48; 65; 77].
### Derivatives of Screws
Let \(\mathbf{C}_{i}\) be time dependent. According to (95), the corresponding frame transformation of screw coordinates from \(\mathcal{F}_{i}\) to \(\mathcal{F}_{0}\) is \(\mathbf{X}\equiv{}^{0}\mathbf{X}=\mathbf{Ad}_{\mathbf{C}_{i}}{}^{i}\mathbf{X}\). Assume that the screw coordinates expressed in the body-fixed frame are constant. The rate of change of the screw coordinates expressed in the IFR is \(\frac{d}{dt}\widehat{\mathbf{X}}=\frac{d}{dt}\mathbf{Ad}_{\mathbf{C}_{i}}({}^{i}\widehat{\mathbf{X}})=\dot{\mathbf{C}}_{i}\mathbf{C}_{i}^{-1}\,\mathbf{C}_{i}{}^{i}\widehat{\mathbf{X}}\mathbf{C}_{i}^{-1}-\mathbf{C}_{i}{}^{i}\widehat{\mathbf{X}}\mathbf{C}_{i}^{-1}\,\dot{\mathbf{C}}_{i}\mathbf{C}_{i}^{-1}=\widehat{\mathbf{V}}_{i}^{\mathrm{s}}\widehat{\mathbf{X}}-\widehat{\mathbf{X}}\widehat{\mathbf{V}}_{i}^{\mathrm{s}}=[\widehat{\mathbf{V}}_{i}^{\mathrm{s}},\widehat{\mathbf{X}}]\). Therein
\[[\widehat{\mathbf{X}}_{1},\widehat{\mathbf{X}}_{2}]=\widehat{\mathbf{X}}_{1} \widehat{\mathbf{X}}_{2}-\widehat{\mathbf{X}}_{2}\widehat{\mathbf{X}}_{1}= \mathrm{ad}_{\mathbf{X}_{1}}(\mathbf{X}_{2}) \tag{113}\]
is the Lie bracket on \(se\left(3\right)\), also called the _adjoint_ mapping. In vector notation of screws, denoting a general screw vector with \(\mathbf{X}=\left(\boldsymbol{\xi},\boldsymbol{\eta}\right)^{T}\), this is
\[[\mathbf{X}_{1},\mathbf{X}_{2}]=\left(\boldsymbol{\xi}_{1}\times\boldsymbol{ \xi}_{2},\boldsymbol{\eta}_{1}\times\boldsymbol{\xi}_{2}+\boldsymbol{\xi}_{1} \times\boldsymbol{\eta}_{2}\right)^{T}=\mathbf{ad}_{\mathbf{X}_{1}}\mathbf{X} _{2} \tag{114}\]
with
\[\mathbf{ad}_{\mathbf{X}}=\left(\begin{matrix}\widetilde{\boldsymbol{\xi}}& \mathbf{0}\\ \widetilde{\boldsymbol{\eta}}&\widetilde{\boldsymbol{\xi}}\end{matrix}\right). \tag{115}\]
The form (114) is known as the screw product [14; 77]. The matrix (115) has appeared under different names, such as 'spatial cross product' in [29; 30; 39], or the 'north-east cross product' [13]. The Lie bracket obeys the Jacobi identity
\[[\mathbf{X}_{1},[\mathbf{X}_{2},\mathbf{X}_{3}]]+[\mathbf{X}_{2},[\mathbf{X}_{3},\mathbf{X}_{1}]]+[\mathbf{X}_{3},[\mathbf{X}_{1},\mathbf{X}_{2}]]=\mathbf{0}. \tag{116}\]
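A quick numerical cross-check of (113), (114) and (116): the \(4\times 4\) matrix commutator of two screws coincides with the vector formula (114), and the Jacobi identity holds (the test screws are random).

```python
import numpy as np

def skew(a):
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])

def hat(X):
    """4x4 se(3) matrix of a screw X = (xi, eta)."""
    H = np.zeros((4, 4))
    H[:3, :3] = skew(X[:3]); H[:3, 3] = X[3:]
    return H

def bracket(X, Y):
    """Screw product (114): [X, Y] = (xi1 x xi2, eta1 x xi2 + xi1 x eta2)."""
    return np.concatenate([np.cross(X[:3], Y[:3]),
                           np.cross(X[3:], Y[:3]) + np.cross(X[:3], Y[3:])])

rng = np.random.default_rng(0)
X1, X2, X3 = rng.normal(size=(3, 6))

# (113): the matrix commutator equals the screw product (114)
C = hat(X1) @ hat(X2) - hat(X2) @ hat(X1)
assert np.allclose(C, hat(bracket(X1, X2)))

# (116): Jacobi identity
J = (bracket(X1, bracket(X2, X3)) + bracket(X2, bracket(X3, X1))
     + bracket(X3, bracket(X1, X2)))
assert np.allclose(J, 0)
print("screw product and Jacobi identity verified")
```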
Allowing for time dependent body-fixed screw coordinates \({}^{i}\mathbf{X}\), the above relation gives rise to an expression for the time derivative of screw coordinates in moving frames
\[\mathbf{Ad}_{\mathbf{C}_{i}}^{-1}\dot{\mathbf{X}}={}^{i}\dot{\mathbf{X}}+[ \mathbf{V}_{i}^{\mathrm{b}},{}^{i}\mathbf{X}]. \tag{117}\]
This is the spatial extension of Euler's formula for the derivative of a vector resolved in a moving frame.
For the sake of simplicity, the following notation is used throughout the paper for the angular and translational parts of a twist
\[\grave{\mathbf{V}}=\left(\begin{matrix}\boldsymbol{\omega}\\ \mathbf{0}\end{matrix}\right),\quad\acute{\mathbf{V}}=\left(\begin{matrix}\mathbf{0}\\ \mathbf{v}\end{matrix}\right). \tag{118}\]
Then the matrices
\[\mathbf{ad}_{\boldsymbol{\omega}}=\left(\begin{matrix}\widetilde{\boldsymbol{\omega}}&\mathbf{0}\\ \mathbf{0}&\widetilde{\boldsymbol{\omega}}\end{matrix}\right),\quad\mathbf{ad}_{\mathbf{v}}=\left(\begin{matrix}\mathbf{0}&\mathbf{0}\\ \widetilde{\mathbf{v}}&\mathbf{0}\end{matrix}\right) \tag{119}\]
are used to denote the matrix (115) for the twists \(\grave{\mathbf{V}}\) and \(\acute{\mathbf{V}}\), respectively; these are the infinitesimal versions of (96).
### Wrenches as Co-Screws - \(se^{\ast}\left(3\right)\)
Screws are the geometric objects embodying twists, wrenches, and momenta of rigid bodies. These different physical meanings imply different mathematical interpretations of the geometric object. A wrench, defined by a force and moment, is denoted with \(\mathbf{W}=\left(\mathbf{t},\mathbf{f}\right)^{T}\). The force applied at a point with position vector \(\mathbf{p}\) generates the moment \(\mathbf{t}=\mathbf{p}\times\mathbf{f}\). The dual to Chasles theorem is the Poisson theorem stating that every system of forces can be reduced to a force together with a couple with moment parallel to the force.
Geometrically a screw is determined by the Plucker coordinates of the line along the screw axis and the pitch. If \(\mathbf{e}\) is the unit vector along the screw axis, and \(\mathbf{p}\) is a position vector of a point on that axis, the screw coordinate vector of a twist is \(\mathbf{V}=\left(\boldsymbol{\omega},\mathbf{v}\right)^{T}=\omega\left( \mathbf{e},\mathbf{p}\times\mathbf{e}+h\mathbf{e}\right)^{T}\), where \(\omega=\|\boldsymbol{\omega}\|\) is its magnitude, and \(h=\mathbf{v}^{T}\boldsymbol{\omega}/\omega^{2}\) is its pitch. The screw coordinate vector of a wrench, i.e. the force \(\mathbf{f}\) producing a torque \(\mathbf{t}\) about the axis \(\mathbf{e}\) when the point of application is displaced according
to \(\mathbf{p}\) from the axis, is \(\mathbf{W}=\left(\mathbf{t},\mathbf{f}\right)^{T}=f\left(\mathbf{p}\times\mathbf{ e}+h\mathbf{e},\mathbf{e}\right)^{T}\), with pitch \(h=\mathbf{t}^{T}\mathbf{f}/\left\|\mathbf{f}\right\|^{2}\). Apparently the linear and angular components of the screw coordinates are interchanged for twists and wrenches. The different definition of screw coordinate vectors allows to describe the action of a wrench on a twist as the scalar product: \(\mathbf{W}^{T}\mathbf{V}\) is the power performed by the wrench acting on twist \(\mathbf{V}\).
A twist \({}^{2}\mathbf{V}\) represented in frame \(\mathcal{F}_{2}\) transforms to its representation in frame \(\mathcal{F}_{1}\) according to \({}^{1}\mathbf{V}=\mathbf{A}\mathbf{d}_{\mathbf{S}_{1,2}}{}^{2}\mathbf{V}\). The power conservation yields that a wrench represented in \(\mathcal{F}_{1}\) transforms to its representation in \(\mathcal{F}_{2}\) according to
\[{}^{2}\mathbf{W}=\mathbf{A}\mathbf{d}_{\mathbf{S}_{1,2}}^{T}{}^{1}\mathbf{W}. \tag{120}\]
While this notation is useful for kinetostatic formulations, it is inconsistent in the sense that it treats screw coordinates differently for twists and wrenches. In screw theory, aiming on a consistent treatment of screw entities, a screw is represented by its coordinates as defined by (67) in [62] and the so-called _reciprocal product_ of two screws is used [5; 14; 77]. The latter is defined for \(\mathbf{X}_{1}=(\boldsymbol{\xi}_{1},\boldsymbol{\eta}_{1})^{T}\) and \(\mathbf{X}_{2}=(\boldsymbol{\xi}_{2},\boldsymbol{\eta}_{2})^{T}\) as \(\mathbf{X}_{1}\odot\mathbf{X}_{2}=\boldsymbol{\xi}_{1}^{T}\boldsymbol{\eta}_{2 }+\boldsymbol{\eta}_{1}^{T}\boldsymbol{\xi}_{2}\). Two screws are said to be _reciprocal_ if \(\mathbf{X}_{1}\odot\mathbf{X}_{2}=0\). Obviously, if twists and wrenches are represented consistently with the same definition of screw coordinates, a reciprocal twist and wrench screws means that they perform no work. Geometrically, for zero pitch screws, this means that the screw axes intersect.
In screw theory wrench screws are called co-screws to distinguish them from motion screws and to indicate that a wrench acts on a motion screw (a twist) as a linear operator that returns work or power. As twists form the Lie algebra \(se\left(3\right)\), wrenches form the dual \(se^{*}\left(3\right)\).
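The different placement of the angular and linear parts in twist and wrench coordinates, the pitch, and the vanishing product for reciprocal screws can be illustrated directly; the axis, point and magnitudes below are arbitrary test data.

```python
import numpy as np

e = np.array([0.0, 0.0, 1.0])     # unit direction of the screw axis
p = np.array([1.0, 2.0, 0.0])     # point on the axis

omega, h_t = 2.0, 0.3             # twist magnitude and pitch
V = omega * np.concatenate([e, np.cross(p, e) + h_t * e])   # twist (omega, v)

f, h_w = 5.0, 0.0                 # wrench magnitude and pitch
W = f * np.concatenate([np.cross(p, e) + h_w * e, e])       # wrench (t, f)

# pitch recovered from the twist coordinates: h = v . omega / |omega|^2
assert np.isclose(V[3:] @ V[:3] / omega**2, h_t)

# power performed by the wrench acting on the twist
print(W @ V)     # = omega * f * (h_t + h_w) here, since both screws share the axis

# two zero-pitch screws about the same axis are reciprocal (no work is done):
V0 = np.concatenate([e, np.cross(p, e)])
W0 = np.concatenate([np.cross(p, e), e])
assert np.isclose(W0 @ V0, 0.0)
```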
## Appendix C Nomenclature
* \(\mathcal{F}_{0}\) - IFR
* \(\mathcal{F}_{i}\) - BFR of body \(i\)
* \(\mathcal{J}_{i,i}\) - JFR for joint \(i\) at body \(i\); joint \(i\) connects body \(i\) with its predecessor body \(i-1\)
* \(\mathcal{J}_{i-1,i}\) - JFR for joint \(i\) at body \(i-1\)
* \({}^{i}\mathbf{r}\) - Coordinate representation of a vector resolved in the BFR on body \(i\); the index is omitted if this is the IFR: \(\mathbf{r}\equiv{}^{0}\mathbf{r}\)
* \(\mathbf{R}_{i}\) - Rotation matrix from BFR \(\mathcal{F}_{i}\) at body \(i\) to IFR \(\mathcal{F}_{0}\)
* \(\mathbf{R}_{i,j}\) - Rotation matrix transforming coordinates resolved in BFR \(\mathcal{F}_{j}\) to coordinates resolved in \(\mathcal{F}_{i}\)
* \(\mathbf{r}_{i}\) - Position vector of the origin of BFR \(\mathcal{F}_{i}\) at body \(i\) resolved in IFR \(\mathcal{F}_{0}\)
* \(\mathbf{r}_{i,j}\) - Position vector from the origin of BFR \(\mathcal{F}_{i}\) to the origin of BFR \(\mathcal{F}_{j}\)
* \(\widetilde{\mathbf{x}}\) - Skew-symmetric matrix associated with the vector \(\mathbf{x}\in\mathbb{R}^{3}\)
* \(C_{i}=\left(\mathbf{R}_{i},\mathbf{r}_{i}\right)\) - Absolute configuration of body \(i\); denoted in matrix form with \(\mathbf{C}_{i}\)
* \(\mathbf{C}_{i,j}=\mathbf{C}_{i}^{-1}\mathbf{C}_{j}\) - Relative configuration of body \(j\) w.r.t. body \(i\)
* \({}^{k}\mathbf{v}_{i}^{j}\) - Translational velocity of body \(i\) measured at the origin of BFR \(\mathcal{F}_{j}\), resolved in BFR \(\mathcal{F}_{k}\)
* \(\mathbf{v}_{i}^{\mathrm{b}}\equiv{}^{i}\mathbf{v}_{i}^{i}\) - Body-fixed representation of the translational velocity of body \(i\)
* \({}^{k}\boldsymbol{\omega}_{i}\) - Angular velocity of body \(i\) measured and resolved in BFR \(\mathcal{F}_{k}\)
* \(\boldsymbol{\omega}_{i}^{\mathrm{b}}\equiv{}^{i}\boldsymbol{\omega}_{i}\) - Body-fixed representation of the angular velocity of body \(i\)
* \(\boldsymbol{\omega}_{i}^{\mathrm{s}}\equiv{}^{0}\boldsymbol{\omega}_{i}\) - Spatial representation of the angular velocity of body \(i\)
* \({}^{k}\mathbf{V}_{i}^{j}\) - Twist of (the RFR of) body \(i\) measured in \(\mathcal{F}_{j}\) and resolved in \(\mathcal{F}_{k}\)
* \(\mathbf{V}_{i}^{\mathrm{b}}\equiv{}^{i}\mathbf{V}_{i}^{i}\) - Body-fixed representation of the twist of body \(i\)
* \(\mathbf{V}_{i}^{\mathrm{s}}\equiv{}^{0}\mathbf{V}_{i}^{0}\) - Spatial representation of the twist of body \(i\)
* \(\mathbf{V}_{i}^{\mathrm{h}}={}^{0}\mathbf{V}_{i}^{i}\) - Hybrid form of the twist of body \(i\)
* \(\mathsf{V}^{\mathrm{b}}\) - Vector of system twists in body-fixed representation
* \(\mathsf{V}^{\mathrm{s}}\) - Vector of system twists in spatial representation
* \(\mathsf{V}^{\mathrm{h}}\) - Vector of system twists in hybrid representation
* \(\mathsf{V}^{\mathrm{m}}\) - Vector of system twists in mixed representation
* \(\mathbf{W}_{i}^{\mathrm{b}}\) - Applied wrench at body \(i\) in body-fixed representation
* \(\mathbf{W}_{i}^{\mathrm{s}}\) - Applied wrench at body \(i\) in spatial representation
* \(\mathbf{W}_{i}^{\mathrm{h}}\) - Applied wrench at body \(i\) in hybrid representation
* \(\mathbf{M}_{i}^{\mathrm{b}}\) - Inertia matrix of body \(i\) in body-fixed representation
* \(\mathbf{M}_{i}^{\mathrm{s}}\) - Inertia matrix of body \(i\) in spatial representation
* \(\mathbf{M}_{i}^{\mathrm{h}}\) - Inertia matrix of body \(i\) in hybrid representation
* \(\mathbf{Ad}_{\mathbf{R}}\) - Screw transformation associated with \(C=\left(\mathbf{R},\mathbf{0}\right)\)
* \(\mathbf{Ad}_{\mathbf{r}}\) - Screw transformation associated with \(C=\left(\mathbf{I},\mathbf{r}\right)\)
* \(\mathbf{Ad}_{\mathbf{C}_{i,j}}\) - Transformation matrix transforming screw coordinates represented in \(\mathcal{F}_{j}\) to screw coordinates represented in \(\mathcal{F}_{i}\)
* \(\mathbf{ad}_{\mathbf{X}}\) - Screw product matrix associated with the screw coordinate vector \(\mathbf{X}\in\mathbb{R}^{6}\)
* \(\left[\mathbf{X},\mathbf{Y}\right]\) - Lie bracket of screw coordinate vectors \(\mathbf{X},\mathbf{Y}\in\mathbb{R}^{6}\); it holds \(\left[\mathbf{X},\mathbf{Y}\right]=\mathbf{ad}_{\mathbf{X}}\mathbf{Y}\)
* \(\widehat{\mathbf{X}}\in se\left(3\right)\) - \(4\times 4\) matrix associated with the screw coordinate vector \(\mathbf{X}\in\mathbb{R}^{6}\)
* \(SE\left(3\right)\) - Special Euclidean group in three dimensions, the Lie group of rigid body motions
* \(se\left(3\right)\) - Lie algebra of \(SE\left(3\right)\), the algebra of screws
* \(\mathbf{q}\in\mathbb{V}^{n}\) - Joint coordinate vector
* \(\mathbb{V}^{n}\) - Configuration space
## Acknowledgement
The author acknowledges that this work has been partially supported by the Austrian COMET-K2 program of the Linz Center of Mechatronics (LCM). |
2301.12894 | F-transforms determined by overlap and grouping maps over a complete
lattice | This paper is about the study of F-transforms based on overlap and grouping
maps, residual and co-residual implicator over complete lattice from both
constructive and axiomatic approaches. Further, the duality, basic properties,
and the inverse of proposed F-transforms have been studied, and axiomatic
characterizations of proposed direct F-transforms are investigated. | Abha Tripathi, S. P. Tiwari, Sutapa Mahato | 2022-12-16T14:44:50Z | http://arxiv.org/abs/2301.12894v1 | # \(F\)-transforms determined by overlap and grouping maps over a complete lattice
###### Abstract
This paper is about the study of \(F\)-transforms based on overlap and grouping maps, residual and co-residual implicator over complete lattice from both constructive and axiomatic approaches. Further, the duality, basic properties, and the inverse of proposed \(F\)-transforms have been studied, and axiomatic characterizations of proposed direct \(F\)-transforms are investigated.
**Keywords:** Complete lattice; Overlap map; Grouping map; Direct \(F\)-transforms; \(L\)-fuzzy transformation systems.
## 1 Introduction
The theory of fuzzy transform (\(F\)-transform) was firstly introduced by Perfilieva [22], a notion that piqued the curiosity of many researchers. It has now been greatly expanded upon, and a new chapter in the notion of semi-linear spaces has been opened. The fundamental idea of the \(F\)-transform is to factorize (or fuzzify) the precise values of independent variables by using a proximity relationship, and to average the precise values of dependent variables to an approximation value (cf., [22, 23]), from fuzzy sets to parametrized fuzzy sets [31] and from the single variable to the two (or more variables) (cf., [2, 3, 4, 32]). Recently, several studies have begun to look into \(F\)-transforms based on an arbitrary \(L\)-fuzzy partition of an arbitrary universe (cf., [11, 14, 15, 17, 18, 19, 20, 25, 26, 35]), where \(L\) is a complete residuated lattice. Among these researches, the concept of a general transformation operator determined by a monadic relation was introduced in [14], the links between \(F\)-transforms and semimodule homomorphisms were examined in [17], while the connections between \(F\)-transforms and similarity relations were discussed in [20]. Further, a fascinating relationship of \(L\)-fuzzy topologies/co-topologies and \(L\)-fuzzy approximation operators (all of which are ideas employed in
the study of an operator-oriented perspective of rough set theory) with \(F\)-transforms was also discovered in [25], while the connection of \(L^{M}\)-valued \(F\)-transforms with \(L^{M}\)-valued fuzzy approximation operators and \(ML\)-graded topologies/co-topologies was discussed in [35]. Also, the concept of \(F\)-transforms and \(L\)-fuzzy pretopologies were examined in [26]. In which it has been shown that weaker closure and interior operators, called after Cech, may also be expressed by using \(F\)-transforms, implying that \(L\)-valued \(F\)-transforms could be utilized in parallel with closure and interior operators as their canonical representation. Also, classes of \(F\)-transforms taking into account three well-known classes of implicators, namely \(R-,S-,QL-\) implicators were discussed in [34]. Several studies in the subject of \(F\)-transforms applications have been conducted, e.g., trend-cycle estimation [7], data compression [8], numerical solution of partial differential equations [10], scheduling [13], time series [21], data analysis [24], denoising [27], face recognition [30], neural network approaches [33] and trading [36].
### Motivation of our research
In contrast to the usual fuzzy logical connectives \(t\)-norm and \(t\)-conorms, the overlap and grouping maps can also be regarded as a new structure of classical logic's intersection and union operations on the unit interval. Even though these maps are closely linked to \(t\)-norm and \(t\)-conorm, they do not have any nontrivial zero divisors. Recently, several researchers have examined the construction technique and properties of overlap and grouping maps over complete lattices and conducted extensive research. Qiao presented the concepts of overlap and grouping maps over complete lattices in [29] and provided two construction techniques. In [37], complete homomorphisms and complete \(0_{L},1_{L}\)-endomorphisms were used to examine the construction techniques of overlap and grouping maps over complete lattices. Further, the ordinal sums of overlap and grouping maps were discussed in [38]. Also, the overlap and grouping maps have been used in various aspects of practical application problems such as in image processing [9], classification [5], and decision-making [1] problems. Specifically, these maps have more advantages than \(t\)-norm and \(t\)-conorm in dealing with some real issues. It seems that using the ideas of the overlap and grouping maps in \(F\)-transform may further open some new areas of application. Accordingly, the study of the theory of \(F\)-transform using the ideas of such maps is a theme of this paper.
### Main contributions
In this work, we present the theory of \(F\)-transforms based on overlap and grouping maps, residual and co-residual implicators over complete lattices. Interestingly, under certain conditions, the \(F\)-transforms introduced in [22, 25, 34] are special cases of proposed \(F\)-transforms. Further, we study \(F\)-transforms from constructive and axiomatic approaches based on the above logic operations over complete lattices. The main findings are summarized below:
* we discuss the duality of the proposed direct \(F\)-transforms and investigate their basic properties;
* we introduce the inverse of the proposed \(F\)-transforms and discuss some basic properties; and
* we show a close connection between proposed \(F\)-transforms and \(L\)-fuzzy transformation systems and discuss the duality of \(L\)-fuzzy transformation systems.
The remainder of this paper is arranged in the following manner. In Section 2, we recall some key concepts that will be used throughout the main sections. We introduce and examine various classes of direct \(F\)-transforms determined by overlap and grouping maps over the complete lattice in Section 3. In Section 4, we introduce the inverse of the proposed direct \(F\)-transforms. In the next section, we characterize proposed direct \(F\)-transforms from the axiomatic approach.
## 2 Preliminaries
Herein, we recall the basic ideas related to complete lattices, overlap and grouping maps, \(L\)-fuzzy sets from [6, 12, 28, 29, 37, 38]. Throughout this paper, a complete lattice with the smallest element \(0\) and the largest element \(1\) is denoted by \(L\equiv(L,\vee,\wedge,0,1)\). We start with the following.
**Definition 2.1**: _Let \(X\) be a nonempty set. Then an \(L\)_**-fuzzy set** _in \(X\) is a map \(f:X\to L\)._
The family of all \(L\)-fuzzy sets in \(X\) is denoted by \(L^{X}\). For all \(u\in L\), the **constant \(L\)-fuzzy set** \(\textbf{u}\in L^{X}\) is given by \(\textbf{u}(x)=u,\,x\in X\). Also, the **core** of an \(L\)-fuzzy set \(f\) is the crisp set \(core(f)=\{x\in X:f(x)=1\}\). If \(core(f)\neq\emptyset\), then \(f\) is called a **normal \(L\)-fuzzy set**. For \(A\subseteq X\), the **characteristic map** of \(A\) is a map \(1_{A}:X\rightarrow\{0,1\}\) such that
\[1_{A}(x)=\begin{cases}1&\text{ if }x\in A,\\ 0&\text{ otherwise.}\end{cases}\]
In the following, we recall and introduce the some basic concepts.
**Definition 2.2**: _An_ **overlap map** _on \(L\) is a map \(\theta:L\times L\to L\) such that for all \(u,v\in L,\{u_{i}:i\in J\},\{v_{i}:i\in J\}\subseteq L\)_
* \(\theta(u,v)=\theta(v,u)\)_,_
* \(\theta(u,v)=0\) _iff_ \(u=0\) _or_ \(v=0\)_,_
* \(\theta(u,v)=1\) _iff_ \(u=1\) _and_ \(v=1\)_,_
* \(\theta(u,v)\leq\theta(u,w)\) _if_ \(v\leq w\)_, and_
* \(\theta(u,\bigvee_{i\in J}v_{i})=\bigvee_{i\in J}\theta(u,v_{i}),\theta(\bigwedge _{i\in J}u_{i},v)=\bigwedge_{i\in J}\theta(u_{i},v)\)_._
If \(\theta(1,u)=u,\,\forall\,u\in L\), we say that \(1\) is a neutral element of \(\theta\). Also, an overlap map is called
1. **deflation** if \(\theta(1,u)\leq u,\,\forall u\in L\),
2. **inflation** if \(u\leq\theta(1,u),\,\forall u\in L\), and
3. \(EP\)**-overlap map** if \(\theta(u,\theta(v,w))=\theta(v,\theta(u,w)),\,\forall\,u,v,w\in L\).
**Example 2.1**: _(i) Every continuous \(t\)-norm \(\mathcal{T}\) with no nontrivial zero divisors is an overlap map, (ii) \(\theta_{M}(u,v)=u\wedge v,\,\forall\,u,v\in L\) on a frame with the prime element \(0\) is an overlap map._
**Definition 2.3**: \(A\) **grouping map** _on \(L\) is a map \(\eta:L\times L\to L\) such that for all \(u,v\in L,\{u_{i}:i\in J\},\{v_{i}:i\in J\}\subseteq L\)_
1. \(\eta(u,v)=\eta(v,u)\)_,_
2. \(\eta(u,v)=0\) _iff_ \(u=0\) _and_ \(v=0\)_,_
3. \(\eta(u,v)=1\) _iff_ \(u=1\) _or_ \(v=1\)_,_
4. \(\eta(u,v)\leq\eta(u,w)\) _if_ \(v\leq w\)_, and_
5. \(\eta(u,\bigvee\limits_{i\in J}v_{i})=\bigvee\limits_{i\in J}\eta(u,v_{i}),\eta (\bigwedge\limits_{i\in J}u_{i},v)=\bigwedge\limits_{i\in J}\eta(u_{i},v)\)_._
If \(\eta(0,u)=u,\,\forall\,u\in L\), we say that \(0\) is a neutral element of \(\eta\). Also, a grouping map is called
1. **deflation** if \(\eta(0,u)\geq u,\,\forall u\in L\),
2. **inflation** if \(u\geq\eta(0,u),\,\forall u\in L\), and
3. \(EP\)**-grouping map** if \(\eta(u,\eta(v,w))=\eta(v,\eta(u,w)),\,\forall\,u,v,w\in L\).
**Example 2.2**: _(i) Every continuous \(t\)-conorm \(\mathcal{S}\) with no nontrivial zero divisors is a grouping map, (ii) \(\eta_{M}(u,v)=u\lor v,\,\forall\,u,v\in L\) on a frame with the prime element \(0\) is a grouping map._
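Purely as an illustration (not part of the original text), the prototypical maps \(\theta_{M}\) and \(\eta_{M}\) from Examples 2.1 and 2.2 can be checked against the symmetry and boundary axioms numerically on \(L=[0,1]\); the grid size and helper names below are our own choices.

```python
# Sketch: theta_M = min and eta_M = max on L = [0, 1], checked against the
# symmetry and boundary axioms of overlap / grouping maps on a finite grid.
import itertools

def theta_M(u, v):   # candidate overlap map: pointwise minimum
    return min(u, v)

def eta_M(u, v):     # candidate grouping map: pointwise maximum
    return max(u, v)

grid = [i / 10 for i in range(11)]
for u, v in itertools.product(grid, repeat=2):
    assert theta_M(u, v) == theta_M(v, u) and eta_M(u, v) == eta_M(v, u)
    assert (theta_M(u, v) == 0) == (u == 0 or v == 0)    # zero condition for theta
    assert (theta_M(u, v) == 1) == (u == 1 and v == 1)   # one condition for theta
    assert (eta_M(u, v) == 0) == (u == 0 and v == 0)     # zero condition for eta
    assert (eta_M(u, v) == 1) == (u == 1 or v == 1)      # one condition for eta

print("theta_M and eta_M satisfy the symmetry and boundary checks on the grid")
```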
**Definition 2.4**: \(A\) **negator** _on \(L\) is a decreasing map \(\mathbf{N}:L\to L\) such that \(\mathbf{N}(0)=1\) and \(\mathbf{N}(1)=0\)._
A negator \(\mathbf{N}\) is called **involutive** (strong), if \(\mathbf{N}(\mathbf{N}(u))=u,\,\forall\,u\in L\). In addition, a negator \(\mathbf{N}\) is called **strict**, if \(\mathbf{N}\) is strictly decreasing and continuous, i.e., involutive (as every involutive negator is strictly decreasing and continuous).
The negator \(\mathbf{N}_{S}(u)=1-u\) on \(L=[0,1]\) is usually regarded as the standard negator. For a given negator \(\mathbf{N}\), an overlap map \(\theta\) and a grouping map \(\eta\) are dual with respect to \(\mathbf{N}\) if \(\eta(\mathbf{N}(u),\mathbf{N}(v))=\mathbf{N}(\theta(u,v)),\theta(\mathbf{N}(u ),\mathbf{N}(v))=\mathbf{N}(\eta(u,v)),\,\forall\,u,v\in L\).
**Definition 2.5**: _Let \(\mathbf{N}\) be a negator, \(\theta\) be an overlap map and \(\eta\) be a grouping map. Then_
1. _the_ **residual implicator** _induced by an overlap map_ \(\theta\) _is a map_ \(\mathcal{I}_{\theta}:L\times L\to L\) _such that_ \(\mathcal{I}_{\theta}(u,v)=\bigvee\{w\in L:\theta(u,w)\leq v\},\,\forall\,u,v\in L\)_, and_
2. _the_ **co-residual implicator** _induced by a grouping map_ \(\eta\) _is a map_ \(\mathcal{I}_{\eta}:L\times L\to L\) _such that_ \(\mathcal{I}_{\eta}(u,v)=\bigwedge\{w\in L:\eta(u,w)\geq v\},\,\forall\,u,v\in L\)_._
**Example 2.3**: _Let \(L=[0,1],\theta=\theta_{M},\eta=\eta_{M}\). Then for all \(u,v\in L\)_
1. _the residual implicator_ \(\mathcal{I}_{\theta_{M}}\) _is given as_ \(\mathcal{I}_{\theta_{M}}(u,v)=\begin{cases}1&\text{ if }u\leq v,\\ v&\text{ otherwise},\,and\end{cases}\)__
2. _the co-residual implicator_ \(\mathcal{I}_{\eta_{M}}\) _is given as_ \(\mathcal{I}_{\eta_{M}}(u,v)=\begin{cases}0&\text{ if }u\geq v,\\ v&\text{ otherwise}.\end{cases}\)
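For readers who want to verify the closed forms in Example 2.3, the following small sketch (ours, not from the source) computes the supremum and infimum of Definition 2.5 by brute force on a discretized unit interval and compares them with the formulas above.

```python
# Sketch: brute-force residual / co-residual implicators for theta_M = min and
# eta_M = max on a discretization of L = [0, 1], compared with Example 2.3.
grid = [i / 10 for i in range(11)]

def I_theta(u, v):
    # residual implicator: sup{ w : min(u, w) <= v }
    return max(w for w in grid if min(u, w) <= v)

def I_eta(u, v):
    # co-residual implicator: inf{ w : max(u, w) >= v }
    return min(w for w in grid if max(u, w) >= v)

for u in grid:
    for v in grid:
        assert I_theta(u, v) == (1.0 if u <= v else v)   # Example 2.3(i)
        assert I_eta(u, v) == (0.0 if u >= v else v)     # Example 2.3(ii)

print("brute-force residuals agree with the closed forms on the grid")
```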
**Lemma 2.1**: _Let \(\theta\) and \(\eta\) be overlap and grouping maps, respectively. Then \(\theta\) and \(\mathcal{I}_{\theta}\), \(\eta\) and \(\mathcal{I}_{\eta}\) form two adjoint pairs, respectively, i.e., for all \(u,v,w\in L,\,\theta(u,v)\leq w\Leftrightarrow u\leq\mathcal{I}_{\theta}(v,w), \,\eta(u,v)\geq w\Leftrightarrow u\geq\mathcal{I}_{\eta}(v,w)\), respectively._
**Lemma 2.2**: _Let \(\theta\) be an overlap map. Then for all \(u,v,w\in L\)_
1. \(\mathcal{I}_{\theta}(0,0)=\mathcal{I}_{\theta}(1,1)=1,\mathcal{I}_{\theta}(1, 0)=0\)_,_
2. \(\mathcal{I}_{\theta}(u,w)\geq\mathcal{I}_{\theta}(v,w),\,\mathcal{I}_{\theta} (w,u)\leq\mathcal{I}_{\theta}(w,v)\) _if_ \(u\leq v\)_,_
3. \(\mathcal{I}_{\theta}\) _is an_ \(OP\)_,_ \(NP\)_-residual implicator, i.e.,_ \(u\leq v\Leftrightarrow\mathcal{I}_{\theta}(u,v)=1,\mathcal{I}_{\theta}(1,u)=u\)_, respectively iff_ \(1\) _is a neutral element of_ \(\theta\)_,_
4. \(\mathcal{I}_{\theta}\) _is an_ \(IP\)_-residual implicator, i.e.,_ \(\mathcal{I}_{\theta}(u,u)=1\) _iff_ \(\theta\) _is a deflation overlap map,_
5. \(\mathcal{I}_{\theta}\) _is an_ \(EP\)_-residual implicator, i.e.,_ \(\mathcal{I}_{\theta}(u,\mathcal{I}_{\theta}(v,w))=\mathcal{I}_{\theta}(v, \mathcal{I}_{\theta}(u,w))\) _iff_ \(\theta\) _is an_ \(EP\)_-overlap map._
**Lemma 2.3**: _Let \(\theta\) be an overlap map. Then for all \(u,v,w\in L,\{u_{i}:i\in J\},\{v_{i}:i\in J\}\subseteq L\)_
1. \(\theta(u,\mathcal{I}_{\theta}(u,v))\leq v,\mathcal{I}_{\theta}(u,\theta(u,v))\geq v,\mathcal{I}_{\theta}(\theta(u,v),0)=\mathcal{I}_{\theta}(u,\mathcal{I}_{\theta}(v,0))\)_,_
2. \(\mathcal{I}_{\theta}(u,\bigwedge\limits_{i\in J}v_{i})=\bigwedge\limits_{i\in J }\mathcal{I}_{\theta}(u,v_{i}),\mathcal{I}_{\theta}(\bigvee\limits_{i\in J}u_ {i},v)=\bigwedge\limits_{i\in J}\mathcal{I}_{\theta}(u_{i},v)\)_,_
3. \(\mathcal{I}_{\theta}(u,\bigvee\limits_{i\in J}v_{i})\geq\bigvee\limits_{i\in J }\mathcal{I}_{\theta}(u,v_{i})\)_,_
4. \(\theta\) _is an_ \(EP\)_-overlap map iff_ \(\mathcal{I}_{\theta}(\theta(u,v),w)=\mathcal{I}_{\theta}(u,\mathcal{I}_{\theta} (v,w))\)_._
If \(\theta\) and \(\eta\) are dual with respect to an involutive negator \(\mathbf{N}\), then \(\mathcal{I}_{\theta}\) and \(\mathcal{I}_{\eta}\) are dual with respect to the involutive negator \(\mathbf{N}\), i.e., \(\mathcal{I}_{\eta}(\mathbf{N}(u),\mathbf{N}(v))=\mathbf{N}(\mathcal{I}_{ \theta}(u,v)),\mathcal{I}_{\theta}(\mathbf{N}(u),\mathbf{N}(v))\)\(=\mathbf{N}(\mathcal{I}_{\eta}(u,v)),\,\forall\,u,v\in L\). Then we have the following dual properties of \(\mathcal{I}_{\eta}\) by the properties of \(\mathcal{I}_{\theta}\) as follows:
1. \(\mathcal{I}_{\eta}(0,0)=\mathcal{I}_{\eta}(1,1)=0,\mathcal{I}_{\eta}(0,1)=1\),
2. \(\mathcal{I}_{\eta}(u,w)\geq\mathcal{I}_{\eta}(v,w)\), \(\mathcal{I}_{\eta}(w,u)\leq\mathcal{I}_{\eta}(w,v)\) if \(u\leq v\),
3. \(\mathcal{I}_{\eta}\) is \(OP\) and \(NP\)-co-residual implicator, i.e., \(u\geq v\Leftrightarrow\mathcal{I}_{\eta}(u,v)=0\) and \(\mathcal{I}_{\eta}(0,u)=u\), respectively iff \(0\) is a neutral element of \(\eta\),
4. \(\mathcal{I}_{\eta}\) is an \(IP\)-co-residual implicator, i.e., \(\mathcal{I}_{\eta}(u,u)=0\) iff \(\eta\) is a deflation grouping map,
5. \(\mathcal{I}_{\eta}\) is an \(EP\)-co-residual implicator, i.e., \(\mathcal{I}_{\eta}(u,\mathcal{I}_{\eta}(v,w))=\mathcal{I}_{\eta}(v,\mathcal{I }_{\eta}(u,w))\) iff \(\eta\) is an \(EP\)-grouping map,
6. \(\eta(u,\mathcal{I}_{\eta}(u,v))\geq v,\mathcal{I}_{\eta}(u,\eta(u,v))\leq v, \mathcal{I}_{\eta}(\eta(u,v),1)=\mathcal{I}_{\eta}(u,\mathcal{I}_{\eta}(v,1))\),
7. \(\mathcal{I}_{\eta}(u,\bigvee\limits_{i\in J}v_{i})=\bigvee\limits_{i\in J} \mathcal{I}_{\eta}(u,v_{i}),\mathcal{I}_{\eta}(\bigwedge\limits_{i\in J}u_{i}, v)=\bigvee\limits_{i\in J}\mathcal{I}_{\eta}(u_{i},v)\),
8. \(\mathcal{I}_{\eta}(u,\bigwedge\limits_{i\in J}v_{i})\leq\bigwedge\limits_{i\in J }\mathcal{I}_{\eta}(u,v_{i})\),
9. \(\eta\) is an \(EP\)-grouping map iff \(\mathcal{I}_{\eta}(\eta(u,v),w)=\mathcal{I}_{\eta}(u,\mathcal{I}_{\eta}(v,w))\).
For any \(\mathcal{I}_{\theta}\) and \(\mathcal{I}_{\eta}\), \(\mathbf{N}_{\mathcal{I}_{\theta}}(u)=\mathcal{I}_{\theta}(u,0)\) and \(\mathbf{N}_{\mathcal{I}_{\eta}}(u)=\mathcal{I}_{\eta}(u,1),\forall\,u\in L\) are called the negators induced by \(\mathcal{I}_{\theta}\) and \(\mathcal{I}_{\eta}\), respectively. Next, we introduce the following notations which are going to be used in subsequent sections.
Given an overlap map \(\theta\), a grouping map \(\eta\), a residual implicator \(\mathcal{I}_{\theta}\), a co-residual implicator \(\mathcal{I}_{\eta}\), a negator \(\mathbf{N}\), and \(L\)-fuzzy sets \(f,g\in L^{X}\), we define \(L\)-fuzzy sets \(\theta(f,g),\eta(f,g),\mathcal{I}_{\theta}(f,g),\mathcal{I}_{\eta}(f,g)\) and \(\mathbf{N}(f)\) as follows:
\[\theta(f,g)(x) = \theta(f(x),g(x)),\forall\,x\in X,\] \[\eta(f,g)(x) = \eta(f(x),g(x)),\forall\,x\in X,\] \[\mathcal{I}_{\theta}(f,g)(x) = \mathcal{I}_{\theta}(f(x),g(x)),\forall\,x\in X,\] \[\mathcal{I}_{\eta}(f,g)(x) = \mathcal{I}_{\eta}(f(x),g(x)),\forall\,x\in X,\text{ and}\] \[(\mathbf{N}(f))(x) = \mathbf{N}(f(x)),\forall\,x\in X.\]
## 3 Direct \(F\)-transforms
Herein, we consider that \(\theta\) and \(\eta\) are overlap and grouping maps, and these are dual with respect to an involutive negator \(\mathbf{N}\). Also, \(\mathcal{I}_{\theta}\) and \(\mathcal{I}_{\eta}\) are residual and co-residual implicators induced by \(\theta\) and \(\eta\), respectively, introduced as in Section 2. The main content of this section is to present the concepts of the direct \(F\)-transforms of \(L\)-fuzzy sets with respect to the above logic operations. Further, we study and investigate their relationships and discuss their basic properties. We start with the definition of \(L\)-fuzzy partition from [25].
**Definition 3.1**: _A collection \(\mathcal{P}\) of normal \(L\)-fuzzy sets \(\{A_{j}:j\in J\}\) is called an \(L\)-fuzzy partition of a nonempty set \(X\) if the corresponding collection of ordinary sets \(\{core(A_{j}):j\in J\}\) is a partition of \(X\). The pair \((X,\mathcal{P})\) is called a_ **space with \(L\)-fuzzy partition**_._
For an \(L\)-fuzzy partition \(\mathcal{P}=\{A_{j}:j\in J\}\), it is possible to associate the onto index map \(k:X\to J\) such that \(k(x)=j\) iff \(x\in core(A_{j})\).
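To make Definition 3.1 concrete, here is a small sketch (our own, with made-up membership degrees) of an \(L\)-fuzzy partition of a three-element set for \(L=[0,1]\), together with the core check and the index map \(k\).

```python
# Sketch: an L-fuzzy partition of X = {x1, x2, x3} with L = [0, 1] (values are made up),
# the core check of Definition 3.1, and the index map k(x) = j iff x in core(A_j).
X = ["x1", "x2", "x3"]

# membership degrees A_j(x); each A_j is normal (attains 1 somewhere)
P = {
    1: {"x1": 1.0, "x2": 0.4, "x3": 0.2},
    2: {"x1": 0.3, "x2": 1.0, "x3": 0.5},
    3: {"x1": 0.1, "x2": 0.6, "x3": 1.0},
}

cores = {j: {x for x in X if A[x] == 1.0} for j, A in P.items()}

# the cores must be nonempty, cover X and be pairwise disjoint
assert all(cores[j] for j in P)
assert set().union(*cores.values()) == set(X)
assert sum(len(c) for c in cores.values()) == len(X)

k = {x: j for j, c in cores.items() for x in c}   # index map
print(cores)   # {1: {'x1'}, 2: {'x2'}, 3: {'x3'}}
print(k)       # {'x1': 1, 'x2': 2, 'x3': 3}
```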
The following is towards the direct \(F\)-transforms computed with \(\theta\), \(\eta\), \(\mathcal{I}_{\theta}\) and \(\mathcal{I}_{\eta}\), where \(\mathcal{I}_{\theta}\) and \(\mathcal{I}_{\eta}\) are residual and co-residual implicators induced by overlap and grouping maps \(\theta,\eta\), respectively. Now, we begin with the following.
**Definition 3.2**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of a set \(X\) and \(f\in L^{X}\). Then_
1. _the_ **(direct \(\theta\)-upper)**__\(F^{\uparrow,\theta}\)**-transform** _of_ \(f\) _computed with an overlap map_ \(\theta\) _over the_ \(L\)_-fuzzy partition_ \(\mathcal{P}\) _is a collection of lattice elements_ \(\{F^{\uparrow,\theta}_{j}[f]:j\in J\}\) _and the_ \(j^{th}\) _component of (direct_ \(\theta\)_-upper)_ \(F^{\uparrow,\theta}\)_-transform is given by_ \[F^{\uparrow,\theta}_{j}[f]=\bigvee_{x\in X}\theta(A_{j}(x),f(x)),\]
2. _the_ **(direct \(\eta\)-lower)**__\(F^{\downarrow,\eta}\)**-transform** _of_ \(f\) _computed with a grouping map_ \(\eta\) _over the_ \(L\)_-fuzzy partition_ \(\mathcal{P}\) _is a collection of lattice elements_ \(\{F^{\downarrow,\eta}_{j}[f]:j\in J\}\) _and the_ \(j^{th}\) _component of (direct_ \(\eta\)_-lower)_ \(F^{\downarrow,\eta}\)_-transform is given by_ \[F^{\downarrow,\eta}_{j}[f]=\bigwedge_{x\in X}\eta(\mathbf{N}(A_{j}(x)),f(x)),\]
3. _the_ **(direct \(\mathcal{I}_{\eta}\)-upper)**__\(F^{\uparrow,\mathcal{I}_{\eta}}\)**-transform** _of_ \(f\) _computed with a co-residual implicator_ \(\mathcal{I}_{\eta}\) _induced by a grouping map_ \(\eta\) _over the_ \(L\)_-fuzzy partition_ \(\mathcal{P}\) _is a collection of lattice elements_ \(\{F^{\uparrow,\mathcal{I}_{\eta}}_{j}[f]:j\in J\}\) _and the_ \(j^{th}\) _component of (direct_ \(\mathcal{I}_{\eta}\)_-upper)_ \(F^{\uparrow,\mathcal{I}_{\eta}}\)_-transform is given by_ \[F^{\uparrow,\mathcal{I}_{\eta}}_{j}[f]=\bigvee_{x\in X}\mathcal{I}_{\eta}( \mathbf{N}(A_{j}(x)),f(x)),\,and\]
4. _the_ **(direct \(\mathcal{I}_{\theta}\)-lower)**__\(F^{\downarrow,\mathcal{I}_{\theta}}\)**-transform** _of_ \(f\) _computed with a residual implicator_ \(\mathcal{I}_{\theta}\) _induced by an overlap map_ \(\theta\) _over the_ \(L\)_-fuzzy partition_ \(\mathcal{P}\) _is a collection of lattice elements_ \(\{F^{\downarrow,\mathcal{I}_{\theta}}_{j}[f]:j\in J\}\) _and the_ \(j^{th}\) _component of (direct_ \(\mathcal{I}_{\theta}\)_-lower)_ \(F^{\downarrow,\mathcal{I}_{\theta}}\)_-transform is given by_ \[F^{\downarrow,\mathcal{I}_{\theta}}_{j}[f]=\bigwedge_{x\in X}\mathcal{I}_{\theta }(A_{j}(x),f(x)).\]
The direct upper \(F\)-transform computed with a \(t\)-norm and the direct lower \(F\)-transform computed with an \(R\)-implicator proposed in [22, 25, 34] are special cases of the \(F^{\uparrow,\theta}\) and \(F^{\downarrow,\mathcal{I}_{\theta}}\)-transforms, respectively. Also, the direct lower \(F\)-transform computed with an \(S\)-implicator proposed in [34] is a special case of the \(F^{\downarrow,\eta}\)-transform. Among the above-introduced direct \(F\)-transforms, the \(F^{\uparrow,\mathcal{I}_{\eta}}\)-transform is new.
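As a purely illustrative sketch (data, parameter choices and helper names are ours), the four direct \(F\)-transforms of Definition 3.2 can be computed on \(L=[0,1]\) with \(\theta=\min\), \(\eta=\max\), the standard negator \(\mathbf{N}(u)=1-u\) and the implicators of Example 2.3:

```python
# Sketch: the four direct F-transforms of Definition 3.2 on L = [0, 1]
# with theta = min, eta = max, N(u) = 1 - u and the implicators of Example 2.3.
X = ["x1", "x2", "x3"]
P = {1: {"x1": 1.0, "x2": 0.4, "x3": 0.2},
     2: {"x1": 0.3, "x2": 1.0, "x3": 0.5},
     3: {"x1": 0.1, "x2": 0.6, "x3": 1.0}}
f = {"x1": 0.7, "x2": 0.2, "x3": 0.9}            # an L-fuzzy set to transform

N       = lambda u: 1.0 - u
theta   = min
eta     = max
I_theta = lambda u, v: 1.0 if u <= v else v       # residual of min
I_eta   = lambda u, v: 0.0 if u >= v else v       # co-residual of max

def F_up_theta(j):    return max(theta(P[j][x], f[x]) for x in X)
def F_down_eta(j):    return min(eta(N(P[j][x]), f[x]) for x in X)
def F_up_Ieta(j):     return max(I_eta(N(P[j][x]), f[x]) for x in X)
def F_down_Itheta(j): return min(I_theta(P[j][x], f[x]) for x in X)

for j in P:
    print(j, F_up_theta(j), F_down_eta(j), F_up_Ieta(j), F_down_Itheta(j))
```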
**Example 3.1**: _Let \(L=\{0,p,q,r,s,t,u,1\}\) be a complete lattice such that \(0<p<r<t<u<1,0<p<q<s<u<1\) and \(\{q,r\}\) and \(\{s,t\}\) are pairwise incomparable (Figure 1). Then \((X,\mathcal{P})\) is a space with an \(L\)-fuzzy partition \(\mathcal{P}\), where \(X=\{x_{1},x_{2},x_{3}\}\) and \(\mathcal{P}=\{A_{1},A_{2},A_{3}\}\) such that \(A_{1}=\frac{1}{x_{1}}+\frac{p}{x_{2}}+\frac{q}{x_{3}}\), \(A_{2}=\frac{s}{x_{1}}+\frac{1}{x_{2}}+\frac{u}{x_{3}}\), \(A_{3}=\frac{s}{x_{1}}+\frac{p}{x_{2}}+\frac{1}{x_{3}}\). Further, let \(f\in L^{X}\) such that \(f=\frac{p}{x_{1}}+\frac{q}{x_{2}}+\frac{u}{x_{3}}\) and \(\mathbf{N}\) be an involutive negator such that \(\mathbf{N}(0)=1,\mathbf{N}(p)=u,\mathbf{N}(q)=t,\mathbf{N}(r)=s,\mathbf{N}(s)=r,\mathbf{N}(t)=q,\mathbf{N}(u)=p,\mathbf{N}(1)=0\). Then the direct \(F\)-transforms with respect to \(\theta_{M},\eta_{M},\mathcal{I}_{\eta_{M}},\mathcal{I}_{\theta_{M}}\) are \(F^{\uparrow,\theta_{M}}[f]=\{F^{\uparrow,\theta_{M}}_{1}[f]=q,F^{\uparrow,\theta_{M}}_{2}[f]=u,F^{\uparrow,\theta_{M}}_{3}[f]=u\}\), \(F^{\downarrow,\eta_{M}}[f]=\{F^{\downarrow,\eta_{M}}_{1}[f]=p,F^{\downarrow,\eta_{M}}_{2}[f]=r,F^{\downarrow,\eta_{M}}_{3}[f]=r\}\), \(F^{\uparrow,\mathcal{I}_{\eta_{M}}}[f]=\{F^{\uparrow,\mathcal{I}_{\eta_{M}}}_{1}[f]=u,F^{\uparrow,\mathcal{I}_{\eta_{M}}}_{2}[f]=r,F^{\uparrow,\mathcal{I}_{\eta_{M}}}_{3}[f]=u\}\), \(F^{\downarrow,\mathcal{I}_{\theta_{M}}}[f]=\{F^{\downarrow,\mathcal{I}_{\theta_{M}}}_{1}[f]=p,F^{\downarrow,\mathcal{I}_{\theta_{M}}}_{2}[f]=p,F^{\downarrow,\mathcal{I}_{\theta_{M}}}_{3}[f]=p\}\)._
**Remark 3.1**: _(i) If \(L=[0,1]\), \(\mathbf{N}=\mathbf{N}_{S},\theta=\theta_{M},\eta=\eta_{M},\mathcal{I}_{\eta}= \mathcal{I}_{\eta_{M}}\) and \(\mathcal{I}_{\theta}=\mathcal{I}_{\theta_{M}}\), then the \(j^{th}\) components of \(F^{\uparrow,\theta},F^{\downarrow,\eta},F^{\uparrow,\mathcal{I}_{\eta}}\) and \(F^{\downarrow,\mathcal{I}_{\theta}}\)-transforms become as follows:_
\[F^{\uparrow,\theta_{M}}_{j}[f] = \bigvee_{x\in X}(A_{j}(x)\wedge f(x)),\] \[F^{\downarrow,\eta_{M}}_{j}[f] = \bigwedge_{x\in X}((1-A_{j}(x))\lor f(x)),\] \[F^{\uparrow,\mathcal{I}_{\eta_{M}}}_{j}[f] = \bigvee_{x\in X}\mathcal{I}_{\eta_{M}}((1-A_{j}(x)),f(x)),\,\text{ and}\] \[F^{\downarrow,\mathcal{I}_{\theta_{M}}}_{j}[f] = \bigwedge_{x\in X}\mathcal{I}_{\theta_{M}}(A_{j}(x),f(x)),\, \forall\,j\in J,f\in L^{X}.\]
_Obviously \(F^{\uparrow,\theta_{M}}\) and \(F^{\downarrow,\mathcal{I}_{\theta_{M}}}\)-transforms coincide with the special cases of direct upper and lower \(F\)-transforms proposed in [22, 25, 34], respectively. Also, \(F^{\downarrow,\eta_{M}}\)-transform coincides with the special case of the direct lower \(F\)-transform proposed
Figure 1: Diagram for lattice \(L\)
_in [34]._
_(ii) If_ \(L=[0,1],\theta=\theta_{M},\eta=\eta_{M},\mathcal{I}_{\eta}=\mathcal{I}_{\eta_{M}}\) _and_ \(\mathcal{I}_{\theta}=\mathcal{I}_{\theta_{M}}\)_, then the_ \(j^{th}\) _components of_ \(F^{\uparrow,\theta},F^{\downarrow,\eta},F^{\uparrow,\mathcal{I}_{\eta}}\) _and_ \(F^{\downarrow,\mathcal{I}_{\theta}}\)_-transforms become as follows:_
\[F^{\uparrow,\theta_{M}}_{j}[f] = \bigvee_{x\in X}(A_{j}(x)\wedge f(x)),\] \[F^{\downarrow,\eta_{M}}_{j}[f] = \bigwedge_{x\in X}(\mathbf{N}(A_{j}(x))\lor f(x)),\] \[F^{\uparrow,\mathcal{I}_{\eta_{M}}}_{j}[f] = \bigvee_{x\in X}\mathcal{I}_{\eta_{M}}(\mathbf{N}(A_{j}(x)),f(x )),\,\text{and}\] \[F^{\downarrow,\mathcal{I}_{\theta_{M}}}_{j}[f] = \bigwedge_{x\in X}\mathcal{I}_{\theta_{M}}(A_{j}(x),f(x)),\, \forall\,j\in J,f\in L^{X}.\]
_Obviously_ \(F^{\uparrow,\theta_{M}}\) _and_ \(F^{\downarrow,\mathcal{I}_{\theta_{M}}}\)_-transforms coincide with the special cases of direct upper and lower_ \(F\)_-transforms proposed in_ _[_22, 25, 34_]__, respectively. Also,_ \(F^{\downarrow,\eta_{M}}\)_-transform coincides with the special case of the direct lower_ \(F\)_-transform proposed in_ _[_34_]__._
_(iii) If_ \(L=[0,1],\theta=\mathcal{T}\) _and_ \(\eta=\mathcal{S}\)_, where_ \(\mathcal{T},\mathcal{S}\) _are continuous_ \(t\)_-norm,_ \(t\)_-conorm with no nontrivial zero divisors, respectively, then the_ \(j^{th}\) _components of_ \(F^{\uparrow,\theta},F^{\downarrow,\eta},F^{\uparrow,\mathcal{I}_{\eta}}\) _and_ \(F^{\downarrow,\mathcal{I}_{\theta}}\)_-transforms become as follows:_
\[F^{\uparrow,\mathcal{T}}_{j}[f] = \bigvee_{x\in X}\mathcal{T}(A_{j}(x),f(x)),\] \[F^{\downarrow,\mathcal{S}}_{j}[f] = \bigwedge_{x\in X}\mathcal{S}(\mathbf{N}(A_{j}(x)),f(x)),\] \[F^{\uparrow,\mathcal{I}_{S}}_{j}[f] = \bigvee_{x\in X}\mathcal{I}_{\mathcal{S}}(\mathbf{N}(A_{j}(x)),f( x)),\,\text{and}\] \[F^{\downarrow,\mathcal{I}_{\mathcal{T}}}_{j}[f] = \bigwedge_{x\in X}\mathcal{I}_{\mathcal{T}}(A_{j}(x),f(x)),\, \forall\,j\in J,f\in L^{X}.\]
_Obviously_ \(F^{\uparrow,\mathcal{T}}\) _and_ \(F^{\downarrow,\mathcal{I}_{\mathcal{T}}}\)_-transforms coincide with the direct upper and lower_ \(F\)_-transforms computed with_ \(t\)_-norm and_ \(R\)_-implicator proposed in_ _[_22, 25, 34_]__, respectively. Also,_ \(F^{\downarrow,\mathcal{S}}\)_-transform coincide with the direct lower_ \(F\)_-transform computed with an_ \(S\)_-implicator proposed in_ _[_34_]__, respectively._
From the above, it is clear that several existing direct \(F\)-transforms are special cases of the proposed direct \(F\)-transforms: some of them coincide with the proposed direct \(F\)-transforms, while others arise as their special cases. That is to say, the proposed direct \(F\)-transforms are more general than the existing ones.
**Proposition 3.1**: _Let \(\theta\) and \(\eta\) be dual with respect to an involutive negator \(\mathbf{N}\). Then for all \(j\in J,f\in L^{X}\)_
* \(F_{j}^{\uparrow,\theta}[f]=\mathbf{N}(F_{j}^{\downarrow,\eta}[\mathbf{N}(f)])\)_, i.e.,_ \(\mathbf{N}(F_{j}^{\uparrow,\theta}[f])=F_{j}^{\downarrow,\eta}[\mathbf{N}(f)]\)_, and_
* \(F_{j}^{\downarrow,\eta}[f]=\mathbf{N}(F_{j}^{\uparrow,\theta}[\mathbf{N}(f)])\)_, i.e,_ \(\mathbf{N}(F_{j}^{\downarrow,\eta}[f])=F_{j}^{\uparrow,\theta}[\mathbf{N}(f)]\)_._
**Proof:** (i) Let \(j\in J\) and \(f\in L^{X}\). Then from Definition 3.2
\[\mathbf{N}(F_{j}^{\downarrow,\eta}[\mathbf{N}(f)]) = \mathbf{N}(\bigwedge_{x\in X}\eta(\mathbf{N}(A_{j}(x)),(\mathbf{N }(f))(x)))\] \[= \mathbf{N}(\bigwedge_{x\in X}\eta(\mathbf{N}(A_{j}(x)),\mathbf{N }(f(x))))\] \[= \bigvee_{x\in X}\mathbf{N}(\eta(\mathbf{N}(A_{j}(x)),\mathbf{N}( f(x))))\] \[= \bigvee_{x\in X}\theta(A_{j}(x),f(x))\] \[= F_{j}^{\uparrow,\theta}[f].\]
Thus \(F_{j}^{\uparrow,\theta}[f]=\mathbf{N}(F_{j}^{\downarrow,\eta}[\mathbf{N}(f)])\), or that \(\mathbf{N}(F_{j}^{\uparrow,\theta}[f])=F_{j}^{\downarrow,\eta}[\mathbf{N}(f)]\). Similarly, we can show that \(F_{j}^{\downarrow,\eta}[f]=\mathbf{N}(F_{j}^{\uparrow,\theta}[\mathbf{N}(f)])\), or that, \(\mathbf{N}(F_{j}^{\downarrow,\eta}[f])=F_{j}^{\uparrow,\theta}[\mathbf{N}(f)]\).
**Proposition 3.2**: _Let \(\mathcal{I}_{\theta}\) and \(\mathcal{I}_{\eta}\) be dual with respect to an involutive negator \(\mathbf{N}\). Then for all \(j\in J,f\in L^{X}\)_
* \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]=\mathbf{N}(F_{j}^{\downarrow,\mathcal{I }_{\theta}}[\mathbf{N}(f)])\)_, i.e,_ \(\mathbf{N}(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f])=F_{j}^{\downarrow,\mathcal{I }_{\theta}}[\mathbf{N}(f)]\)_, and_
* \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]=\mathbf{N}(F_{j}^{\uparrow,\mathcal{ I}_{\eta}}[\mathbf{N}(f)])\)_, i.e.,_ \(\mathbf{N}(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f])=F_{j}^{\uparrow, \mathcal{I}_{\eta}}[\mathbf{N}(f)]\)_._
**Proof:** (i) Let \(j\in J\) and \(f\in L^{X}\). Then from Definition 3.2
\[\mathbf{N}(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[\mathbf{N}(f)]) = \mathbf{N}(\bigwedge_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),( \mathbf{N}(f))(x)))\] \[= \mathbf{N}(\bigwedge_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),\mathbf{ N}(f(x))))\] \[= \bigvee_{x\in X}\mathbf{N}(\mathcal{I}_{\theta}(A_{j}(x),\mathbf{ N}(f(x))))\] \[= \bigvee_{x\in X}\mathbf{N}(\mathcal{I}_{\theta}(\mathbf{N}( \mathbf{N}(A_{j}(x))),\mathbf{N}(f(x))))\] \[= \bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),f(x))\] \[= F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f].\]
Thus \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]=\mathbf{N}(F_{j}^{\downarrow,\mathcal{ I}_{\theta}}[\mathbf{N}(f)])\), or that \(\mathbf{N}(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f])=F_{j}^{\downarrow,\mathcal{ I}_{\theta}}[\mathbf{N}(f)]\). Similarly, we can prove that \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]=\mathbf{N}(F_{j}^{\uparrow,\mathcal{ I}_{\eta}}[\mathbf{N}(f)])\), or that, \(\mathbf{N}(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f])=F_{j}^{\uparrow,\mathcal{ I}_{\eta}}[\mathbf{N}(f)]\).
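Before moving on, the dualities just proved are easy to confirm numerically; the following self-contained check (our own construction) uses \(\theta=\min\) and \(\eta=\max\), which are dual with respect to the standard negator \(\mathbf{N}(u)=1-u\), and compares both sides up to floating-point rounding.

```python
# Sketch: numerical check of the dualities of Propositions 3.1 and 3.2 on L = [0, 1]
# with theta = min, eta = max (dual w.r.t. N(u) = 1 - u) and their (co-)residual implicators.
import math

X = ["x1", "x2", "x3"]
P = {1: {"x1": 1.0, "x2": 0.4, "x3": 0.2},
     2: {"x1": 0.3, "x2": 1.0, "x3": 0.5},
     3: {"x1": 0.1, "x2": 0.6, "x3": 1.0}}
f  = {"x1": 0.7, "x2": 0.2, "x3": 0.9}
Nf = {x: 1.0 - f[x] for x in X}

N       = lambda u: 1.0 - u
I_theta = lambda u, v: 1.0 if u <= v else v
I_eta   = lambda u, v: 0.0 if u >= v else v

F_up_theta    = lambda j, g: max(min(P[j][x], g[x]) for x in X)
F_down_eta    = lambda j, g: min(max(N(P[j][x]), g[x]) for x in X)
F_up_Ieta     = lambda j, g: max(I_eta(N(P[j][x]), g[x]) for x in X)
F_down_Itheta = lambda j, g: min(I_theta(P[j][x], g[x]) for x in X)

for j in P:
    # Proposition 3.1: F_up_theta[f] = N(F_down_eta[N(f)])
    assert math.isclose(F_up_theta(j, f), 1.0 - F_down_eta(j, Nf))
    # Proposition 3.2: F_up_Ieta[f] = N(F_down_Itheta[N(f)])
    assert math.isclose(F_up_Ieta(j, f), 1.0 - F_down_Itheta(j, Nf))

print("dualities of Propositions 3.1 and 3.2 hold for this example")
```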
The above two propositions show that \(F_{j}^{\uparrow,\theta}\) and \(F_{j}^{\downarrow,\eta}\), as well as \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}\) and \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}\), are dual with respect to \(\mathbf{N}\). In general, \(F_{j}^{\uparrow,\theta}\) and \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}\), and \(F_{j}^{\downarrow,\eta}\) and \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}\), are not dual with respect to \(\mathbf{N}\), but they satisfy the following result. For this, we assume \(\bigwedge\limits_{v\in L}\mathcal{I}_{\theta}(\mathcal{I}_{\theta}(u,v),v)=u\), \(\forall\,u\in L\).
**Proposition 3.3**: _Let \(\mathbf{N}\) be an involutive negator, \(\theta\) and \(\eta\) be \(EP\)-overlap and \(EP\)-grouping maps, respectively. Then for \(j\in J,u\in L,\boldsymbol{u},f\in L^{X}\)_
1. \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]=\bigwedge\limits_{u\in L}\mathcal{I }_{\theta}(F_{j}^{\uparrow,\theta}[\mathcal{I}_{\theta}(f,\boldsymbol{u})],u)\)_,_ \(F_{j}^{\uparrow,\theta}[f]=\bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(F_{j}^{ \downarrow,\mathcal{I}_{\theta}}[\mathcal{I}_{\theta}(f,\boldsymbol{u})],u)\)_, and_
2. \(F_{j}^{\downarrow,\eta}[f]=\bigvee\limits_{u\in L}\mathcal{I}_{\eta}(F_{j}^{ \uparrow,\mathcal{I}_{\eta}}[\mathcal{I}_{\eta}(f,\boldsymbol{u})],u)\)_,_ \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]=\bigvee\limits_{u\in L}\mathcal{I}_{\eta}(F_{j }^{\downarrow,\eta}[\mathcal{I}_{\eta}(f,\boldsymbol{u})],u)\)_._
**Proof:** Let \(u\in L\) and \(f\in L^{X}\). Then from Definition 3.2
\[\bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(F_{j}^{\uparrow, \theta}[\mathcal{I}_{\theta}(f,\boldsymbol{u})],u) = \bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(\bigvee\limits_{x\in X }\theta(A_{j}(x),\mathcal{I}_{\theta}(f,\boldsymbol{u})(x)),u)\] \[= \bigwedge\limits_{u\in L}\bigwedge\limits_{x\in X}\mathcal{I}_{ \theta}(\theta(A_{j}(x),\mathcal{I}_{\theta}(f(x),u)),u)\] \[= \bigwedge\limits_{u\in L}\bigwedge\limits_{x\in X}\mathcal{I}_{ \theta}(A_{j}(x),\mathcal{I}_{\theta}(\mathcal{I}_{\theta}(f(x),u),u))\] \[= \bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),\bigwedge \limits_{u\in L}\mathcal{I}_{\theta}(\mathcal{I}_{\theta}(f(x),u),u))\] \[= \bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),f(x))\] \[= F^{\downarrow,\mathcal{I}_{\theta}}[f].\]
Thus \(F^{\downarrow,\mathcal{I}_{\theta}}[f]=\bigwedge\limits_{u\in L}\mathcal{I}_{ \theta}(F_{j}^{\uparrow,\theta}[\mathcal{I}_{\theta}(f,\boldsymbol{u})],u)\) and
\[\bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[\mathcal{I}_{\theta}(f,\boldsymbol{u})],u) = \bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(\bigwedge\limits_{ x\in X}\mathcal{I}_{\theta}(A_{j}(x),\mathcal{I}_{\theta}(f,\boldsymbol{u})(x)),u)\] \[= \bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(\bigwedge\limits_{ x\in X}\mathcal{I}_{\theta}(A_{j}(x),\mathcal{I}_{\theta}(f(x),u)),u)\] \[= \bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(\bigwedge\limits_{ x\in X}\mathcal{I}_{\theta}(\theta(A_{j}(x),f(x)),u),u)\] \[= \bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(\mathcal{I}_{\theta} (\bigvee\limits_{x\in X}\theta(A_{j}(x),f(x)),u),u)\] \[= \bigvee\limits_{x\in X}\theta(A_{j}(x),f(x))\] \[= F^{\uparrow,\theta}[f].\]
Thus \(F^{\uparrow,\theta}[f]=\bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(F_{j}^{ \downarrow,\mathcal{I}_{\theta}}[\mathcal{I}_{\theta}(f,\boldsymbol{u})],u)\).
(ii) Let \(u\in L\) and \(f\in L^{X}\). Then from Definition 3.2 and Propositions 3.1 and 3.2
\[F^{\downarrow,\eta}[f] = {\bf N}(F^{\uparrow,\theta}[{\bf N}(f)])\] \[= {\bf N}(\bigwedge_{u\in L}{\cal I}_{\theta}(F^{\downarrow,{\cal I} _{\theta}}_{j}[{\cal I}_{\theta}({\bf N}(f),{\bf u})],u))\] \[= {\bf N}(\bigwedge_{u\in L}{\cal I}_{\theta}(\bigwedge_{x\in X}{ \cal I}_{\theta}(A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x)),u))\] \[= \bigvee_{u\in L}{\bf N}({\cal I}_{\theta}(\bigwedge_{x\in X}{\cal I }_{\theta}(A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x)),u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(\mathbf{N}(\bigwedge_{x\in X}{ \cal I}_{\theta}(A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x))),{\bf N}(u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(\bigvee_{x\in X}{\bf N}({\cal I} _{\theta}(A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x))),{\bf N}(u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(\bigvee_{x\in X}{\cal I}_{\eta}( {\bf N}(A_{j}(x)),{\bf N}({\cal I}_{\theta}({\bf N}(f),{\bf u})(x))),{\bf N}(u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(\bigvee_{x\in X}{\cal I}_{\eta}( {\bf N}(A_{j}(x)),{\cal I}_{\eta}(f,{\bf N}({\bf u}))(x)),{\bf N}(u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(F^{\uparrow,{\cal I}_{\eta}}[{ \cal I}_{\eta}(f,{\bf N}({\bf u}))],{\bf N}(u)).\]
Thus \(F^{\downarrow,\eta}[f]=\bigvee_{u\in L}{\cal I}_{\eta}(F^{\uparrow,{\cal I}_{ \eta}}[{\cal I}_{\eta}(f,{\bf N}({\bf u}))],{\bf N}(u))\) and
\[F^{\downarrow,{\cal I}_{\eta}}[f] = {\bf N}(F^{\downarrow,{\cal I}_{\theta}}[{\bf N}(f)])\] \[= {\bf N}(\bigwedge_{u\in L}{\cal I}_{\theta}(F^{\uparrow,\theta}_{ j}[{\cal I}_{\theta}({\bf N}(f),{\bf u})],u))\] \[= {\bf N}(\bigwedge_{u\in L}{\cal I}_{\theta}(\bigvee_{x\in X}\theta (A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x)),u))\] \[= \bigvee_{u\in L}{\cal N}({\cal I}_{\theta}(\bigvee_{x\in X}\theta (A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x)),u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(\mathbf{N}(\bigvee_{x\in X}\theta (A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x))),{\bf N}(u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(\bigwedge_{x\in X}\mathbf{N}( \theta(A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x)),{\bf N}(u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(\bigwedge_{x\in X}\eta(\mathbf{N }(A_{j}(x)),{\cal I}_{\eta}(f,{\bf N}({\bf u}))(x)),{\bf N}(u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(F^{\downarrow,\eta}[{\cal I}_{ \eta}(f,{\bf N}({\bf u}))],{\bf N}(u)).\]
Thus \(F^{\uparrow,\mathcal{I}_{\eta}}[f]=\bigvee_{u\in L}\mathcal{I}_{\eta}(F^{\downarrow,\eta}[\mathcal{I}_{\eta}(f,\mathbf{N}(\mathbf{u}))],\mathbf{N}(u))\).
From the above three results, we have the following result, which presents the connections between \(F^{\uparrow,\theta}\) and \(F^{\uparrow,\mathcal{I}_{\eta}}\), and between \(F^{\downarrow,\eta}\) and \(F^{\downarrow,\mathcal{I}_{\theta}}\).
**Proposition 3.4**: _Let \(\mathbf{N}\) be an involutive negator. Then for \(j\in J,u\in L,\boldsymbol{u},f\in L^{X}\)_
1. \(F^{\uparrow,\theta}_{j}[f]=\bigwedge_{u\in L}\mathcal{I}_{\theta}(\mathbf{N}(F^ {\uparrow,\mathcal{I}_{\eta}}_{j}[\mathcal{I}_{\eta}(\mathbf{N}(f),\mathbf{N} (\boldsymbol{u}))]),u)\)_,_
2. \(F^{\uparrow,\mathcal{I}_{\eta}}_{j}[f]=\bigvee_{u\in L}\mathcal{I}_{\eta}( \mathbf{N}(F^{\uparrow,\theta}_{j}[\mathcal{I}_{\eta}(\mathbf{N}(f),\mathbf{N }(\boldsymbol{u}))]),u)\)_,_
3. \(F^{\downarrow,\eta}_{j}[f]=\bigvee_{u\in L}\mathcal{I}_{\eta}(\mathbf{N}(F^{ \downarrow,\mathcal{I}_{\theta}}_{j}[\mathcal{I}_{\eta}(\mathbf{N}(f), \mathbf{N}(\boldsymbol{u}))]),u)\)_, and_
4. \(F^{\downarrow,\mathcal{I}_{\theta}}_{j}[f]=\bigwedge_{u\in L}\mathcal{I}_{ \theta}(\mathbf{N}(F^{\downarrow,\eta}_{j}[\mathcal{I}_{\eta}(\mathbf{N}(f), \mathbf{N}(\boldsymbol{u}))]),u)\)_._
**Proof:** Propositions 3.1, 3.2 and 3.3 lead to this proof.
The following results establish the duality of \(F^{\uparrow,\theta}_{j}\) and \(F^{\downarrow,\mathcal{I}_{\theta}}_{j}\), and of \(F^{\downarrow,\eta}_{j}\) and \(F^{\uparrow,\mathcal{I}_{\eta}}_{j}\), with respect to the involutive negators \(\mathbf{N}_{\mathcal{I}_{\theta}}\) and \(\mathbf{N}_{\mathcal{I}_{\eta}}\), respectively.
**Proposition 3.5**: _Let \(\mathbf{N}_{\mathcal{I}_{\theta}}\) be an involutive negator such that \(\mathbf{N}_{\mathcal{I}_{\theta}}(.)=\mathcal{I}_{\theta}(.,0)\). Then for all \(j\in J,f\in L^{X}\)_
1. \(F^{\uparrow,\theta}_{j}[f]=\mathbf{N}_{\mathcal{I}_{\theta}}(F^{\downarrow, \mathcal{I}_{\theta}}_{j}[\mathbf{N}_{\mathcal{I}_{\theta}}(f)])\)_, i.e.,_ \(\mathbf{N}_{\mathcal{I}_{\theta}}(F^{\uparrow,\theta}_{j}[f])=F^{\downarrow, \mathcal{I}_{\theta}}_{j}[\mathbf{N}_{\mathcal{I}_{\theta}}(f)]\)_, and_
2. \(F^{\downarrow,\mathcal{I}_{\theta}}_{j}[f]=\mathbf{N}_{\mathcal{I}_{\theta}}(F^ {\uparrow,\theta}_{j}[\mathbf{N}_{\mathcal{I}_{\theta}}(f)])\)_, i.e,_ \(\mathbf{N}_{\mathcal{I}_{\theta}}(F^{\downarrow,\mathcal{I}_{\theta}}_{j}[f])=F^ {\uparrow,\theta}_{j}[\mathbf{N}_{\mathcal{I}_{\theta}}(f)]\)_._
**Proof:** (i) Let \(j\in J\) and \(f\in L^{X}\). Then from Definition 3.2
\[\mathbf{N}_{\mathcal{I}_{\theta}}(F^{\downarrow,\mathcal{I}_{ \theta}}_{j}[\mathbf{N}_{\mathcal{I}_{\theta}}(f)]) = \mathbf{N}_{\mathcal{I}_{\theta}}(\bigwedge_{x\in X}\mathcal{I}_ {\theta}(A_{j}(x),(\mathbf{N}_{\mathcal{I}_{\theta}}(f))(x)))\] \[= \mathcal{I}_{\theta}(\bigwedge_{x\in X}\mathcal{I}_{\theta}(A_{j} (x),\mathbf{N}_{\mathcal{I}_{\theta}}(f(x))),0)\] \[= \mathcal{I}_{\theta}(\bigwedge_{x\in X}\mathcal{I}_{\theta}(A_{j} (x),\mathcal{I}_{\theta}(f(x),0)),0)\] \[= \mathcal{I}_{\theta}(\bigwedge_{x\in X}\mathcal{I}_{\theta}( \theta(A_{j}(x),f(x)),0),0)\] \[= \bigvee_{x\in X}\mathbf{N}_{\mathcal{I}_{\theta}}(\mathcal{I}_{ \theta}(A_{j}(x),\mathbf{N}(f(x))))\] \[= \bigvee_{x\in X}\theta(A_{j}(x),f(x))\] \[= F^{\uparrow,\theta}_{j}[f].\]
Thus \(F^{\uparrow,\theta}_{j}[f]=\mathbf{N}_{\mathcal{I}_{\theta}}(F^{\downarrow, \mathcal{I}_{\theta}}_{j}[\mathbf{N}_{\mathcal{I}_{\theta}}(f)])\), or that \(\mathbf{N}_{\mathcal{I}_{\theta}}(F^{\uparrow,\theta}_{j}[f])=F^{\downarrow, \mathcal{I}_{\theta}}_{j}[\mathbf{N}_{\mathcal{I}_{\theta}}(f)]\).
(ii) Let \(j\in J\) and \(f\in L^{X}\). Then from Definition 3.2
\[{\bf N}_{{\cal I}_{\theta}}(F_{j}^{\uparrow,{\cal I}_{\theta}}[{\bf N }_{{\cal I}_{\theta}}(f)]) = {\bf N}_{{\cal I}_{\theta}}(\bigvee_{x\in X}\theta(A_{j}(x),({\bf N }_{{\cal I}_{\theta}}(f))(x)))\] \[= {\cal I}_{\theta}(\bigvee_{x\in X}\theta(A_{j}(x),{\bf N}_{{\cal I }_{\theta}}(f(x))),0)\] \[= \bigwedge_{x\in X}{\cal I}_{\theta}(A_{j}(x),{\cal I}_{\theta}({ \bf N}_{{\cal I}_{\theta}}(f(x)),0))\] \[= \bigwedge_{x\in X}{\cal I}_{\theta}(A_{j}(x),{\bf N}_{{\cal I}_{ \theta}}({\bf N}_{{\cal I}_{\theta}}(f(x))))\] \[= \bigwedge_{x\in X}{\cal I}_{\theta}(A_{j}(x),f(x))\] \[= F_{j}^{\downarrow,{\cal I}_{\theta}}[f].\]
Thus \(F_{j}^{\downarrow,{\cal I}_{\theta}}[f]={\bf N}_{{\cal I}_{\theta}}(F_{j}^{\uparrow,\theta}[{\bf N}_{{\cal I}_{\theta}}(f)])\), or that \({\bf N}_{{\cal I}_{\theta}}(F_{j}^{\downarrow,{\cal I}_{\theta}}[f])=F_{j}^{\uparrow,\theta}[{\bf N}_{{\cal I}_{\theta}}(f)]\).
**Proposition 3.6**: _Let \({\bf N}_{{\cal I}_{\eta}}\) be an involutive negator such that \({\bf N}_{{\cal I}_{\eta}}(.)={\cal I}_{\eta}(.,1)\). Then for all \(j\in J,f\in L^{X}\)_
1. \(F_{j}^{\downarrow,\eta}[f]={\bf N}_{{\cal I}_{\eta}}(F_{j}^{\uparrow,{\cal I}_ {\eta}}[{\bf N}_{{\cal I}_{\eta}}(f)])\)_, i.e.,_ \({\bf N}_{{\cal I}_{\eta}}(F_{j}^{\downarrow,\eta}[f])=F_{j}^{\uparrow,{\cal I} _{\eta}}[{\bf N}_{{\cal I}_{\eta}}(f)]\)_, and_
2. \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]={\bf N}_{{\cal I}_{\eta}}(F_{j}^{\downarrow, \eta}[{\bf N}_{{\cal I}_{\eta}}(f)])\)_, i.e,_ \({\bf N}_{{\cal I}_{\eta}}(F_{j}^{\uparrow,{\cal I}_{\eta}}[f])=F_{j}^{ \downarrow,\eta}[{\bf N}_{{\cal I}_{\eta}}(f)]\)_._
**Proof:** Similar to that of Proposition 3.5.
Below, we discuss basic properties of \(F_{j}^{\uparrow,\theta},F_{j}^{\downarrow,\eta},F_{j}^{\uparrow,{\cal I}_{\eta}}\) and \(F_{j}^{\downarrow,{\cal I}_{\theta}}\).
**Proposition 3.7**: _Let \({\cal P}=\{A_{j}:j\in J\},{\cal P}^{\prime}=\{B_{j^{\prime}}:j^{\prime}\in J\}\) be L-fuzzy partitions of \(X\) and \(A_{j}\leq B_{j^{\prime}},\,\forall\,j,j^{\prime}\in J\). Then for all \(f\in L^{X}\)_
1. \(F_{j}^{\uparrow,\theta}[f]\leq F_{j^{\prime}}^{\uparrow,\theta}[f],F_{j}^{ \downarrow,\eta}[f]\geq F_{j^{\prime}}^{\downarrow,\eta}[f]\)_, and_
2. \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]\leq F_{j^{\prime}}^{\uparrow,{\cal I}_{ \eta}}[f],F_{j}^{\downarrow,{\cal I}_{\theta}}[f]\geq F_{j^{\prime}}^{ \downarrow,{\cal I}_{\theta}}[f]\)_._
**Proof:** (i) Let \(j\in J\) and \(f\in L^{X}\). Then \(F_{j}^{\uparrow,\theta}[f]=\bigvee_{x\in X}\theta(A_{j}(x),f(x))\leq\bigvee_{x \in X}\theta(B_{j^{\prime}}(x),f(x))=F_{j^{\prime}}^{\uparrow,\theta}[f].\) Thus \(F_{j}^{\uparrow,\theta}[f]\leq F_{j^{\prime}}^{\uparrow,\theta}[f]\). Similarly, we can show \(F_{j}^{\downarrow,\eta}[f]\geq F_{j^{\prime}}^{\downarrow,\eta}[f]\).
(ii) Let \(j\in J\) and \(f\in L^{X}\). Then \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]=\bigvee_{x\in X}{\cal I}_{\eta}({\bf N}(A_{j}(x)),f(x))\leq\bigvee_{x\in X}{\cal I}_{\eta}({\bf N}(B_{j^{\prime}}(x)),f(x))=F_{j^{\prime}}^{\uparrow,{\cal I}_{\eta}}[f]\), since \({\bf N}(A_{j}(x))\geq{\bf N}(B_{j^{\prime}}(x))\) and \({\cal I}_{\eta}\) is antitone in its first argument. Thus \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]\leq F_{j^{\prime}}^{\uparrow,{\cal I}_{\eta}}[f]\). Similarly, we can show \(F_{j}^{\downarrow,{\cal I}_{\theta}}[f]\geq F_{j^{\prime}}^{\downarrow,{\cal I}_{\theta}}[f]\).
**Proposition 3.8**: _Let \({\cal P}\) be an L-fuzzy partition of \(X\). Then for all \(j\in J,f\in L^{X},x_{j}\in core(A_{j})\)_
* \(F_{j}^{\uparrow,\theta}[f]\geq f(x_{j}),F_{j}^{\downarrow,\eta}[f]\leq f(x_{j})\) _if_ \(x_{j}\in core(A_{j})\)_, and_
* \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]\geq f(x_{j}),F_{j}^{\downarrow,\mathcal{I }_{\theta}}[f]\leq f(x_{j})\) _if_ \(x_{j}\in core(A_{j})\)_._
**Proposition 3.9**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of \(X\). Then for all \(j\in J,f,g\in L^{X}\) and \(f\leq g\)_
* \(F_{j}^{\uparrow,\theta}[f]\leq F_{j}^{\uparrow,\theta}[g],F_{j}^{\downarrow, \eta}[f]\leq F_{j}^{\downarrow,\eta}[g]\)_, and_
* \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]\leq F_{j}^{\uparrow,\mathcal{I}_{\eta }}[g],F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]\leq F_{j}^{\downarrow,\mathcal{ I}_{\theta}}[g]\)_._
**Proof:** (i) Let \(j\in J,f,g\in L^{X}\) and \(f\leq g\). Then \(F_{j}^{\uparrow,\theta}[f]=\bigvee\limits_{x\in X}\theta(A_{j}(x),f(x))\leq\bigvee\limits_{x\in X}\theta(A_{j}(x),g(x))=F_{j}^{\uparrow,\theta}[g].\) Thus \(F_{j}^{\uparrow,\theta}[f]\leq F_{j}^{\uparrow,\theta}[g]\). Similarly, we can show that \(F_{j}^{\downarrow,\eta}[f]\leq F_{j}^{\downarrow,\eta}[g]\).
(ii) Let \(j\in J,f,g\in L^{X}\) and \(f\leq g\). Then \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]=\bigvee\limits_{x\in X}\mathcal{I}_{\eta}(\textbf{N}(A_{j}(x)),f(x))\leq\bigvee\limits_{x\in X}\mathcal{I}_{\eta}(\textbf{N}(A_{j}(x)),g(x))=F_{j}^{\uparrow,\mathcal{I}_{\eta}}[g].\) Thus \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]\leq F_{j}^{\uparrow,\mathcal{I}_{\eta}}[g]\). Similarly, we can show that \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]\leq F_{j}^{\downarrow,\mathcal{I}_{\theta}}[g]\).
**Proposition 3.10**: _Let \(\theta\) and \(\eta\) be \(EP\)-overlap and \(EP\)-grouping maps, respectively. Then for all \(u\in L,\textbf{u},f\in L^{X}\)_
1. \(F_{j}^{\uparrow,\theta}[\theta(\textbf{u},f)]=\theta(u,F_{j}^{\uparrow,\theta}[f]),F_{j}^{\downarrow,\eta}[\eta(\textbf{u},f)]=\eta(u,F_{j}^{\downarrow,\eta}[f])\)_, and_
2. \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\mathcal{I}_{\eta}(\textbf{u},f)]=\mathcal{I}_{\eta}(u,F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]),F_{j}^{\downarrow,\mathcal{I}_{\theta}}[\mathcal{I}_{\theta}(\textbf{u},f)]=\mathcal{I}_{\theta}(u,F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f])\)_._
**Proof:** (i) Let \(\textbf{u},f\in L^{X}\). Then
\[F_{j}^{\uparrow,\theta}[\theta(\textbf{u},f)] = \bigvee_{x\in X}\theta(A_{j}(x),\theta(\textbf{u},f)(x))=\bigvee _{x\in X}\theta(A_{j}(x),\theta(u,f(x)))\] \[= \bigvee_{x\in X}\theta(u,\theta(A_{j}(x),f(x)))=\theta(u,\bigvee _{x\in X}\theta(A_{j}(x),f(x)))\] \[= \theta(u,F_{j}^{\uparrow,\theta}[f]).\]
Therefore \(F_{j}^{\uparrow,\theta}[\theta(\textbf{u},f)]=\theta(u,F_{j}^{\uparrow, \theta}[f])\). Similarly, we can show \(F_{j}^{\downarrow,\eta}[\eta(\textbf{u},f)]=\eta(u,F_{j}^{\downarrow,\eta}[f])\).
(ii) Let \(u\in L\) and \(\textbf{u},f\in L^{X}\). Then
\[F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f] = \bigvee_{x\in X}\mathcal{I}_{\eta}(\textbf{N}(A_{j}(x)),\mathcal{ I}_{\eta}(\textbf{u},f)(x))=\bigvee_{x\in X}\mathcal{I}_{\eta}(\textbf{N}(A_{j}(x)), \mathcal{I}_{\eta}(u,f(x)))\] \[= \bigvee_{x\in X}\mathcal{I}_{\eta}(u,\mathcal{I}_{\eta}(\textbf{N }(A_{j}(x)),f(x)))=\mathcal{I}_{\eta}(u,\bigvee_{x\in X}\mathcal{I}_{\eta}( \textbf{N}(A_{j}(x)),f(x)))\] \[= \mathcal{I}_{\eta}(u,F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]).\]
Therefore \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\mathcal{I}_{\eta}(\textbf{u},f)]= \mathcal{I}_{\eta}(u,F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f])\). Similarly, we can show \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[\mathcal{I}_{\theta}(\textbf{u},f)]= \mathcal{I}_{\theta}(u,F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f])\).
**Proposition 3.11**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of \(X\). Then for all \(\textbf{u},f\in L^{X},\{f_{j}:j\in J\}\subseteq L^{X}\)_
1. \(F_{j}^{\uparrow,\theta}[\bigvee_{k\in J}f_{k}]=\bigvee_{k\in J}F_{j}^{ \uparrow,\theta}[f_{k}],F_{j}^{\downarrow,\eta}[\bigwedge_{k\in J}f_{k}]= \bigwedge_{k\in J}F_{j}^{\downarrow,\eta}[f_{k}]\)_, and_
2. \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\bigvee_{k\in J}f_{k}]=\bigvee_{k\in J}F_ {j}^{\uparrow,\mathcal{I}_{\eta}}[f_{k}],F_{j}^{\downarrow,\mathcal{I}_{ \theta}}[\bigwedge_{k\in J}f_{k}]=\bigwedge_{k\in J}F_{j}^{\downarrow,\mathcal{I} _{\theta}}[f_{k}]\)_._
**Proof:** (i) Let \(\{f_{k}:k\in J\}\subseteq L^{X}\). Then \(F_{j}^{\uparrow,\theta}[\bigvee_{k\in J}f_{k}]=\bigvee_{x\in X}\theta(A_{j}(x),\bigvee_{k\in J}f_{k}(x))=\bigvee_{x\in X}\bigvee_{k\in J}\theta(A_{j}(x),f_{k}(x))=\bigvee_{k\in J}F_{j}^{\uparrow,\theta}[f_{k}]\). Therefore \(F_{j}^{\uparrow,\theta}[\bigvee_{k\in J}f_{k}]=\bigvee_{k\in J}F_{j}^{\uparrow,\theta}[f_{k}]\). Similarly, we obtain \(F_{j}^{\downarrow,\eta}[\bigwedge_{k\in J}f_{k}]=\bigwedge_{k\in J}F_{j}^{\downarrow,\eta}[f_{k}]\).
(ii) Let \(\{f_{k}:k\in J\}\subseteq L^{X}\). Then \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\bigvee_{k\in J}f_{k}]=\bigvee_{x\in X}\mathcal{I}_{\eta}(\textbf{N}(A_{j}(x)),\bigvee_{k\in J}f_{k}(x))=\bigvee_{x\in X}\bigvee_{k\in J}\mathcal{I}_{\eta}(\textbf{N}(A_{j}(x)),f_{k}(x))=\bigvee_{k\in J}F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f_{k}]\). Therefore \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\bigvee_{k\in J}f_{k}]=\bigvee_{k\in J}F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f_{k}]\). Similarly, we obtain \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[\bigwedge_{k\in J}f_{k}]=\bigwedge_{k\in J}F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f_{k}]\).
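The join-preservation in Proposition 3.11(i) can also be seen numerically; below is a small sketch (our own data) for \(\theta=\min\) on \(L=[0,1]\).

```python
# Sketch: numerical illustration of Proposition 3.11(i) for theta = min on L = [0, 1]:
# the upper transform turns pointwise joins into joins of the transforms.
import math, random

random.seed(0)
X = ["x1", "x2", "x3", "x4"]
A = {"x1": 1.0, "x2": 0.5, "x3": 0.2, "x4": 0.8}   # one basic set A_j of a partition

F_up = lambda g: max(min(A[x], g[x]) for x in X)

fs = [{x: round(random.random(), 2) for x in X} for _ in range(3)]   # f_1, f_2, f_3
join = {x: max(g[x] for g in fs) for x in X}                          # pointwise join

assert math.isclose(F_up(join), max(F_up(g) for g in fs))
print("F_up of the join of the f_k equals the join of the F_up(f_k) for this sample")
```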
**Proposition 3.12**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of \(X\). Then for \(u\in L,\textbf{u}\in L^{X}\)_
* \(F_{j}^{\uparrow,\theta}[\textbf{u}]=\theta(1,u),F_{j}^{\downarrow,\mathcal{I}_{ \theta}}[\textbf{u}]=\mathcal{I}_{\theta}(1,u)\)_, and_
* \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),u)\)_,_ \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=\mathcal{I}_{\eta}(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),u)\)_. In addition, for a strict negator_ \(\mathbf{N}\)_,_ \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(0,u),F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=\mathcal{I}_{\eta}(0,u)\)_._
**Proof:** (i) Let \(u\in L\) and \(\textbf{u}\in L^{X}\). Then \(F_{j}^{\uparrow,\theta}[\textbf{u}]=\bigvee\limits_{x\in X}\theta(A_{j}(x),\textbf{u}(x))=\theta(\bigvee\limits_{x\in X}A_{j}(x),u)=\theta(1,u)\). Thus \(F_{j}^{\uparrow,\theta}[\textbf{u}]=\theta(1,u)\). Similarly, we can show \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[\textbf{u}]=\mathcal{I}_{\theta}(1,u)\).
(ii) Let \(u\in L\) and \(\textbf{u}\in L^{X}\). Then \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\bigwedge\limits_{x\in X}\eta(\mathbf{N}(A_{j}(x)),\textbf{u}(x))=\eta(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),u)\). Thus \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),u)\). Now, let \(\mathbf{N}\) be a strict negator. Then we obtain \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),u)=\eta(\mathbf{N}(\bigvee\limits_{x\in X}A_{j}(x)),u)=\eta(\mathbf{N}(1),u)=\eta(0,u)\). Thus \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(0,u)\). Similarly, we can show that \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=\mathcal{I}_{\eta}(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),u)\) and, for a strict negator \(\mathbf{N}\), \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=\mathcal{I}_{\eta}(0,u)\).
**Corollary 3.2**: _Let the conditions of Proposition 3.12 be fulfilled and \(1,0\) be neutral elements of \(\theta,\eta\), respectively. Then for all \(u\in L,\textbf{u}\in L^{X}\)_
* \(F_{j}^{\uparrow,\theta}[\textbf{u}]=u,F_{j}^{\downarrow,\eta}[\textbf{u}]=u\)_, and_
* \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=u,F_{j}^{\downarrow,\mathcal{I }_{\theta}}[\textbf{u}]=u\)_._
**Proof:** Let \(1,0\) be neutral elements of \(\theta,\eta\), respectively. Then we have, \(\theta(1,u)=u,\eta(0,u)=u,\mathcal{I}_{\eta}(0,u)=u\) and \(\mathcal{I}_{\theta}(1,u)=u\). Also, from Proposition 3.12, we have
* \(F_{j}^{\uparrow,\theta}[\textbf{u}]=u,F_{j}^{\downarrow,\eta}[\textbf{u}]=u\), and
* \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=u,F_{j}^{\downarrow,\mathcal{I }_{\theta}}[\textbf{u}]=u\).
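A quick numerical illustration of Corollary 3.2 (ours, with an arbitrary normal partition on \(L=[0,1]\)): with \(\theta=\min\) and \(\eta=\max\), whose neutral elements are \(1\) and \(0\), all four direct transforms send a constant \(L\)-fuzzy set back to its value.

```python
# Sketch: Corollary 3.2 for theta = min, eta = max on L = [0, 1] -- constant
# L-fuzzy sets are mapped to their constant value by all four direct transforms.
X = ["x1", "x2", "x3"]
P = {1: {"x1": 1.0, "x2": 0.4, "x3": 0.2},
     2: {"x1": 0.3, "x2": 1.0, "x3": 0.5},
     3: {"x1": 0.1, "x2": 0.6, "x3": 1.0}}
N       = lambda a: 1.0 - a
I_theta = lambda a, b: 1.0 if a <= b else b
I_eta   = lambda a, b: 0.0 if a >= b else b

for u in [0.0, 0.25, 0.5, 1.0]:
    c = {x: u for x in X}                                       # constant L-fuzzy set
    for j in P:
        assert max(min(P[j][x], c[x]) for x in X) == u          # F_up_theta
        assert min(max(N(P[j][x]), c[x]) for x in X) == u       # F_down_eta
        assert max(I_eta(N(P[j][x]), c[x]) for x in X) == u     # F_up_Ieta
        assert min(I_theta(P[j][x], c[x]) for x in X) == u      # F_down_Itheta

print("all four transforms reproduce constant L-fuzzy sets")
```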
From Proposition 3.12, we have the following.
**Proposition 3.13**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of \(X\). Then for all \(u\in L,\textbf{u}\in L^{X}\)_
* \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(0,u)\) _iff_ \(F_{j}^{\downarrow,\eta}[0_{X}]=0\)_, and_
* \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=\mathcal{I}_{\eta}(0,u)\) _iff_ \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[1_{X}]=1\)_._
**Proof:** (i) Let \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(0,u)\), \(\forall\,u\in L,\textbf{u}\in L^{X}\). Then by assuming \(\textbf{u}=0_{X}\), we have \(F_{j}^{\downarrow,\eta}[0_{X}]=\eta(0,0)=0\). Thus \(F_{j}^{\downarrow,\eta}[0_{X}]=0\). Conversely, from Proposition 3.12(ii), we have \(F_{j}^{\downarrow,\eta}[0_{X}]=0\Leftrightarrow\eta(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),0)=0\Leftrightarrow\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x))=0.\) Therefore \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),u)=\eta(0,u)\). Thus \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(0,u)\).
(ii) Let \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=\mathcal{I}_{\eta}(0,u)\), \(\forall\,u\in L,\textbf{u}\in L^{X}\). Then by assuming \(\textbf{u}=1_{X}\), we have \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[1_{X}]=\mathcal{I}_{\eta}(0,1)=1.\) Thus \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[1_{X}]=1\). Conversely, from Proposition 3.12(ii), we have \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[1_{X}]=1\Leftrightarrow\mathcal{I}_{\eta}(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),1)=1\Leftrightarrow\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x))=0\). Therefore \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=\mathcal{I}_{\eta}(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),u)=\mathcal{I}_{\eta}(0,u)\). Thus \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=\mathcal{I}_{\eta}(0,u)\).
The following results characterize the components of the direct \(F\)-transforms of an original \(L\)-fuzzy set as its lower and upper mean values, which are the greatest and the least elements of certain sets, respectively.
**Proposition 3.14**: _Let \({\cal P}\) be an \(L\)-fuzzy partition of \(X\) and \(f\in L^{X}\). Then_
* _the_ \(j^{th}\) _component of_ \(F^{\uparrow,\theta}\)_-transform of_ \(f\) _is the least element of the set_ \(U_{j}=\{u\in L:\theta(A_{j}(x),f(x))\leq u,\,\forall x\in X\},\,j\in J\)_, and_
* _the_ \(j^{th}\) _component of_ \(F^{\downarrow,\eta}\)_-transform of_ \(f\) _is the greatest element of the set_ \(V_{j}=\{v\in L:v\leq\eta({\bf N}(A_{j}(x)),f(x)),\,\forall x\in X\},\,j\in J\)_._
**Proof:** (i) To prove this, we need to show that \(F_{j}^{\uparrow,\theta}[f]\in U_{j}\) and \(F_{j}^{\uparrow,\theta}[f]\leq u\). It follows from Definition 3.2(i) that \(F_{j}^{\uparrow,\theta}[f]=\bigvee\limits_{x\in X}\theta(A_{j}(x),f(x))\geq \theta(A_{j}(x),f(x))\). Thus \(F_{j}^{\uparrow,\theta}[f]\in U_{j}\). Now, let \(u\in L,x\in X\). Then from the given condition \(\theta(A_{j}(x),f(x))\leq u\Rightarrow\bigvee\limits_{x\in X}\theta(A_{j}(x),f(x))\leq u\Rightarrow F_{j}^{\uparrow,\theta}[f]\leq u\). Thus the \(j^{th}\) component of \(F^{\uparrow,\theta}\)-transform is the least element of the set \(U_{j}\).
(ii) To prove this, we need to show that \(F_{j}^{\downarrow,\eta}[f]\in V_{j}\) and \(v\leq F_{j}^{\downarrow,\eta}[f]\). It follows from Definition 3.2(ii) that \(F_{j}^{\downarrow,\eta}[f]=\bigwedge\limits_{x\in X}\eta({\bf N}(A_{j}(x)),f(x)) \leq\eta({\bf N}(A_{j}(x)),f(x))\). Thus \(F_{j}^{\downarrow,\eta}[f]\in V_{j}\). Now, let \(v\in L,x\in X\). Then from the given condition \(v\leq\eta({\bf N}(A_{j}(x)),f(x))\Rightarrow v\leq\bigwedge\limits_{x\in X} \eta({\bf N}(A_{j}(x)),f(x))\Rightarrow v\leq F_{j}^{\downarrow,\eta}[f]\). Thus the \(j^{th}\) component of \(F^{\downarrow,\eta}\)-transform is the greatest element of the set \(V_{j}\).
**Proposition 3.15**: _Let \({\cal P}\) be an \(L\)-fuzzy partition of \(X\) and \(f\in L^{X}\). Then_
* _the_ \(j^{th}\) _component of_ \(F^{\uparrow,{\cal I}_{\eta}}\)_-transform of_ \(f\) _is the least element of the set_ \(U_{j}=\{u\in L:{\cal I}_{\eta}({\bf N}(A_{j}(x)),f(x))\leq u,\,\forall x\in X\}, \,j\in J\)_, and_
* _the_ \(j^{th}\) _component of_ \(F^{\downarrow,{\cal I}_{\theta}}\)_-transform of_ \(f\) _is the greatest element of the set_ \(V_{j}=\{v\in L:v\leq{\cal I}_{\theta}(A_{j}(x),f(x)),\,\forall x\in X\},\,j\in J\)_._
**Proof:** (i) To prove this, we need to show that \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]\in U_{j}\) and \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]\leq u\) for all \(u\in U_{j}\). It follows from Definition 3.2(iii) that \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]=\bigvee\limits_{x\in X}{\cal I}_{\eta}({\bf N}(A_{j}(x)),f(x))\geq{\cal I}_{\eta}({\bf N}(A_{j}(x)),f(x))\). Thus \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]\in U_{j}\). Now, let \(u\in U_{j},x\in X\). Then from the given condition \({\cal I}_{\eta}({\bf N}(A_{j}(x)),f(x))\leq u\Rightarrow\bigvee\limits_{x\in X}{\cal I}_{\eta}({\bf N}(A_{j}(x)),f(x))\leq u\Rightarrow\)
\(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]\leq u\). Thus the \(j^{th}\) component of \(F^{\uparrow,\mathcal{I}_{\eta}}\)-transform is the least element of the set \(U_{j}\).
(ii) To prove this, we need to show that \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]\in V_{j}\) and \(v\leq F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]\). It follows from Definition 3.2(ii) that \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]=\bigwedge\limits_{x\in X}\mathcal{I }_{\theta}(A_{j}(x),f(x))\leq\mathcal{I}_{\theta}(A_{j}(x),f(x))\). Thus \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]\in V_{j}\). Now, let \(v\in L,x\in X\). Then from the given condition \(v\leq\mathcal{I}_{\theta}(A_{j}(x),f(x))\Rightarrow v\leq\bigwedge\limits_{x \in X}\mathcal{I}_{\theta}(A_{j}(x),f(x))\Rightarrow v\leq F_{j}^{\downarrow, \mathcal{I}_{\theta}}[f]\). Thus the \(j^{th}\) component of \(F^{\downarrow,\mathcal{I}_{\theta}}\)-transform is the greatest element of the set \(V_{j}\).
**Proposition 3.16**: _Let conditions of Proposition 3.14 be fulfilled, \(\theta\) and \(\eta\) be deflation overlap and deflation grouping maps, respectively. Then for all \(u\in U_{j},v\in V_{j}\)_
1. \(\bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(\theta(A_{j}(x),f(x)),u))=1\) _and_ \(j^{th}\) _component of_ \(F^{\uparrow,\theta}\)_-transform is the smallest such_ \(u\)_, and_
2. \(\bigwedge\limits_{x\in X}\mathcal{I}_{\eta}(\eta(\mathbf{N}(A_{j}(x)),f(x)),v)=0\) _and_ \(j^{th}\) _component of_ \(F^{\downarrow,\eta}\)_-transform is the greatest such_ \(v\)_._
**Proof:** (i) Let \(j\in J\). Then for all \(x\in X\), \(\theta(A_{j}(x),f(x))\leq u\), or that, \(\bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(\theta(A_{j}(x),f(x)),u)=1\), as \(\mathcal{I}_{\theta}\) is an \(IP\)-residual implicator.
(ii) Let \(j\in J\). Then for all \(x\in X\), \(\eta(\mathbf{N}(A_{j}(x)),f(x))\geq v\), or that, \(\bigwedge\limits_{x\in X}\mathcal{I}_{\eta}(\eta(\mathbf{N}(A_{j}(x)),f(x)),v)=0\), as \(\mathcal{I}_{\eta}\) is an \(IP\)-co-residual implicator.
**Proposition 3.17**: _Let conditions of Proposition 3.15 be fulfilled, \(\theta\) and \(\eta\) be deflation overlap and deflation grouping maps, respectively. Then for all \(u\in U_{j},v\in V_{j}\)_
1. \(\bigwedge\limits_{x\in X}\mathcal{I}_{\eta}(u,\mathcal{I}_{\eta}(\mathbf{N}( A_{j}(x)),f(x)))=0\) _and_ \(j^{th}\) _component of_ \(F^{\uparrow,\mathcal{I}_{\eta}}\)_-transform is the smallest such_ \(u\)_, and_
2. \(\bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(v,\mathcal{I}_{\theta}(A_{j}(x),f(x)))=1\) _and_ \(j^{th}\) _component of_ \(F^{\downarrow,\mathcal{I}_{\theta}}\)_-transform is the greatest such_ \(v\)_._
**Proof:** (i) Let \(j\in J\). Then for all \(x\in X\), \(\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),f(x))\leq u\), or that, \(\bigwedge\limits_{x\in X}\mathcal{I}_{\eta}(u,\mathcal{I}_{\eta}(\mathbf{N}( A_{j}(x)),f(x)))=0\), as \(\mathcal{I}_{\eta}\) is an \(IP\)-co-residual implicator.
(ii) Let \(j\in J\). Then for all \(x\in X\), \(\mathcal{I}_{\theta}(A_{j}(x),f(x))\geq v\), or that, \(\bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(v,\mathcal{I}_{\theta}(A_{j}(x),f(x)))=1\), as \(\mathcal{I}_{\theta}\) is an \(IP\)-residual implicator.
## 4 Inverse \(F\)-transforms
In this section, we introduce the concepts of the inverse \(F\)-transforms computed with overlap and grouping maps, residual and co-residual implicators over \(L\), respectively. Further, we discuss their properties. Now, we begin with the following.
**Definition 4.1**: _Let \((X,\mathcal{P})\) be a space with an \(L\)-fuzzy partition \(\mathcal{P}\) and \(f\in L^{X}\), where \(\mathcal{P}=\{A_{j}\in L^{X}:j\in J\}\). Further, let \(F_{j}^{\uparrow,\theta}[f]\) and \(F_{j}^{\downarrow,\eta}[f]\) be the \(j^{th}\) components of \(F^{\uparrow,\theta}\)-transform of \(f\) computed with an overlap map \(\theta\) over \(\mathcal{P}\) and \(F^{\downarrow,\eta}\)-transform of \(f\) computed with a grouping map \(\eta\) over \(\mathcal{P}\), respectively. Then_
1. _the_ **inverse (upper)**__\(F^{\uparrow,\theta}\)**-transform** _of_ \(f\) _computed with a residual implication_ \(\mathcal{I}_{\theta}\) _over a fuzzy partition_ \(\mathcal{P}\) _is a map_ \(\hat{f}^{\uparrow,\mathcal{I}_{\theta}}:L^{X}\to L^{X}\) _such that_ \[\hat{f}^{\uparrow,\mathcal{I}_{\theta}}(x)=\bigwedge_{j\in J}\mathcal{I}_{ \theta}(A_{j}(x),F_{j}^{\uparrow,\theta}[f]),\]
2. _the_ **inverse (lower)**__\(F^{\downarrow,\mathcal{I}_{\theta}}\)**-transform** _of_ \(f\) _computed with an overlap map_ \(\theta\) _over a fuzzy partition_ \(\mathcal{P}\) _is a map_ \(\hat{f}^{\downarrow,\theta}:L^{X}\to L^{X}\) _such that_ \[\hat{f}^{\downarrow,\theta}(x)=\bigvee_{j\in J}\theta(A_{j}(x),F_{j}^{ \downarrow,\mathcal{I}_{\theta}}[f]),\]
3. _the_ **inverse (upper)**__\(F^{\uparrow,\mathcal{I}_{\eta}}\)**-transform** _of_ \(f\) _computed with a grouping map_ \(\eta\) _over a fuzzy partition_ \(\mathcal{P}\) _is a map_ \(\hat{f}^{\uparrow,\eta}:L^{X}\to L^{X}\) _such that_ \[\hat{f}^{\uparrow,\eta}(x)=\bigwedge_{j\in J}\eta(\mathbf{N}(A_{j}(x)),F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]),\,\,and\]
4. _the_ **inverse (lower)**__\(F^{\downarrow,\eta}\)**-transform** _of_ \(f\) _computed with a co-residual implicator_ \(\mathcal{I}_{\eta}\) _over a fuzzy partition_ \(\mathcal{P}\) _is a map_ \(\hat{f}^{\downarrow,\mathcal{I}_{\eta}}:L^{X}\to L^{X}\) _such that_ \[\hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x)=\bigvee_{j\in J}\mathcal{I}_{\eta} (\mathbf{N}(A_{j}(x)),F_{j}^{\downarrow,\eta}[f]).\]
The inverse \(F\)-transforms computed with a \(t\)-norm and an \(R\)-implicator proposed in [22, 25, 34] are special cases of the proposed inverse \(F\)-transforms with respect to \(\theta\) and \(\mathcal{I}_{\theta}\). In the above-introduced inverse \(F\)-transforms, \(\hat{f}^{\downarrow,\eta}\), \(\hat{f}^{\downarrow,\mathcal{I}_{\eta}}\) are new definitions.
**Example 4.1**: _In continuation to Example 3.1, the inverse \(F\)-transforms with respect to \(\mathcal{I}_{\theta_{M}},\theta_{M},\eta_{M},\mathcal{I}_{\eta_{M}}\) are \(\hat{f}^{\uparrow,\mathcal{I}_{\theta_{M}}}=\frac{q}{x_{1}}+\frac{u}{x_{2}}+ \frac{u}{x_{3}},\,\,\hat{f}^{\downarrow,\theta_{M}}=\frac{p}{x_{1}}+\frac{p}{ x_{2}}+\frac{p}{x_{3}},\)\(\hat{f}^{\uparrow,\eta_{M}}=\frac{r}{x_{1}}+\frac{r}{x_{2}}+\frac{r}{x_{3}},\)\(\hat{f}^{\downarrow,\mathcal{I}_{\eta_{M}}}=\frac{0}{x_{1}}+\frac{u}{x_{2}}+\frac{t}{x_{3}}.\)_
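To make these definitions easy to experiment with, the following is a minimal numerical sketch (added here, not part of the original text, and on a different toy instance than Example 4.1). It assumes the simplest setting \(L=[0,1]\) with \(\theta=\theta_{M}=\min\), \(\eta=\eta_{M}=\max\), the standard negator \(\mathbf{N}(a)=1-a\), and the induced residual and co-residual implicators \(\mathcal{I}_{\theta}(a,b)=1\) if \(a\leq b\) (else \(b\)) and \(\mathcal{I}_{\eta}(a,b)=0\) if \(b\leq a\) (else \(b\)); the partition and the \(L\)-fuzzy set are toy values chosen only for illustration.

```python
import numpy as np

# Godel-style operations on L = [0, 1] (an assumption made only for this sketch).
theta = np.minimum                                 # overlap map theta(a, b) = min(a, b)
eta = np.maximum                                   # grouping map eta(a, b) = max(a, b)
neg = lambda a: 1.0 - a                            # standard negator N(a) = 1 - a
I_theta = lambda a, b: np.where(a <= b, 1.0, b)    # residual implicator of min
I_eta = lambda a, b: np.where(b <= a, 0.0, b)      # co-residual implicator of max

# Toy L-fuzzy partition A (rows indexed by j) and L-fuzzy set f on X = {0,...,4}.
A = np.array([[1.0, 0.7, 0.2, 0.0, 0.0],
              [0.0, 0.3, 1.0, 0.3, 0.0],
              [0.0, 0.0, 0.2, 0.7, 1.0]])
f = np.array([0.9, 0.6, 0.4, 0.8, 0.2])

# Direct F-transforms (Definition 3.2, as used in the proofs above).
F_up_theta = theta(A, f).max(axis=1)               # F_j^{up,theta}[f]
F_dn_Itheta = I_theta(A, f).min(axis=1)            # F_j^{down,I_theta}[f]
F_up_Ieta = I_eta(neg(A), f).max(axis=1)           # F_j^{up,I_eta}[f]
F_dn_eta = eta(neg(A), f).min(axis=1)              # F_j^{down,eta}[f]

# Inverse F-transforms (Definition 4.1), evaluated at every x.
inv_up_Itheta = I_theta(A, F_up_theta[:, None]).min(axis=0)
inv_dn_theta = theta(A, F_dn_Itheta[:, None]).max(axis=0)
inv_up_eta = eta(neg(A), F_up_Ieta[:, None]).min(axis=0)
inv_dn_Ieta = I_eta(neg(A), F_dn_eta[:, None]).max(axis=0)

print("f                      :", f)
print("inverse upper (I_theta):", inv_up_Itheta)
print("inverse lower (theta)  :", inv_dn_theta)
print("inverse upper (eta)    :", inv_up_eta)
print("inverse lower (I_eta)  :", inv_dn_Ieta)
```

In this setting the four inverse \(F\)-transforms reduce exactly to the formulas listed in Remark 4.1(i) below.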
**Remark 4.1**: _(i) If \(L=[0,1],\,\mathbf{N}=\mathbf{N}_{S},\theta=\theta_{M}\) and \(\eta=\eta_{M}\), then the inverse \(F\)-transforms \(\hat{f}^{\uparrow,\mathcal{I}_{\theta}},\hat{f}^{\downarrow,\theta},\hat{f}^{ \uparrow,\eta}\) and \(\hat{f}^{\downarrow,\mathcal{I}_{\eta}}\) become as follows:_
\[\hat{f}^{\uparrow,\mathcal{I}_{\theta_{M}}}(x) = \bigwedge_{j\in J}\mathcal{I}_{\theta_{M}}(A_{j}(x),F_{j}^{ \uparrow,\theta_{M}}[f]),\] \[\hat{f}^{\downarrow,\theta_{M}}(x) = \bigvee_{j\in J}(A_{j}(x)\wedge F_{j}^{\downarrow,\mathcal{I}_{ \theta_{M}}}[f]),\] \[\hat{f}^{\uparrow,\eta_{M}}(x) = \bigwedge_{j\in J}((1-A_{j}(x))\lor F_{j}^{\uparrow,\mathcal{I}_{ \eta_{M}}}[f]),\,\text{and}\] \[\hat{f}^{\downarrow,\mathcal{I}_{\eta_{M}}}(x) = \bigvee_{j\in J}\mathcal{I}_{\eta_{M}}((1-A_{j}(x)),F_{j}^{ \downarrow,\eta_{M}}[f]),\,\forall\,x\in X,f\in L^{X}.\]
_Obviously \(f^{\uparrow,{\cal I}_{\theta_{M}}}\) and \(f^{\downarrow,\theta_{M}}\) coincide with the special cases of inverse upper and lower \(F\)-transforms proposed in [22, 25, 34], respectively._
_(ii) If \(L=[0,1],\theta=\theta_{M}\) and \(\eta=\eta_{M}\), then the inverse transforms \(\hat{f}^{\uparrow,{\cal I}_{\theta}},\hat{f}^{\downarrow,\theta},\hat{f}^{ \uparrow,\eta}\) and \(\hat{f}^{\downarrow,{\cal I}_{\eta}}\) become as follows:_
\[\hat{f}^{\uparrow,{\cal I}_{\theta_{M}}}(x) = \bigwedge_{j\in J}{\cal I}_{\theta_{M}}(A_{j}(x),F_{j}^{\uparrow, \theta_{M}}[f]),\] \[\hat{f}^{\downarrow,\theta_{M}}(x) = \bigvee_{j\in J}(A_{j}(x)\wedge F_{j}^{\downarrow,{\cal I}_{\theta _{M}}}[f]),\] \[\hat{f}^{\uparrow,\eta_{M}}(x) = \bigwedge_{j\in J}({\bf N}(A_{j}(x))\lor F_{j}^{\uparrow,{\cal I}_ {\eta_{M}}}[f]),\,\text{and}\] \[\hat{f}^{\downarrow,{\cal I}_{\eta_{M}}}(x) = \bigvee_{j\in J}{\cal I}_{\eta_{M}}({\bf N}(A_{j}(x)),F_{j}^{ \downarrow,\eta_{M}}[f]),\,\forall\,x\in X,f\in L^{X}.\]
_Obviously \(f^{\uparrow,{\cal I}_{\theta_{M}}}\) and \(f^{\downarrow,\theta_{M}}\) coincide with the special cases of inverse upper and lower \(F\)-transforms proposed in [22, 25, 34], respectively._
_(iii) If \(L=[0,1],\theta={\cal T}\) and \(\eta={\cal S}\), where \({\cal T},{\cal S}\) are continuous \(t\)-norm, \(t\)-conorm with no nontrivial zero divisors, respectively, then the inverse transforms \(\hat{f}^{\uparrow,{\cal I}_{\theta}},\hat{f}^{\downarrow,\theta},\hat{f}^{ \uparrow,\eta}\) and \(\hat{f}^{\downarrow,{\cal I}_{\eta}}\) become as follows:_
\[\hat{f}^{\uparrow,{\cal I}_{\cal T}}(x) = \bigwedge_{j\in J}{\cal I}_{\cal T}(A_{j}(x),F_{j}^{\uparrow,{ \cal T}}[f]),\] \[\hat{f}^{\downarrow,{\cal T}}(x) = \bigvee_{j\in J}{\cal T}(A_{j}(x),F_{j}^{\downarrow,{\cal I}_{ \cal T}}[f]),\] \[\hat{f}^{\uparrow,{\cal S}}(x) = \bigwedge_{j\in J}{\cal S}({\bf N}(A_{j}(x)),F_{j}^{\uparrow,{ \cal I}_{\cal S}}[f]),\,\text{and}\] \[\hat{f}^{\downarrow,{\cal I}_{\cal S}}(x) = \bigvee_{j\in J}{\cal I}_{\cal S}({\bf N}(A_{j}(x)),F_{j}^{ \downarrow,{\cal S}}[f]),\,\forall\,x\in X,f\in L^{X}.\]
_Obviously \(f^{\uparrow,{\cal I}_{\cal T}}\) and \(f^{\downarrow,{\cal T}}\) coincide with the inverse upper and lower \(F\)-transforms computed with \(t\)-norm and \(R\)-implicator proposed in [22, 25, 34], respectively._
From the above, it is clear that some existing inverse \(F\)-transforms are special cases of the proposed inverse \(F\)-transforms. Among these, some inverse \(F\)-transforms coincide with the proposed inverse \(F\)-transforms and some of the proposed inverse \(F\)-transforms coincide with the special cases of the existing inverse \(F\)-transforms. That is to say; the proposed inverse \(F\)-transforms are more general than some existing ones.
The following two results show in what sense the inverse \(F\)-transforms approximate the original \(L\)-fuzzy set.
**Proposition 4.1**: _Let \({\cal P}\) be an \(L\)-fuzzy partition of \(X\). Then for all \(x\in X,f\in L^{X}\)_
1. \(\hat{f}^{\uparrow,\mathcal{I}_{\theta}}(x)\geq f(x)\)_, and_
2. \(\hat{f}^{\downarrow,\theta}(x)\leq f(x)\)_._
**Proof:** (i) Let \(x\in X,f\in L^{X}\). Then from Definition 4.1
\[\hat{f}^{\uparrow,\mathcal{I}_{\theta}}(x) = \bigwedge_{j\in J}\mathcal{I}_{\theta}(A_{j}(x),F_{j}^{\uparrow, \theta}[f])=\bigwedge_{j\in J}\mathcal{I}_{\theta}(A_{j}(x),\bigvee_{y\in X} \theta(A_{j}(y),f(y)))\] \[\geq \bigwedge_{j\in J}\mathcal{I}_{\theta}(A_{j}(x),\theta(A_{j}(x),f (x)))\geq f(x).\]
Thus \(\hat{f}^{\uparrow,\mathcal{I}_{\theta}}(x)\geq f(x)\).
(ii) Let \(x\in X\) and \(f\in L^{X}\). Then from Definition 4.1
\[\hat{f}^{\downarrow,\theta}(x) = \bigvee_{j\in J}\theta(A_{j}(x),F_{j}^{\downarrow,\mathcal{I}_{ \theta}}[f])=\bigvee_{j\in J}\theta(A_{j}(x),\bigwedge_{y\in X}\mathcal{I}_{ \theta}(A_{j}(y),f(y)))\] \[\leq \bigvee_{j\in J}\theta(A_{j}(x),\mathcal{I}_{\theta}(A_{j}(x),f( x)))\leq f(x).\]
Thus \(\hat{f}^{\downarrow,\theta}(x)\leq f(x)\).
**Proposition 4.2**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of \(X\). Then for all \(x\in X,f\in L^{X}\)_
1. \(\hat{f}^{\uparrow,\eta}(x)\geq f(x)\)_, and_
2. \(\hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x)\leq f(x)\)_._
**Proof:** (i) Let \(x\in X,f\in L^{X}\). Then from Definition 4.1
\[\hat{f}^{\uparrow,\eta}(x) = \bigwedge_{j\in J}\eta(\mathbf{N}(A_{j}(x)),F_{j}^{\uparrow, \mathcal{I}_{\eta}}[f])=\bigwedge_{j\in J}\eta(\mathbf{N}(A_{j}(x)),\bigvee_{y \in X}\mathcal{I}_{\eta}((\mathbf{N}(A_{j}(y)),f(y)))\] \[\geq \bigwedge_{j\in J}\eta(\mathbf{N}(A_{j}(x)),\mathcal{I}_{\eta}(( \mathbf{N}(A_{j}(x)),f(x)))\geq f(x).\]
Thus \(\hat{f}^{\uparrow,\eta}(x)\geq f(x)\).
(ii) Let \(x\in X\) and \(f\in L^{X}\). Then from Definition 4.1
\[\hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x) = \bigvee_{j\in J}\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),F_{j}^{ \downarrow,\eta}[f])=\bigvee_{j\in J}\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)), \bigwedge_{y\in X}\eta(\mathbf{N}(A_{j}(y)),f(y)))\] \[\leq \bigvee_{j\in J}\mathcal{I}_{\eta}(A_{j}(x),\eta(\mathbf{N}(A_{j }(x)),f(x)))\leq f(x).\]
Thus \(\hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x)\leq f(x)\).
Below, we show that the \(L\)-fuzzy set \(f\) and its inverse \(F\)-transforms have the same direct \(F\)-transforms, respectively. Therefore, the inverse \(F\)-transform of an inverse \(F\)-transform is again the same inverse \(F\)-transform. This follows easily from the following.
**Proposition 4.3**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of \(X\). Then for all \(j\in J,f\in L^{X}\)_
1. \(F_{j}^{\uparrow,\theta}[f]=\bigvee\limits_{x\in X}\theta(A_{j}(x),\hat{f}^{ \uparrow,\mathcal{I}_{\theta}}(x))\)_, and_
2. \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]=\bigwedge\limits_{x\in X}\mathcal{I} _{\theta}(A_{j}(x),\hat{f}^{\downarrow,\theta}(x))\)_._
**Proof:** (i) From Proposition 4.1(i), \(\hat{f}^{\uparrow,\mathcal{I}_{\theta}}(x)\geq f(x),\,\forall\,x\in X\). It follows from Definition 3.2 that
\[F_{j}^{\uparrow,\theta}[f]=\bigvee\limits_{x\in X}\theta(A_{j}(x ),f(x)) \leq \bigvee\limits_{x\in X}\theta(A_{j}(x),\hat{f}^{\uparrow,\mathcal{ I}_{\theta}}(x))\text{ and }\] \[\theta(A_{j}(x),\hat{f}^{\uparrow,\mathcal{I}_{\theta}}(x)) = \theta(A_{j}(x),\bigwedge\limits_{k\in J}\mathcal{I}_{\theta}(A_{ k}(x),F_{k}^{\uparrow,\theta}[f])\] \[\leq \theta(A_{j}(x),\mathcal{I}_{\theta}(A_{j}(x),F_{j}^{\uparrow, \theta}[f])\] \[\leq F_{j}^{\uparrow}[f].\]
Thus \(\bigvee\limits_{x\in X}\theta(A_{j}(x),\hat{f}^{\uparrow,\mathcal{I}_{\theta}} (x))\leq F_{j}^{\uparrow,\theta}[f]\) or \(F_{j}^{\uparrow,\theta}[f]=\bigvee\limits_{x\in X}\theta(A_{j}(x),\hat{f}^{ \uparrow,\mathcal{I}_{\theta}}(x))\).
(ii) From Proposition 4.1(ii), \(\hat{f}^{\downarrow,\theta}(x)\leq f(x),\,\forall\,x\in X\). It follows from Definition 3.2 that
\[F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]=\bigwedge\limits_{x\in X }\mathcal{I}_{\theta}(A_{j}(x),f(x)) \geq \bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),\hat{f}^{ \downarrow,\theta}(x))\text{ and }\] \[\mathcal{I}_{\theta}(A_{j}(x),\hat{f}^{\downarrow,\theta}(x)) = \mathcal{I}_{\theta}(A_{j}(x),\bigvee\limits_{k\in J}\theta(A_{k}( x),F_{k}^{\downarrow,\mathcal{I}_{\theta}}[f]))\] \[\geq \mathcal{I}_{\theta}(A_{j}(x),\theta(A_{j}(x),F_{j}^{\downarrow, \mathcal{I}_{\theta}}[f]))\] \[\geq F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f].\]
Thus \(\bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),\hat{f}^{\downarrow,\theta}(x))\geq F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]\) or \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]=\bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),\hat{f}^{\downarrow,\theta}(x))\).
**Proposition 4.4**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of \(X\). Then for all \(j\in J,f\in L^{X}\)_
1. \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]=\bigvee\limits_{x\in X}\mathcal{I}_{ \eta}(\mathbf{N}(A_{j}(x)),\hat{f}^{\uparrow,\eta}(x))\)_, and_
2. \(F_{j}^{\downarrow,\eta}[f]=\bigwedge\limits_{x\in X}\eta(\mathbf{N}(A_{j}(x)), \hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x))\)_._
**Proof:** (i) From Proposition 4.2(i), \(\hat{f}^{\uparrow,\eta}(x)\geq f(x),\,\forall\,x\in X\). It follows from Definition 3.2 that
\[F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]=\bigvee\limits_{x\in X} \mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),f(x)) \leq \bigvee\limits_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)), \hat{f}^{\uparrow,\eta}(x))\text{ and }\] \[\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),\hat{f}^{\uparrow,\eta}(x )) = \mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),\bigwedge\limits_{k\in J} \eta(\mathbf{N}(A_{k}(x)),F_{k}^{\uparrow,\mathcal{I}_{\eta}}[f]))\] \[\leq \mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),\eta(\mathbf{N}(A_{j}(x)),F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]))\] \[\leq F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f].\]
Thus \(\bigvee\limits_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),\hat{f}^{\uparrow,\eta}(x))\leq F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]\) or \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]=\bigvee\limits_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),\hat{f}^{\uparrow,\eta}(x))\).
(ii) From Proposition 4.2(ii), \(\hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x)\leq f(x)\), \(\forall\,x\in X\). It follows from Definition 3.2 that
\[F_{j}^{\downarrow,\eta}[f]=\bigwedge\limits_{x\in X}\eta(\mathbf{ N}(A_{j}(x)),f(x)) \geq \bigwedge\limits_{x\in X}\eta(\mathbf{N}(A_{j}(x)),\hat{f}^{ \downarrow,\mathcal{I}_{\eta}}(x))\text{ and }\] \[\eta(\mathbf{N}(A_{j}(x)),\hat{f}^{\downarrow,\mathcal{I}_{\eta}} (x)) = \eta(\mathbf{N}(A_{j}(x)),\bigvee\limits_{k\in J}\mathcal{I}_{ \eta}(\mathbf{N}(A_{k}(x)),F_{k}^{\downarrow,\eta}[f]))\] \[\geq \eta(\mathbf{N}(A_{j}(x)),\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x) ),F_{j}^{\downarrow,\eta}[f]))\] \[\geq F_{j}^{\downarrow,\eta}[f].\]
Thus \(\bigwedge\limits_{x\in X}\eta(\mathbf{N}(A_{j}(x)),\hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x))\geq F_{j}^{\downarrow,\eta}[f]\) or \(F_{j}^{\downarrow,\eta}[f]=\bigwedge\limits_{x\in X}\eta(\mathbf{N}(A_{j}(x)),\hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x))\).
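The following self-contained sketch (an illustration added here, not part of the original development) numerically checks Propositions 4.1–4.4 in the Gödel-style setting \(L=[0,1]\), \(\theta=\min\), \(\eta=\max\), \(\mathbf{N}(a)=1-a\), on a randomly generated \(L\)-fuzzy partition with pairwise disjoint cores.

```python
import numpy as np

rng = np.random.default_rng(3)
n_x, n_j = 15, 4
theta, eta = np.minimum, np.maximum
neg = lambda a: 1.0 - a
I_theta = lambda a, b: np.where(a <= b, 1.0, b)    # residual implicator of min
I_eta = lambda a, b: np.where(b <= a, 0.0, b)      # co-residual implicator of max

# Random L-fuzzy partition with pairwise disjoint cores, and a random L-fuzzy set f.
A = rng.uniform(0.0, 0.9, size=(n_j, n_x))
cores = rng.choice(n_x, size=n_j, replace=False)
A[:, cores] = 0.0
A[np.arange(n_j), cores] = 1.0
f = rng.uniform(size=n_x)

# Direct transforms and the corresponding inverse transforms.
Fu = theta(A, f).max(axis=1)                       # F^{up,theta}
Fd = I_theta(A, f).min(axis=1)                     # F^{down,I_theta}
Gu = I_eta(neg(A), f).max(axis=1)                  # F^{up,I_eta}
Gd = eta(neg(A), f).min(axis=1)                    # F^{down,eta}
inv_u = I_theta(A, Fu[:, None]).min(axis=0)        # hat f^{up,I_theta}
inv_d = theta(A, Fd[:, None]).max(axis=0)          # hat f^{down,theta}
jnv_u = eta(neg(A), Gu[:, None]).min(axis=0)       # hat f^{up,eta}
jnv_d = I_eta(neg(A), Gd[:, None]).max(axis=0)     # hat f^{down,I_eta}

# Propositions 4.1 and 4.2: the upper inverses dominate f, the lower inverses stay below f.
assert np.all(inv_u >= f) and np.all(inv_d <= f)
assert np.all(jnv_u >= f) and np.all(jnv_d <= f)

# Propositions 4.3 and 4.4: f and its inverses yield the same direct transforms.
assert np.allclose(theta(A, inv_u).max(axis=1), Fu)
assert np.allclose(I_theta(A, inv_d).min(axis=1), Fd)
assert np.allclose(I_eta(neg(A), jnv_u).max(axis=1), Gu)
assert np.allclose(eta(neg(A), jnv_d).min(axis=1), Gd)
print("Propositions 4.1-4.4 verified on this random instance.")
```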
## 5 Axiomatic approaches of \(F\)-transforms
In [16], the axiomatic approaches of the direct \(F\)-transforms computed with \(t\)-norm and \(R\)-implicator were studied in detail. This section focuses on the axiomatic characterizations of the direct \(F\)-transforms computed with respect to \(\theta,\eta,\mathcal{I}_{\eta},\mathcal{I}_{\theta}\), respectively by some independent axioms. Also, we first present the axioms for each direct \(F\)-transform that guarantee the existence of an \(L\)-fuzzy partition that produces the same \(F\)-transform. Now, we begin with the following.
For any \(f\in L^{X}\) and for an \(L\)-fuzzy partition \(\mathcal{P}\), it can be seen that the direct \(F^{\uparrow,\theta},F^{\downarrow,\eta},F^{\uparrow,\mathcal{I}_{\eta}}\) and \(F^{\uparrow,\mathcal{I}_{\theta}}\)-transforms induce the maps \(F_{\mathcal{P}}^{\uparrow,\theta},F_{\mathcal{P}}^{\downarrow,\eta},F_{ \mathcal{P}}^{\uparrow,\mathcal{I}_{\eta}},F_{\mathcal{P}}^{\uparrow,\mathcal{ I}_{\theta}}:L^{X}\to L^{J}\) such that
\[F_{\mathcal{P}}^{\uparrow,\theta}[f](j) = F_{j}^{\uparrow,\theta}[f],\ F_{\mathcal{P}}^{\downarrow,\eta} [f](j)=F_{j}^{\downarrow,\eta}[f],\] \[F_{\mathcal{P}}^{\uparrow,\mathcal{I}_{\eta}}[f](j) = F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f],\ F_{\mathcal{P}}^{ \downarrow,\mathcal{I}_{\theta}}[f](j)=F_{j}^{\downarrow,\mathcal{I}_{\theta} }[f],\text{ respectively.}\]
Now, we introduce the concepts of \(L\)-fuzzy upper and lower transformation systems with respect to overlap and grouping maps \(\theta\) and \(\eta\) (equivalently, the co-residual and residual implicators \(\mathcal{I}_{\eta}\) and \(\mathcal{I}_{\theta}\) induced by the grouping and overlap maps \(\eta\) and \(\theta\)), respectively.
**Definition 5.1**: _Let \(X\) be a nonempty set, \(\theta\) be an overlap map and \(\mathcal{I}_{\eta}\) be a co-residual indicator over \(L\). Then a system \(\mathcal{U}_{F}=(X,Y,u,U_{F})\), where \(F=\theta\) or \(\mathcal{I}_{\eta}\) and_
1. \(Y\) _is a nonempty set,_
2. \(u:X\to Y\) _is an onto map,_
3. \(U_{F}:L^{X}\to L^{Y}\) _is a map, where_ 1. _for all_ \(\{f_{k}:k\in J\}\subseteq L^{X}\)_,_ \(U_{F}[\bigvee\limits_{k\in J}f_{k}]=\bigvee\limits_{k\in J}U_{F}[f_{k}]\)_,_ 2. _for all_ \(\textbf{u},f\in L^{X}\)_,_ \(U_{F}[F(\textbf{u},f)]=F(\textbf{u},U_{F}[f])\)
_for all_ \(x\in X,\,y\in Y\)_,_ \(U_{F}[1_{\{x\}}](y)=1\) _iff_ \(y=u(x)\)_,_
_is called an \(L\)_**-fuzzy upper transformation system** _on_ \(X\) _with respect to_ \(F\)_._
**Definition 5.2**: _Let \(X\) be a nonempty set, \(\eta,\mathcal{I}_{\theta}\) and \(\mathbf{N}\) be grouping map, residual implicator and negator over \(L\), respectively. Then system \(\mathcal{H}_{F}=(X,Y,v,H_{F})\), where \(F=\eta\) or \(\mathcal{I}_{\theta}\) and_
1. \(Y\) _is a nonempty set,_
2. \(v:X\to Y\) _is an onto map,_
3. \(H_{F}:L^{X}\to L^{Y}\) _is a map, where_ 1. _for all_ \(\{f_{k}:k\in J\}\subseteq L^{X}\)_,_ \(H_{F}[\bigwedge\limits_{k\in J}f_{k}](y)=\bigwedge\limits_{k\in J}H_{F}[f_{k}] (y),\)__ 2. _for all_ \(\textbf{u},f\in L^{X}\)_,_ \(H_{F}[F(\textbf{u},f)]=F(\textbf{u},H_{F}[f])\)_, and_ 3. _for_ \(y\in Y\) _and_ \(x\in X\)_,_ \((\mathbf{N}(H_{F}[\mathbf{N}(1_{\{x\}})]))(y)=1\) _iff_ \(y=v(x)\)_,_
_is called an \(L\)_**-fuzzy lower transformation system** _on \(X\) with respect to \(F\)._
The \(L\)-fuzzy upper transformation system with respect to a \(t\)-norm and the \(L\)-fuzzy lower transformation system with respect to an \(R\)-implicator proposed in [16, 34] are special cases of \(\mathcal{U}_{\theta}\) and \(\mathcal{H}_{\mathcal{I}_{\theta}}\), respectively. Also, the \(L\)-fuzzy lower transformation system with respect to an \(S\)-implicator proposed in [34] is a special case of \(\mathcal{H}_{\eta}\). The \(L\)-fuzzy upper transformation system \(\mathcal{U}_{\mathcal{I}_{\eta}}\) with respect to a co-residual implicator is a new definition.
**Example 5.1**: _Let \(X\) be a nonempty set and \(id:X\to X\) be an identity map. Now, we define maps \(U_{F},H_{F^{\prime}}:L^{X}\to L^{X}\) such that \(U_{F}[f](x)=f(x),H_{F^{\prime}}[f](x)=f(x),x\in X\), where \(F=\theta\) or \(\mathcal{I}_{\eta}\) and \(F^{\prime}=\eta\) or \(\mathcal{I}_{\theta}\). Then for all \(\{f_{k}:k\in J\}\subseteq L^{X}\), \(U_{F}[\bigvee\limits_{k\in J}f_{k}]=\bigvee\limits_{k\in J}U_{F}[f_{k}]\) and \(H_{F^{\prime}}[\bigwedge\limits_{k\in J}f_{k}]=\bigwedge\limits_{k\in J}H_{F^{\prime}}[f_{k}]\). Further, let \(\textbf{u},f\in L^{X}\). Then \(U_{F}[F(\textbf{u},f)]=F(\textbf{u},U_{F}[f])\) and \(H_{F^{\prime}}[F^{\prime}(\textbf{u},f)]=F^{\prime}(\textbf{u},H_{F^{\prime}}[f])\). Finally, let \(x,z\in X\). Then \(U_{F}[1_{\{x\}}](z)=1_{\{x\}}(z)=1\) iff \(x=z\) and \((\mathbf{N}(H_{F^{\prime}}[\mathbf{N}(1_{\{x\}})]))(z)=\mathbf{N}(H_{F^{\prime}}[\mathbf{N}(1_{\{x\}})](z))=\mathbf{N}(\mathbf{N}(1_{\{x\}})(z))=1\) iff \(x=z\). Thus \(U_{F}[1_{\{x\}}](z)=1,(\mathbf{N}(H_{F^{\prime}}[\mathbf{N}(1_{\{x\}})]))(z)=1\) iff \(z=id(x)\). Hence \(\mathcal{U}_{F}=(X,X,id,U_{F})\) and \(\mathcal{H}_{F^{\prime}}=(X,X,id,H_{F^{\prime}})\) are \(L\)-fuzzy upper and lower transformation systems on \(X\) with respect to \(F\) and \(F^{\prime}\), respectively._
**Remark 5.1**: _(i) If \(L=[0,1],\,\mathbf{N}=\mathbf{N}_{S},\theta=\theta_{M},\eta=\eta_{M},\mathcal{ I}_{\eta}=\mathcal{I}_{\eta_{M}}\) and \(\mathcal{I}_{\theta}=\mathcal{I}_{\theta_{M}}\). Then \(\mathcal{U}_{\theta_{M}}\) and \(\mathcal{H}_{\mathcal{I}_{\theta_{M}}}\) coincide with the special cases of the \(L\)-fuzzy upper and lower transformation systems proposed in [16, 34], respectively. Also, \(\mathcal{H}_{\eta_{M}}\) coincides with the special case of the \(L\)-fuzzy lower transformation system proposed in [34]._
_(ii) If \(L=[0,1],\theta=\theta_{M},\eta=\eta_{M},\mathcal{I}_{\eta}=\mathcal{I}_{\eta_{M}}\) and \(\mathcal{I}_{\theta}=\mathcal{I}_{\theta_{M}}\). Then \(\mathcal{U}_{\theta_{M}}\) and \(\mathcal{H}_{\mathcal{I}_{\theta_{M}}}\) coincide with the special cases of the \(L\)-fuzzy upper and lower transformation systems proposed in [16, 34], respectively. Also, \(\mathcal{H}_{\eta_{M}}\) coincides with the special case of the \(L\)-fuzzy lower transformation system proposed in [34]._
_(iii) If_ \(L=[0,1],\theta=\mathcal{T}\) _and_ \(\eta=\mathcal{S}\)_, where_ \(\mathcal{T},\mathcal{S}\) _are continuous_ \(t\)_-norm,_ \(t\)_-conorm with no nontrivial zero divisors, respectively. Then_ \(\mathcal{U}_{\mathcal{T}}\) _and_ \(\mathcal{H}_{\mathcal{I}_{\mathcal{T}}}\) _coincide with the_ \(L\)_-fuzzy upper and lower transformation systems with respect to_ \(t\)_-norm and_ \(R\)_-implicator proposed in_ _[_16, 34_]__, respectively. Also,_ \(\mathcal{H}_{\mathcal{S}}\) _coincides with the_ \(L\)_-fuzzy lower transformation system with respect to_ \(S\)_-implicator proposed in_ _[_34_]__._
From the above remark, it is clear that some existing \(L\)-fuzzy transformation systems are special cases of the proposed \(L\)-fuzzy transformation systems. Among these, some \(L\)-fuzzy transformation systems coincide with the proposed \(L\)-fuzzy transformation systems, and some proposed \(L\)-fuzzy transformation systems coincide with the special cases of the existing \(L\)-fuzzy transformation systems. That is to say; the proposed \(L\)-fuzzy transformation systems are more extended form than some existing ones.
The following shows a close connection of the \(L\)-fuzzy transformation systems with the \(F\)-transforms. To do this, we need some results, which are given by the following proposition.
**Proposition 5.1**: _Let \(\mathbf{N}\) be a negator, \(\theta,\eta\) be overlap and grouping maps with neutral elements \(1,0\), respectively. In addition, let \(\mathbf{N}_{\mathcal{I}_{\eta}},\mathbf{N}_{\mathcal{I}_{\theta}}\) be involutive negators. Then for all \(f\in L^{X}\)_
* \(f=\bigvee\limits_{x\in X}\theta(\textbf{f(x)},1_{\{x\}}),f=\bigwedge\limits_{x \in X}\eta(\textbf{f(x)},\mathbf{N}(1_{\{x\}}))\)_, and_
* \(f=\bigvee\limits_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}( \textbf{f(x)}),1_{\{x\}}),f=\bigwedge\limits_{x\in X}\mathcal{I}_{\theta}( \mathbf{N}_{\mathcal{I}_{\theta}}(\textbf{f(x)}),\mathbf{N}_{\mathcal{I}_{ \theta}}(1_{\{x\}}))\)_._
**Proof:** (i) Let \(y\in X,f\in L^{X}\). Then
\[f(y) = \bigvee\limits_{x\in X}\theta(\textbf{f(x)},1_{\{x\}})(y)= \bigvee\limits_{x\in X}\theta(f(x),1_{\{x\}}(y))\] \[= \theta(f(y),1_{\{y\}}(y))\vee\bigvee\limits_{x\neq y\in X}\theta(f(x),1_{\{x\}}(y))\] \[= \theta(f(y),1)=f(y).\]
Thus \(f=\bigvee\limits_{x\in X}\theta(\textbf{f(x)},1_{\{x\}})\) and
\[f(y) = \bigwedge\limits_{x\in X}\eta(\textbf{f(x)},\mathbf{N}(1_{\{x\}}))(y)=\bigwedge\limits_{x\in X}\eta(f(x),\mathbf{N}(1_{\{x\}})(y))\] \[= \eta(f(y),\mathbf{N}(1_{\{y\}}(y)))\wedge\bigwedge\limits_{x\neq y\in X}\eta(f(x),\mathbf{N}(1_{\{x\}}(y)))\] \[= \eta(f(y),0)=f(y).\]
Thus \(f=\bigwedge\limits_{x\in X}\eta(\textbf{f(x)},\mathbf{N}(1_{\{x\}}))\).
(ii) Let \(y\in X,f\in L^{X}\). Then
\[f(y) = \bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}(\textbf{f(x)}),1_{\{x\}})(y)=\bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}(f(x)),1_{\{x\}}(y))\] \[= \mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}(f(y)),1_{\{y\}}(y))\vee\bigvee_{x\neq y\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}(f(x)),1_{\{x\}}(y))\] \[= \mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}(f(y)),1)=\mathbf{N}_{\mathcal{I}_{\eta}}(\mathbf{N}_{\mathcal{I}_{\eta}}(f(y)))=f(y).\]
Thus \(f=\bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}( \textbf{f(x)}),1_{\{x\}})\) and
\[f(y) = \bigwedge_{x\in X}\mathcal{I}_{\theta}(\mathbf{N}_{\mathcal{I}_{ \theta}}(\textbf{f(x)}),\mathbf{N}_{\mathcal{I}_{\theta}}(1_{\{x\}}))(y)= \bigwedge_{x\in X}\mathcal{I}_{\theta}(\mathbf{N}_{\mathcal{I}_{\theta}}(f(x) ),\mathbf{N}_{\mathcal{I}_{\theta}}(1_{\{x\}}(y)))\] \[= \mathcal{I}_{\theta}(\mathbf{N}_{\mathcal{I}_{\theta}}(f(y)), \mathbf{N}_{\mathcal{I}_{\theta}}(1_{\{y\}}(y)))\wedge\bigwedge_{x\neq y\in X }\mathcal{I}_{\theta}(\mathbf{N}_{\mathcal{I}_{\theta}}(f(x)),\mathbf{N}_{ \mathcal{I}_{\theta}}(1_{\{x\}}(y)))\] \[= \mathcal{I}_{\theta}(\mathbf{N}_{\mathcal{I}_{\theta}}(f(y)),0)= \mathbf{N}_{\mathcal{I}_{\theta}}(\mathbf{N}_{\mathcal{I}_{\theta}}(f(y)))=f(y).\]
Thus \(f=\bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(\mathbf{N}_{\mathcal{I}_{\theta}}(\textbf{f(x)}),\mathbf{N}_{\mathcal{I}_{\theta}}(1_{\{x\}}))\). Now, we have the following.
**Proposition 5.2**: _Let \(\theta\) be an overlap map \(L\). Then the following statements are equivalent:_
* \(\mathcal{U}_{\theta}=(X,Y,u,U_{\theta})\) _is an_ \(L\)_-fuzzy upper transformation system on_ \(X\) _determined by an overlap map_ \(\theta\) _and_ \(Y\subseteq X\)_._
* _There exists an_ \(L\)_-fuzzy partition_ \(\mathcal{P}\) _of_ \(X\) _indexed by_ \(Y\) _such that_ \(u(x)=y\) _iff_ \(x\in core(A_{y})\) _and_ \(U_{\theta}=F_{\mathcal{P}}^{\uparrow,\theta}\)_._
**Proof:** Let \(\mathcal{U}_{\theta}=(X,Y,u,U_{\theta})\) be an \(L\)-fuzzy upper transformation system on \(X\) determined by \(\theta\). Also, let \(\mathcal{P}=\{A_{y}:y\in Y\}\) such that for all \(y\in Y\), \(A_{y}\in L^{X}\) is given by \(A_{y}(x)=U_{\theta}[1_{\{x\}}](y)\), \(x\in X\). Now, from Definition 5.1(iii), \(A_{u(x)}(x)=U_{\theta}[1_{\{x\}}](u(x))=1\), or that, \(x\in core(A_{u(x)})\). Further, for \(y,z\in Y,t\in core(A_{y})\cap core(A_{z}),U_{\theta}[1_{\{t\}}](y)=1=U_{ \theta}[1_{\{t\}}](z)\), i.e., \(A_{y}(t)=1=A_{z}(t)\) iff \(y=u(t)=z\). Thus \(\{core(A_{y}):y\in Y\}\) is a partition of \(X\) and therefore \(\mathcal{P}\) is an \(L\)-fuzzy partition of \(X\). Now, for all \(y\in Y\) and \(f\in L^{X}\)
\[F_{\mathcal{P}}^{\uparrow,\theta}[f](y) = \bigvee_{x\in X}\theta(A_{y}(x),f(x))\] \[= \bigvee_{x\in X}\theta(U_{\theta}[1_{\{x\}}](y),f(x))\] \[= \bigvee_{x\in X}\theta(f(x),U_{\theta}[1_{\{x\}}](y))\] \[= \bigvee_{x\in X}U_{\theta}[\theta(\textbf{f(x)},1_{\{x\}})](y)\] \[= U_{\theta}[\bigvee_{x\in X}\theta(\textbf{f(x)},1_{\{x\}})](y)\] \[= U_{\theta}[f](y).\]
Thus \(U_{\theta}=F_{\mathcal{P}}^{\uparrow,\theta}\). Conversely, let \(\mathcal{P}=\{A_{y}\in L^{X}:y\in Y\}\) be an \(L\)-fuzzy partition of base set \(X\neq\emptyset\). Let us define a map \(u:X\to Y\) such that \(u(x)=y\) iff \(x\in core(A_{y})\). Further, let \(\theta\) be an overlap map with neutral element \(1\) and \(U_{\theta}=F_{\mathcal{P}}^{\uparrow,\theta}\). Then for all \(y\in Y,x\in X\), \(U_{\theta}[1_{\{x\}}](y)=F_{\mathcal{P}}^{\uparrow,\theta}[1_{\{x\}}](y)=\bigvee_{z\in X}\theta(A_{y}(z),\ 1_{\{x\}}(z))=\theta(A_{y}(x),1)=A_{y}(x)\). Thus \(U_{\theta}[1_{\{x\}}](y)=1\) iff \(A_{y}(x)=1\) iff \(u(x)=y\). From Propositions 3.10 and 3.11, \((X,Y,u,U_{\theta})\) is an \(L\)-fuzzy upper transformation system on \(X\) determined by \(\theta\).
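The construction used in this proof, \(A_{y}(x)=U_{\theta}[1_{\{x\}}](y)\), can also be checked numerically. The sketch below is an added illustration (assuming \(L=[0,1]\) and \(\theta=\min\)): it builds \(U_{\theta}=F_{\mathcal{P}}^{\uparrow,\theta}\) from a toy partition and then recovers the partition from the images of the crisp singletons.

```python
import numpy as np

rng = np.random.default_rng(1)
n_x, n_y = 10, 3
theta = np.minimum

# A toy L-fuzzy partition on [0,1] whose cores are pairwise disjoint singletons.
A = rng.uniform(0.0, 0.9, size=(n_y, n_x))
cores = rng.choice(n_x, size=n_y, replace=False)
A[:, cores] = 0.0
A[np.arange(n_y), cores] = 1.0

def U_theta(f):
    """The upper F-transform F_P^{up,theta}[f], one value per index y."""
    return theta(A, f).max(axis=1)

# Recover the partition from the transformation map: A_y(x) = U_theta[1_{x}](y).
recovered = np.empty_like(A)
for x in range(n_x):
    indicator = np.zeros(n_x)
    indicator[x] = 1.0                 # the crisp singleton 1_{x}
    recovered[:, x] = U_theta(indicator)

assert np.allclose(recovered, A)
print("A_y(x) = U_theta[1_{x}](y) recovers the original partition.")
```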
**Proposition 5.3**: _Let \(\mathcal{I}_{\eta}\) be an \(EP\)-co-residual implicator over \(L\) such that \(\mathbf{N}_{\mathcal{I}_{\eta}}\) is an involutive negator. Then the following statements are equivalent:_
* \(\mathcal{U}_{\mathcal{I}_{\eta}}=(X,Y,u,U_{\mathcal{I}_{\eta}})\) _is an_ \(L\)_-fuzzy upper transformation system on_ \(X\) _determined by a co-residual implicator_ \(\mathcal{I}_{\eta}\) _and_ \(Y\subseteq X\)_._
* _There exists an_ \(L\)_-fuzzy partition_ \(\mathcal{P}\) _of_ \(X\) _indexed by_ \(Y\) _such that_ \(u(x)=y\) _iff_ \(x\in core(A_{y})\) _and_ \(U_{\mathcal{I}_{\eta}}=F_{\mathcal{P}}^{\uparrow,\mathcal{I}_{\eta}}\)_._
**Proof:** Let \(\mathcal{U}_{\mathcal{I}_{\eta}}=(X,Y,u,U_{\mathcal{I}_{\eta}})\) be an \(L\)-fuzzy upper transformation system on \(X\) determined by \(\mathcal{I}_{\eta}\). Also, let \(\mathcal{P}=\{A_{y}:y\in Y\}\) such that for all \(y\in Y\), \(A_{y}\in L^{X}\) is given by \(A_{y}(x)=U_{\mathcal{I}_{\eta}}[1_{\{x\}}](y)\), \(x\in X\). Now, from Definition 5.1(iii), \(A_{u(x)}(x)=U_{\mathcal{I}_{\eta}}[1_{\{x\}}](u(x))=1\), or that, \(x\in core(A_{u(x)})\). Further, for \(y,z\in Y,t\in core(A_{y})\cap core(A_{z})\), \(U_{\mathcal{I}_{\eta}}[1_{\{t\}}](y)=1=U_{\mathcal{I}_{\eta}}[1_{\{t\}}](z)\), i.e., \(A_{y}(t)=1=A_{z}(t)\) iff \(y=u(t)=z\). Thus \(\{core(A_{y}):y\in Y\}\) is a partition of \(X\) and therefore \(\mathcal{P}\) is an \(L\)-fuzzy partition of \(X\). Now, for all \(y\in Y\) and \(f\in L^{X}\)
\[F_{\mathcal{P}}^{\uparrow,\mathcal{I}_{\eta}}[f](y) = \bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta }}(A_{y}(x)),f(x))\] \[= \bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta }}(A_{y}(x)),\mathbf{N}_{\mathcal{I}_{\eta}}(\mathbf{N}_{\mathcal{I}_{\eta}}(f( x))))\] \[= \bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta }}(f(x)),A_{y}(x))\] \[= \bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta }}(f(x)),U_{\mathcal{I}_{\eta}}[1_{\{x\}}](y))\] \[= \bigvee_{x\in X}U_{\mathcal{I}_{\eta}}[\mathcal{I}_{\eta}( \mathbf{N}_{\mathcal{I}_{\eta}}(\mathbf{f(x)}),1_{\{x\}})](y)\] \[= U_{\mathcal{I}_{\eta}}[\bigvee_{x\in X}\mathcal{I}_{\eta}( \mathbf{N}_{\mathcal{I}_{\eta}}(\mathbf{f(x)}),1_{\{x\}})](y)\] \[= U_{\mathcal{I}_{\eta}}[f](y).\]
Thus \(U_{\mathcal{I}_{\eta}}=F_{\mathcal{P}}^{\uparrow,\mathcal{I}_{\eta}}\). Conversely, let \(\mathcal{P}=\{A_{y}\in L^{X}:y\in Y\}\) be an \(L\)-fuzzy partition of base set \(X\neq\emptyset\). Let us define a map \(u:X\to Y\) such that \(u(x)=y\) iff \(x\in core(A_{y})\). Further, let \(\mathcal{I}_{\eta}\) be a co-residual implicator such that \(\mathbf{N}_{\mathcal{I}_{\eta}}(\cdot)=\mathcal{I}_{\eta}(\cdot,1)\) is an involutive negator and \(U_{\mathcal{I}_{\eta}}=F_{\mathcal{P}}^{\uparrow,\mathcal{I}_{\eta}}\). Then for all \(y\in Y,x\in X,\ U_{\mathcal{I}_{\eta}}[1_{\{x\}}](y)=F_{\mathcal{P}}^{\uparrow,\mathcal{I}_{\eta}}[1_{\{x\}}](y)=\bigvee_{z\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}(A_{y}(z)),1_{\{x\}}(z))=\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}(A_{y}(x)),1)=\mathbf{N}_{\mathcal{I}_{\eta}}(\mathbf{N}_{\mathcal{I}_{\eta}}(A_{y}(x)))=A_{y}(x)\). Thus \(U_{\mathcal{I}_{\eta}}[1_{\{x\}}](y)=1\) iff
\(A_{y}(x)=1\) iff \(u(x)=y\). From Propositions 3.10 and 3.11, \((X,Y,u,U_{\mathcal{I}_{\eta}})\) is an \(L\)-fuzzy upper transformation system on \(X\) determined by \(\mathcal{I}_{\eta}\).
**Proposition 5.4**: _Let \(\eta\) be an \(EP\)-grouping map with neutral element \(0\) over \(L\) such that \(\mathbf{N}\) be an involutive negator. Then the following statements are equivalent:_
* \(\mathcal{H}_{\eta}=(X,Y,v,H_{\eta})\) _is an_ \(L\)_-fuzzy lower transformation system on_ \(X\) _determined by_ \(\eta\) _and_ \(Y\subseteq X\)_._
* _There exists an_ \(L\)_-fuzzy partition_ \(\mathcal{P}\) _of_ \(X\) _indexed by_ \(Y\)_, such that_ \(v(x)=y\) _iff_ \(x\in core(A_{y})\) _and_ \(H_{\eta}=F_{\mathcal{P}}^{\downarrow,\eta}\)_._
**Proof:** Let \(\mathcal{H}_{\eta}=(X,Y,v,H_{\eta})\) be an \(L\)-fuzzy lower transformation system on \(X\) determined by \(\eta\). Also, let \(\mathcal{P}=\{A_{y}:y\in Y\}\) such that for all \(y\in Y\), \(A_{y}\in L^{X}\) is given by \(A_{y}(x)=\mathbf{N}(H_{\eta}[\mathbf{N}(1_{\{x\}})])(y)\), \(x\in X\). Now, from Definition 5.2(iii), \(A_{v(x)}(x)=(\mathbf{N}(H_{\eta}[\mathbf{N}(1_{\{x\}})]))(v(x))=1\), or that, \(x\in core(A_{v(x)})\). Further, for \(y,z\in Y,t\in core(A_{y})\cap core(A_{z}),(\mathbf{N}(H_{\eta}[\mathbf{N}(1_{ \{t\}})]))(y)=1=(\mathbf{N}(H_{\eta}[\mathbf{N}(1_{\{t\}})]))(z)\), i.e., \(A_{y}(t)=1=A_{z}(t)\) iff \(y=v(t)=z\). Thus \(\{core(A_{y}):y\in Y\}\) is a partition of \(X\) and therefore \(\mathcal{P}\) is an \(L\)-fuzzy partition of \(X\). Now, for all \(y\in Y\) and \(f\in L^{X}\)
\[F_{\mathcal{P}}^{\downarrow,\eta}[f](y) = \bigwedge_{x\in X}\eta(\mathbf{N}(A_{y}(x)),f(x))\] \[= \bigwedge_{x\in X}\eta(H_{\eta}[\mathbf{N}(1_{\{x\}})](y),f(x))\] \[= \bigwedge_{x\in X}\eta(f(x),H_{\eta}[\mathbf{N}(1_{\{x\}})](y))\] \[= \bigwedge_{x\in X}H_{\eta}[\eta(\textbf{f(x)},\mathbf{N}(1_{\{x \}}))](y)\] \[= H_{\eta}[\bigwedge_{x\in X}\eta(\textbf{f(x)},\mathbf{N}(1_{\{x \}}))](y)\] \[= H_{\eta}[f](y).\]
Thus \(H_{\eta}=F_{\mathcal{P}}^{\downarrow,\eta}\). Conversely, let \(\mathcal{P}=\{A_{y}\in L^{X}:y\in Y\}\) be an \(L\)-fuzzy partition of base set \(X\neq\emptyset\). Let us define a map \(v:X\to Y\) such that \(v(x)=y\) iff \(x\in core(A_{y})\). Further, let \(\eta\) be a grouping map with neutral element \(0\), \(\mathbf{N}\) be an involutive negator and \(H_{\eta}=F_{\mathcal{P}}^{\downarrow,\eta}\).Then for all \(y\in Y,x\in X\)
\[(\mathbf{N}(H_{\eta}[\mathbf{N}(1_{\{x\}})]))(y) = (\mathbf{N}(F_{\mathcal{P}}^{\downarrow,\eta}[\mathbf{N}(1_{\{x\}} )]))(y)\] \[= \mathbf{N}(F_{\mathcal{P}}^{\downarrow,\eta}[\mathbf{N}(1_{\{x \}})](y))\] \[= \mathbf{N}(\bigwedge_{z\in X}\eta(\mathbf{N}_{\eta}(A_{y}(z)),( \mathbf{N}(1_{\{x\}}))(z)))\] \[= \mathbf{N}(\bigwedge_{z\in X}\eta(\mathbf{N}(A_{y}(z)),\mathbf{N }(1_{\{x\}}(z))))\] \[= \mathbf{N}(\eta(\mathbf{N}(A_{y}(x)),0))\] \[= \mathbf{N}(\mathbf{N}(A_{y}(x)))\] \[= A_{y}(x).\]
Thus \(({\bf N}(H_{\eta}[{\bf N}(1_{\{x\}})]))(y)=1\) iff \(A_{y}(x)=1\) iff \(v(x)=y\). From Propositions 3.10 and 3.11, \((X,Y,v,H_{\eta})\) is an \(L\)-fuzzy lower transformation system on \(X\) determined by \(\eta\).
**Proposition 5.5**: _Let \({\cal I}_{\theta}\) be an \(EP\)-residual implicator over \(L\) such that \({\bf N}_{{\cal I}_{\theta}}\) is an involutive negator. Then the following statements are equivalent:_
* \({\cal H}_{{\cal I}_{\theta}}=(X,Y,v,H_{{\cal I}_{\theta}})\) _is an_ \(L\)_-fuzzy lower transformation system on_ \(X\) _determined by_ \({\cal I}_{\theta}\) _and_ \(Y\subseteq X\)_._
* _There exists an_ \(L\)_-fuzzy partition_ \({\cal P}\) _of_ \(X\) _indexed by_ \(Y\)_, such that_ \(v(x)=y\) _iff_ \(x\in core(A_{y})\) _and_ \(H_{{\cal I}_{\theta}}=F_{{\cal P}}^{\downarrow,{\cal I}_{\theta}}\)_._
**Proof:** Let \({\cal H}_{{\cal I}_{\theta}}=(X,Y,v,H_{{\cal I}_{\theta}})\) be an \(L\)-fuzzy lower transformation system on \(X\) determined by \({\cal I}_{\theta}\). Also, let \({\cal P}=\{A_{y}:y\in Y\}\) such that for all \(y\in Y\), \(A_{y}\in L^{X}\) is given by \(A_{y}(x)=({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})]))(y)\), \(x\in X\). Now, from Definition 5.2(iii), \(A_{v(x)}(x)=({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})]))(v(x))=1\), or that, \(x\in core(A_{v(x)})\). Further, for \(t\in core(A_{y})\cap core(A_{z})\), \(y,z\in Y\) and the fact that \({\bf N}_{{\cal I}_{\theta}}(x)={\cal I}_{\theta}(x,0)\), \(({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{t\}})]))(y)=1=({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{t\}})]))(z)\), i.e., \(A_{y}(t)=1=A_{z}(t)\) iff \(y=v(t)=z\). Thus \(\{core(A_{y}):y\in Y\}\) is a partition of \(X\) and therefore \({\cal P}\) is an \(L\)-fuzzy partition of \(X\). Now, for all \(y\in Y\) and \(f\in L^{X}\)
\[F_{{\cal P}}^{\downarrow,{\cal I}_{\theta}}[f](y) = \bigwedge_{x\in X}{\cal I}_{\theta}(A_{y}(x),f(x))\] \[= \bigwedge_{x\in X}{\cal I}_{\theta}(({\bf N}_{{\cal I}}(H_{{\cal I }_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})]))(y),f(x))\] \[= \bigwedge_{x\in X}{\cal I}_{\theta}({\bf N}_{{\cal I}_{\theta}}(H_ {{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})](y)),{\bf N}_{{ \cal I}_{\theta}}({\bf N}_{{\cal I}_{\theta}}(f(x))))\] \[= \bigwedge_{x\in X}{\cal I}_{\theta}({\bf N}_{{\cal I}_{\theta}}(f(x )),{\bf N}_{{\cal I}_{\theta}}({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{ \theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})](y))))\] \[= \bigwedge_{x\in X}{\cal I}_{\theta}(({\bf N}_{{\cal I}_{\theta}}(f ))(x),H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})](y))\] \[= \bigwedge_{x\in X}H_{{\cal I}_{\theta}}[{\bf\cal I}_{\theta}({\bf N }_{{\cal I}_{\theta}}({\bf f(x)}),{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}}))](y)\] \[= H_{{\cal I}_{\theta}}[\bigwedge_{x\in X}{\cal I}_{\theta}({\bf N }_{{\cal I}_{\theta}}({\bf f(x)}),{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}}))](y)\] \[= H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}({\bf N}_{{\cal I }_{\theta}}(f))](y)\] \[= H_{{\cal I}_{\theta}}[f](y).\]
Thus \(H_{{\cal I}_{\theta}}=F_{{\cal P}}^{\downarrow,{\cal I}_{\theta}}\). Conversely, let \({\cal P}=\{A_{y}\in L^{X}:y\in Y\}\) be an \(L\)-fuzzy partition of base set \(X\neq\emptyset\). Let us define a map \(v:X\to Y\) such that \(v(x)=y\) iff \(x\in core(A_{y})\). Further, let \({\cal I}_{\theta}\) be a residual implicator such that \({\bf N}_{{\cal I}_{\theta}}(\cdot)={\cal I}_{\theta}(\cdot,0)\) is
an involutive negator and \(H_{{\cal I}_{\theta}}=F_{\cal P}^{\downarrow,{\cal I}_{\theta}}\). Then for all \(y\in Y,x\in X\)
\[({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I }_{\theta}}(1_{\{x\}})]))(y) = ({\bf N}_{{\cal I}_{\theta}}(F_{\cal P}^{\downarrow,{\cal I}_{ \theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})]))(y)\] \[= {\bf N}_{{\cal I}_{\theta}}(F_{\cal P}^{\downarrow,{\cal I}_{ \theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})](y))\] \[= {\bf N}_{{\cal I}_{\theta}}(\bigwedge_{z\in X}{\cal I}_{\theta}(A _{y}(z),({\bf N}_{{\cal I}_{\theta}}(1_{\{x\}}))(z)))\] \[= {\bf N}_{{\cal I}_{\theta}}(\bigwedge_{z\in X}{\cal I}_{\theta}(A _{y}(z),{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}}(z))))\] \[= {\bf N}_{{\cal I}_{\theta}}({\cal I}_{\theta}(A_{y}(x),0))=A_{y}( x).\]
Thus \(({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1 _{\{x\}})]))(y)=1\) iff \(A_{y}(x)=1\) iff \(v(x)=y\). From Propositions 3.10 and 3.11, \((X,Y,v,H_{{\cal I}_{\theta}})\) is an \(L\)-fuzzy lower transformation system on \(X\) determined by \({\cal I}_{\theta}\).
Next, we have the following.
**Proposition 5.6**: _Let \(\theta\) and \(\eta\) be dual with respect to an involutive negator \({\bf N}\), \({\cal U}_{\theta}=(X,Y,u,U_{\theta})\) and \({\cal H}_{\eta}=(X,Y,u,H_{\eta})\) be \(L\)-fuzzy upper and lower transformation systems, respectively. Then there exists an \(L\)-fuzzy partition \({\cal P}\) such that \(U_{\theta}=F_{\cal P}^{\uparrow,\theta},\)\(H_{\eta}=F_{\cal P}^{\downarrow,\eta}\) iff for all \(f\in L^{X}\),_
* \(U_{\theta}[f]={\bf N}(H_{\eta}[{\bf N}(f)])\)_, i.e.,_ \({\bf N}(U_{\theta}[f])=H_{\eta}[{\bf N}(f)]\)_, and_
* \(H_{\eta}[f]={\bf N}(U_{\theta}[{\bf N}(f)])\)_, i.e.,_ \({\bf N}(H_{\eta}[f])=U_{\theta}[{\bf N}(f)]\)_._
**Proof:** From Proposition 3.1, it can be easily shown that conditions (i) and (ii) hold. Now, we only need to show the converse part. For this, let condition (i) hold. Further, let \(\{A_{1,y}:y\in Y\},\{A_{2,y}:y\in Y\}\subseteq L^{X}\) be such that \(A_{1,y}(x)=U_{\theta}[1_{\{x\}}](y)\), \(A_{2,y}(x)={\bf N}(H_{\eta}[{\bf N}(1_{\{x\}})])(y)\), \(\forall\,x\in X,y\in Y\). Then from Propositions 5.2 and 5.4, it is clear that \(\{A_{1,y}:y\in Y\},\{A_{2,y}:y\in Y\}\subseteq L^{X}\) are \(L\)-fuzzy partitions of \(X\) and \(U_{\theta}=F_{1,\cal P}^{\uparrow,\theta},H_{\eta}=F_{2,\cal P}^{\downarrow,\eta}\). Now, from condition (i), we have \(U_{\theta}[f]={\bf N}(H_{\eta}[{\bf N}(f)])={\bf N}(F_{2,\cal P}^{\downarrow,\eta}[{\bf N}(f)])=F_{2,\cal P}^{\uparrow,\theta}[f]\). Thus \(F_{1,\cal P}^{\uparrow,\theta}=F_{2,\cal P}^{\uparrow,\theta}\) and \(A_{1,y}=A_{2,y},\,\forall\,y\in Y\). Similarly, we can show the claim when condition (ii) holds.
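Conditions (i) and (ii) of Proposition 5.6 can also be observed numerically. The sketch below is an added illustration, assuming \(L=[0,1]\), the involutive negator \(\mathbf{N}(a)=1-a\), and the dual pair \(\theta=\min\), \(\eta=\max\).

```python
import numpy as np

rng = np.random.default_rng(2)
n_x, n_j = 8, 3
neg = lambda a: 1.0 - a                 # involutive negator N(a) = 1 - a
theta, eta = np.minimum, np.maximum     # min and max are dual with respect to N

# A toy L-fuzzy partition with disjoint cores.
A = rng.uniform(0.0, 0.9, size=(n_j, n_x))
cores = rng.choice(n_x, size=n_j, replace=False)
A[:, cores] = 0.0
A[np.arange(n_j), cores] = 1.0

U_theta = lambda f: theta(A, f).max(axis=1)      # F_P^{up,theta}
H_eta = lambda f: eta(neg(A), f).min(axis=1)     # F_P^{down,eta}

f = rng.uniform(size=n_x)
# Conditions (i) and (ii) of Proposition 5.6.
assert np.allclose(U_theta(f), neg(H_eta(neg(f))))
assert np.allclose(H_eta(f), neg(U_theta(neg(f))))
print("U_theta[f] = N(H_eta[N(f)]) and H_eta[f] = N(U_theta[N(f)]) hold here.")
```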
**Proposition 5.7**: _Let \(\theta\) and \(\eta\) be dual with respect to an involutive negator \({\bf N}\), \({\cal U}_{{\cal I}_{\eta}}=(X,Y,u,U_{{\cal I}_{\eta}})\) and \({\cal H}_{{\cal I}_{\theta}}=(X,Y,u,H_{{\cal I}_{\theta}})\) be \(L\)-fuzzy upper and lower transformation systems, respectively. Then there exists an \(L\)-fuzzy partition \({\cal P}\) such that \(F_{\cal P}^{\uparrow,{\cal I}_{\eta}}=U_{{\cal I}_{\eta}},F_{\cal P}^{\downarrow,{\cal I}_{\theta}}=H_{{\cal I}_{\theta}}\) iff for all \(f\in L^{X}\)_
* \(U_{{\cal I}_{\eta}}[f]={\bf N}(H_{{\cal I}_{\theta}}[{\bf N}(f)])\)_, i.e.,_ \({\bf N}(U_{{\cal I}_{\eta}}[f])=H_{{\cal I}_{\theta}}[{\bf N}(f)]\)_, and_
* \(H_{{\cal I}_{\theta}}[f]={\bf N}(U_{{\cal I}_{\eta}}[{\bf N}(f)])\)_, i.e.,_ \({\bf N}(H_{{\cal I}_{\theta}}[f])=U_{{\cal I}_{\eta}}[{\bf N}(f)]\)_._
**Proof:** Similar to that of Proposition 5.6.
**Proposition 5.8**: _Let \({\bf N}_{{\cal I}_{\theta}}\) be an involutive negator, \({\cal U}_{\theta}=(X,Y,u,U_{\theta})\) and \({\cal H}_{{\cal I}_{\theta}}=(X,Y,u,H_{{\cal I}_{\theta}})\) be \(L\)-fuzzy upper and lower transformation systems, respectively. Then there exists an \(L\)-fuzzy partition \({\cal P}\) such that \(F_{\cal P}^{\uparrow,\theta}=U_{\theta},F_{\cal P}^{\downarrow,{\cal I}_{\theta}}=H_{{\cal I}_{\theta}}\) iff for all \(f\in L^{X}\)_
* \(U_{\theta}[f]={\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I} _{\theta}}(f)])\)_, i.e.,_ \({\bf N}_{{\cal I}_{\theta}}(U_{\theta}[f])=H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I }_{\theta}}(f)]\)_, and_
* \(H_{{\cal I}_{\theta}}[f]={\bf N}_{{\cal I}_{\theta}}(U_{\theta}[{\bf N}_{{\cal I}_{\theta}}(f)])\)_, i.e.,_ \({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[f])=U_{\theta}[{\bf N}_{{\cal I}_{\theta}}(f)]\)_._
**Proof:** Similar to that of Proposition 5.7.
**Proposition 5.9**: _Let \({\bf N}_{{\cal I}_{\eta}}\) be an involutive negator, \({\cal U}_{{\cal I}_{\eta}}=(X,Y,u,U_{{\cal I}_{\eta}})\) and \({\cal H}_{\eta}=(X,Y,u,H_{\eta})\) be \(L\)-fuzzy upper and lower transformation systems, respectively. Then there exists an \(L\)-fuzzy partition \({\cal P}\) such that \(F^{\uparrow,{\cal I}_{\eta}}_{\cal P}=U_{{\cal I}_{\eta}},F^{\downarrow,\eta}_{\cal P}=H_{\eta}\) iff for all \(f\in L^{X}\)_
* \(U_{{\cal I}_{\eta}}[f]={\bf N}_{{\cal I}_{\eta}}(H_{\eta}[{\bf N}_{{\cal I}_{ \eta}}(f)])\)_, i.e.,_ \({\bf N}_{{\cal I}_{\eta}}(U_{{\cal I}_{\eta}}[f])=H_{\eta}[{\bf N}_{{\cal I}_{ \eta}}(f)]\)_, and_
* \(H_{\eta}[f]={\bf N}_{{\cal I}_{\eta}}(U_{{\cal I}_{\eta}}[{\bf N}_{{\cal I}_{\eta}}(f)])\)_, i.e.,_ \({\bf N}_{{\cal I}_{\eta}}(H_{\eta}[f])=U_{{\cal I}_{\eta}}[{\bf N}_{{\cal I}_{\eta}}(f)]\)_._
**Proof:** Similar to that of Proposition 5.7.
## 6 Concluding remarks
In this contribution, we have presented the theory of direct \(F\)-transforms determined by overlap and grouping maps, residual and co-residual implicators from both constructive and axiomatic approaches. In which, \(F^{\uparrow,\theta},F^{\downarrow,\eta},F^{\downarrow,{\cal I}_{\theta}}\) are the extension of the direct \(F\)-transforms introduced in [22, 25, 34] and \(F^{\uparrow,{\cal I}_{\eta}}\) is a new definition. The main contributions of this paper are listed as follows.
* We have shown the duality of the proposed direct \(F\)-transform and established a connection among these direct \(F\)-transforms. In addition, we have discussed the basic results of these direct \(F\)-transforms.
* We have introduced the idea of the inverse of these \(F\)-transforms. Further, we have shown that the original \(L\)-fuzzy set and inverse of these \(F\)-transform have the same \(F\)-transform under certain conditions.
* Further, we have shown an axiomatic characterization of the proposed direct \(F\)-transforms.
* Finally, the duality of \(L\)-fuzzy transformation systems has been examined.
Both the theories viz., theory of \(F\)-transforms and the theory of overlap and grouping maps have already shown to be helpful in practical applications. Accordingly, combining both ideas may provide us with new applications in data analysis and image processing problems.
|
2309.16208 | Low-rank tensor completion via tensor joint rank with logarithmic
composite norm | Low-rank tensor completion (LRTC) aims to recover a complete low-rank tensor
from incomplete observed tensor, attracting extensive attention in various
practical applications such as image processing and computer vision. However,
current methods often perform well only when there is a sufficient amount of observed
information, and they perform poorly or may fail when the observed information
is less than 5\%. In order to improve the utilization of observed information,
a new method called the tensor joint rank with logarithmic composite norm
(TJLC) method is proposed. This method simultaneously exploits two types of
tensor low-rank structures, namely tensor Tucker rank and tubal rank, thereby
enhancing the inherent correlations between known and missing elements. To
address the challenge of directly applying two significantly different tensor ranks
to LRTC, a new tensor Logarithmic composite norm is further proposed.
Subsequently, the TJLC model and algorithm for the LRTC problem are proposed.
Additionally, theoretical convergence guarantees for the TJLC method are
provided. Experiments on various real datasets demonstrate that the proposed
method outperforms state-of-the-art methods significantly. Particularly, the
proposed method achieves satisfactory recovery even when the observed
information is as low as 1\%, and the recovery performance improves
significantly as the observed information increases. | Hongbing Zhang | 2023-09-28T07:17:44Z | http://arxiv.org/abs/2309.16208v2 | # Nonconvex third-order Tensor Recovery Based on Logarithmic Minimax Function
###### Abstract
Recent researches have shown that low-rank tensor recovery based non-convex relaxation has gained extensive attention. In this context, we propose a new Logarithmic Minimax (LM) function. The comparative analysis between the LM function and the Logarithmic, Minimax concave penalty (MCP), and Minimax Logarithmic concave penalty (MLCP) functions reveals that the proposed function can protect large singular values while imposing stronger penalization on small singular values. Based on this, we define a weighted tensor LM norm as a non-convex relaxation for tensor tubal rank. Subsequently, we propose the TLM-based low-rank tensor completion (LRTC) model and the TLM-based tensor robust principal component analysis (TRPCA) model respectively. Furthermore, we provide theoretical convergence guarantees for the proposed methods. Comprehensive experiments were conducted on various real datasets, and a comparison analysis was made with the similar EMLCP method. The results demonstrate that the proposed method outperforms the state-of-the-art methods.
keywords: Tensor recovery, Logarithmic Minimax (LM) function, low-rank tensor completion (LRTC), tensor robust principal component analysis (TRPCA). +
Footnote †: journal:
## 1 Introduction
As the dimensionality of real data increases and its structure becomes more complex, tensors, as high-order generalizations of vectors and matrices, have received widespread attention from researchers. Currently, tensors play an increasingly important role in various applications such as image/video processing [1; 2], hyperspectral/multispectral image (HSI/MSI) processing [3; 4], background subtraction [5; 6], and magnetic resonance imaging (MRI) data recovery [7; 8].
In general, tensor recovery problems can be described as the process of reconstructing the original tensor from partially observed or corrupted data, which involves solving the problems of low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA). Their corresponding models are as follows:
\[\min_{\mathcal{X}}\,rank(\mathcal{X})\;s.t.\;\mathcal{P}_{\Omega}(\mathcal{T} )=\mathcal{P}_{\Omega}(\mathcal{X}), \tag{1}\]
\[\min_{\mathcal{X},\mathcal{E}}\,rank(\mathcal{X})+\tau_{1}\|\mathcal{E}\|_{1}\,\,s.t. \,\,\mathcal{T}=\mathcal{X}+\mathcal{E}, \tag{2}\]
where \(\mathcal{T}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) is the observed tensor; \(\mathcal{X}\) is the original tensor; \(\mathcal{E}\) is the sparse tensor; \(\mathcal{P}_{\Omega}(\mathcal{X})\) is a projection operator that keeps the entries of \(\mathcal{X}\) in \(\Omega\) and sets all others to zero. A crucial issue of tensor recovery is how to define the tensor rank. Different from the matrix rank, the definition of tensor rank is not unique. The mainstream definitions of tensor rank are the CANDECOMP/PARAFAC (CP) rank based on CP decomposition [9], Tucker rank based on Tucker decomposition [10], and tubal rank [11] induced by tensor singular value decomposition (t-SVD) [12]. Nevertheless, directly solving the CP rank of the given tensor is NP-hard [13]. The calculation of Tucker rank requires data to be folded and unfolded, which will cause structural damage to data. Compared with CP rank and Tucker rank, the tubal rank can better maintain the data structure. Therefore, the LRTC problem is mostly based on tubal rank. Subsequently, Zhang et al. [14] defined the tensor nuclear norm (TNN) based on t-SVD and tensor tubal rank to solve the LRTC problem and obtained the most advanced tensor recovery results. Moreover, Lu et al. [15] performed TRPCA with TNN based on t-SVD by using Fourier transform.
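As a small added illustration of the observation model in (1), the projection operator \(\mathcal{P}_{\Omega}\) can be realized with a boolean mask; the snippet below is a generic sketch and is not tied to any particular solver.

```python
import numpy as np

def P_Omega(X, mask):
    """Keep the entries of X indexed by Omega (a boolean mask); set all others to zero."""
    return np.where(mask, X, 0.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4, 3))            # a toy "original" tensor
mask = rng.uniform(size=X.shape) < 0.3        # roughly 30% of the entries are observed
T_observed = P_Omega(X, mask)                 # the data an LRTC solver would receive
```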
In tensor recovery, larger singular values typically correspond to significant information such as contours, sharp edges, and smooth regions, while smaller singular values are mainly composed of noise or outliers [16]. For convex relaxation, although it is easier to solve, it will produce biased estimates [17]. The TNN method, as a convex relaxation with a uniform penalty on all singular values, may excessively penalize large singular values, resulting in suboptimal tensor recovery. Therefore, breaking away from convex relaxation methods with biased estimation constraints and adopting non-convex relaxation methods is crucial for further improving the accuracy of tensor recovery. Non-convex methods can impose smaller penalties on larger singular values and greater penalties on smaller singular values. Recently, the Logarithmic function [18, 19] and Minimax concave penalty (MCP) function [20] as non-convex relaxation has achieved good results in the tensor recovery problems. The Logarithmic function happens to be better able to penalize small singular values, but it is deficient in dealing with large singular values. The MCP function can better protect the large singular values, but its penalty for small singular values is weak. In order to break the limitation of the Logarithmic function and MCP function, Zhang et al.[21] proposed the Minimax Logarithmic Concave Penalty (MLCP) function. The MLCP function can both protect the large singular values well and also impose a strong penalty on the small singular values.
However, directly applying the MLCP function to tensor recovery problems may result in the inability to obtain explicit solutions, which is highly unfavorable for algorithmic solutions. Additionally, it is challenging to improve the penalty on small singular values while protecting large singular values. To overcome the limitations of the MLCP function's inability to be directly solved and further enhance
the penalty on small singular values, we propose the Logarithmic Minimax (LM) function in this paper. The LM function not only possesses the property of protecting large singular values like the MLCP function but also has stronger penalties on small singular values. Furthermore, we propose the proximal operator for the LM function theorem, which ensures that the LM function can directly obtain its explicit solution in tensor recovery problems, which the MLCP function lacks. Based on this, we further propose the weighted tensor \(LM\)-norm, which improves the flexibility of the LM function in handling different singular values of tensors.
The main contributions of this paper are summarized below:
Firstly, we propose the LM function, a new non-convex function. It possesses the property of protecting large singular values and has stronger penalties on small singular values compared to the MLCP function. In this paper, the introduction of the Proximal operator for the LM function theorem guarantees the direct solvability of the LM function in tensor recovery problems. Based on this, the proposed weighted tensor \(LM\)-norm further enhances the flexibility of the LM function in handling tensor singular values.
Secondly, we construct the TLM-based LRTC model and TLM-based TRPCA model for two typical tensor recovery problems, namely LRTC problem and TRPCA problem. The solution algorithms for the models are provided using the alternating direction multipliers method (ADMM). Furthermore, we prove that the proposed methods have the convergence guarantees under some assumptions.
Thirdly, we conduct experiments on various real-world datasets to evaluate the proposed methods. The LRTC experiments on MSI, MRI, and video, as well as the TRPCA experiment on HSI denoising, demonstrate the superior performance of the proposed methods. Additionally, we compare the proposed methods with the EMLCP method based on the MLCP function, validating that the LM function outperforms the MLCP function as a non-convex relaxation in tensor recovery.
The summary of this article is as follows: In Section 2, some preliminary knowledge and background of the tensors are given. The LM function and its properties and corresponding theorems are presented in Section 3. The main results, including the proposed models and algorithms, are shown in Section 4. Then, in section 5, we study the convergence of the proposed methods. The results of extensive experiments and discussions are presented in Section 6. Conclusions are drawn in Section 7.
## 2 Preliminaries
In this section, we list some basic notations and briefly introduce some definitions used throughout the paper. Generally, a lowercase letter and an uppercase letter denote a vector \(x\) and a matrix \(X\), respectively. A third-order tensor is denoted by a calligraphic uppercase letter \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) and \(\mathcal{X}_{i_{1},i_{2},i_{3}}\) is its \((i_{1},i_{2},i_{3})\)-th element. The Frobenius norm of a tensor is defined as
\(\|\mathcal{X}\|_{F}:=(\sum_{i_{1},i_{2},i_{3}}\mathcal{X}_{i_{1},i_{2},i_{3}}^{2})^{1/2}\). For a third-order tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), the frontal slice \(\mathcal{X}(:,:,i)\) is denoted compactly as \(\mathcal{X}^{(i)}\), and \(\mathcal{X}(i_{1},i_{2},:)\) denotes the \((i_{1},i_{2})\)-th tube of \(\mathcal{X}\). \(\bar{\mathcal{X}}\) represents the fast Fourier transform (FFT) along the third dimension of the tensor \(\mathcal{X}\), i.e., \(\bar{\mathcal{X}}=fft(\mathcal{X},[],3)\), and \(\mathcal{X}=ifft(\bar{\mathcal{X}},[],3)\). For a third-order tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), the block circulation operation is defined as
\[bcirc(\mathcal{X}):=\begin{pmatrix}\mathcal{X}^{(1)}&\mathcal{X}^{(I_{3})}& \ldots&\mathcal{X}^{(2)}\\ \mathcal{X}^{(2)}&\mathcal{X}^{(1)}&\ldots&\mathcal{X}^{(3)}\\ \vdots&\vdots&\ddots&\vdots\\ \mathcal{X}^{(I_{3})}&\mathcal{X}^{(I_{3}-1)}&\ldots&\mathcal{X}^{(1)}\end{pmatrix} \in\mathbb{R}^{I_{1}I_{3}\times I_{2}I_{3}}.\]
The block diagonalization operation and its inverse operation are respectively determined by
\[bdiag(\mathcal{X}):=\begin{pmatrix}\mathcal{X}^{(1)}&&&&\\ &\mathcal{X}^{(2)}&&&\\ &&\ddots&\\ &&&\mathcal{X}^{(I_{3})}\end{pmatrix}\in\mathbb{R}^{I_{1}I_{3}\times I_{2}I_{3 }},\quad bdfold(bdiag(\mathcal{X})):=\mathcal{X}.\]
The block vectorization operation and its inverse operation are respectively defined as
\[bvec(\mathcal{X}):=\begin{pmatrix}\mathcal{X}^{(1)}\\ \mathcal{X}^{(2)}\\ \vdots\\ \mathcal{X}^{(I_{3})}\end{pmatrix}\in\mathbb{R}^{I_{1}I_{3}\times I_{2}}, \quad bvfold(bvec(\mathcal{X})):=\mathcal{X}.\]
**Definition 1** (t-product [12]).: Let \(\mathcal{A}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) and \(\mathcal{B}\in\mathbb{R}^{I_{2}\times J\times I_{3}}\). Then the t-product \(\mathcal{A}*\mathcal{B}\) is defined to be a tensor of size \(I_{1}\times J\times I_{3}\),
\[\mathcal{A}*\mathcal{B}:=bvfold(bcirc(\mathcal{A})bvec(\mathcal{B})).\]
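For concreteness, the block operations and the t-product of Definition 1 can be implemented literally as follows. This is an added sketch; in practice the t-product is usually evaluated more efficiently in the Fourier domain.

```python
import numpy as np

def bcirc(X):
    """Block-circulant matrix built from the frontal slices of X (I1 x I2 x I3)."""
    I1, I2, I3 = X.shape
    M = np.zeros((I1 * I3, I2 * I3))
    for i in range(I3):                 # block row
        for j in range(I3):             # block column
            M[i * I1:(i + 1) * I1, j * I2:(j + 1) * I2] = X[:, :, (i - j) % I3]
    return M

def bvec(X):
    """Stack the frontal slices of X vertically (the bvec operation)."""
    return np.concatenate([X[:, :, i] for i in range(X.shape[2])], axis=0)

def bvfold(M, I1, I3):
    """Inverse of bvec: refold a stacked matrix into an I1 x (cols) x I3 tensor."""
    return np.stack([M[i * I1:(i + 1) * I1, :] for i in range(I3)], axis=2)

def t_product(A, B):
    """t-product A * B of A (I1 x I2 x I3) and B (I2 x J x I3), per Definition 1."""
    I1, _, I3 = A.shape
    return bvfold(bcirc(A) @ bvec(B), I1, I3)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 5))
B = rng.standard_normal((4, 2, 5))
C = t_product(A, B)                      # C has size 3 x 2 x 5
```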
The **tensor conjugate transpose** of a tensor \(\mathcal{A}\in\mathbb{C}^{I_{1}\times I_{2}\times I_{3}}\) is the tensor \(\mathcal{A}^{H}\in\mathbb{C}^{I_{2}\times I_{1}\times I_{3}}\) obtained by conjugate transposing each of the frontal slices and then reversing the order of transposed frontal slices 2 through \(I_{3}\). The **identity tensor**\(\mathcal{I}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) is the tensor whose first frontal slice is the \(I_{1}\times I_{1}\) identity matrix, and whose other frontal slices are all zeros. It is clear that \(bcirc(\mathcal{I})\) is the \(I_{1}I_{3}\times I_{1}I_{3}\) identity matrix. So it is easy to get \(\mathcal{A}*\mathcal{I}=\mathcal{A}\) and \(\mathcal{I}*\mathcal{A}=\mathcal{A}\). A tensor \(\mathcal{Q}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) is **orthogonal tensor** if it satisfies \(\mathcal{Q}*\mathcal{Q}^{H}=\mathcal{Q}^{H}*\mathcal{Q}=\mathcal{I}.\) A tensor is called **f-diagonal** if each of its frontal slices is a diagonal matrix.
**Theorem 1** (t-SVD [15]).: _Let \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) be a third-order tensor, then it can be factored as_
\[\mathcal{X}=\mathcal{U}*\mathcal{S}*\mathcal{V}^{H},\]
_where \(\mathcal{U}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) and \(\mathcal{V}\in\mathbb{R}^{I_{2}\times I_{2}\times I_{3}}\) are orthogonal tensors, and \(\mathcal{S}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) is an f-diagonal tensor._
**Definition 2** (Tensor tubal-rank [11]): _The tubal-rank of a tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), denoted as \(rank_{t}(\mathcal{X})\), is defined to be the number of non-zero singular tubes of \(\mathcal{S}\), where \(\mathcal{S}\) comes from the t-SVD of \(\mathcal{X}\): \(\mathcal{X}=\mathcal{U}\ast\mathcal{S}\ast\mathcal{V}^{H}\). That is_
\[rank_{t}(\mathcal{X})=\#\{i:\mathcal{S}(i,:,:)\neq 0\}. \tag{3}\]
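Since the t-SVD in Theorem 1 is computed slice by slice in the Fourier domain, the singular values needed by Definition 2 (and by the norms defined below) can be collected directly from the FFT slices. A small NumPy sketch, with helper names of our own choosing:

```python
import numpy as np

def t_svd_singular_values(X):
    """sigma[j, i] = j-th singular value of the i-th frontal slice of fft(X, axis=2)."""
    I1, I2, I3 = X.shape
    Xf = np.fft.fft(X, axis=2)
    sigma = np.zeros((min(I1, I2), I3))
    for i in range(I3):
        sigma[:, i] = np.linalg.svd(Xf[:, :, i], compute_uv=False)
    return sigma

def tubal_rank(X, tol=1e-10):
    """Tubal rank (3): the number of non-zero singular tubes of S."""
    sigma = t_svd_singular_values(X)
    # because each slice's singular values are sorted, this equals the maximum
    # rank among the Fourier-domain frontal slices
    return int(np.max(np.sum(sigma > tol, axis=0)))
```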
**Definition 3** (Tensor nuclear norm (TNN) [14]): _The tensor nuclear norm of a tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), denoted as \(\|\mathcal{X}\|_{TNN}\), is defined as the sum of the singular values of all the frontal slices of \(\bar{\mathcal{X}}\), i.e.,_
\[\|\mathcal{X}\|_{TNN}:=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\|\bar{\mathcal{X}}^{( i)}\|_{\ast} \tag{4}\]
_where \(\bar{\mathcal{X}}^{(i)}\) is the \(i\)-th frontal slice of \(\bar{\mathcal{X}}\), with \(\bar{\mathcal{X}}=fft(\mathcal{X},[\,],3)\)._
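Reusing the `t_svd_singular_values` helper sketched above, the TNN of Definition 3 is simply the average over the frontal slices of the Fourier-domain nuclear norms; a short illustrative sketch:

```python
def tensor_nuclear_norm(X):
    # Definition 3: (1/I3) * sum over FFT slices of each slice's nuclear norm
    return t_svd_singular_values(X).sum() / X.shape[2]
```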
## 3 Logarithmic Minimax (LM) Function
In this section, we first give the definition of the Logarithmic Minimax (LM) function.
**Definition 4** (Logarithmic Minimax (LM) function): _Let \(\lambda>0,\gamma>0,\varepsilon>0\). The LM function \(f_{LM}:\mathbb{R}\rightarrow\mathbb{R}_{+}\) is defined as_
\[f_{LM}(x)=\left\{\begin{array}{ll}\log\left(\frac{\lambda|x|-\frac{|x|^{2}}{ 2\gamma}}{\varepsilon}+1\right),&|x|\leqslant\lambda\gamma,\\ \log\left(\frac{\gamma\lambda^{2}}{2\varepsilon}+1\right),&|x|>\lambda\gamma. \end{array}\right. \tag{5}\]
_where \(\mathbb{R}_{+}\) denotes the domain of non-negative real numbers._
The LM function is a symmetric function, so we only discuss its functional properties on \([0,+\infty)\).
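As a quick illustration, the piecewise definition (5) can be evaluated directly; below is a short NumPy sketch (vectorized over \(x\); the parameter names mirror \(\lambda,\gamma,\varepsilon\) and are our own choice):

```python
import numpy as np

def f_lm(x, lam, gamma, eps):
    """Logarithmic Minimax function (5), applied elementwise to |x|."""
    x = np.abs(np.asarray(x, dtype=float))
    inner = np.where(x <= lam * gamma,
                     (lam * x - x ** 2 / (2.0 * gamma)) / eps + 1.0,
                     gamma * lam ** 2 / (2.0 * eps) + 1.0)
    return np.log(inner)
```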
**Proposition 1**: _The LM function defined in (5) satisfies the following properties: **(a)**: \(f_{LM}(x)\) is continuous, smooth and_
\[f_{LM}(0)=0,\lim_{x\rightarrow+\infty}\frac{f_{LM}(x)}{x}=0;\]
_(b): \(f_{LM}(x)\) is monotonically non-decreasing and concave on \([0,+\infty)\); **(c)**: \(f^{\prime}_{LM}(x)\) is non-negative and monotonically non-increasing on \([0,+\infty)\). Moreover, it is Lipschitz bounded, i.e., there exists a constant \(L(\ell)\) such that_
\[|f^{\prime}_{LM}(x)-f^{\prime}_{LM}(y)|\leq L(\ell)|x-y|;\]
_(d): Especially, for the LM function, it is increasing in parameter \(\gamma\), and_
\[\lim_{\gamma\rightarrow+\infty}f_{LM}(x)=\log(\frac{\lambda|x|}{\varepsilon} +1). \tag{6}\]
Proof. **(a)**: \(\lim_{x\rightarrow\lambda\gamma^{-}}f_{LM}(x)=\lim_{x\rightarrow\lambda\gamma^{+}}f_{LM}(x)=\log\left(\frac{\gamma\lambda^{2}}{2\varepsilon}+1\right)\) and the one-sided derivatives satisfy \(f^{\prime}_{LM}(\lambda\gamma^{-})=f^{\prime}_{LM}(\lambda\gamma^{+})=0\), thus \(f_{LM}\) is continuous and smooth. Finally, the conclusions \(f_{LM}(0)=0\) and \(\lim\limits_{x\rightarrow+\infty}\frac{f_{LM}(x)}{x}=0\) are easily verified from the formulas in (5).
**(b)**: This conclusion is direct from its first order and second order derivative function. Its first order and second order derivative functions are as follows:
\[\begin{array}{l}f^{\prime}_{LM}(x)=\left\{\begin{array}{l}\frac{2\lambda \gamma-2|x|}{2\lambda\gamma|x|-|x|^{2}+2\gamma\varepsilon},|x|\leqslant\lambda \gamma,\\ 0,\hskip 28.452756pt|x|>\lambda\gamma.\end{array}\right.\\ f^{\prime\prime}_{LM}(x)=\left\{\begin{array}{l}\frac{-2((|x|-\lambda\gamma )^{2}+\lambda^{2}\gamma^{2}+2\gamma\varepsilon)}{(2\lambda\gamma|x|-|x|^{2}+2 \gamma\varepsilon)^{2}},|x|\leqslant\lambda\gamma,\\ 0,\hskip 28.452756pt|x|>\lambda\gamma.\end{array}\right.\end{array} \tag{7}\]
We can see that the first-order derivative is non-negative and the second-order derivative is non-positive; thus \(f_{LM}(x)\) is concave and monotonically non-decreasing on \([0,+\infty)\).
**(c)**: The non-negativity and monotonicity of \(f^{\prime}_{LM}(x)\) follow directly from the formulas presented in (7). Next, we verify that it is Lipschitz bounded. The proof relies mainly on the fact that \(f^{\prime\prime}_{LM}(x)\leqslant 0\) is bounded on \((0,+\infty)\). Thus there exists a constant \(L(\ell):=\sup_{x\in(0,+\infty)}|f^{\prime\prime}_{LM}(x)|\) such that, for any \(x,y\in(0,+\infty)\), we have
\[|f^{\prime}_{LM}(x)-f^{\prime}_{LM}(y)|\leq L(\ell)|x-y|.\]
**(d)**: Consider \(f_{LM}(x)\) as a function of \(\gamma\) when \(x\), \(\lambda\) and \(\varepsilon\) are fixed; then its derivative with respect to \(\gamma\) is computed as follows:
\[\left\{\begin{array}{ll}\frac{\lambda^{2}}{\lambda^{2}\gamma+2\varepsilon}, &\gamma<\frac{|x|}{\lambda},\\ \frac{|x|^{2}}{2\gamma^{2}\lambda|x|-\gamma|x|^{2}+2\gamma^{2}\varepsilon},& \gamma\geqslant\frac{|x|}{\lambda}.\end{array}\right. \tag{9}\]
This demonstrates that the LM function is increasing in \(\gamma\), since this derivative is non-negative. Note that as \(\gamma\rightarrow+\infty\),
\[\log\left(\frac{\lambda|x|-\frac{|x|^{2}}{2\gamma}}{\varepsilon}+1\right) \rightarrow\log\left(\frac{\lambda|x|}{\varepsilon}+1\right).\]
Then the limit results follow easily. This completes the proof.
From Fig. 1, it can be found that the LM function has an extremely strong similarity with the MCP function and MLCP function. Furthermore, it can be observed from Fig. 1 that the LM function yields a smaller value under the same set of parameters. This indicates that the LM function possesses the property of preserving large singular values and imposing stronger penalization on small singular values.
**Definition 5** (Tensor \(LM\)-norm).: The tensor \(LM\)-norm of \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), denoted by \(\|\mathcal{X}\|_{LM}\), is defined as follows:
\[\|\mathcal{X}\|_{LM}=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\|\bar{ \mathcal{X}}^{(i)}\|_{LM}=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\sum_{j=1}^{R}f_{LM} (\sigma_{j}(\bar{\mathcal{X}}^{(i)})). \tag{10}\]
where \(R=\min(I_{1},I_{2})\).
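Combining the two sketches above (the Fourier-domain singular values and `f_lm`), the tensor \(LM\)-norm (10) can be evaluated as follows; this is only an illustrative implementation, not the authors' code:

```python
def tensor_lm_norm(X, lam, gamma, eps):
    # Definition 5: apply f_LM to every Fourier-domain singular value, average over slices
    sigma = t_svd_singular_values(X)           # shape (R, I3)
    return f_lm(sigma, lam, gamma, eps).sum() / X.shape[2]
```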
Unlike the tensor nuclear norm penalty, the tensor \(LM\)-norm (10) does not satisfy the triangle inequality. Some vital properties of the tensor \(LM\)-norm are given below.
**Proposition 2**.: _For \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), the Tensor \(LM\)-norm is defined in (10) satisfies the following properties:_
_(a) Non-negativity_: _The Tensor_ \(LM\)_-norm is non-negative, i.e.,_ \(\|\mathcal{X}\|_{LM}\geqslant 0\)_. The equality holds if and only if_ \(\mathcal{X}\) _is the null tensor._
_(b) Concavity_: \(\|\mathcal{X}\|_{LM}\) _is concave in the modulus of the elements of_ \(\mathcal{X}\)_._
_(c) Orthogonal invariance_: _The Tensor_ \(LM\)_-norm is orthogonal invariant, i.e.,_ \(\|\mathcal{U}*\mathcal{X}*\mathcal{V}^{H}\|_{LM}=\|\mathcal{X}\|_{LM}\)_, for any orthogonal tensor_ \(\mathcal{U}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) _and_ \(\mathcal{V}\in\mathbb{R}^{I_{2}\times I_{2}\times I_{3}}\)_._
Proof.: Let \(p(\mathcal{X})=\|\mathcal{X}\|_{LM}=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\sum_{j=1 }^{R}f_{LM}(\sigma_{j}(\bar{\mathcal{X}}^{(i)}))\).
(a) Since \(p(\mathcal{X})\) is the sum of non-negative functions, \(\|\mathcal{X}\|_{LM}\geqslant 0\). The equality holds if \(\mathcal{X}=\mathbf{0}\).
(b) The function \(p(\mathcal{X})\) is separable over the singular values of \(\mathcal{X}\), i.e.,
\[p(\mathcal{X})=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\sum_{j=1}^{R}f_{LM}(\sigma_{j}(\bar{\mathcal{X}}^{(i)})).\]
Since \(f_{LM}\) is concave in \(\sigma_{j}(\bar{\mathcal{X}}^{(i)}),\) we can write, for \(0\leqslant\alpha\leqslant 1,\) that
\[\|(\alpha\mathcal{X}_{1}+(1-\alpha)\mathcal{X}_{2})\|_{LM}\] \[=p(\alpha\mathcal{X}_{1}+(1-\alpha)\mathcal{X}_{2})\] \[\geqslant\alpha p(\mathcal{X}_{1})+(1-\alpha)p(\mathcal{X}_{2})\] \[=\alpha\|\mathcal{X}_{1}\|_{LM}+(1-\alpha)\|\mathcal{X}_{2}\|_{ LM}.\]
Hence, \(\|\mathcal{X}\|_{LM}\) is concave in the modulus of the singular values of \(\mathcal{X}.\)
(c) Suppose \(\mathcal{X}\) has t-SVD \(\mathcal{P}*\mathcal{S}*\mathcal{Q}^{H},\) where \(\mathcal{P},\mathcal{Q}\) are orthogonal and \(\mathcal{S}\) is f-diagonal, we have
\[\mathcal{U}*\mathcal{X}*\mathcal{V}^{H}=\mathcal{U}*\mathcal{P}*\mathcal{S}* \mathcal{Q}^{H}*\mathcal{V}^{H}=(\mathcal{U}*\mathcal{P})*\mathcal{S}*( \mathcal{V}*\mathcal{Q})^{H}.\]
Since
\[(\mathcal{U}*\mathcal{P})*(\mathcal{U}*\mathcal{P})^{H}=\mathcal{U}*\mathcal{ P}*\mathcal{P}^{H}*\mathcal{U}^{H}=\mathcal{I},\]
then \(\mathcal{U}*\mathcal{P}\) is an orthogonal tensor. The same is true for \(\mathcal{V}*\mathcal{Q}\). Thus \((\mathcal{U}*\mathcal{P})*\mathcal{S}*(\mathcal{V}*\mathcal{Q})^{H}\) is the t-SVD of \(\mathcal{U}*\mathcal{X}*\mathcal{V}^{H}\). Therefore,
\[\|\mathcal{U}*\mathcal{X}*\mathcal{V}^{H}\|_{LM}=\frac{1}{I_{3}}\sum_{i=1}^{I_ {3}}\sum_{j=1}^{R}f_{LM}(\sigma_{j}(\bar{\mathcal{X}}^{(i)}))=\|\mathcal{X}\| _{LM}.\]
**Definition 6** (Weighted tensor \(LM\)-norm).: The weighted tensor \(LM\)-norm of \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}},\) denoted by \(\|\mathcal{X}\|_{\omega-LM},\) is defined as follows:
\[\|\mathcal{X}\|_{\omega-LM}=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\|\bar{\mathcal{X}}^{(i)}\|_{\omega-LM}=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\sum_{j=1}^{R}\omega_{j,i}f_{LM}(\sigma_{j}(\bar{\mathcal{X}}^{(i)})). \tag{11}\]
where \(R=\min(I_{1},I_{2}).\)
**Theorem 2** (Proximal operator for the LM function).: _Consider the LM function given in (5). Its proximal operator, denoted by \(S_{LM}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) with \(\lambda>0,\gamma>0,\varepsilon>0\) and \(\rho>0\), is defined as follows:_
\[S_{LM}(y)=\arg\min_{x}\{\frac{\rho}{2}(x-y)^{2}+f_{LM}(x)\}, \tag{12}\]
_is given by_
\[S_{LM}(y)=\left\{\begin{array}{ll}x^{\star},\;x\leqslant\lambda\gamma,\\ y\;,\;\;x>\lambda\gamma.\end{array}\right. \tag{13}\]
Proof.: Let
\[g(x) =\frac{\rho}{2}(x-y)^{2}+f_{LM}(x),\] \[=\left\{\begin{array}{ll}\frac{\rho}{2}(x-y)^{2}+\log\left( \frac{\lambda x-\frac{x^{2}}{2\gamma}}{\varepsilon}+1\right),\;x\leqslant \lambda\gamma,\\ \frac{\rho}{2}(x-y)^{2}+\log\left(\frac{\gamma\lambda^{2}}{2\varepsilon}+1 \right)\;\;\;\;\;,\;x>\lambda\gamma.\end{array}\right.\]
According to the definition of \(g(x)\), when \(x>\lambda\gamma\), \(S_{LM}(y)=y\).
Next, we consider the case \(x\leqslant\lambda\gamma\). Setting the derivative of the objective function \(g(x)\) with respect to \(x\) to zero, we have
\[\rho(x-y)+\frac{2\lambda\gamma-2x}{2\lambda\gamma x-x^{2}+2\gamma \varepsilon}=0. \tag{14}\]
Since \(2\lambda\gamma x-x^{2}+2\gamma\varepsilon>0\) and \(\rho>0\), equation (14) can be transformed into the following form:
\[-x^{3}+(2\lambda\gamma+y)x^{2}+(-\frac{2}{\rho}+2\gamma\varepsilon -2\lambda\gamma y)x+(\frac{2\lambda\gamma}{\rho}-2\gamma\varepsilon y)=0. \tag{15}\]
Let \(a=-1\), \(b=2\lambda\gamma+y\), \(c=-\frac{2}{\rho}+2\gamma\varepsilon-2\lambda\gamma y\), \(d=\frac{2\lambda\gamma}{\rho}-2\gamma\varepsilon y\), \(A=b^{2}-3ac\), \(B=bc-9ad\), \(C=c^{2}-3bd\), \(\Delta=B^{2}-4AC\), and let \(x_{1}\), \(x_{2}\), and \(x_{3}\) denote the three solutions of equation (15). These solutions are derived by considering different values of \(A\), \(B\), and \(\Delta\).
1) Case-1: \(A=B=0\). The solution to equation 15 in this case are \(x_{1}=x_{2}=x_{3}=-\frac{c}{b}\).
2) Case-2: \(\Delta>0\). The solution to equation 15 in this case are as follows:
\[x_{1} =\frac{-b-(\sqrt[3]{K_{1}}+\sqrt[3]{K_{2}})}{3a},\] \[x_{2} =\frac{-b+0.5(\sqrt[3]{K_{1}}+\sqrt[3]{K_{2}})+0.5\sqrt{3}(\sqrt[3]{K_{1}}-\sqrt[3]{K_{2}})i}{3a},\] \[x_{3} =\frac{-b+0.5(\sqrt[3]{K_{1}}+\sqrt[3]{K_{2}})-0.5\sqrt{3}(\sqrt[3]{K_{1}}-\sqrt[3]{K_{2}})i}{3a},\]
where \(K_{1}=Ab+1.5a(-B+\sqrt{B^{2}-4AC}),\ K_{2}=Ab+1.5a(-B-\sqrt{B^{2}-4AC})\). Since equation 12 is in the real number domain, only \(x_{1}\) is retained as the solution in this case.
3) Case-3: \(\Delta=0\). The solution to equation 15 in this case are \(x_{1}=\frac{B}{A}-\frac{b}{a},\ x_{2}=x_{3}=-\frac{B}{2A}\).
4) Case-4: \(\Delta<0\). The solution to equation 15 in this case are as follows:
\[x_{1} =\frac{-b-2\sqrt{A}\cos\frac{\theta}{3}}{3a},\] \[x_{2} =\frac{-b+\sqrt{A}(\cos\frac{\theta}{3}+\sqrt{3}\sin\frac{\theta}{3})}{3a},\] \[x_{3} =\frac{-b+\sqrt{A}(\cos\frac{\theta}{3}-\sqrt{3}\sin\frac{\theta}{3})}{3a},\]
where \(\theta=\arccos T,T=\frac{2Ab-3aB}{2A\sqrt{A}}\).
In addition, we need to further consider whether \(x_{i}\) (\(i=1,2,3\)) are within the domain, as well as what the optimal solution is. Therefore, we will take the following steps:
Step 1: \(x_{i}=\min\{\max\{x_{i},0\},\lambda\gamma\}\) (\(i=1,2,3\)). Step 2: \(g(x^{\star})=\min\{g(x_{i}),g(0),g(\lambda\gamma)\}\) (\(i=1,2,3\)).
The optimal solution at this moment is as follows:
\[x^{\star}=\left\{\begin{array}{l}x_{i},g(x^{\star})=g(x_{i}),\\ 0,g(x^{\star})=g(0),\\ y,g(x^{\star})=g(\lambda\gamma).\end{array}\right. \tag{16}\]
To sum up:
\[S_{LM}(y)=\left\{\begin{array}{l}x^{*},\;x\leqslant\lambda\gamma,\\ y\;,\;\;x>\lambda\gamma.\end{array}\right.\]
To observe \(S_{LM}(y)\) more clearly, Fig. 2 illustrates how \(S_{LM}(y)\) varies under different parameters. The results in the figure indicate that \(S_{LM}(y)\) is significantly influenced by the parameters \(\lambda\) and \(\varepsilon\); therefore, for the same data, the optimal values of \(\lambda\) and \(\varepsilon\) do not undergo substantial changes across experiments. On the other hand, the parameter \(\gamma\) has a relatively minor impact on \(S_{LM}(y)\); thus, even though \(\gamma\) varies greatly in the experiments, it does not have a significant effect on \(S_{LM}(y)\).
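A numerical sketch of the scalar proximal operator of Theorem 2 is given below. Instead of spelling out the closed-form case analysis, it solves the cubic equation (15) with `numpy.roots` and compares the objective \(g\) at the clipped real roots, at the boundary points \(0\) and \(\lambda\gamma\), and at \(y\) when \(y>\lambda\gamma\). It assumes the `f_lm` helper sketched earlier and is not the authors' implementation:

```python
import numpy as np

def g_obj(x, y, rho, lam, gamma, eps):
    """Objective (12): (rho/2)(x - y)^2 + f_LM(x)."""
    return 0.5 * rho * (x - y) ** 2 + float(f_lm(x, lam, gamma, eps))

def prox_lm(y, rho, lam, gamma, eps):
    """Proximal operator S_LM(y) for y >= 0 via the candidate comparison of Theorem 2."""
    # cubic (15): -x^3 + (2*lam*gamma + y) x^2
    #             + (-2/rho + 2*gamma*eps - 2*lam*gamma*y) x
    #             + (2*lam*gamma/rho - 2*gamma*eps*y) = 0
    coeffs = [-1.0,
              2.0 * lam * gamma + y,
              -2.0 / rho + 2.0 * gamma * eps - 2.0 * lam * gamma * y,
              2.0 * lam * gamma / rho - 2.0 * gamma * eps * y]
    cand = [float(np.clip(r.real, 0.0, lam * gamma))
            for r in np.roots(coeffs) if abs(r.imag) < 1e-8]
    cand += [0.0, lam * gamma]
    if y > lam * gamma:
        cand.append(float(y))
    return min(cand, key=lambda x: g_obj(x, y, rho, lam, gamma, eps))
```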
**Theorem 3** (Proximal operator for the weighted tensor \(LM\)-norm).: _Consider the weighted tensor \(LM\)-norm given in (11). Its proximal operator, denoted by \(S:\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\rightarrow\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) with \(\lambda>0,\gamma>0,\varepsilon>0\) and \(R=\min\{I_{1},I_{2}\}\), is defined as follows:_
\[S(\mathcal{X})=\arg\min_{\mathcal{L}}\{\frac{\rho}{2}\|\mathcal{L}-\mathcal{Y }\|_{F}^{2}+\|\mathcal{L}\|_{\omega-LM}\}, \tag{17}\]
_is given by_
\[S=\mathcal{U}*\mathcal{S}_{1}*\mathcal{V}^{H}, \tag{18}\]
_where \(\mathcal{U}\) and \(\mathcal{V}\) are derived from the t-SVD of \(\mathcal{Y}=\mathcal{U}*\mathcal{S}_{2}*\mathcal{V}^{H}\). More importantly, the \(i\)-th frontal slices of the DFTs of \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\), i.e., \(\bar{\mathcal{S}}_{1}^{(i)}=\sigma(\bar{\mathcal{L}}^{(i)})\) and \(\bar{\mathcal{S}}_{2}^{(i)}=\sigma(\bar{\mathcal{Y}}^{(i)})\), satisfy the relationship \(\sigma_{j}(\bar{\mathcal{L}}^{(i)})=S_{LM}(\sigma_{j}(\bar{\mathcal{Y}}^{(i)}))\)._
Figure 2: Visual comparison of \(S_{LM}(y)\) on different parameters.
Proof.: Let \(\mathcal{Y}=\mathcal{U}*\mathcal{S}_{2}*\mathcal{V}^{H}\) and \(\mathcal{L}=\mathcal{W}*\mathcal{S}_{1}*\mathcal{R}^{H}\) be the t-SVD of \(\mathcal{Y}\) and \(\mathcal{L}\), respectively. Consider
\[S(\mathcal{Y}) =\arg\min_{\mathcal{L}}\frac{1}{2}\|\mathcal{L}-\mathcal{Y}\|_{F}^ {2}+\|\mathcal{L}\|_{\omega-LM}\] \[=\arg\min_{\mathcal{L}}\frac{1}{2}\|\mathcal{W}*\mathcal{S}_{1}* \mathcal{R}^{H}-\mathcal{U}*\mathcal{S}_{2}*\mathcal{V}^{H}\|_{F}^{2}+\| \mathcal{L}\|_{\omega-LM}\] \[=\arg\min_{\mathcal{L}}\frac{1}{I_{3}}(\sum_{i=1}^{I_{3}}\frac{1 }{2}\|\bar{\mathcal{W}}^{(i)}*\bar{\mathcal{S}_{1}}^{(i)}*\bar{\mathcal{R}}^{ (i)H}-\bar{\mathcal{U}}^{(i)}*\bar{\mathcal{S}_{2}}^{(i)}*\bar{\mathcal{V}}^{ (i)H}\|_{F}^{2}+\|\bar{\mathcal{L}}^{(i)}\|_{\omega-LM}). \tag{19}\]
It can be found that (19) is separable and can be divided into \(I_{3}\) sub-problems. For the \(i\)th sub-problem:
\[\arg\min_{\mathcal{L}^{(i)}}\frac{1}{2}\|\bar{\mathcal{W}}^{(i)}*\bar{\mathcal{S}_{1}}^{(i)}*\bar{\mathcal{R}}^{(i)H}-\bar{\mathcal{U}}^{(i)}*\bar{\mathcal{S}_{2}}^{(i)}*\bar{\mathcal{V}}^{(i)H}\|_{F}^{2}+\|\bar{\mathcal{L}}^{(i)}\|_{\omega-LM}\] \[=\arg\min_{\bar{\mathcal{L}}^{(i)}}\frac{1}{2}Tr(\bar{\mathcal{S}_{1}}^{(i)}\bar{\mathcal{S}_{1}}^{(i)H})+\frac{1}{2}Tr(\bar{\mathcal{S}_{2}}^{(i)}\bar{\mathcal{S}_{2}}^{(i)H})-Tr(\bar{\mathcal{L}}^{(i)H}\bar{\mathcal{Y}}^{(i)})+\|\bar{\mathcal{L}}^{(i)}\|_{\omega-LM}.\]
Invoking von Neumann's trace inequality [22], we can write
\[\arg\min_{\mathcal{L}^{(i)}}\frac{1}{2}\|\bar{\mathcal{W}}^{(i)}*\bar{\mathcal{S}_{1}}^{(i)}*\bar{\mathcal{R}}^{(i)H}-\bar{\mathcal{U}}^{(i)}*\bar{\mathcal{S}_{2}}^{(i)}*\bar{\mathcal{V}}^{(i)H}\|_{F}^{2}+\|\bar{\mathcal{L}}^{(i)}\|_{\omega-LM}\] \[\geq\arg\min_{\bar{\mathcal{S}_{1}}^{(i)}}\frac{1}{2}Tr(\bar{\mathcal{S}_{1}}^{(i)}\bar{\mathcal{S}_{1}}^{(i)H})+\frac{1}{2}Tr(\bar{\mathcal{S}_{2}}^{(i)}\bar{\mathcal{S}_{2}}^{(i)H})-Tr(\bar{\mathcal{S}_{2}}^{(i)}\bar{\mathcal{S}_{1}}^{(i)H})+\|\bar{\mathcal{L}}^{(i)}\|_{\omega-LM}\] \[=\arg\min_{\sigma(\mathcal{L}^{(i)})}\frac{1}{2}\|\sigma(\bar{\mathcal{L}}^{(i)})-\sigma(\bar{\mathcal{Y}}^{(i)})\|_{F}^{2}+\|\bar{\mathcal{L}}^{(i)}\|_{\omega-LM}\] \[=\sum_{j=1}^{R}\arg\min_{\sigma_{j}(\bar{\mathcal{L}}^{(i)})}\frac{1}{2\omega_{j,i}}(\sigma_{j}(\bar{\mathcal{L}}^{(i)})-\sigma_{j}(\bar{\mathcal{Y}}^{(i)}))^{2}+f_{LM}(\sigma_{j}(\bar{\mathcal{L}}^{(i)})). \tag{20}\]
The equality holds when \(\bar{\mathcal{W}}^{(i)}=\bar{\mathcal{U}}^{(i)}\) and \(\bar{\mathcal{R}}^{(i)}=\bar{\mathcal{V}}^{(i)}\). Hence, the optimal solution to (19) is obtained by solving the scalar problems above, i.e., \(\sigma_{j}(\bar{\mathcal{L}}^{(i)})=S_{LM}(\sigma_{j}(\bar{\mathcal{Y}}^{(i)}))\).
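Theorem 3 translates into a simple slice-wise procedure: take the SVD of each FFT slice of \(\mathcal{Y}\), shrink each singular value with the scalar operator \(S_{LM}\), and transform back. In the sketch below the weight \(\omega_{j,i}\) is folded into the quadratic coefficient of the scalar problem (20), i.e., the effective \(\rho\) becomes \(\rho/\omega_{j,i}\); this is our reading of the weighting and may differ in detail from the authors' code:

```python
def prox_weighted_tensor_lm(Y, rho, omega, lam, gamma, eps):
    """Proximal operator (18) of the weighted tensor LM-norm; omega has shape (R, I3)."""
    I1, I2, I3 = Y.shape
    R = min(I1, I2)
    Yf = np.fft.fft(Y, axis=2)
    Lf = np.zeros_like(Yf)
    for i in range(I3):
        u, s, vh = np.linalg.svd(Yf[:, :, i], full_matrices=False)
        s_new = np.array([prox_lm(s[j], rho / omega[j, i], lam, gamma, eps)
                          for j in range(R)])
        Lf[:, :, i] = (u * s_new) @ vh            # rebuild the shrunk slice
    return np.real(np.fft.ifft(Lf, axis=2))
```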
## 4 TLM-based models and solving algorithms
In this section, we apply the weighted tensor \(LM\)-norm to low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA) and propose two new TLM-based models.
### TLM-based LRTC model
Low-rank tensor completion aims at estimating the missing elements from an incomplete observation tensor. Considering a third-order tensor \(\mathcal{T}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), the proposed TLM-based LRTC model is formulated as follows:
\[\min_{\mathcal{X}}~{}\|\mathcal{X}\|_{\omega-LM}~{}~{}s.t.~{}~{} \mathcal{P}_{\Omega}(\mathcal{T}-\mathcal{X})=\mathbf{0}. \tag{21}\]
First, we introduce an auxiliary tensor \(\mathcal{Z}=\mathcal{X}\) and rewrite optimization problem (21) in its augmented Lagrangian form as follows:
\[L(\mathcal{X},\mathcal{Z},\mathcal{Q};\mu) =\|\mathcal{Z}\|_{\omega-LM}+\frac{\mu}{2}\|\mathcal{X}-\mathcal{ Z}\|_{F}^{2}+\langle\mathcal{X}-\mathcal{Z},\mathcal{Q}\rangle \tag{22}\] \[s.t.~{}\mathcal{P}_{\Omega}(\mathcal{T}-\mathcal{X})=\mathbf{0},\]
where \(\mathcal{Q}\) is the Lagrangian multiplier and \(\mu>0\) is the augmented Lagrangian parameter. In what follows, we denote the variable updated by the iteration as \((\cdot)^{+}\) and omit the specific iteration index. The update equations are derived below.
**Update \(\mathcal{X}\)**: The closed form of \(\mathcal{X}\) can be derived by setting the derivative of (22) to zero. We can now update \(\mathcal{X}\) by the following equation:
\[\mathcal{X}^{+}=\mathcal{P}_{\Omega^{c}}(\mathcal{Z}-\frac{\mathcal{Q}}{\mu} )+\mathcal{P}_{\Omega}(\mathcal{T}). \tag{23}\]
**Update \(\mathcal{Z}\)**: Fix other variables, and the corresponding optimization is as follows:
\[\mathcal{Z}^{+} =\arg\min_{\mathcal{Z}}\|\mathcal{Z}\|_{\omega-LM}+\frac{\mu}{2} \|\mathcal{X}^{+}-\mathcal{Z}\|_{F}^{2}+\langle\mathcal{X}^{+}-\mathcal{Z}, \mathcal{Q}\rangle. \tag{24}\] \[=\arg\min_{\mathcal{Z}}\|\mathcal{Z}\|_{\omega-LM}+\frac{\mu}{2} \|\mathcal{X}^{+}-\mathcal{Z}+\frac{\mathcal{Q}}{\mu}\|_{F}^{2}.\]
Recalling Theorem 3, the solution to the above optimization is given by:
\[\mathcal{Z}^{+}=S(\mathcal{X}^{+}+\frac{\mathcal{Q}}{\mu}), \tag{25}\]
where \(S\) denotes the proximal operator defined in (18).
**Update \(\mathcal{Q}\)**: Finally, multiplier \(\mathcal{Q}\) is updated as follows:
\[\mathcal{Q}^{+}=\mathcal{Q}+\mu(\mathcal{X}^{+}-\mathcal{Z}^{+}). \tag{26}\]
The optimization steps of TLM-LRTC formulation are listed in Algorithm 1. The main cost lies in the update of \(\mathcal{Z}\), which requires computing t-SVD. The per-iteration complexity is \(O(I_{1}I_{2}I_{3}[log(I_{3})+\min(I_{1},I_{2})])\).
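To make the update order concrete, a schematic NumPy version of the TLM-LRTC iterations (23), (25) and (26) is sketched below. It reuses `prox_weighted_tensor_lm` from above, keeps the weights fixed for simplicity (in the paper they are derived from \(\mathcal{X}+\mathcal{Q}/\mu\), see Section 6.3.2), and its initialization and stopping rule are our own assumptions rather than the reference implementation:

```python
import numpy as np

def tlm_lrtc(T, mask, omega, lam, gamma, eps, mu=1e-5, eta=1.1, iters=200, tol=1e-6):
    """mask is True on the observed set Omega; T holds the observed entries."""
    X = np.where(mask, T, 0.0)
    Z = X.copy()
    Q = np.zeros_like(X)
    for _ in range(iters):
        X_new = np.where(mask, T, Z - Q / mu)                      # update (23)
        Z = prox_weighted_tensor_lm(X_new + Q / mu, mu, omega,
                                    lam, gamma, eps)               # update (25)
        Q = Q + mu * (X_new - Z)                                   # update (26)
        rel = np.linalg.norm(X_new - X) / max(np.linalg.norm(X), 1e-12)
        X = X_new
        if rel < tol:
            break
        mu *= eta                                                  # mu^k = eta * mu^{k-1}
    return X
```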
### TLM-based TRPCA model
Tensor robust principal component analysis (TRPCA) aims to recover the tensor from grossly corrupted observations. Using the proposed weighted tensor \(LM\)-norm, we can get the following TLM-based TRPCA model:
\[\min_{\mathcal{X},\mathcal{E}}\|\mathcal{X}\|_{\omega-LM}+\tau_{1}\| \mathcal{E}\|_{1}\ s.t.\ \mathcal{T}=\mathcal{X}+\mathcal{E}, \tag{27}\]
Under the framework of the ADMM, an easy-to-implement optimization strategy can be derived to solve (27). We introduce an auxiliary tensor \(\mathcal{Z}=\mathcal{X}\) and rewrite optimization problem (27) in its augmented Lagrangian form as follows:
\[L(\mathcal{X},\mathcal{Z},\mathcal{E},\mathcal{Q},\mathcal{G}; \mu,\rho) =\|\mathcal{Z}\|_{\omega-LM}+\tau_{1}\|\mathcal{E}\|_{1}+\frac{\mu}{2}\| \mathcal{X}-\mathcal{Z}\|_{F}^{2}+\langle\mathcal{X}-\mathcal{Z},\mathcal{Q}\rangle\] \[\quad+\frac{\rho}{2}\|\mathcal{T}-(\mathcal{X}+\mathcal{E})\|_{F }^{2}+\langle\mathcal{T}-(\mathcal{X}+\mathcal{E}),\mathcal{G}\rangle. \tag{28}\]
where \(\mathcal{Q},\mathcal{G}\) are Lagrangian multipliers; \(\mu,\rho>0\) are the augmented Lagrangian parameters. Besides, variables \(\mathcal{X},\mathcal{Z},\mathcal{E},\mathcal{Q},\mathcal{G}\) are updated alternately in the order of \(\mathcal{X}\rightarrow\mathcal{Z}\rightarrow\mathcal{E}\rightarrow\mathcal{Q} \rightarrow\mathcal{G}\). Since the update of variable \(\mathcal{Z}\) is consistent with the TLM-based LRTC model, it is omitted here and will not be repeated. We denote the variable updated by the iteration as \((\cdot)^{+}\), and omit the specific number of iterations. The update equations are derived in the following.
**Update**\(\mathcal{X}\): Fix other variables, and the corresponding optimization is as follows:
\[\mathcal{X}^{+}=\arg\min_{\mathcal{X}}\frac{\mu}{2}\|\mathcal{X}- \mathcal{Z}\|_{F}^{2}+\langle\mathcal{X}-\mathcal{Z},\mathcal{Q}\rangle+\frac {\rho}{2}\|\mathcal{T}-(\mathcal{X}+\mathcal{E})\|_{F}^{2}+\langle\mathcal{T} -(\mathcal{X}+\mathcal{E}),\mathcal{G}\rangle. \tag{29}\]
The closed form of \(\mathcal{X}\) can be derived by setting the derivative of (29) to zero. We can now update \(\mathcal{X}\) by the following equation:
\[\mathcal{X}^{+}=\frac{\mu\mathcal{Z}-\mathcal{Q}+\rho(\mathcal{T}- \mathcal{E})+\mathcal{G}}{\mu+\rho}, \tag{30}\]
**Update**\(\mathcal{E}\): Now, let's solve \(\mathcal{E}\). The minimization problem of \(\mathcal{E}\) is as follows:
\[\arg\min_{\mathcal{E}}\tau_{1}\|\mathcal{E}\|_{1}+\frac{\rho}{2} \|\mathcal{T}-(\mathcal{X}+\mathcal{E})\|_{F}^{2}+\langle\mathcal{T}-( \mathcal{X}+\mathcal{E}),\mathcal{G}\rangle. \tag{31}\]
Problem (31) has the following closed-form solution:
\[\mathcal{E}^{+}=S_{\frac{\tau_{1}}{\rho}}(\mathcal{T}-\mathcal{X}^{+}+\frac{ \mathcal{G}}{\rho}), \tag{32}\]
where \(S_{\lambda}(\cdot)\) is the soft thresholding operator [23]:
\[S_{\lambda}(x)=\left\{\begin{array}{ll}0,&if\quad|x|\leqslant \lambda,\\ sign(x)(|x|-\lambda),&if\quad|x|>\lambda.\end{array}\right. \tag{33}\]
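The soft thresholding operator (33) acts elementwise; a one-line NumPy sketch:

```python
def soft_threshold(x, lam):
    # (33): shrink magnitudes by lam and zero out entries with |x| <= lam
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```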
**Update \(\mathcal{Q}\) and \(\mathcal{G}\)**: Finally, multipliers \(\mathcal{Q}\) and \(\mathcal{G}\) are updated as follows:
\[\mathcal{Q}^{+} =\mathcal{Q}+\mu(\mathcal{X}^{+}-\mathcal{Z}^{+}). \tag{34}\] \[\mathcal{G}^{+} =\mathcal{G}+\rho(\mathcal{T}-\mathcal{X}^{+}-\mathcal{E}^{+}). \tag{35}\]
The optimization steps of TLM formulation are listed in Algorithm 2. The main cost lies in the update of \(\mathcal{Z}\), which requires computing t-SVD. The per-iteration complexity is \(O(I_{1}I_{2}I_{3}[log(I_{3})+\min(I_{1},I_{2})])\).
```
Input: The corrupted observation tensor \(\mathcal{T}\), convergence criterion \(\epsilon\), maximum iteration number \(K\).
Initialization: \(\mathcal{X}^{0}=\mathcal{T}\), \(\mathcal{Z}^{0}=\mathcal{X}^{0}\), \(\rho>0\), \(\mu>0\), \(\eta>1\).
while not converged and \(k<K\) do
    Updating \(\mathcal{X}^{k}\) via (30);
    Updating \(\mathcal{Z}^{k}\) via (25);
    Updating \(\mathcal{E}^{k}\) via (32);
    Updating the multipliers \(\mathcal{Q}^{k}\) and \(\mathcal{G}^{k}\) via (34) and (35);
    \(\mu^{k}=\eta\mu^{k-1}\), \(\rho^{k}=\eta\rho^{k-1}\), \(k=k+1\);
    Check the convergence condition \(\|\mathcal{X}^{k+1}-\mathcal{X}^{k}\|_{F}^{2}/\|\mathcal{X}^{k}\|_{F}^{2}\leq\epsilon\).
end while
return \(\mathcal{X}^{k+1}\) and \(\mathcal{E}^{k+1}\).
Output: \(\mathcal{X}\) and \(\mathcal{E}\).
```
**Algorithm 2** TLM-TRPCA
## 5 Convergence analysis
The convergence analysis of algorithm 1 is similar to that of algorithm 2. Here, we provide the convergence analysis of algorithm 2, while omitting the convergence analysis of algorithm 1. To prove the convergence of the proposed algorithm 2, we first have the following two lemmas.
**Lemma 1**: _The sequences \(\{\mathcal{Q}^{k}\}\) and \(\{\mathcal{G}^{k}\}\) are bounded._
Proof. First, the optimal \(\mathcal{Z}^{k+1}\) needs to satisfy the first-order optimality condition, that is,
\[0 \in\partial_{\mathcal{Z}}L(\mathcal{X}^{k+1},\mathcal{Z}^{k}, \mathcal{E}^{k},\mathcal{Q}^{k},\mathcal{G}^{k};\mu^{k},\rho^{k})\] \[=\partial(\|\mathcal{Z}\|_{\omega-LM})|_{\mathcal{Z}^{k+1}}- \mathcal{Q}^{k}-\mu^{k}(\mathcal{X}^{k+1}-\mathcal{Z}^{k+1})\] \[=\partial(\|\mathcal{Z}\|_{\omega-LM})|_{\mathcal{Z}^{k+1}}- \mathcal{Q}^{k+1},\]
According to the analysis in Proposition 1, the derivative of \(f_{LM}\) is bounded, and thereby \(\partial(\|\mathcal{Z}\|_{\omega-LM})|_{\mathcal{Z}^{k+1}}\) is bounded. Then, it is seen that \(\{\mathcal{Q}^{k}\}\) is bounded.
Then, the optimal \(\mathcal{E}^{k+1}\) needs to satisfy the first-order optimality condition, that is,
\[0 \in\partial_{\mathcal{E}}L(\mathcal{X}^{k+1},\mathcal{Z}^{k+1}, \mathcal{E}^{k+1},\mathcal{Q}^{k},\mathcal{G}^{k};\mu^{k},\rho^{k})\] \[=\partial(\tau_{1}\|\mathcal{E}\|_{1})|_{\mathcal{E}^{k+1}}- \mathcal{G}^{k}-\rho^{k}(\mathcal{T}-(\mathcal{X}^{k+1}+\mathcal{E}^{k+1}))\] \[=\partial(\tau_{1}\|\mathcal{E}\|_{1})|_{\mathcal{E}^{k+1}}- \mathcal{G}^{k+1},\]
It can easily be proved that \(\partial(\tau_{1}\|\mathcal{E}\|_{1})|_{\mathcal{E}^{k+1}}\) is bounded [24]. Thus, \(\{\mathcal{G}^{k}\}\) is bounded.
**Lemma 2**.: _Sequences \(\{\mathcal{X}^{k}\},\{\mathcal{Z}^{k}\}\), and \(\{\mathcal{E}^{k}\}\) are bounded if \(\sum_{j=1}^{\infty}\frac{\mu^{j}+\mu^{j-1}}{(\mu^{j-1})^{2}}<\infty\) and \(\sum_{j=1}^{\infty}\frac{\rho^{j}+\rho^{j-1}}{(\rho^{j-1})^{2}}<\infty\)._
Proof.: By simple manipulation, we can get,
\[L(\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k},\mathcal{Q}^{ k},\mathcal{G}^{k};\mu^{k},\rho^{k})\] \[=L(\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k},\mathcal{Q}^{ k-1},\mathcal{G}^{k-1};\mu^{k-1},\rho^{k-1})+\langle\mathcal{Q}^{k}-\mathcal{Q}^{ k-1},\mathcal{X}^{k}-\mathcal{Z}^{k}\rangle\] \[\quad+\frac{\mu^{k}-\mu^{k-1}}{2}\|\mathcal{X}^{k}-\mathcal{Z}^{ k}\|_{F}^{2}+\langle\mathcal{G}^{k}-\mathcal{G}^{k-1},\mathcal{T}-\mathcal{X}^{k}- \mathcal{E}^{k}\rangle\] \[\quad+\frac{\rho^{k}-\rho^{k-1}}{2}\|\mathcal{T}-\mathcal{X}^{k} -\mathcal{E}^{k}\|_{F}^{2}\] \[=L(\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k},\mathcal{Q}^{ k-1},\mathcal{G}^{k-1};\mu^{k-1},\rho^{k-1})+\frac{\mu^{k}+\mu^{k-1}}{2(\mu^{k-1})^{2}}\| \mathcal{Q}^{k}-\mathcal{Q}^{k-1}\|_{F}^{2}\] \[\quad+\frac{\rho^{k}+\rho^{k-1}}{2(\rho^{k-1})^{2}}\|\mathcal{G} ^{k}-\mathcal{G}^{k-1}\|_{F}^{2}.\]
Then, it follows that,
\[L(\mathcal{X}^{k+1},\mathcal{Z}^{k+1},\mathcal{E}^{k+1},\mathcal{ Q}^{k},\mathcal{G}^{k};\mu^{k},\rho^{k})\] \[\quad\leqslant L(\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k}, \mathcal{Q}^{k},\mathcal{G}^{k};\mu^{k},\rho^{k})\] \[\quad\leqslant L(\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k}, \mathcal{Q}^{k-1},\mathcal{G}^{k-1};\mu^{k-1},\rho^{k-1})+\frac{\mu^{k}+\mu^{k -1}}{2(\mu^{k-1})^{2}}\|\mathcal{Q}^{k}-\mathcal{Q}^{k-1}\|_{F}^{2}\] \[\quad+\frac{\rho^{k}+\rho^{k-1}}{2(\rho^{k-1})^{2}}\|\mathcal{G} ^{k}-\mathcal{G}^{k-1}\|_{F}^{2}\] \[\quad\leqslant L(\mathcal{X}^{1},\mathcal{Z}^{1},\mathcal{E}^{1}, \mathcal{Q}^{0},\mathcal{G}^{0};\mu^{0},\rho^{0})+\sum_{j=1}^{k}\frac{\mu^{j}+ \mu^{j-1}}{2(\mu^{j-1})^{2}}\|\mathcal{Q}^{j}-\mathcal{Q}^{j-1}\|_{F}^{2}\] \[\quad+\sum_{j=1}^{k}\frac{\rho^{j}+\rho^{j-1}}{2(\rho^{j-1})^{2}} \|\mathcal{G}^{j}-\mathcal{G}^{j-1}\|_{F}^{2}.\]
By the bounded property of \(\|\mathcal{Q}^{j}-\mathcal{Q}^{j-1}\|_{F}^{2}\) and \(\|\mathcal{G}^{j}-\mathcal{G}^{j-1}\|_{F}^{2}\), as well as under the given condition on \(\{\mu^{k}\}\) and \(\{\rho^{k}\}\), the right hand side of the inequality is bounded, so \(L(\mathcal{X}^{k+1},\mathcal{Z}^{k+1},\mathcal{E}^{k+1},\mathcal{Q}^{k}, \mathcal{G}^{k};\mu^{k},\rho^{k})\)
is bounded.
\[L(\mathcal{X}^{k+1},\mathcal{Z}^{k+1},\mathcal{E}^{k+1},\mathcal{Q }^{k},\mathcal{G}^{k};\mu^{k},\rho^{k})+\frac{1}{2\mu^{k}}\|\mathcal{Q}^{k}\|_{F }^{2}+\frac{1}{2\rho^{k}}\|\mathcal{G}^{k}\|_{F}^{2}\] \[\quad=\|\mathcal{Z}^{k+1}\|_{\omega-LM}+\tau_{1}\|\mathcal{E}^{k+ 1}\|_{1}+\frac{\mu^{k}}{2}\|\mathcal{X}^{k+1}-\mathcal{Z}^{k+1}+\frac{\mathcal{ Q}^{k}}{\mu^{k}}\|_{F}^{2}\] \[\quad\quad+\frac{\rho^{k}}{2}\|\mathcal{T}-(\mathcal{X}^{k+1}+ \mathcal{E}^{k+1})+\frac{\mathcal{G}^{k}}{\rho^{k}}\|_{F}^{2}.\]
The terms on the right side of the equation are nonnegative and the terms on the left side of the equation are bounded, so \(\mathcal{Z}^{k}\) and \(\mathcal{E}^{k}\) are bounded. By observing the last regular term on the right side of the equation, \(\mathcal{X}^{k}\) is bounded. Therefore, \(\{\mathcal{X}^{k}\},\{\mathcal{Z}^{k}\}\), and \(\{\mathcal{E}^{k}\}\) are all bounded. The proof is completed.
**Theorem 4**.: _The sequence \((\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k},\mathcal{Q}^{k},\mathcal{G}^{k})\) generated by Algorithm 2 has at least one accumulation point \((\mathcal{X}^{\star},\mathcal{Z}^{\star},\mathcal{E}^{\star},\mathcal{Q}^{\star},\mathcal{G}^{\star})\). Then, \((\mathcal{X}^{\star},\mathcal{Z}^{\star},\mathcal{E}^{\star})\) is a stationary point of optimization problem (27) under the condition that \(\lim\limits_{k\rightarrow\infty}\mu^{k}(\mathcal{Z}^{k+1}-\mathcal{Z}^{k})=0\), \(\lim\limits_{k\rightarrow\infty}\rho^{k}(\mathcal{E}^{k+1}-\mathcal{E}^{k})=0\), \(\sum_{j=1}^{\infty}\frac{\mu^{j}+\mu^{j-1}}{(\mu^{j-1})^{2}}<\infty\) and \(\sum_{j=1}^{\infty}\frac{\rho^{j}+\rho^{j-1}}{(\rho^{j-1})^{2}}<\infty\)._
Proof. The sequence \((\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k},\mathcal{Q}^{k},\mathcal{G}^{k})\) generated by Algorithm 2 is bounded, as proven in Lemmas 1 and 2. By the Bolzano-Weierstrass theorem, the sequence has at least one accumulation point \((\mathcal{X}^{\star},\mathcal{Z}^{\star},\mathcal{E}^{\star},\mathcal{Q}^{\star},\mathcal{G}^{\star})\). Without loss of generality, we can assume that the sequence \((\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k},\mathcal{Q}^{k},\mathcal{G}^{k})\) converges to \((\mathcal{X}^{\star},\mathcal{Z}^{\star},\mathcal{E}^{\star},\mathcal{Q}^{\star},\mathcal{G}^{\star})\). Actually, as \(\sum_{j=1}^{\infty}\frac{1}{\rho^{j}}<\sum_{j=1}^{\infty}\frac{\rho^{j}+\rho^{j-1}}{(\rho^{j-1})^{2}}<\infty\), it follows from the update rule of \(\mathcal{G}^{k}\) that \(\lim\limits_{k\rightarrow\infty}\mathcal{T}-\mathcal{X}^{k}-\mathcal{E}^{k}=\lim\limits_{k\rightarrow\infty}(\mathcal{G}^{k}-\mathcal{G}^{k-1})/\rho^{k-1}=0\), that is, \(\mathcal{T}=\mathcal{X}^{\star}+\mathcal{E}^{\star}\). Similarly, \(\sum_{j=1}^{\infty}\frac{1}{\mu^{j}}<\sum_{j=1}^{\infty}\frac{\mu^{j}+\mu^{j-1}}{(\mu^{j-1})^{2}}<\infty\) also holds, and it follows from the update rule of \(\mathcal{Q}^{k}\) that \(\lim\limits_{k\rightarrow\infty}\mathcal{X}^{k}-\mathcal{Z}^{k}=\lim\limits_{k\rightarrow\infty}(\mathcal{Q}^{k}-\mathcal{Q}^{k-1})/\mu^{k-1}=0\), that is, \(\mathcal{X}^{\star}=\mathcal{Z}^{\star}\). Therefore, the feasibility conditions are satisfied.
First, we list the Karush-Kuhn-Tucker (KKT) conditions that \((\mathcal{X}^{\star},\mathcal{Z}^{\star},\mathcal{E}^{\star},\mathcal{Q}^{\star},\mathcal{G}^{\star})\) satisfies:
\[\left\{\begin{array}{l}\mathcal{Q}^{\star}-\mathcal{G}^{\star}=0,\\ 0\in\frac{\partial(\|\mathcal{Z}\|_{\omega-LM})}{\partial\mathcal{Z}}|_{ \mathcal{Z}^{\star}}-\mathcal{Q}^{\star},\\ 0\in\frac{\partial(\|\mathcal{E}\|_{1})}{\partial\mathcal{E}}|_{\mathcal{E}^{ \star}}-\mathcal{G}^{\star}.\end{array}\right. \tag{36}\]
For \(\mathcal{X}^{k+1}\), it is noted that,
\[\nabla_{\mathcal{X}}L(\mathcal{X},\mathcal{Z}^{k},\mathcal{E}^{ k},\mathcal{Q}^{k},\mathcal{G}^{k})|_{\mathcal{X}^{k+1}}\] \[\quad=\mathcal{Q}^{k}-\mathcal{G}^{k}+\mu^{k}(\mathcal{X}^{k+1}- \mathcal{Z}^{k})-\rho^{k}(\mathcal{T}-\mathcal{X}^{k+1}-\mathcal{E}^{k})\] \[\quad=\mathcal{Q}^{k+1}-\mathcal{G}^{k+1}+\mu^{k}(\mathcal{Z}^{k+1 }-\mathcal{Z}^{k})-\rho^{k}(\mathcal{E}^{k+1}-\mathcal{E}^{k}).\]
In terms of provided condition \(\lim\limits_{k\rightarrow\infty}\mu^{k}(\mathcal{Z}^{k+1}-\mathcal{Z}^{k})=0\) and \(\lim\limits_{k\rightarrow\infty}\rho^{k}(\mathcal{E}^{k+1}-\mathcal{E}^{k})=0\), and the bounded properties of \(\mathcal{Q}^{k}\) and \(\mathcal{G}^{k}\), we can obtain that \(\mathcal{Q}^{\star}-\mathcal{G}^{\star}=0\).
Similarly, for \(\mathcal{Z}^{k+1}\), we have
\[\frac{\partial L(\mathcal{X}^{k+1},\mathcal{Z},\mathcal{E}^{k},\mathcal{Q}^{k}, \mathcal{G}^{k})}{\partial\mathcal{Z}}|_{\mathcal{Z}^{k+1}}=\frac{\partial(\| \mathcal{Z}\|_{\omega-LM})}{\partial\mathcal{Z}}|_{\mathcal{Z}^{k+1}}- \mathcal{Q}^{k+1}.\]
By the bounded condition of \(\mathcal{Z}^{k+1},\mathcal{Q}^{k+1}\), and \(\frac{\partial(\|\mathcal{Z}\|_{\omega-LM})}{\partial\mathcal{Z}}|_{\mathcal{Z }^{*}}\), and the formula of \(\partial(\|\mathcal{Z}\|_{\omega-LM})\), we can obtain that \(0\in\frac{\partial(\|\mathcal{Z}\|_{\omega-LM})}{\partial\mathcal{Z}}|_{ \mathcal{Z}^{*}}-\mathcal{Q}^{*}\).
Additionally, for \(\mathcal{E}^{k+1}\), we have
\[\frac{\partial L(\mathcal{X}^{k+1},\mathcal{Z}^{k+1},\mathcal{E},\mathcal{Q}^ {k},\mathcal{G}^{k})}{\partial\mathcal{E}}|_{\mathcal{E}^{k+1}}=\frac{ \partial(\|\mathcal{E}\|_{1})}{\partial\mathcal{E}}|_{\mathcal{E}^{k+1}}- \mathcal{G}^{k+1}.\]
By the boundedness of \(\mathcal{E}^{k+1}\), \(\mathcal{G}^{k+1}\) and \(\frac{\partial(\|\mathcal{E}\|_{1})}{\partial\mathcal{E}}|_{\mathcal{E}^{*}}\), and the formula of \(\partial(\|\mathcal{E}\|_{1})\), we have \(0\in\frac{\partial(\|\mathcal{E}\|_{1})}{\partial\mathcal{E}}|_{\mathcal{E}^{*}}-\mathcal{G}^{*}\). Combining the above, \((\mathcal{X}^{*},\mathcal{Z}^{*},\mathcal{E}^{*},\mathcal{Q}^{*},\mathcal{G}^{*})\) satisfies the KKT conditions of problem (27). Therefore, \((\mathcal{X}^{*},\mathcal{Z}^{*},\mathcal{E}^{*})\) is a stationary point of optimization problem (27). This completes our proof.
## 6 Experiments
We evaluate the performance of the proposed TLM-based LRTC and TRPCA methods. We employ the peak signal-to-noise ratio (PSNR) value, the structural similarity (SSIM) value [25], the feature similarity (FSIM) value [26], and the erreur relative globale adimensionnelle de synth\(\grave{e}\)se (ERGAS) value [27] to measure the quality of the recovered results. Larger PSNR, SSIM and FSIM values indicate better quality, while a smaller ERGAS value is better. All tests are implemented on the Windows 11 platform in MATLAB (R2019a) with a 13th Gen Intel Core i5-13600K 3.50 GHz CPU and 32 GB of RAM.
### Low-rank tensor completion
In this section, we test three kinds of real-world data: MSI, MRI, and video. The data are sampled purely at random. The comparative LRTC methods are HaLRTC [28], TNN [29] and PSTNN [30].
**MSI completion:** We test nine MSIs in the dataset CAVE1. All testing data are of size \(256\times 256\times 31\). In Fig. 3, we select six of the nine MSIs and show the visual results at different sampling rates and bands. The individual MSI names and their corresponding bands are written in the caption of Fig. 3. As shown in Fig. 3, the visual effect of the proposed method is better than that of the comparison methods at all three sampling rates. To further highlight the superiority of our method, the average quantitative results on the nine MSIs are listed in Table 1. It can be seen that the proposed method yields a great improvement over the suboptimal method. The PSNR value at the 10% sampling rate is at least 3.1 dB higher than that of the suboptimal TNN method, and the gap even reaches 5 dB at the 5% sampling rate.
**MRI completion:** We test the performance of the proposed method and the comparative methods on MRI2 data of size \(181\times 217\times 181\). First, we demonstrate the visual results of the recovered MRI data at sampling rates of 5%, 10% and 20% in Fig. 4. Our method is clearly superior to the comparative methods. Then, we list the average quantitative results of the frontal slices of the MRI restored by all methods at different sampling rates in Table 2. At the sampling rates of 5% and 10%, the PSNR value of the proposed method is at least 0.9 dB higher than that of the suboptimal PSTNN method, and the SSIM, FSIM, and ERGAS values are also better than those of the suboptimal PSTNN method.
Footnote 2: [http://brainweb.bic.mni.mcgill.ca/brainweb/selection_normal.html](http://brainweb.bic.mni.mcgill.ca/brainweb/selection_normal.html)
**Video completion:** We test nine videos3 (respectively named news, akiyo, hall, highway, foreman, container, coastguard, suzie, carphone) of size \(144\times 176\times 50\). Firstly, we demonstrate the visual results of our experiments in Fig. 5. It is not hard to see from Fig. 5 that the visual quality of our method's recovery is better. Furthermore, we list the average quantitative results on the nine videos in Table 3. Here, the suboptimal method is PSTNN. When the sampling rate is 5%, the PSNR value of the proposed method is 0.8 dB higher than it. In addition, at the sampling rates of 10% and 20%, the PSNR value of the proposed method is at least 0.9 dB higher than
\begin{table}
\begin{tabular}{c|c c c c|c c c c|c c c c} \hline SR & \multicolumn{4}{c|}{5\%} & \multicolumn{4}{c|}{10\%} & \multicolumn{4}{c}{20\%} \\ \hline Method & PSNR & SSIM & FSIM & ERGAS & PSNR & SSIM & FSIM & ERGAS & PSNR & SSIM & FSIM & ERGAS \\ \hline Observed & 15.000 & 0.140 & 0.636 & 846.663 & 15.237 & 0.178 & 0.634 & 823.947 & 15.745 & 0.246 & 0.634 & 777.072 \\ HaLRTC & 25.415 & 0.756 & 0.823 & 288.187 & 29.649 & 0.824 & 0.875 & 196.784 & 34.459 & 0.900 & 0.931 & 122.055 \\ TNN & 27.158 & 0.742 & 0.837 & 243.140 & 33.530 & 0.880 & 0.918 & 129.112 & 39.012 & 0.954 & 0.966 & 69.571 \\ PSTNN & 21.313 & 0.582 & 0.725 & 458.662 & 31.542 & 0.835 & 0.884 & 188.951 & 39.986 & 0.951 & 0.963 & 64.395 \\ TLM & **31.724** & **0.825** & **0.882** & **153.363** & **36.151** & **0.908** & **0.935** & **97.124** & **41.043** & **0.960** & **0.970** & **57.365** \\ \hline \end{tabular}
\end{table}
Table 1: The average PSNR, SSIM, FSIM and ERGAS values for nine MSIs tested by observed and the four utilized LRTC methods.
\begin{table}
\begin{tabular}{c|c c c c|c c c c|c c c c} \hline SR & \multicolumn{4}{c|}{5\%} & \multicolumn{4}{c|}{10\%} & \multicolumn{4}{c}{20\%} \\ \hline Method & PSNR & SSIM & FSIM & ERGAS & PSNR & SSIM & FSIM & ERGAS & PSNR & SSIM & FSIM & ERGAS \\ \hline Observed & 11.399 & 0.310 & 0.530 & 1021.103 & 11.632 & 0.323 & 0.565 & 994.042 & 12.145 & 0.350 & 0.613 & 937.038 \\ HaLRTC & 17.295 & 0.298 & 0.636 & 537.363 & 20.094 & 0.438 & 0.725 & 391.416 & 24.430 & 0.659 & 0.829 & 236.047 \\ TNN & 22.707 & 0.472 & 0.743 & 302.903 & 26.047 & 0.641 & 0.812 & 205.793 & 29.960 & 0.798 & 0.882 & 130.952 \\ PSTNN & 23.253 & 0.497 & 0.753 & 283.311 & 25.956 & 0.637 & 0.810 & 207.873 & 29.953 & 0.798 & 0.882 & 131.061 \\ TLM & **24.149** & **0.518** & **0.758** & **260.818** & **27.536** & **0.681** & **0.828** & **176.650** & **31.739** & **0.832** & **0.897** & **107.914** \\ \hline \end{tabular}
\end{table}
Table 2: The PSNR, SSIM, FSIM and ERGAS values for MRI tested by observed and the four utilized LRTC methods.
that of the suboptimal PSTNN method.
### Tensor robust principal component analysis
In this section, we evaluate the performance of the proposed TRPCA method through HSI denoising. The comparative TRPCA methods include the SNN [31] and TNN [15] methods. In this paper, the salt-and-pepper noise is added randomly, and its noise level is \(\nu\). We test the Pavia City Center
Figure 3: Visual results for MSI. (a) Original image. (b) Observed image. (c) HaLRTC. (d) TNN. (e) PSTNN. (f) TLM. SR: top two rows are 5%, middle two rows are 10% and last two rows are 20%. The rows of MSIs are in order: stuffed_toys, photo_and_face, glass_tiles, fake_and_real_strawberries, fake_and_real_beers, chart_and_stuffed_toy. The corresponding bands in each row are: 15, 15, 20, 20, 25, 25.
data sets and Washington DC data sets. The Pavia City Center data size is \(200\times 200\times 80\), where the spatial resolution is \(200\times 200\) and the spectral resolution is 80. The Washington DC data size is \(256\times 256\times 150\), where the spatial resolution is \(256\times 256\) and the spectral resolution is 150. In Table 4, we list the quantitative numerical results of the Pavia City Center and Washington DC data under three noise levels (NL) of the salt-and-pepper noise, respectively. According to Table 4, it can be seen that under \(\nu=0.4\), the PSNR value of the TLM method is 3.3 dB higher than that of the TNN method for the Pavia City Center data, and under \(\nu=0.2\), the proposed method achieves a PSNR value that is 1.2 dB higher than that of the suboptimal TNN method for the Washington DC data. In Figs. 6-7, we display the visualization results of the two datasets in the order of noise level. From the figures, it is easy to observe that our method outperforms the comparison methods in terms of denoising effectiveness.
Figure 4: Visual results for MRI. (a) Original image. (b) Observed image. (c) HaLRTC. (d) TNN. (e) PSTNN. (f) TLM. SR: top row is 5%, middle row is 10% and last row is 20%. MRI slice in the order: 30, 60, 90.
\begin{table}
\begin{tabular}{c|c c c c|c c c c|c c c c} \hline SR & \multicolumn{4}{c|}{5\%} & \multicolumn{4}{c|}{10\%} & \multicolumn{4}{c}{20\%} \\ \hline Method & PSNR & SSIM & FSIM & ERGAS & PSNR & SSIM & FSIM & ERGAS & PSNR & SSIM & FSIM & ERGAS \\ \hline Observed & 6.045 & 0.011 & 0.433 & 1167.724 & 6.280 & 0.018 & 0.425 & 1136.581 & 6.790 & 0.030 & 0.416 & 1071.805 \\ HaLRTC & 20.348 & 0.583 & 0.754 & 233.873 & 23.049 & 0.686 & 0.813 & 170.243 & 26.395 & 0.808 & 0.884 & 115.282 \\ TNN & 26.643 & 0.757 & 0.877 & 113.835 & 29.130 & 0.826 & 0.912 & 87.016 & 32.109 & 0.889 & 0.944 & 63.349 \\ PSTNN & 26.827 & 0.761 & 0.880 & 111.723 & 29.136 & 0.826 & 0.912 & 86.997 & 32.109 & 0.889 & 0.944 & 63.345 \\ TLM & **27.837** & **0.776** & **0.894** & **101.185** & **30.290** & **0.840** & **0.922** & **77.778** & **33.185** & **0.897** & **0.949** & **57.400** \\ \hline \end{tabular}
\end{table}
Table 3: The average PSNR, SSIM, FSIM and ERGAS values for nine videos tested by observed and the four utilized LRTC methods.
\begin{table}
\begin{tabular}{c|c|c c c|c c c|c c c} \hline & \multicolumn{2}{c|}{NL} & \multicolumn{2}{c|}{0.2} & \multicolumn{2}{c|}{0.3} & \multicolumn{2}{c}{0.4} \\ \hline & HSI & Method & PSNR & SSIM & FSIM & PSNR & SSIM & FSIM & PSNR & SSIM & FSIM \\ \hline & Observed & 11.813 & 0.125 & 0.565 & 10.055 & 0.074 & 0.479 & 8.797 & 0.048 & 0.422 \\ & SNN & 30.702 & 0.932 & 0.950 & 29.173 & 0.897 & 0.925 & 27.549 & 0.841 & 0.889 \\ Pavia City Center & TNN & 46.092 & 0.989 & 0.992 & 43.203 & 0.986 & 0.990 & 38.610 & 0.974 & 0.983 \\ & TLM & **52.118** & **0.992** & **0.994** & **46.456** & **0.989** & **0.992** & **41.949** & **0.982** & **0.987** \\ \hline & Observed & 11.429 & 0.122 & 0.553 & 9.664 & 0.073 & 0.467 & 8.418 & 0.048 & 0.412 \\ & SNN & 31.473 & 0.928 & 0.951 & 29.863 & 0.895 & 0.930 & 28.220 & 0.849 & 0.902 \\ Washington DC & TNN & 43.834 & 0.992 & 0.994 & 40.925 & 0.986 & 0.991 & 35.817 & 0.953 & 0.974 \\ & TLM & **45.952** & **0.995** & **0.996** & **42.556** & **0.989** & **0.993** & **38.740** & **0.977** & **0.985** \\ \hline \end{tabular}
\end{table}
Table 4: The PSNR, SSIM and FSIM values for 2 HSIs tested by observed and the three utilized TRPCA methods.
Figure 5: Visual results for videos. (a) Original image. (b) Observed image. (c) HaLRTC. (d) TNN. (e) PSTNN. (f) TLM. SR: top two rows are 5%, middle two rows are 10% and last two rows are 20%. Video frame in the order: 5, 10, 25, 30, 35, 40.
### Discussions
#### 6.3.1 Comparison with the EMLCP Method
To further highlight the advantages of the LM function, we compared it with the EMLCP method based on MLCP function [21]. To ensure a fair comparison, we also modified the EMLCP method to use a tubal rank model instead of the original N-tubal rank model. First, we compared the EMLCP and TLM LRTC methods using video data. Table 5 presents the quantitative results for the news, foreman, and container videos. According to the results in the table, it can be observed that the TLM method outperforms the EMLCP method and exhibits a significant improvement in terms of time. Next, we compared the EMLCP and TLM TRPCA methods for HSI denoising. Table 6 presents the quantitative denoising results for the Pavia City Center data and Washington DC data. Under different noise levels, the proposed TLM method significantly outperforms the EMLCP method, achieving better results while requiring much less computation time than the EMLCP method. In conclusion, the newly proposed LM function not only outperforms the MLCP function in terms of singular value manipulation but also exhibits faster computational speed in the corresponding algorithms.
Figure 6: Visual results for Pavia City Center. NL: top row is 0.2, middle row is 0.3 and last row is 0.4. HSI band in the order: 20, 40, 60.
#### 6.3.2 Parameter Settings
For the proposed TLM-based LRTC method, Table 7 shows the parameters of the \(\lambda\), \(\gamma\), and \(\varepsilon\) on different experiments. The weights are set to: \(\omega_{j,i}=\frac{1}{c+e^{-w_{N}-j+1,i}}\), where \(c=0.8,N=\min\{I_{1},I_{2}\}\), \(w_{N-j+1,i}=\frac{N\times\sigma_{j}(\bar{\mathcal{W}}^{(i)})}{m_{i}}\), \(\sigma_{j}(\bar{\mathcal{W}}^{(i)})\) is the \((j,j,i)\)-th singular value of \(\bar{\mathcal{W}},\mathcal{W}=\mathcal{X}+\frac{\mathcal{O}}{\mu}\) and \(m_{i}=\max\{\sigma_{j}(\bar{\mathcal{W}}^{(i)}),j=1,2,\ldots,N\}\). Besides, \(\mu_{0}=1/100000,\eta=1.1\).
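For reference, the weight rule quoted above can be written as a small helper; this is a direct transcription of the stated formulas (with \(\mathcal{W}=\mathcal{X}+\mathcal{Q}/\mu\) formed by the caller, and \(c=0.8\) here versus \(c=1.2\) for TRPCA) and only a sketch:

```python
import numpy as np

def lm_weights(W, c=0.8):
    """omega[j, i] = 1 / (c + exp(-w_{N-j+1, i})) with w_{j,i} = N * sigma_j / max_j sigma_j."""
    I1, I2, I3 = W.shape
    N = min(I1, I2)
    Wf = np.fft.fft(W, axis=2)
    omega = np.zeros((N, I3))
    for i in range(I3):
        s = np.linalg.svd(Wf[:, :, i], compute_uv=False)
        w = N * s / max(s.max(), 1e-12)                 # w_{j,i}
        omega[:, i] = 1.0 / (c + np.exp(-w[::-1]))      # index reversal gives w_{N-j+1,i}
    return omega
```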
For the proposed TLM-based TRPCA method, Table 8 shows the parameters of the \(\lambda\), \(\gamma\), \(\varepsilon\), and
\begin{table}
\begin{tabular}{c|c|c c c|c c c|c c} \hline & SR & \multicolumn{3}{c|}{5\%} & \multicolumn{3}{c|}{10\%} & \multicolumn{3}{c}{20\%} \\ \hline Video & Method & Observed & EMLCP & TLM & Observed & EMLCP & TLM & Observed & EMLCP & TLM \\ \hline \multirow{4}{*}{akiyo} & PSNR & 7.601 & 30.996 & **31.691** & 7.836 & 34.078 & **34.553** & 8.344 & 37.567 & **37.903** \\ & SSIM & 0.014 & 0.903 & **0.920** & 0.023 & 0.949 & **0.953** & 0.037 & 0.975 & **0.976** \\ & FSIM & 0.466 & 0.950 & **0.961** & 0.454 & 0.972 & **0.976** & 0.434 & 0.986 & **0.987** \\ & ERGAS & 1076.219 & 73.178 & **67.788** & 1047.496 & 51.827 & **49.331** & 988.013 & 35.510 & **34.426** \\ \hline \multirow{4}{*}{container} & PSNR & 4.600 & 28.569 & **29.342** & 4.835 & 32.522 & **32.945** & 5.347 & 37.226 & **37.652** \\ & SSIM & 0.007 & 0.870 & **0.895** & 0.011 & 0.930 & **0.940** & 0.021 & 0.966 & **0.970** \\ \cline{1-1} & FSIM & 0.395 & 0.927 & **0.944** & 0.391 & 0.963 & **0.968** & 0.393 & 0.983 & **0.985** \\ \cline{1-1} & ERGAS & 1239.987 & 79.861 & **73.724** & 1206.866 & 53.018 & **51.186** & 1137.888 & 33.976 & **33.136** \\ \hline \end{tabular}
\end{table}
Table 5: The PSNR, SSIM, FSIM, ERGAS and TIME values for three videos tested by observed and the EMLCP and the TLM LRTC methods.
Figure 7: Visual results for Washington DC. NL: top row is 0.2, middle row is 0.3 and last row is 0.4. HSI band in the order: 40, 80, 120.
\begin{table}
\begin{tabular}{c|c|c c c|c c c|c c c} \hline & \multicolumn{2}{c|}{NL} & \multicolumn{2}{c|}{0.2} & \multicolumn{2}{c|}{0.3} & \multicolumn{2}{c}{0.4} \\ \hline HSI & Method & PSNR & SSIM & FSIM & PSNR & SSIM & FSIM & PSNR & SSIM & FSIM \\ \hline \multirow{4}{*}{Pavia City Center} & Observed & 11.813 & 0.125 & 0.565 & 10.055 & 0.074 & 0.479 & 8.797 & 0.048 & 0.422 \\ & EMLCP & 50.406 & 0.990 & 0.993 & 45.594 & 0.988 & 0.991 & 41.081 & 0.979 & 0.985 \\ & TLM & **52.118** & **0.992** & **0.994** & **46.456** & **0.989** & **0.992** & **41.949** & **0.982** & **0.987** \\ \hline \multirow{4}{*}{Washington DC} & Observed & 11.429 & 0.122 & 0.553 & 9.664 & 0.073 & 0.467 & 8.418 & 0.048 & 0.412 \\ & EMLCP & 45.458 & 0.994 & 0.996 & 41.474 & 0.986 & 0.991 & 37.063 & 0.966 & 0.977 \\ \cline{1-1} & TLM & **45.952** & **0.995** & **0.996** & **42.556** & **0.989** & **0.993** & **38.740** & **0.977** & **0.985** \\ \hline \end{tabular}
\end{table}
Table 6: The PSNR, SSIM and FSIM values for two HSIs tested by observed and the EMLCP and the TLM TRPCA methods.
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline & \multicolumn{3}{c|}{\(\lambda\)} & \multicolumn{3}{c|}{\(\gamma\)} & \multicolumn{3}{c}{\(\varepsilon\)} \\ \hline \multirow{2}{*}{Data SR} & \multirow{2}{*}{5\%} & \multirow{2}{*}{10\%} & \multirow{2}{*}{20\%} & \multirow{2}{*}{5\%} & \multirow{2}{*}{10\%} & \multirow{2}{*}{20\%} & \multirow{2}{*}{5\%} & \multirow{2}{*}{10\%} & \multirow{2}{*}{20\%} \\ & & & & & & & & & \\ \hline fake\_and\_real\_beers & 0.1 & 0.3 & 0.3 & 20000 & 10000 & 10000 & 5 & 5 & 5 \\ face & 0.1 & 0.3 & 0.3 & 60000 & 10000 & 10000 & 5 & 5 & 5 \\ egyptian\_statue & 0.1 & 0.3 & 0.3 & 60000 & 20000 & 10000 & 5 & 5 & 5 \\ cloth & 0.1 & 0.1 & 0.1 & 60000 & 20000 & 10000 & 25 & 10 & 5 \\ clay & 0.1 & 0.3 & 0.3 & 100000 & 20000 & 10000 & 5 & 5 & 5 \\ chart\_and\_stuffed\_toy & 0.1 & 0.1 & 0.3 & 60000 & 10000 & 10000 & 10 & 5 & 5 \\ beads & 0.1 & 0.1 & 0.1 & 100000 & 20000 & 20000 & 30 & 10 & 10 \\ balloons & 0.1 & 0.3 & 0.3 & 140000 & 10000 & 10000 & 5 & 5 & 5 \\ cd & 0.1 & 0.1 & 0.2 & 200000 & 10000 & 10000 & 10 & 10 & 5 \\ MRI & 0.1 & 0.1 & 0.1 & 900000 & 60000 & 20000 & 30 & 30 & 20 \\ news & 0.1 & 0.1 & 0.1 & 100000 & 40000 & 40000 & 20 & 10 & 5 \\ akiyo & 0.1 & 0.1 & 0.1 & 80000 & 40000 & 20000 & 30 & 10 & 5 \\ hall & 0.1 & 0.1 & 0.1 & 100000 & 40000 & 20000 & 30 & 20 & 10 \\ highway & 0.1 & 0.1 & 0.1 & 100000 & 100000 & 20000 & 30 & 30 & 30 \\ foreman & 0.1 & 0.1 & 0.1 & 100000 & 40000 & 20000 & 30 & 30 & 25 \\ container & 0.1 & 0.1 & 0.1 & 100000 & 40000 & 20000 & 20 & 15 & 5 \\ coastguard & 0.1 & 0.1 & 0.1 & 100000 & 40000 & 20000 & 30 & 30 & 15 \\ suzie & 0.1 & 0.1 & 0.1 & 100000 & 40000 & 20000 & 30 & 20 & 15 \\ carphone & 0.1 & 0.1 & 0.1 & 100000 & 40000 & 20000 & 25 & 20 & 20 \\ \hline \end{tabular}
\end{table}
Table 7: Parameters under different experiments of LRTC.
\(\tau_{1}\) on different experiments, where \(\tau_{\lambda}=\frac{1}{\sqrt{\max(I_{1},I_{2})I_{3}}}\). The weights are set to: \(\omega_{j,i}=\frac{1}{c+e^{-\omega_{N}-j+1,i}}\), where \(c=1.2,N=\min\{I_{1},I_{2}\}\), \(w_{N-j+1,i}=\frac{N\times\sigma_{j}(\bar{\mathcal{W}}^{(i)})}{m_{i}}\), \(\sigma_{j}(\bar{\mathcal{W}}^{(i)})\) is the \((j,j,i)\)-th singular value of \(\bar{\mathcal{W}},\mathcal{W}=\mathcal{X}+\frac{\mathcal{O}}{\mu}\) and \(m_{i}=\max\{\sigma_{j}(\bar{\mathcal{W}}^{(i)}),j=1,2,\ldots,N\}\). Besides, \(\mu_{0}=1/10000,\eta=1.2\).
## 7 Conclusion
This paper proposes a new non-convex function called the LM function, which not only preserves large singular values but also further increases the penalty on small singular values. Based on this, we propose the weighted tensor \(LM\)-norm as the smooth relaxation for tensor rank approximation. Two main applications of tensor recovery are considered: the first is the low-rank tensor completion (LRTC) problem, and the second is the tensor robust principal component analysis (TRPCA) problem. For each application, we propose the TLM-based model along with corresponding solution algorithms.
Our main conclusions are:
\(\bullet\) The experiments demonstrate that the proposed methods achieve good visual results and high numerical performance on various datasets. The parameters of the proposed methods are influenced by the image data. As the sampling rate decreases or the noise level increases, the optimal parameters \(\lambda\gamma\) generally need to be increased.
\(\bullet\) The selection of weights in the proposed weighted tensor LM norm is inversely proportional to the singular values, which enhances sensitivity to different singular values. Compared with other methods, our proposed weighted tensor LM norm method is more effective in approximating tensor rank.
\(\bullet\) The EMLCP method represents the state-of-the-art method, and further improving its performance is challenging. Through comparison with the results of the EMLCP method, we further validate that the LM function outperforms the MLCP function in handling singular values and demonstrates the efficiency of the proposed methods.
In the various experiments conducted in this paper, the selection of weights depends on the problem type, and the optimal parameter selection may depend on the characteristics of test data. In fact,
\begin{table}
\begin{tabular}{c|c|c c c c} \hline HSI & NL & \(\lambda\) & \(\gamma\) & \(\varepsilon\) & \(\tau_{1}\) \\ \hline \multirow{4}{*}{Pavia City Center} & 0.2 & 0.02 & 4000 & 1 & 0.015\(\tau_{\lambda}\) \\ & 0.3 & 0.02 & 15000 & 1 & 0.014\(\tau_{\lambda}\) \\ & 0.4 & 0.02 & 50000 & 1 & 0.011\(\tau_{\lambda}\) \\ \hline \multirow{4}{*}{Washington DC} & 0.2 & 0.02 & 10000 & 1 & 0.016\(\tau_{\lambda}\) \\ & 0.3 & 0.02 & 13000 & 1 & 0.011\(\tau_{\lambda}\) \\ \cline{1-1} & 0.4 & 0.02 & 400000 & 1 & 0.008\(\tau_{\lambda}\) \\ \hline \end{tabular}
\end{table}
Table 8: Parameters under different experiments of TRPCA.
finding the optimal parameters for different datasets is a complex engineering task. Therefore, how to construct adaptive parameter selection methods becomes an interesting research topic. Moreover, in this paper, we utilized the fast Fourier transform, and it is also worthwhile to explore the relevant theories and potential improvements under other transforms, such as the discrete cosine transform [32], unitary transform [33], framelet transform [34], group-tube transform [35] and even nonlinear transform [36]. In the future, we will focus on three aspects: developing methods for constructing adaptive parameters, investigating the performance of non-convex functions under different transforms, and exploring more effective approaches for handling singular values.
|
2309.11567 | On the Hagedorn behavior of the superstring propating in a cosmological
time dependent background | In this work the LvN quantization of the type IIB superstring is carried on
in a time dependent plane wave background with a constant self-dual
Ramond-Ramond 5-form and a linear dilaton in the light-like direction. Such an
endeavour allows us to define an invariant density matrix and study important
issues in real time string thermodynamics. In particular, the Hagendorn
temperature is calculated as function of the thermalization time. | Daniel Luiz Nedel | 2023-09-20T18:10:46Z | http://arxiv.org/abs/2309.11567v1 | # On the Hagedorn behavior of the superstring
###### Abstract
In this work the LvN quantization of the type IIB superstring is carried on in a time dependent plane wave background with a constant self-dual Ramond-Ramond 5-form and a linear dilaton in the light-like direction. Such an endeavour allows us to define an invariant density matrix and study important issues in real time string thermodynamics. In particular, the Hagedorn temperature is calculated as function of the thermalization time.
Keywords:Superstrings and Heterotic Strings, Sigma Model, Spacetime Singularities, Thermal Field Theory
## 1 Introduction
The formulation of superstring theory at finite temperature and the study of the superstring sigma model for time dependent backgrounds are topics of constant and renewed interest in the literature, in view of the structural role superstring theory plays in the construction of theoretic frameworks for fundamental interactions and quantum gravitation. In particular, the study of thermal effects when the string propagates in cosmological time dependent geometries can shed light on important questions related to quantum cosmology and may help to understand the nature of space-like singularities[1].
One outstanding feature of string theory at finite temperature is the exponential growth of states as a function of energy. Due to this behavior, the partition function becomes ill defined for temperatures above the so-called Hagedorn temperature. If the Hagedorn behavior works in string theory as it works in hadron physics, then the true degrees of freedom of the theory at high temperature may be other than those of the perturbative string. However, in spite of many works on finite temperature string theory, a precise understanding of the Hagedorn temperature and the true degrees of freedom at higher temperatures is still lacking. Many of the advances made in understanding the Hagedorn temperature stem from a specific equilibrium finite temperature field theory formalism: the imaginary time formalism. In this case, the thermal state is described by compactifying Euclidean time on a circle, the thermal circle. The radius of the thermal circle is equal to the inverse temperature in natural units. For string theory applications this formalism entails two complications. The first one arises from the simple fact that string theory
contains gravity: for theories containing gravity, the radius of the thermal circle becomes a dynamic field, which makes the very notion of thermal equilibrium non-trivial [2]. In addition, for closed strings one needs to take into account the winding modes around the thermal circle. Above the Hagedorn temperature these modes become tachyonic and it is precisely these tachyonic excitations that encode the Hagedorn divergence and the long/short string transition discussed in [3], [4], [5], [6]. However, when the superstring propagates in a time dependent background, the mass and coupling parameters of the superstring sigma model depend explicitly on time. Therefore, the study of thermal effects in this time dependent superstring sigma model requires a real time formalism. Actually, from a worldsheet perspective it is an open system, so a non equilibrium formalism must be taken into account.
In general, the non equilibrium quantization of a determined system is carried on using the Schwinger and Keldysh formalism. In this formalism a closed time path integral is introduced to treat properly the non equilibrium evolution of quantum fields from their initial thermal equilibrium. Here another approach is used: it is the so called Liouville-von Neumann (LvN) approach [7; 8; 9].
The LvN approach is a canonical method that unifies the usual methodology to study the evolution of pure states, given by the functional Schrodinger equation, with the usual approach used to study the evolution of mixed states, described by the density matrix (which in turn obeys the LvN equation). Note that even though the density matrix depends on time, it still satisfies the LvN equation. Hence, the LvN method treats the time-dependent nonequilibrium system exactly in the same way as the time-independent one.
In the present work the LvN approach is used to study thermal effects in the light cone superstring propagating in a time dependent plane wave background with a constant self-dual Ramond-Ramond 5-form and a linear dilaton in the light-like direction. This background keeps sixteen supersymmetries and the sigma model was canonically quantized in [10], where it was shown that the Hamiltonian is time-dependent with vanishing zero-point energy and has a supersymmetric spectrum. As shown in [10] the background is geodesically incomplete and hence admits a null cosmology interpretation. However, the dilaton diverges close to the cosmological singularity and so one needs a non-perturbative description to study the string dynamics close to the null singularity. In the sigma model studied in [10], the sign that the theory is not well defined at the singularity appears as a divergence in the time-dependent Hamiltonian when it is evaluated close to the singularity. On the other hand, it was shown in [11] that as the string evolves towards the singularity, the vacuum seen by asymptotically flat observers is a left/right entanglement state. Hence a left/right superstring entanglement entropy appears, dynamically generated by the background. It was shown that, at the singularity, the left/right string entanglement is finite and thus could be a useful tool to probe the singularity. Furthermore, it was shown that, at the singularity, the left/right entanglement state is in fact a thermal state and the worldsheet entanglement entropy becomes the thermodynamic entropy for a 2d free supersymmetric gas, which implies that near the singularity the string thermalizes at a finite temperature. Here, in order to study more carefully the superstring thermalization in this background, the superstring canonical quantization is carried out in the Liouville picture,
which allows us to calculate the Hagedorn temperature in the adiabatic approximation as a function of time, where time means the time at which thermalization takes place. In fact, the Hagedorn temperature is shown to increase as the string evolves from the asymptotically flat time to the singularity time. The present work is divided as follows: in section 2 the LvN approach is presented. The time dependent background studied here is presented in section 3. In section 4 the bosonic and fermionic sectors of the light cone superstring time dependent sigma model are quantized in the LvN picture and the invariant creation/annihilation operators are constructed. The density matrix and the adiabatic approximation are discussed in section 5. As an application, the non equilibrium thermal two point function is calculated in the adiabatic approximation. Finally in section 6 the Hagedorn temperature is calculated as a function of time.
## 2 The Liouville-von Neumann (LvN) method
The core of the LvN approach lies in the definition of invariant operators and the fact that the quantum LvN equation provides all the quantum and statistical information of non equilibrium systems. Given an operator O and a time dependent evolution operator \(U(t)\), an invariant operator \(O_{L}(t)\) is defined by \(O_{L}(t)=U(t)O_{S}U^{\dagger}(t)\), where \(O_{S}\) is the operator \(O\) in the Schrodinger picture. This relation also defines the so called Liouville picture. Compared to the Heisenberg picture, the operator \(O_{L}(t)\) evolves backward in the same way as the density operator. So \(O_{L}\) also satisfies the LvN equation
\[i\frac{\partial O_{L}}{\partial t}+[O_{L},H]=0 \tag{1}\]
where the Hamiltonian H can be time dependent. The Lewis-Riesenfeld invariant theorem states that an operator satisfying (1) has time-dependent eigenstates and time-independent eigenvalues. So the spectrum of the invariant operators yields quantum states for the time dependent system.
In order to study the nonequilibrium evolution exactly in the same way as the equilibrium one, it is necessary to find an invariant operator \(O_{L}\) such that a time dependent density matrix satisfying the LvN equation can be written as \(\rho_{L}(t)=Z^{-1}e^{-\beta O_{L}(t)}\), where Z is the trace of \(\rho_{L}(t)\). Here it is assumed that the system reaches a thermodynamic equilibrium point characterized by the temperature \(1/\beta\). At the time \(t_{0}\) at which equilibrium is reached, \(\rho_{L}(t_{0})\) is the usual equilibrium density matrix. As an example, suppose we have an oscillator interacting with a thermal bath and, as a consequence of this interaction, we have a time-dependent mass. The Hamiltonian will be time dependent and there will be mode creation (or particle creation in a quantum field scenario). One could naively construct a thermal density matrix defined by the time dependent Hamiltonian
\[\rho_{H}=\frac{1}{Z}e^{-\beta H(t)}\,. \tag{2}\]
This density matrix does not satisfy the quantum Liouville-von Neumann (LvN) equation and it is not possible to relate \(1/\beta\) to the equilibrium temperature. If the system starts in the initial thermal equilibrium state, its final state can be far away from the initial one.
Actually, owing to particle production, the final state can be unitarily inequivalent to the initial one. The strategy of the LvN approach is to define time dependent oscillators \(a_{L},a_{L}^{\dagger}\) that satisfy the equation (1).
\[i\frac{\partial a_{L}}{\partial\tau}+[a_{L},H]=0\,. \tag{3}\]
The linearity of the LvN equation allows us to use \(a_{L}(t)\) and \(a_{L}^{\dagger}(t)\) to construct operators that also satisfy equation (1); in particular, the number operator \(N_{L}=a_{L}{}^{\dagger}(t)a_{L}(t)\). By using the Lewis-Riesenfeld invariant theorem, one finds the Fock space consisting of the time dependent number states such that
\[N_{L}(t)|n,t\rangle=n|n,t\rangle\,. \tag{4}\]
With the invariant oscillators, a density matrix which satisfies the LvN equation can be defined as
\[\rho_{\rm T}=\frac{1}{Z_{N}}e^{-\beta\omega_{0}a_{L}^{\dagger}(t)a_{L}(t)}\,, \tag{5}\]
where \(\beta\) and \(\omega_{0}\) are free parameters and the trace that appears in the definition of \(Z_{N}\) is taken over the states defined in (4). Now, the system is characterized by the density matrix (5) in the same way as a time independent one. The key point is to find solutions for equation (3) such that, in the adiabatic regime, the density matrix (5) is equal to the density matrix (2) evaluated at the thermalization time. Thus, the unitary real time evolution of the system can be studied until it reaches thermal equilibrium, characterized by \(\beta\).
## 3 The Background
In this section, the time dependent background studied here is presented. Consider the following time dependent background with Ramond-Ramond flux
\[ds^{2}=-2dx^{+}dx^{-}-\lambda(x^{+})\,x_{I}^{2}\,dx^{+}dx^{+}+dx ^{I}dx^{I}\,,\] \[\phi=\phi(x^{+})\,,\qquad(F_{5})_{+1234}=(F_{5})_{+5678}=2f, \tag{6}\]
where \(\phi\) is the dilaton and \(F_{5}\) the Ramond-Ramond field. As usual for a generic plane wave, the supersymmetry preserved by the background is reduced from maximal (32 supercharges) to sixteen supercharges. When type IIB Green-Schwarz (GS) superstring propagates in this background, conformal invariance of the worldsheet demands
\[R_{\mu\nu}=-2D_{\mu}D_{\nu}\phi+\frac{1}{24}e^{2\phi}(F_{5}^{2})_{\mu\nu}\,, \tag{7}\]
and the only non zero component of the Ricci curvature tensor \(R_{\mu\nu}\) is
\[R_{++}=8\lambda(x^{+}). \tag{8}\]
Putting (3.1) into (3.2) gives
\[\lambda=-\frac{1}{4}\phi^{\prime\prime}+f^{2}e^{2\phi}\,. \tag{3.4}\]
In the reference [10], a solution of (3.2) with non zero constant Ramond-Ramond field (\(f=f_{0}\)) is studied. It has the form
\[\phi=-cx^{+},\ \lambda=f_{0}^{2}e^{-2cx^{+}}, \tag{3.5}\]
for any constant \(c\). In this case, the metric admits a null cosmology interpretation and the cosmological singularity is located at \(x^{+}=-\infty\). Note that in this model the string coupling \(g=e^{\phi}\) diverges at the singularity. As discussed previously, the interaction of the string with this kind of background makes the parameters of the sigma model time-dependent. In the next sections the quantization of the superstring sigma model for this background is carried out in the Liouville picture.
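As a quick cross-check of this statement, the profile above can be plugged back into the conformal-invariance condition symbolically. The snippet below is an illustrative SymPy sketch (not part of the original derivation); it confirms that the linear dilaton together with \(\lambda=f_{0}^{2}e^{-2cx^{+}}\) solves the condition (3.4) for constant flux \(f=f_{0}\) and any constant \(c\).

```python
import sympy as sp

# x plays the role of the light-cone time x^+; c and f0 are the constants of (3.5)
x, c, f0 = sp.symbols('x c f0', real=True)

# Background profile of eq. (3.5): linear dilaton and exponential plane-wave mass
phi = -c * x
lam = f0**2 * sp.exp(-2 * c * x)

# Conformal-invariance condition (3.4): lambda = -phi''/4 + f^2 e^{2 phi}, with f = f0
residual = sp.simplify(lam - (-sp.diff(phi, x, 2) / 4 + f0**2 * sp.exp(2 * phi)))
print(residual)   # -> 0, so (3.5) solves (3.4) for any constant c
```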
## 4 Superstring in LvN picture.
In this section the Liouville-von Neumann method is used to study the quantum dynamics of the superstring propagating in the background (3.1). This implies defining a superstring Hilbert space constructed with creation/annihilation operators that are LvN invariant, that is, operators which satisfy equation (2.1).
### Bosonic Sector
Let us start with the bosonic sector. Although the gauge fixing has already been discussed in [10], it is useful to include a review here in order to fix notation. The bosonic part of the superstring sigma model for the background (3.1) is
\[S=\frac{1}{4\pi\alpha^{\prime}}\int d^{2}\sigma g^{ab}\ G_{\mu \nu}\partial_{a}X^{\mu}\partial_{b}X^{\nu}\] \[=\frac{1}{4\pi\alpha^{\prime}}\int d^{2}\sigma g^{ab}\ \left(-2 \partial_{a}X^{+}\partial_{b}X^{-}+\partial_{a}X^{I}\partial_{b}X^{I}-m^{2}(X ^{+})X_{I}^{2}\partial_{a}X^{+}\partial_{b}X^{+}\right),\]
where \(g_{ab}\) is the worldsheet metric, \(\sigma^{a}=(\tau,\sigma)\) are the worldsheet coordinates, \(I=1,2,\cdots,8\) and \(m(X^{+})=fe^{-cX^{+}}\). As usual, the RR fluxes do not appear in the bosonic action. The bosonic worldsheet gauge symmetry is fixed using the light cone gauge
\[\sqrt{-g}g^{ab} = \eta^{ab}\,\ \ \ \ -\eta_{\tau\tau}=\eta_{\sigma\sigma}=1\] \[X^{+} = \alpha^{\prime}p^{+}\tau\,\ \ \ \ p^{+}>0. \tag{4.2}\]
In this gauge all the dynamics is determined by \(X^{I}\)'s through the constraints resulting from [12]
\[\frac{\delta{\cal L}}{\delta g_{\tau\sigma}}=0\,\ \ \ \frac{\delta{\cal L}}{ \delta g_{\tau\tau}}=\frac{\delta{\cal L}}{\delta g_{\sigma\sigma}}=0 \tag{4.3}\]
After setting \(-g_{\tau\tau}=g_{\sigma\sigma}=1\), the constraints (4.3) allow us to write \(\partial_{\sigma}X^{-}\) and \(\partial_{\tau}X^{-}\)in terms of \(X^{I}\)
\[\partial_{\sigma}X^{-}=\frac{1}{\alpha^{\prime}p^{+}}\ \partial_{ \sigma}X^{I}\partial_{\tau}X^{I}\, \tag{4.4}\] \[\partial_{\tau}X^{-}=\frac{1}{2\alpha^{\prime}p^{+}}\,\left( \partial_{\tau}X^{I}\partial_{\tau}X^{I}+\partial_{\sigma}X^{I}\partial_{ \sigma}X^{I}-(m(\tau)\alpha^{\prime}p^{+})^{2}X^{I}X^{I}\right). \tag{4.5}\]
Choosing \(c=\frac{1}{\alpha^{\prime}p^{+}}\), the light cone bosonic action can be written as
\[S^{bos.}_{l.c.}=\frac{1}{4\pi\alpha^{\prime}}\int d\tau\int_{0}^{2\pi\alpha^{ \prime}p^{+}}d\sigma\left[\partial_{\tau}X^{I}\partial_{\tau}X^{I}-\partial_ {\sigma}X^{I}\partial_{\sigma}X^{I}-m^{2}(\tau)X_{I}^{2}\right]\,, \tag{4.6}\]
where \(\tau\) and \(\sigma\) were re-scaled by \(\alpha^{\prime}p^{+}\) and \(m(\tau)=f_{0}e^{-\tau}\). Since the bosonic sector of the theory is \(SO(8)\) invariant, the \(I\) index will be frequently omitted.
In order to quantize the theory in the LvN approach, the string coordinate \(X(\sigma)\) and momentum density \(P(\sigma)=\frac{\dot{X}}{2\pi\alpha^{\prime}}\) are expanded as
\[X^{I}(\sigma) = x_{0}^{I}+\sqrt{2}\sum_{n=1}^{\infty}\left(x_{n}^{I}\cos\frac{n \sigma}{\alpha}+x_{-n}^{I}\sin\frac{n\sigma}{\alpha}\right)\,,\] \[P^{I}(\sigma) = \frac{1}{2\pi\alpha}\left[p_{0}^{I}+\sqrt{2}\sum_{n=1}^{\infty} \left(p_{n}^{I}\cos\frac{n\sigma}{\alpha}+p_{-n}^{I}\sin\frac{n\sigma}{\alpha} \right)\right]\,, \tag{4.7}\]
where \(\alpha=p^{+}\alpha^{\prime}\) was defined. In this notation (usual in pp-wave backgrounds) all of the string oscillations--left-movers, right-movers, and the zero modes--can be treated on an equal footing. Note that the form of the expansion for the worldsheet fields \(X^{I}(\sigma)\) and \(P^{I}(\sigma)\) allows us to associate the Fourier modes to Hermitian operators \(x_{n}\) and \(p_{n}\). This will be very useful for writing the thermal density matrix in the position representation. In general the Fourier mode operators \(x_{n}\) and \(p_{n}\) can be time dependent. However, in the LvN picture they are Schrodinger operators. They can be chosen such that the expansion (4.7) represents \(X\) and \(P\) at a given fixed time.
The normalization is chosen so that the canonical commutation relation
\[[X^{I}(\sigma),P^{J}(\sigma^{\prime})]=i\delta^{IJ}\delta(\sigma-\sigma^{ \prime}) \tag{4.8}\]
follows from imposing
\[[x_{m}^{I},p_{n}^{J}]=i\delta^{IJ}\delta_{mn}. \tag{4.9}\]
Next, the light cone Hamiltonian is written as
\[H^{bos.}_{l.c.}=\frac{1}{4\pi\alpha^{\prime}}\int_{0}^{2\pi\alpha^{\prime}p^{+ }}d\sigma\,\left[(2\pi\alpha^{\prime})^{2}P_{I}^{2}+(\partial_{\sigma}X^{I})^ {2}+m^{2}(\tau)X_{I}^{2}\right]. \tag{4.10}\]
In order to proceed with the LvN quantization, equation (4.7) is used to write the light cone Hamiltonian in terms of the Fourier mode operators( omitting \(SO(8)\) indices):
\[H^{bos.}_{l.c.}=\frac{1}{2\alpha}\sum_{n=-\infty}^{\infty}\left[p_{n}^{2}+\omega_{ n}^{2}(\tau)x_{n}^{2}\right], \tag{30}\]
where \(\omega_{n}(\tau)=\sqrt{n^{2}+\alpha^{2}m^{2}(\tau)}\). Now the LvN invariant operators can be found. Following the LvN approach, a set of bosonic raising and lowering operators can be defined,
\[\left[\alpha_{n}^{I}(\tau),\alpha_{m}^{\dagger\,J}(\tau)\right]=\delta^{IJ} \delta_{nm} \tag{31}\]
satisfying the quantum light cone LvN equation
\[\frac{i}{\alpha^{\prime}p^{+}}\frac{\partial}{\partial\tau}\alpha_{n}^{I}( \tau)+[\alpha_{n}^{I}(\tau),H^{bos.}_{l.c.}]=0,n\in\mathbb{Z}. \tag{32}\]
In order to find the invariant bosonic string oscillators, the operators \(\alpha_{n}(\tau)\),\(\alpha_{m}^{\dagger}(\tau)\) are defined in terms of the Fourier mode operators \(x_{n}\), \(p_{n}\)
\[\alpha_{n}^{I}(\tau) = i\left(\phi_{n}^{*}(\tau)p_{n}^{I}-\dot{\phi_{n}^{*}}(\tau)x_{n }^{I}\right)\] \[\alpha_{n}^{\dagger\,I}(\tau) = -i\left(\phi_{n}(\tau)p_{n}^{I}-\dot{\phi_{n}}(\tau)x_{n}^{I} \right)\,, \tag{33}\]
where \(\phi_{n}\) and \(\dot{\phi}_{n}\) must satisfy the Wronskian condition
\[\dot{\phi}_{n}^{*}(\tau)\phi_{m}(\tau)-\dot{\phi}_{n}(\tau)\phi_{m}^{*}(\tau)=i\delta_{mn} \tag{34}\]
to ensure that the relations (30) are satisfied. Now all work boils down to finding the functions \(\phi_{n}(t)\) such that \(\alpha_{n}^{I}(\tau)\) satisfy (32). Plugging (33) into (32) results in the following equation for \(\phi_{n}(t)\)
\[\ddot{\phi}_{n}+\omega_{n}^{2}(\tau)\phi_{n}=0\,. \tag{35}\]
A solution that satisfies (34) can be written in terms of Bessel Functions:
\[\phi_{n}(\tau)=\sqrt{\left(\frac{\tilde{f}}{2}\right)}\Gamma(1+in)J_{in}\left( z(\tau)\right)\,, \tag{36}\]
where \(\tilde{f}=\alpha f_{0}\), \(z(\tau)=\tilde{f}e^{-\tau}\) and \(J_{m}\) is a Bessel function of the first kind. The relations (34) follow from the Gamma and Bessel function properties
\[\Gamma(1+in)\,\Gamma(1-in)=\frac{n\pi}{\sinh n\pi}\,\] \[J_{\nu}(z)J_{-\nu}^{\prime}(z)-J_{-\nu}(z)J_{\nu}^{\prime}(z)=- \frac{2\sin\nu\pi}{\pi z}. \tag{37}\]
Here we are interested in the adiabatic limit given by
\[|\frac{\dot{\omega}_{n}(\tau)}{\omega(\tau)}|=\tilde{f}^{2}\frac{\tilde{f}^{2 }e^{-2\tau}}{\tilde{f}^{2}e^{-2\tau}+n^{2}}\ll 1 \tag{38}\]
Note that, even close to the null singularity (\(\tau\rightarrow-\infty\)), the adiabatic regime is controlled by Ramond-Ramond field. So, in the adiabatic regime (\(\alpha f_{0}<<1\)), the solution can be approximated by
\[\phi_{n}(\tau)\approx\phi_{n}^{ad}(\tau)=\frac{1}{\sqrt{2\omega_{n}(\tau)}}e^{-i\int^{\tau}\omega_{n}(\tau^{\prime})d\tau^{\prime}}\,. \tag{30}\]
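These properties are easy to test numerically. The sketch below (Python with mpmath; an illustration rather than part of the original argument) checks that the Bessel mode function \(J_{in}(\tilde{f}e^{-\tau})\) solves the mode equation, that its Wronskian is independent of \(\tau\) — which is what keeps the invariant oscillators canonically normalized — and that, once rescaled to the canonical Wronskian \(i\), its modulus approaches the adiabatic value \(1/\sqrt{2\omega_{n}}\) when \(\tilde{f}=\alpha f_{0}\) is small.

```python
import mpmath as mp

mp.mp.dps = 30                        # working precision
n = 3                                 # mode number
ftil = mp.mpf('0.05')                 # f~ = alpha*f0, small -> adiabatic regime

def phi(tau):
    # Bessel mode function, dropping its constant (tau-independent) normalization
    return mp.besselj(1j * n, ftil * mp.exp(-tau))

def omega(tau):
    return mp.sqrt(n**2 + ftil**2 * mp.exp(-2 * tau))

# (i) phi solves the mode equation: phi'' + omega^2(tau) phi = 0
tau = mp.mpf('0.7')
res = mp.diff(phi, tau, 2) + omega(tau)**2 * phi(tau)
print('ODE residual (consistent with zero):', mp.nstr(abs(res), 5))

# (ii) the Wronskian  phidot* phi - phidot phi*  is tau-independent, as required above
def wronskian(tau):
    d = mp.diff(phi, tau)
    return mp.conj(d) * phi(tau) - d * mp.conj(phi(tau))

w_a, w_b = wronskian(mp.mpf('-3')), wronskian(mp.mpf('2'))
print('Wronskian at two times:', mp.nstr(w_a, 8), mp.nstr(w_b, 8))

# (iii) rescale to the canonical Wronskian i; then |phi| ~ 1/sqrt(2 omega) adiabatically
norm = mp.sqrt(mp.re(w_a / 1j))       # real positive, since the Wronskian is purely imaginary
for t in ['-3', '0', '3']:
    t = mp.mpf(t)
    ratio = abs(phi(t)) / norm * mp.sqrt(2 * omega(t))
    print('tau =', mp.nstr(t, 3), '  |phi| * sqrt(2 omega) / norm =', mp.nstr(ratio, 6))
```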
The adiabatic solution is important to study the thermalization process in the LvN approach. It will be shown that in this regime the invariant density matrix approaches the thermal equilibrium density matrix, calculated at the instant of time when the system enters thermodynamic equilibrium. Once the invariant creation and annihilation operators are defined, one uses the Lewis-Riesenfeld theorem and finds a basis for the bosonic Fock space defined by time-dependent number states:
\[N_{n}^{LvN}(\tau)|n,\tau)_{b}=n|n,\tau)_{b} \tag{31}\]
where \(N_{n}^{LvN}(\tau)\) is defined in the usual way:
\[N_{n}^{LvN}(\tau)=\delta^{IJ}\alpha_{n}^{\dagger I}(\tau)\alpha_{n}^{J}(\tau). \tag{32}\]
In the next section, the invariant number operator will be used to define a LvN invariant density matrix. Suppose the system thermalizes adiabatically at time \(\tau_{0}\). Then, close to \(\tau_{0}\) the Hamiltonian can be written as
\[H_{l.c.}^{bos.}\approx H_{LvN}^{bos.}=\frac{1}{\alpha}\left[\sum_{n=-\infty} ^{\infty}\omega_{n}(\tau_{0})N_{n}^{LvN}(\tau)+4\right]. \tag{33}\]
The position and momentum mode operators that appear in equation (4.7) can also be defined in the Heisenberg representation. Let us define \(p_{n}(\tau)\) and \(x_{n}(\tau)\) as the momentum and position mode operators in the Heisenberg representation, and write the non-invariant operators
\[a_{n}^{I}(\tau) = \frac{\sqrt{2\omega_{n}(\tau)}}{2}\left[\frac{p_{n}^{I}}{\omega_{ n}}-ix_{n}^{I}\right]\] \[a_{n}^{\dagger I}(\tau) = \frac{\sqrt{2\omega_{n}(\tau)}}{2}\left[\frac{p_{n}^{I}}{\omega_ {n}}+ix_{n}^{I}\right]\,, \tag{34}\]
which obey
\[[a_{m}^{I}(\tau),a_{n}^{\dagger J}(\tau)]=\delta_{mn}\delta^{IJ}. \tag{35}\]
In terms of the non-invariant creation and annihilation operators, the time dependent bosonic Hamiltonian may be written as
\[H_{l.c.}^{bos.}(\tau)=\frac{1}{\alpha}\left[\sum_{n=-\infty}^{\infty}\omega_{ n}(\tau)a_{n}^{\dagger}(\tau)a_{n}(\tau)+4\right]\,, \tag{36}\]
which is identical to the bosonic part of the Hamiltonian derived in [10]. The Hamiltonian is diagonal; however, it cannot be used to define the thermal density matrix since it is not LvN invariant.
### Fermionic Sector
In this subsection the quantization of the fermionic sector is worked out in the LvN approach. As in the bosonic case, for the sake of self-containedness a review of the light cone gauge fixing process, without further details, will be carried out. The fermionic part of the type IIB superstring in this background can be written as
\[S^{fer.}=-\frac{i}{2\pi\alpha^{\prime}}\int d^{2}\sigma(\sqrt{-g}g^{ab}\delta_{ AB}-\epsilon^{ab}\sigma_{3AB})\,\partial_{a}x^{\mu}\,\bar{\theta}^{A}\Gamma_{\mu}( \hat{D}_{b}\theta)^{B}+{\cal O}(\theta^{3})\,, \tag{4.27}\]
\[\sigma_{3}={\rm diag}(1,-1)\,,\] \[\hat{D}_{b}=\partial_{b}+\Omega_{\nu}\,\partial_{b}x^{\nu}\,, \tag{4.28}\]
with \(\hat{D}_{b}\) being the pull-back of the covariant derivative to the worldsheet. The indices \(a,b\) are worldsheet indices; \(A,B=1,2\) and \(\mu\) is the spacetime index. The spin connection \(\Omega_{\nu}\) is defined by
\[\Omega_{-} = 0,\] \[\Omega_{\,I} = \frac{ie^{\phi}}{4}f\,\Gamma^{+}(\Pi+\Pi^{\prime})\,\Gamma_{I}\, \sigma_{2},\] \[\Omega_{+} = -\frac{1}{2}\lambda\,x^{I}\Gamma^{+I}{\bf 1}+\frac{ie^{\phi}}{4 }f\,\Gamma^{+}(\Pi+\Pi^{\prime})\,\Gamma_{+}\sigma_{2}\,, \tag{4.29}\]
where \(\Gamma^{\pm}=(\Gamma^{0}\pm\Gamma^{9})/\sqrt{2}\), \(\sigma_{2}\) is the Pauli matrix and \(\Pi\) is symmetric, traceless and squares to one.1 The fermionic fields \(\theta^{A}\) are 10d spinors and in equation (4.27) the space time spinor indices were omitted (actually \(\theta^{A}=\theta^{A}_{\alpha}\) with \(\alpha=1,2,\ldots,16\), and \(A=1,2\)). Higher orders in theta will not be taken into account because they do not contribute in the light-cone gauge [13],[14]. The representation of \(\Gamma\)-matrices chosen is such that \(\Gamma^{0}\) is the 10d charge conjugation; therefore, the components of \(\theta^{A}\) are all real. The gauge symmetries are fixed by choosing the light-cone gauge:
Footnote 1: The following representation will be used : \(\Pi=\Gamma^{1}\Gamma^{2}\Gamma^{3}\Gamma^{4}={\rm diag}({\bf 1}_{4},-{\bf 1}_{4})\), \(\Pi^{\prime}=\Gamma^{5}\Gamma^{6}\Gamma^{7}\Gamma^{8}\)
\[x^{+}=\alpha^{\prime}p^{+}\tau\,,\ \ p^{+}>0\;.\] \[\Gamma^{+}\theta^{A}=0\;, \tag{4.30}\]
The kappa symmetry (\(\Gamma^{+}\theta^{A}=0\)) implies
\[(\theta^{A})^{T}\Gamma^{I}\theta^{B}=0,\ \ \forall A,B\,,\] \[(\Omega_{I})^{A}_{\ B}\theta^{B}=0\,,\] \[\Pi\theta^{A}=\Pi^{\prime}\theta^{A}\,. \tag{4.31}\]
After fixing the kappa symmetry the ten dimensional fermions are reduced to \(SO(8)\) representation. In ref.[10] the light cone fermionic action is written in terms of the real fields
\(\theta_{a}^{1}\) and \(\theta_{a}^{2}\); here complex fields will be used. Since \(\theta_{a}^{1}\) and \(\theta_{a}^{2}\) have the same chirality, we can define the complex positive chirality SO(8) spinor (\(\theta^{a}\), \(a=1,\ldots,8\)) by
\[\theta_{a}=e^{-i\frac{\tau}{4}}\left(\theta_{a}^{1}+i\theta_{a}^{2}\right),\ \ \bar{\theta}_{a}=e^{i\frac{\tau}{4}}\left(\theta_{a}^{1}-i\theta_{a}^{2}\right) \tag{4.32}\]
From this point onwards the \(SO(8)\) spinor indices will be often omitted. Finally, the light cone fermionic action can be written as
\[S_{l.c.}^{fer.}=\frac{1}{4\pi\alpha^{\prime}}\int d\tau\int_{0}^{2\pi\alpha}d\sigma\ [i(\bar{\theta}\partial_{\tau}\theta+\theta\partial_{\tau}\bar{\theta})-\theta\partial_{\sigma}\theta+\bar{\theta}\partial_{\sigma}\bar{\theta}-2m(\tau)\bar{\theta}\Pi\theta]. \tag{4.33}\]
where \(\tau\) and \(\sigma\) were re-scaled as in the bosonic case. The last term in the action is a time dependent mass term resulting from the RR five-form flux. The time dependent mass \(m(\tau)\) is the same as the bosonic sector. Again, in order to quantize the theory using the LvN approach, we expand \(\theta\) and its conjugate momentum \(\Lambda\equiv\frac{i}{2\pi\alpha^{\prime}}\bar{\theta}\) as
\[\theta(\sigma) = \vartheta_{0}+\frac{1}{\sqrt{2}}\sum_{n\neq 0}(\vartheta_{|n|} -ie(n)\vartheta_{-|n|})e^{in\sigma/\alpha^{\prime}p^{+}},\] \[\Lambda(\sigma) = \frac{i}{2\pi\alpha}\left[\lambda_{0}+\frac{1}{\sqrt{2}}\sum_{n \neq 0}(\lambda_{|n|}-ie(n)\lambda_{-|n|})e^{in\sigma/\alpha^{\prime}p^{+}} \right], \tag{4.34}\]
such that the anticommutation relation
\[\{\theta^{a}(\sigma),\Lambda^{b}(\sigma^{\prime})\}=i\delta^{ab}\delta( \sigma-\sigma^{\prime}) \tag{4.35}\]
follows from
\[\{\vartheta_{m}^{a},\lambda_{n}^{b}\}=\delta^{ab}\delta_{mn}. \tag{4.36}\]
In equation (4.34), \(e(n)\) is the sign of \(n\). Note that the Fourier modes satisfy \(\lambda_{n}=\frac{\alpha^{\prime}p^{+}}{2}\bar{\vartheta_{n}}\). For the sake of simplicity, let us define \(\lambda=-i\Lambda\). In terms of \(\lambda\), the fermionic Hamiltonian is
\[H_{l.c.}^{fer.}=\frac{1}{2}\int_{0}^{2\pi\alpha^{\prime}p^{+}}d\sigma\ \left[-4\pi\lambda\partial_{\sigma}\lambda+\frac{1}{4\pi}\theta\partial_{ \sigma}\theta+2m(\tau)(\lambda\Pi\theta)\right]. \tag{4.37}\]
In terms of the Fourier mode operators, we have
\[H_{l.c.}^{fer.}=\frac{1}{2}\sum_{n=-\infty}^{\infty}\left[\frac{n}{2}\left(\frac{4}{(\alpha^{\prime}p^{+})^{2}}\lambda_{-n}\lambda_{n}-\vartheta_{-n}\vartheta_{n}\right)+2m(\tau)\lambda_{n}\Pi\vartheta_{n}\right]. \tag{4.38}\]
If this Hamiltonian is used to solve the LvN equation in order to find the invariant fermionic operators, a set of coupled equations that are difficult to solve will emerge. The equations become simpler if we perform the following Bogoliubov transformation
\[\lambda_{n} = \frac{\sqrt{\alpha}}{2}\left[\hat{\lambda}_{n}+e(-n)\hat{\vartheta }_{-n}\right],\ n\neq 0\] \[\vartheta_{n} = \frac{1}{\sqrt{\alpha}}\left[\hat{\vartheta}_{n}+e(-n)\hat{ \lambda}_{-n}\right],\ n\neq 0\] \[\lambda_{0} = \hat{\lambda}_{0},\ \ \vartheta_{0}=\hat{\vartheta}_{0}, \tag{4.39}\]
such that
\[\{\hat{\lambda}_{n},\hat{\vartheta}_{m}\}=\delta_{nm},\,\,\,n\in\mathbb{Z} \tag{4.40}\]
In terms of the hat operators, the Hamiltonian is written as
\[H^{fer.}_{l.c.}=H^{fer.}_{0}+\sum_{n=1}^{\infty}\left[\frac{n}{\alpha}\left( \hat{\vartheta}_{-n}\hat{\lambda}_{-n}-\hat{\lambda}_{n}\hat{\vartheta}_{n} \right)+m(\tau)\left(\hat{\lambda}_{-n}\Pi\lambda_{n}+\hat{\vartheta}_{n}\Pi \hat{\vartheta}_{-n}\right)\right], \tag{4.41}\]
where
\[H^{fer.}_{0}=\tilde{f}^{2}e^{-2\tau}\hat{\lambda}_{0}\Pi\hat{\vartheta}_{0}. \tag{4.42}\]
Now, a set of fermionic LvN invariant operators can be defined, satisfying
\[\{\beta_{m}(\tau),\beta_{n}^{\dagger}(\tau)\}=\delta_{mn}\,,n\in\mathbb{Z}\,. \tag{4.43}\]
Let's write \(\beta_{n}(\tau)\) and \(\beta_{n}^{\dagger}(\tau)\) as
\[\beta_{n}(\tau)=F(\tau)\hat{\lambda}_{n}+G(\tau)\hat{\vartheta}_ {-n}\] \[\beta_{n}^{\dagger}(\tau)=F(\tau)^{*}\hat{\vartheta}_{n}+G(\tau) ^{*}\hat{\lambda}_{-n}, \tag{4.44}\]
where the functions \(F(\tau)\) and \(G(\tau)\) must satisfy
\[|F(\tau)|^{2}+|G(\tau)|^{2}=1. \tag{4.45}\]
The equations (2.1) for \(\beta_{n}\) result in the following system of coupled first order equations
\[i\dot{F}+nF+\alpha m(\tau)\Pi G = 0\] \[i\dot{G}-nG+\alpha m(\tau)\Pi F = 0. \tag{4.46}\]
By using \(\Pi^{2}=1\), equations (4.46) result in the following decoupled second order equations:
\[\ddot{G}+\dot{G}+(n^{2}+in+\tilde{f}^{2}e^{-2\tau})G = 0\] \[\ddot{F}+\dot{F}+(n^{2}-in+\tilde{f}^{2}e^{-2\tau})F = 0 \tag{4.47}\]
By defining again \(z(\tau)=\tilde{f}e^{-\tau}\), the solutions that satisfy the conditions (4.45) are
\[F(\tau) = \sqrt{\frac{z(\tau)}{2}}\Gamma(\frac{1}{2}+in)J_{\frac{1}{2}+in} \left(z(\tau)\right)\] \[G(\tau) = \sqrt{\frac{z(\tau)}{2}}\Gamma(\frac{1}{2}+in)J_{-\frac{1}{2}+in }\left(z(\tau)\right), \tag{4.48}\]
where the following properties were used
\[\Gamma\left(\frac{1}{2}+in\right)\,\Gamma\left(\frac{1}{2}-in \right)=\frac{\pi}{\cosh n\pi}\,\,, \tag{4.49}\] \[J_{-\frac{1}{2}+in}(z)J_{-\frac{1}{2}-in}(z)+J_{\frac{1}{2}+in} (z)J_{\frac{1}{2}-in}(z)=\frac{2\cosh n\pi}{\pi z}. \tag{4.50}\]
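As a small numerical cross-check (an mpmath sketch, not taken from the original text), the unitarity condition (4.45) can be verified directly for the solution (4.48):

```python
import mpmath as mp

mp.mp.dps = 30

def FG(n, z):
    # Fermionic Bogoliubov functions of eq. (4.48), with z = f~ e^{-tau}
    pref = mp.sqrt(z / 2) * mp.gamma(mp.mpf('0.5') + 1j * n)
    F = pref * mp.besselj(mp.mpf('0.5') + 1j * n, z)
    G = pref * mp.besselj(-mp.mpf('0.5') + 1j * n, z)
    return F, G

# |F|^2 + |G|^2 = 1, eq. (4.45), for several mode numbers and times;
# this is exactly the content of the Gamma/Bessel identities quoted above
for n in [1, 2, 5]:
    for z in [mp.mpf('0.1'), mp.mpf('1.3'), mp.mpf('7.0')]:
        F, G = FG(n, z)
        print(f'n={n}, z={float(z):4.1f}:', mp.nstr(abs(F)**2 + abs(G)**2, 12))
```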
An adiabatic solution can be found and it has the same structure as that found in the bosonic case. So, we have constructed a set of fermionic raising and lowering invariant operators that can be used to define the fermionic number operators in the usual way. The superstring vacuum is defined by
\[\alpha^{I}_{n}(\tau)|0,t\rangle=0,\;\;\;\beta_{n}(\tau)|0,\tau\rangle=0 \tag{63}\]
The states created from \(\alpha^{\dagger}_{n},\beta^{\dagger}_{n}\) in general depend on time, but the Lewis-Riesenfeld theorem guarantees that their eigenvalues (occupation numbers) do not depend on time.
As in the bosonic case, a set of non-invariant time dependent fermionic raising and lowering operators can also be found, satisfying
\[\{b_{m}(\tau),b^{\dagger}_{n}(\tau)\}=\delta_{mn},n\in\mathbb{Z}. \tag{64}\]
However, the change of basis is far more complicated than the one used in the bosonic sector. Going back to the original mode basis, let's write \(\vartheta_{n}\) and \(\lambda_{n}\) as the fermionic mode operators in the Heisenberg representation. The following time dependent operators can be defined:
\[b_{n}(\tau) = \frac{1}{2}\left[\sqrt{\alpha}A^{+}_{n}(\tau)\vartheta_{n}+ \frac{2}{\sqrt{\alpha}}e(n)A^{-}_{n}(\tau)\lambda_{-n}\right]\] \[b^{\dagger}_{n}(\tau) = \frac{1}{2}\left[\frac{2}{\sqrt{\alpha}}A^{+}_{n}(\tau)\lambda_ {n}+\sqrt{\alpha}e(n)A^{-}_{n}(\tau)\vartheta_{-n}\right], \tag{65}\]
where the time dependent matrices \(A^{\pm}_{n}(\tau)\) are defined by
\[A^{\pm}_{n}(\tau)=\frac{1}{1+\gamma_{n}(\tau)}\left(1+\gamma_{n}(\tau)\Pi \right),\;\gamma_{n}(\tau)=\frac{\omega_{n}(\tau)-|n|}{\alpha m(\tau)}. \tag{66}\]
The change of basis is similar to the one used in the time independent case [15]. Note that the relations (65) also break the \(SO(8)\) symmetry to \(SO(4)\times SO(4)\). The matrices \(A^{\pm}_{n}(\tau)\) were chosen in such a way that, in terms of \(b_{n}\), the fermionic time dependent (non-invariant) Hamiltonian is diagonal and takes the simple form
\[H^{ferm.}_{l.c.}(\tau)=\frac{1}{\alpha^{\prime}p^{+}}\left[\sum_{n=-\infty}^{ \infty}\omega_{n}(\tau)\left(b^{\dagger}_{n}b_{n}-4\right)\right], \tag{67}\]
such that the total time dependent superstring Hamiltonian, written in terms of \(a_{n}\), \(b_{n}\), has the same form of the one found in [10] using a different method. Note that the zero-point energy of the non invariant Hamiltonian exactly cancels between the bosons and the fermions.
## 5 The invariant string density matrix
In the previous section the superstring was quantized in the LvN picture, which made it possible to find LvN invariant creation/annihilation operators. These invariant operators can be used to define a density matrix that satisfies the light cone LvN equation:
\[\frac{i}{\alpha}\frac{\partial}{\partial\tau}\rho_{LvN}+[\rho_{LvN},H_{l.c.}(\tau)]=0 \tag{68}\]
where \(H_{l.c.}=H_{l.c.}^{bos.}+H_{l.c.}^{fer.}\). Note that, in terms of the non invariant oscillators, the Hamiltonian is:
\[H_{l.c.}(\tau)=\frac{1}{\alpha}\sum_{n=-\infty}^{\infty}\omega_{n}(\tau)\left(a _{n}^{\dagger}(\tau)a_{n}(\tau)+b_{n}^{\dagger}(\tau)b_{n}(\tau)\right). \tag{30}\]
This Hamiltonian is diagonal and has a time dependent supersymmetric spectrum. However, as explained before, this Hamiltonian cannot be used to define a thermal density matrix because it is not LvN invariant. The main goal of this section is to show that, if the thermalization occurs adiabatically at time \(\tau_{0}\), this Hamiltonian can be used to define an instantaneous thermal density matrix, defined at time \(\tau_{0}\). To this end, a density matrix that satisfies the LvN equation will first be defined, and then it will be shown that, in the adiabatic limit, this density matrix approaches the one obtained with the instantaneous Hamiltonian \(H_{l.c.}(\tau_{0})\). With this result, we can calculate the thermal partition function at an equilibrium temperature T, defined at \(\tau_{0}\) in terms of the instantaneous diagonal Hamiltonian. As an application of the invariant density matrix, the non equilibrium worldsheet two point function is calculated. For the sake of simplicity, only the bosonic sector is going to be taken into account. The generalization to the fermionic sector is straightforward.
In order to calculate the invariant light cone density matrix, we need to take into account the timelike Killing vectors. In flat space, the timelike Killing vector is \(\frac{1}{\sqrt{2}}\left[\frac{\partial}{\partial x^{+}}+\frac{\partial}{\partial x^{-}}\right]\). Here, owing to the non trivial dilaton dependency on \(x^{+}\), the timelike Killing vector is just \(\frac{\partial}{\partial x^{-}}\). However, we want to define an invariant density matrix such that, in the asymptotically flat limit, it reduces to the standard expression for the light cone flat space string density matrix. So, the invariant density matrix is defined as
\[\rho_{LvN}=\frac{1}{Z_{LvN}}e^{-\tilde{\beta}\left(p^{+}+H_{LvN}\right)}\,, \tag{31}\]
where \(\tilde{\beta}=\frac{\beta}{\sqrt{2}}\) and \(H_{LvN}\) is given in terms of the invariant creation/annihilation operators
\[H_{LvN}=\frac{1}{\alpha^{\prime}p^{+}}\sum_{n=1}^{\infty}\tilde{\omega}_{n}\left(\delta_{IJ}\alpha_{n}^{I\dagger}\alpha_{n}^{J}+\delta_{ab}\beta_{n}^{a\dagger}\beta_{n}^{b}\right). \tag{32}\]
The normalization factor \(Z_{LvN}\)(the string partition function) is the trace
\[Z_{LvN}=Tre^{-\tilde{\beta}\left(p^{+}+H_{LvN}\right)}. \tag{33}\]
In general, \(\tilde{\omega}_{n}\) and \(\beta\) are free parameters. It is assumed that the system thermalizes at a time \(\tau_{0}\), with equilibrium temperature \(1/\beta\). The parameter \(\tilde{\omega}_{n}\) will be related to \(\omega_{n}(\tau_{0})\), as it will be clear soon.
By using the Lewis-Riesenfeld invariant theorem, the Hamiltonian can be written on a time dependent number basis and the density matrix can be written as
\[\rho_{LvN}(\beta,\tau)=\frac{1}{Z_{LvN}}\sum_{\{n_{i}^{I}\}}\exp-\tilde{\beta }\left[\sum_{I,i}\tilde{\omega}_{i}n_{i}^{I}+p^{+}\right]\lvert\{n_{i}^{I}\}, p^{+},\tau\rangle\langle\{n_{i}^{I}\},p^{+},\tau\rvert, \tag{34}\]
where \(\{n_{i}^{I}\}=\{n_{i}^{I}\}_{i=-\infty}^{\infty}=n_{-\infty}^{I},\ldots,n_{\infty} ^{I}\). In order to simplify the notation, space-time indices will not be taken into account in the next steps. As the background is symmetrical with respect to the different transverse coordinates, one can calculate the contribution of one dimension and then take into account the other transverse dimensions. Now let us take advantage of the notation used in (4.7) to write the density matrix in position representation. In terms of the Fourier modes it takes the form
\[\rho^{1}(x_{1},x_{2},...,x_{1}^{\prime},x_{2}^{\prime}...,\beta)_{ LvN}=\frac{1}{Z_{LvN}}\langle x_{1},x_{2},...|\rho_{LvN}(\tilde{\beta})|x_{1}^ {\prime},x_{2}^{\prime},...\rangle, \tag{5.7}\]
where index \(1\) in the density matrix indicates that the contribution of only one transversal dimension is being taken into account. To simplify the notation, \(\rho^{1}(x,x^{\prime},\beta)_{LvN}\) will be used instead of \(\rho^{1}(x_{1},x_{2},...,x_{1}^{\prime},x_{2}^{\prime}...,\beta)_{LvN}\). The next step is to write the string number state in position representation. By writing \(\alpha_{n}(\tau)\) in the position representation, the LvN vacuum state is defined by
\[i\left[\phi_{n}^{*}\frac{\partial}{i\partial x_{n}}-\dot{\phi}_{n}^{*}x_{n}\right]\Psi_{0}=0,\ \ n\in\mathbb{Z}\,. \tag{5.8}\]
The normalized solution is
\[\Psi_{0}=\prod_{j\in\mathbb{Z}}\left(\frac{1}{2\phi_{j}}\right)^{1/4}e^{\frac{ i}{2}\frac{\dot{\phi}_{j}^{*}}{\phi_{j}}x_{j}^{2}}\,. \tag{5.9}\]
The other states are constructed in the usual way by applying \(\alpha_{n}^{\dagger}(\tau)\). So, the string number state is written in the coordinate representation as
\[\Psi_{n}=\prod_{j}\frac{1}{\sqrt{2\pi\phi_{j}^{*}\phi_{j}}}\frac{1}{\sqrt{2^{n _{j}}n_{j}!}}\left(\frac{\phi_{j}}{\phi_{j}^{*}}\right)H_{n}(q_{j})e^{\frac{i }{2}\frac{\dot{\phi}_{j}^{*}}{\phi_{j}}x_{j}^{2}}\,, \tag{5.10}\]
where \(H_{n}(q_{j})\) are the Hermite polynomials and
\[q_{j}=\frac{x_{j}}{\sqrt{2\phi_{j}^{*}\phi_{j}}}\,. \tag{5.11}\]
Using (5.10), the density matrix for one transversal coordinate \(\rho^{1}(x,x^{\prime},\beta)\) is given by
\[\frac{e^{-\tilde{\beta}p^{+}}}{Z_{LvN}}\sum_{\{n_{j}\}}\prod_{j\in\mathbb{Z}} \frac{H_{n_{j}}(q_{j})H_{n_{j}}(q_{j}^{\prime})}{2\pi\phi_{j}^{*}\phi_{j}2^{n _{j}}n_{j}!}e^{-\tilde{\beta}\omega_{j}(n_{j}+\frac{1}{2})}e^{i[\frac{\phi_{j} ^{*}}{\phi_{j}}x_{j}^{2}-\frac{\dot{\phi}_{j}}{\phi_{j}^{*}}x_{j}^{\prime}{}^ {2}]}\,. \tag{5.12}\]
Following the method developed in [16], the density matrix can be simplified using the following integral representation for the Hermite polynomials:
\[H_{n_{j}}(q_{j})=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}(-2iz)^{n_{j}}e^{- (z_{j}+iq_{j})^{2}}dz. \tag{5.13}\]
Using this identity for each mode, one has
\[\rho^{1}(x,x^{\prime},\beta)=\frac{e^{-\tilde{\beta}p^{+}}}{Z}\sum_{\{n_{j}\}}\prod_{j\in\mathbb{Z}}\frac{1}{\sqrt{2\pi\phi_{j}^{*}\phi_{j}}}e^{-\tilde{\beta}\frac{\omega_{j}}{2}}I_{n_{j}}\;, \tag{5.14}\]
where
\[I_{n_{j}}=\frac{e^{-\tilde{\beta}\omega_{j}n_{j}}}{\pi 2^{n_{j}}n_{j}!}e^{q_{j}^{2}+q_ {j}^{\prime 2}}\int\int dz_{j}dw_{j}(2iz_{j})^{n_{j}}(2iw_{j})^{n_{j}}e^{-z_{j}^{2} +2iz_{j}x_{j}}e^{-w_{j}^{2}+2iw_{j}x_{j}^{\prime}}. \tag{5.15}\]
Now, by defining the following matrices
\[A_{j}=2\begin{bmatrix}1&e^{-\tilde{\beta}\omega_{j}}\\ e^{-\tilde{\beta}\omega_{j}}&1\end{bmatrix},\,\,Y_{j}=\begin{bmatrix}z_{j}\\ w_{j}\end{bmatrix},\,\,B_{j}=-2\begin{bmatrix}x_{j}^{\prime}\\ x_{j}\end{bmatrix}, \tag{5.16}\]
after summing over each \(n_{j}\), the density matrix is
\[\rho^{1}(x_{1},x_{2},...,x_{1}^{\prime},x_{2}^{\prime}...,\beta)_{LvN}=\frac{e ^{-\tilde{\beta}p^{+}}}{Z}\sum_{n_{j}}\prod_{j\in\mathbb{Z}}\frac{1}{\sqrt{2 \pi^{2}\phi_{j}^{*}\phi_{j}}}e^{q_{j}^{2}+q_{j}^{\prime 2}}\int\int exp\left[- \frac{1}{2}Y^{\dagger}\mathbf{A_{j}}Y_{j}+iB_{j}^{\dagger}Y_{j}\right]. \tag{5.17}\]
Finally, one can use the result
\[\int\int e^{\left[-\frac{1}{2}Y^{\dagger}\mathbf{A}Y+iB^{\dagger}Y\right]}=\frac{2\pi}{\sqrt{\det\mathbf{A}}}e^{\left[-\frac{1}{2}B^{\dagger}\mathbf{A}^{-1}B\right]} \tag{5.18}\]
to write the density matrix in the form (after some algebra and taking into account the eight transverse dimensions)
\[\rho(x_{1},x_{2},...,x_{1}^{\prime},x_{2}^{\prime}...,\beta)_{LvN} = \frac{e^{-\beta p^{+}}}{Z_{LvN}}\prod_{j\in\mathbb{Z}}\Biggl{[} \frac{1}{2\pi\phi_{j}^{*}(\tau)\phi_{j}(\tau)\sinh\tilde{\beta}\tilde{\omega}_ {j}}\Biggr{]}^{4}\] \[\times \exp\Biggl{[}4i\sum_{n\in\mathbb{Z}}\frac{d}{d\tau}\ln(\phi_{j}^ {*}(\tau)\phi_{j}(\tau))({x_{n}^{\prime}}^{2}-x_{n}^{2})\Biggr{]}\] \[\times \exp\Biggl{[}-\sum_{n\in\mathbb{Z}}\frac{1}{\phi(\tau)_{j}^{*} \phi(\tau)_{j}}\Biggr{\{}(x_{n}^{\prime}+x_{n})^{2}\tanh(\frac{\tilde{\beta} \tilde{\omega}_{j}}{2})+(x_{n}^{\prime}-x_{n})^{2}\coth(\frac{\tilde{\beta} \hbar\tilde{\omega}_{j}}{2})\Biggr{\}}\Biggr{]}.\]
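The Gaussian resummation step can be checked numerically. The sketch below (SciPy, with illustrative values for the entries of \(A_{j}\) and \(B_{j}\) — they are not values taken from the text) verifies the two-dimensional Gaussian integral (5.18) used to arrive at the expression above.

```python
import numpy as np
from scipy import integrate

# Representative matrix A_j and source B_j of eq. (5.16); illustrative numbers only
q = np.exp(-0.7)                        # plays the role of e^{-beta*omega_j}
A = 2.0 * np.array([[1.0, q], [q, 1.0]])
B = -2.0 * np.array([0.4, -0.9])        # -2*(x'_j, x_j) for sample positions

def integrand(w, z):
    Y = np.array([z, w])
    # only the real part survives: the sin part is odd under Y -> -Y
    return np.exp(-0.5 * Y @ A @ Y) * np.cos(B @ Y)

num, _ = integrate.dblquad(integrand, -8, 8, -8, 8)
exact = 2 * np.pi / np.sqrt(np.linalg.det(A)) * np.exp(-0.5 * B @ np.linalg.inv(A) @ B)
print(num, exact)                       # the two numbers agree
```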
One can now compare this density matrix with the density matrix obtained with the time dependent Hamiltonian (5.2) defined at a time \(\tau_{0}\)
\[\rho_{T}(\tau_{0})=\frac{e^{-\tilde{\beta}\left(p^{+}+H_{lc}^{bos}(\tau_{0}) \right)}}{Z_{T}(\tau_{0})}, \tag{5.20}\]
where \(T=\frac{1}{\beta}\) is the equilibrium temperature and \(Z_{T}\) is the thermal partition function
\[Z_{T}(\tau_{0})=Tre^{-\tilde{\beta}\left(p^{+}+H_{lc}^{bos}(\tau_{0})\right)}\,. \tag{5.21}\]
It is easy to see that when \(\tilde{\omega}_{n}=\omega_{n}(\tau_{0})\), \(Z_{T}=Z_{LvN}\). Following the same steps as before, the instantaneous density matrix is written in the position representation as
\[\rho_{T}(\tau_{0}) = \frac{e^{-\tilde{\beta}p^{+}}}{Z_{T}(\tau_{0})}\prod_{j\in \mathbb{Z}}\Biggl{[}\frac{\omega_{j}(\tau_{0})}{2\pi\sinh(\tilde{\beta}\omega_ {j}(\tau_{0})}\Biggr{]}^{4}\] \[\times\exp\Biggl{[}-2\sum_{n\in\mathbb{Z}}\omega_{n}(\tau_{0}) \Biggr{\{}(x_{n}^{\prime}+x_{n})^{2}\tanh(\frac{\tilde{\beta}\omega_{n}(\tau_{ 0})}{2})+(x_{n}^{\prime}-x_{n})^{2}\coth(\frac{\tilde{\beta}\hbar\omega_{n}( \tau_{0})}{2})\Biggr{\}}\Biggr{]}.\]
In the adiabatic regime
\[\phi_{n}\phi_{n}^{*}\approx\frac{1}{2\omega_{n}},\ \left|\frac{\dot{\omega}_{n}}{ \omega_{n}}\right|<<1, \tag{5.23}\]
one has
\[\rho(x_{1},x_{2},...,x_{1}^{\prime},x_{2}^{\prime}...,\beta)_{LvN}\approx\rho_{ T}(\tau_{0}), \tag{5.24}\]
if one sets \(\tilde{\omega}_{j}=\omega_{j}(\tau_{0})\). So, close to \(\tau_{0}\), the non equilibrium string thermal state is given by (5.6) and (5.19), and (5.21) can be used to calculate, for example, the Hagedorn temperature as a function of \(\tau_{0}\). Before that, as a direct application of the invariant density matrix, let's calculate the worldsheet time dependent two-point function at finite temperature, which is an important object to study non-equilibrium phenomena.
### The Real Time Thermal Two Point Function
As an application of the invariant density matrix, the time dependent two-point function at finite temperature can be calculated. Let's start using the LvN invariant operators to evaluate the two-point function at equal times at zero temperature, by taking the expectation value with respect to the vacuum state \(|0,\tau\rangle\) which is annihilated by \(\alpha_{n}^{I}(\tau)\),
\[g^{IJ}(\sigma,\sigma^{\prime})=\langle 0,\tau|X^{I}(\sigma,\tau)X^{J}(\sigma^{ \prime},\tau)|0,\tau\rangle\,. \tag{5.25}\]
This is computed by inverting the relations (4.14):
\[x_{n}^{I}=\alpha_{n}^{I}(\tau)\phi_{n}(\tau)+\alpha_{n}^{I\dagger}(\tau)\phi_{n}^{*}(\tau). \tag{5.26}\]
In the adiabatic approximation one gets
\[g^{IJ}(\sigma,\sigma^{\prime},\tau)=\alpha^{\prime}\delta^{IJ}\sum_{n\in\mathbb{Z}}\frac{e^{in(\sigma-\sigma^{\prime})}}{\omega_{n}(\tau)}. \tag{5.27}\]
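This mode sum is essentially the object that is Poisson-resummed below into a sum of Bessel \(K_{0}\) functions. The short numerical sketch that follows (Python/SciPy, with illustrative parameter values) checks the identity behind that step, \(\sum_{n\in\mathbb{Z}}e^{inx}/\sqrt{n^{2}+m^{2}}=2\sum_{l\in\mathbb{Z}}K_{0}(m|x+2\pi l|)\).

```python
import numpy as np
from scipy import special

# Illustrative check of the Poisson-resummed form of the mode sum:
#   sum_{n in Z} e^{i n x}/sqrt(n^2+m^2)  =  2 sum_{l in Z} K_0(m |x + 2 pi l|)
m, x = 0.6, 1.0

n = np.arange(1, 200001)
lhs = 1.0 / m + 2.0 * np.sum(np.cos(n * x) / np.sqrt(n**2 + m**2))

l = np.arange(-6, 7)
rhs = 2.0 * np.sum(special.k0(m * np.abs(x + 2 * np.pi * l)))
print(lhs, rhs)   # agree up to the slow 1/N tail of the truncated mode sum
```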
The time dependent finite temperature two-point function is calculated by taking the expectation value with respect to the thermal state, defined by the invariant density matrix
\[G^{IJ}(\sigma,\sigma^{\prime},t)_{T}=\mathrm{Tr}\left[\rho_{LvN}X^{I}(\sigma, \tau)X^{J}(\sigma^{\prime},\tau)\right]. \tag{5.28}\]
The trace must be taken over the physical states of the closed string. Although the light-cone gauge solves the two worldsheet reparametrization constraints, one single consistency condition remains to be imposed to calculate the trace above, which is related to the circle isometry of the closed string. In order to fix this isometry on the Fock space, the bosonic physical states \(|\Psi\rangle_{b}\) must be annihilated by the \(\sigma\) translation generator \(\mathcal{P}_{b}\), which implies the level-matching constraint
\[\mathcal{P}_{b}|\Psi\rangle_{b}=0 \tag{5.29}\]
where
\[\mathcal{P}_{b}=\sum_{n\in\mathbb{Z}}n\left[\delta_{IJ}\alpha_{n}^{I\dagger}( \tau)\alpha_{n}^{J}(\tau)\right]. \tag{5.30}\]
Then, to ensure that the trace is taken over the physical states, one introduces the projector
\[\int_{-1/2}^{1/2}d\lambda e^{(2\pi i\lambda{\cal P}_{b})}, \tag{108}\]
such that the invariant thermal two point function is
\[G^{IJ}(\sigma,\sigma^{\prime},\tau)_{T}=\frac{1}{Z_{LvN}}\int dp^{+}e^{-\tilde{ \beta}p^{+}}\int_{-1/2}^{1/2}\sum_{\{n_{j}\}}\langle\{n_{j}(\tau)\}|e^{-\tilde {\beta}H_{LvN}}e^{2\pi i\lambda{\cal P}}X^{I}(\sigma,\tau)X^{J}(\sigma^{\prime},\tau)|\{n_{j}(\tau)\}\rangle, \tag{109}\]
where \(H_{LvN}\) is given in (111). By defining
\[k_{j}=\tilde{\beta}\omega_{j}(\tau_{0}),\ \ b_{j}=2\pi\lambda j, \tag{110}\]
the two point function can be written as
\[G^{IJ}(\sigma,\sigma^{\prime},\tau)_{T}=\frac{1}{Z}\int p^{+}\sum_{\{n_{j}\}} \langle n_{j}|\exp\left[-\sum_{j\in\mathbb{Z}}(k_{j}+ib_{j})n_{j}\right]X( \sigma,\tau)X(\sigma^{\prime},\tau)|\{n_{j}\}\rangle. \tag{111}\]
After some algebra, the two point function is written as a zero temperature two point function plus a thermal contribution
\[G^{IJ}(\sigma,\sigma^{\prime},\tau)_{T}=\frac{\alpha^{\prime}}{Z}\int dp^{+} \int_{-1/2}^{1/2}d\lambda|\eta_{m}(\beta,\lambda)|^{8}\left[g^{IJ}(\sigma, \sigma^{\prime},\tau)+2\alpha^{\prime}\delta^{IJ}g(\sigma,\sigma^{\prime},t)_ {T}\right] \tag{112}\]
where \(\eta_{m}(\beta,\lambda)\) is the "massive" eta function
\[\eta_{m}(\beta,\lambda)=\prod_{n\in\mathbb{N}}\frac{1}{1-e^{-\tilde{\beta} \omega_{n}(\tau_{0})+2\pi i\lambda n}} \tag{113}\]
and
\[g(\sigma,\sigma^{\prime},\tau)_{T} = \sum_{n=0}\frac{1}{\omega_{n}}\left[\frac{e^{-(k_{n}-ib_{n})}}{1-e^{-(k_{n}-ib_{n})}}\cos n\sigma\cos n\sigma^{\prime}+\frac{e^{-(k_{n}+ib_{n})}}{1-e^{-(k_{n}+ib_{n})}}\sin n\sigma\sin n\sigma^{\prime}\right] \tag{114}\] \[= \sum_{p=1,n=0}\frac{e^{-pk_{n}}}{\omega_{n}}\left[e^{ipb_{n}}\cos n\sigma\cos n\sigma^{\prime}+e^{-ipb_{n}}\sin n\sigma\sin n\sigma^{\prime}\right]\]
is the thermal correction. In general, in finite temperature quantum field theories the term in \(\eta\) in the numerator does not appear because it is just the Z factor of the denominator. Here these terms are left over due to the integral over \(p^{+}\) and \(\lambda\)(note that \(\omega_{n}\) depends on \(p^{+}\)).
Let's focus on \(g(\sigma,\sigma^{\prime},\tau)_{T}\). The parity of the eta function with respect to \(\lambda\) can be used to rewrite the thermal contribution to the two point function as
\[g(\sigma,\sigma^{\prime},\tau)_{T} = \sum_{p=1}^{\infty}\sum_{n\in\mathbb{Z}}\frac{e^{-pK_{n}}}{2 \omega_{n}}\left[e^{in(2\pi p\lambda+(\sigma-\sigma^{\prime}))}+e^{in(2\pi p \lambda-(\sigma-\sigma^{\prime}))}\right]. \tag{115}\]
In order to investigate the leading short-distance behaviour, the Poisson resummation formula is used:
\[\sum_{n\in\mathbb{Z}}F(n)=\sum_{l\in\mathbb{Z}}\int_{-\infty}^{\infty}e^{2\pi iyl }F(y)dy, \tag{102}\]
along with the following representation of the modified Bessel function,
\[\int_{0}^{\infty}\frac{e^{-\beta\sqrt{x^{2}+\gamma^{2}}}}{\sqrt{x^{2}+\gamma^{2}}}\cos(bx)\,dx=K_{0}\left(\gamma\sqrt{b^{2}+\beta^{2}}\right), \tag{103}\]
to rewrite the finite temperature two point function as
\[G^{IJ}(\sigma,\sigma^{\prime},\tau) = 2\alpha^{\prime}K_{0}(fe^{-\tau}|\sigma-\sigma^{\prime}|)\] \[+ \frac{2\alpha^{\prime}}{Z}\int dp^{+}\int_{-1/2}^{1/2}d\lambda| \eta_{m}(\beta,\lambda)|^{8}\sum_{n\in\mathbb{Z}}K_{0}(fe^{-\tau}|2\pi n \alpha+\sigma-\sigma^{\prime}|)\] \[+ \frac{2\alpha^{\prime}}{Z}\int dp^{+}\int_{-1/2}^{1/2}d\lambda| \eta_{m}(\beta,\lambda)|^{8}\sum_{p=1}^{\infty}\sum_{n\in\mathbb{Z}}\left[K_{ 0}(fe^{-\tau}\sqrt{(\beta\alpha p)^{2}+b_{n}^{+}(\sigma,\sigma^{\prime}, \lambda)}\right]\right.\] \[+ \left.\frac{2\alpha^{\prime}}{Z}\int dp^{+}\int_{-1/2}^{1/2}d \lambda|\eta_{m}(\beta,\lambda)|^{8}\sum_{p=1}^{\infty}\sum_{n\in\mathbb{Z}} \left[K_{0}(fe^{-\tau}\sqrt{(\beta\alpha p)^{2}+b_{n}^{-}(\sigma,\sigma^{ \prime},\lambda)}\right],\right.\]
where \(b_{n}^{\pm}(\sigma,\sigma^{\prime},\lambda)=\left[2\pi\alpha(n+\lambda)\pm(\sigma-\sigma^{\prime})\right]^{2}\). We can see that the only term that has singularities when \(\sigma\rightarrow\sigma^{\prime}\) is the first term. In particular, the finite temperature contribution is finite at short distances. Let's analyze the behaviour of the two point function in two different limits. In the limit \(fe^{-\tau}|\sigma-\sigma^{\prime}|<<1\), one can expand the Bessel function as
\[K_{0}\left(z\right)=-\left(\ln\left(\tfrac{1}{2}z\right)+\gamma\right)I_{0} \left(z\right)+\frac{\tfrac{1}{4}z^{2}}{(1!)^{2}}+(1+\tfrac{1}{2})\frac{( \tfrac{1}{4}z^{2})^{2}}{(2!)^{2}}+(1+\tfrac{1}{2}+\tfrac{1}{3})\frac{(\tfrac{1 }{4}z^{2})^{3}}{(3!)^{2}}+\cdots, \tag{104}\]
where \(I_{0}(z)\) is the modified Bessel function of first kind (\(I_{0}(0)=1\)) and \(\gamma\) is the Euler constant. So, the leading short-distance behavior of the two point function is
\[\alpha^{\prime}\ln\frac{1}{fe^{-\tau}|\sigma-\sigma^{\prime}|} \tag{105}\]
which has the same leading short-distance logarithmic behavior as the flat space one. On the other hand, in the limit \(fe^{-\tau}|\sigma-\sigma^{\prime}|>>1\), the Bessel function has the following asymptotic expansion
\[K_{0}(z)=\sqrt{\frac{\pi}{2z}}e^{-z}\sum_{k=0}^{\infty}\frac{\Gamma(k+1/2)}{k!\Gamma(1/2-k)}(2z)^{-k}\,. \tag{106}\]
In this limit, the thermal two point function has an exponential damping behavior. In particular, the two point function goes to zero near the null singularity. This may corroborate the idea raised in [17], where it was argued that the string gets highly excited and breaks up into bits propagating independently near the singularity.
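The short- and long-distance statements above rest on the integral representation (103). A quick numerical check (SciPy, with illustrative values of \(\beta\), \(\gamma\) and \(b\); a sketch, not part of the original text) is:

```python
import numpy as np
from scipy import integrate, special

# Check of (103):
#   int_0^inf e^{-beta*sqrt(x^2+gamma^2)}/sqrt(x^2+gamma^2) * cos(b x) dx
#     = K_0(gamma * sqrt(beta^2 + b^2))
beta, gamma, b = 1.3, 0.8, 2.1

lhs, _ = integrate.quad(
    lambda x: np.exp(-beta * np.hypot(x, gamma)) / np.hypot(x, gamma) * np.cos(b * x),
    0, np.inf)
rhs = special.k0(gamma * np.hypot(beta, b))
print(lhs, rhs)    # agree to quadrature accuracy
```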
## 6 The light cone Superstring Partition Function and Hagedorn Behavior
In this section the light cone superstring partition function will be calculated. As previously shown, in the adiabatic regime the density matrix \(\rho_{T}(\tau)\) constructed with the non-invariant operators approaches the invariant density matrix as \(\tau\) approaches \(\tau_{0}\), where \(\tau_{0}\) is the time at which \(\rho_{T}(\tau)\) is evaluated. This result allows us to use the diagonal Hamiltonian (5.2) evaluated at \(\tau_{0}\) to calculate the partition function and, consequently, the Hagedorn temperature as a function of \(\tau_{0}\).
The light cone superstring partition function at time \(\tau_{0}\) is the trace
\[Z(\beta,\tau_{0}) = \text{Tr}e^{-\tilde{\beta}\left(p^{+}+H_{l.c.}(\tau_{0})\right)}. \tag{6.1}\]
Again, in order to fix the \(S^{1}\) isometry on the superstring Fock space and to ensure that the trace is taken over the physical states, one introduces the projector
\[\int d\lambda e^{(2\pi i\lambda\mathcal{P})}, \tag{6.2}\]
where the superstring sigma translation generator \(\mathcal{P}\) is
\[\mathcal{P}=\sum_{n\in\mathbb{Z}}n\left[\delta_{IJ}a_{n}^{I\dagger}a_{n}^{J}+ \delta_{ab}b_{n}^{\dagger a}b_{n}^{b}\right] \tag{6.3}\]
and \((a_{n}^{I\dagger},\,a_{n}^{J},\,b_{n}^{\dagger a},b_{n}^{b})\) are the operators (4.24) defined at time \(\tau_{0}\). The light cone superstring thermal partition function can be written as
\[Z(\beta,\tau_{0})=\int dp^{+}\int d\lambda e^{-\beta p^{+}}\text{Tr}e^{\left( -\tilde{\beta}H_{l.c.}(\tau_{0})+2i\pi\lambda\mathcal{P}\right)}. \tag{6.4}\]
The integrand of this partition function has interesting modular properties, which become apparent by defining the following complex parameter
\[\tau^{\prime}=\lambda+i\frac{\tilde{\beta}}{2\pi\alpha^{\prime}p^{+}}=\tau_{1 }+i\tau_{2}, \tag{6.5}\]
such that the partition function can be written as
\[Z(\beta,\tau_{0})=\int dp^{+}e^{-\beta p^{+}}\text{Tr}\left[e^{-2\pi\tau_{2}H _{l.c.}(\tau_{0})}e^{i2\pi\tau_{1}\mathcal{P}}\right]. \tag{6.6}\]
It is well known that the thermal partition function of the closed string can be written as a functional integral on the torus [18]. Let's recall here how the torus appears in the density matrix formalism that we are using. Note that the operator \(e^{-2\pi\tau_{2}H_{l.c.}}\) propagates the closed superstring through imaginary light cone time \(-2\pi\tau_{2}\). In turn, the operator \(e^{i2\pi\tau_{1}\mathcal{P}}\) rotates the closed string by an angle \(2\pi\tau_{1}\). So, the trace taken over matrix elements of the form \(\langle i|e^{-2\pi\tau_{2}H_{l.c.}(\tau_{0})}e^{i2\pi\tau_{1}\mathcal{P}}|f\rangle\) can be represented as a path integral on a torus by gluing the ends of the cylinder of length \(2\pi\tau_{2}\) with a relative twist \(2\pi\tau_{1}\). Actually the twist is related to the Dehn twist associated to one of the cycles [19]. We then conclude that \(\tau^{\prime}\) is indeed the modulus of a torus represented by the parallelogram defined in the complex
plane with vertices at \(0\), \(\tau^{\prime}\), \(1\), \(\tau^{\prime}+1\) and identified opposite sides. Furthermore, the thermal density matrix allows observing a kind of torus generalization of the KMS condition that is a consequence of the closed string torus topology [20].
In order to explore the torus modular properties of the partition function, the integral over \(p^{+}\) is rewritten as an integral over the modulus \(\tau_{2}\), such that the UV asymptotic behavior of the partition function is recovered in the limit \(\tau_{2}\to 0\). Finally, after taking the trace over the bosonic and fermionic number states, the partition function is
\[Z(\beta,\tau_{0})=\frac{\tilde{\beta}}{2\pi\alpha^{\prime}}\int_{0}^{\infty} \frac{d\tau_{2}}{{\tau_{2}}^{2}}\int d\tau_{1}\exp(-\frac{\beta^{2}}{2\pi\, \alpha^{\prime}\,\tau_{2}})z_{lc}(\tau^{\prime},\tau_{0}), \tag{100}\]
where
\[z_{lc}(\tau^{\prime},\tau_{0})=z_{lc}^{bos.}(\tau^{\prime},\tau_{0})z_{lc}^{ ferm.}(\tau^{\prime},\tau_{0}) \tag{101}\]
is the product of the bosonic and fermionic contributions:
\[z_{lc}^{bos.}(\tau^{\prime},\tau_{0}) = \exp\left[-16\pi\tau_{2}\left(\frac{\tilde{f}e^{-\tau_{0}}}{2}+ \sum_{n=1}^{\infty}\sqrt{n^{2}+\tilde{f}^{2}e^{-2\tau_{0}}}\right)\right] \tag{102}\] \[\left[\prod_{n\in\mathbb{Z}}\left(1-\exp[2\pi(-\tau_{2}\sqrt{n^{2 }+\tilde{f}^{2}e^{-2\tau_{0}}}+i\tau_{1}n)]\right)\right]^{-8},\]
\[z_{lc}^{ferm.}(\tau^{\prime},\tau_{0}) = \exp\left[16\pi\tau_{2}\left(\frac{\tilde{f}e^{-\tau_{0}}}{2}+ \sum_{n=1}^{\infty}\sqrt{n^{2}+\tilde{f}^{2}e^{-2\tau_{0}}}\right)\right]\] \[\left[\prod_{n\in\mathbb{Z}}\left(1+\exp[2\pi(-\tau_{2}\sqrt{n^{ 2}+\tilde{f}^{2}e^{-2\tau_{0}}}+i\tau_{1}n)]\right)\right]^{8}.\]
As usual in pp waves, the partition function is written in terms of generalized "massive" modular functions. Note that the contribution from the Ramond field now depends on the torus modulus through \(p^{+}\), so
\[\tilde{f}=\frac{\tilde{\beta}}{2\pi\tau_{2}}f_{0}. \tag{103}\]
This does not happen in the time dependent pp wave model studied in [21], hence the UV behaviors of the partition functions of the two models are completely different. Actually, owing to the scale invariance of the metric, the partition function of the model studied in [21] has the same UV behavior as the string partition function in flat space.
We can now study the behavior of the partition function for each time \(\tau_{0}\) at which the thermalization occurs. In particular, the UV behavior can be studied for each \(\tau_{0}\). Before that, let's assume that the thermalization occurs close to the null singularity and try to extrapolate this result to the strong coupling region. As will be seen, there are no divergences in the partition function (beyond those expected in the UV limit) due to the singularity. This can be easily proven by performing a sequence of steps, which will also be useful for analyzing the UV behavior. Let's start by taking the logarithm of \(z_{lc}(\tau^{\prime},\tau_{0})\)
\[\ln z_{lc}(\tau^{\prime},\tau_{0})=8\sum_{n\in{\bf Z}}\left[\log\left(1+e^{-2\pi\tau_{2}\sqrt{m^{2}+n^{2}}+2\pi i\tau_{1}n}\right)-\log\left(1-e^{-2\pi\tau_{2}\sqrt{m^{2}+n^{2}}+2\pi i\tau_{1}n}\right)\right]\] \[=\sum_{n\in{\bf Z}}\sum_{p=1}^{\infty}\frac{1}{p}\,e^{-2\pi p\tau_{2}\sqrt{m^{2}+n^{2}}}\,F(n,p,\tau_{1})\,,\]
where
\[F(n,p,\tau_{1})=8e^{2\pi inp\tau_{1}}(1-\cos\pi p)\,,\qquad m\equiv\tilde{f}e^{-\tau_{0}}. \tag{113}\]
Next, by making the replacement \(r=p^{2}s\) and using the identity
\[e^{-z}=\frac{1}{\sqrt{\pi}}\int_{0}^{\infty}dr\,r^{-1/2}e^{-r-\frac{z^{2}}{4r}}\,, \tag{114}\]
equation (109) becomes
\[\ln z_{lc}(\tau^{\prime},\tau_{0}) = \frac{1}{\sqrt{\pi}}\sum_{n\in\mathbb{Z}}\sum_{p=1}^{\infty}\int_{0}^{\infty}ds\,s^{-1/2}e^{-p^{2}s-(\pi\tau_{2})^{2}\left(m^{2}+n^{2}\right)/s}F(n,p,\tau_{1}) \tag{115}\] \[= \frac{2}{\sqrt{\pi}}\sum_{n\in\mathbb{Z}}\sum_{p=1}^{\infty}\left(\frac{\pi\tau_{2}\sqrt{m^{2}+n^{2}}}{p}\right)^{1/2}F(n,p,\tau_{1})\,K_{1/2}\left(2\pi p\tau_{2}\sqrt{m^{2}+n^{2}}\right)\,,\]
where the following integral representation of the modified Bessel function was used:
\[\int_{0}^{\infty}ds\,s^{\nu-1}e^{-as-\frac{b}{s}}=2\left(\frac{b}{a}\right)^{\nu/2}K_{\nu}\left(2\sqrt{ab}\right)\,. \tag{116}\]
Now, just by using the asymptotic behavior of the Bessel function
\[\lim_{x\to\infty}K_{\nu}(x)\approx\sqrt{\frac{\pi}{2x}}e^{-x}\,, \tag{117}\]
it can be easily seen that there are no divergences in the partition function arising from the singular behavior of the metric as \(\tau_{0}\to-\infty\). This is not surprising since the genus-one partition function does not depend on the string coupling.
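Both integral identities invoked in this argument are standard and easy to verify numerically; the snippet below (SciPy, illustrative values, a sketch rather than part of the original text) checks the proper-time representation of the exponential and the \(K_{\nu}\) representation (116) used above.

```python
import numpy as np
from scipy import integrate, special

# (i)  e^{-z} = (1/sqrt(pi)) * int_0^inf dr r^{-1/2} e^{-r - z^2/(4r)}
# (ii) int_0^inf ds s^{nu-1} e^{-a s - b/s} = 2 (b/a)^{nu/2} K_nu(2 sqrt(a b))
z, nu, a, b = 1.7, 0.5, 2.0, 3.0

lhs1, _ = integrate.quad(lambda r: r**-0.5 * np.exp(-r - z**2 / (4 * r)), 0, np.inf)
print(lhs1 / np.sqrt(np.pi), np.exp(-z))

lhs2, _ = integrate.quad(lambda s: s**(nu - 1) * np.exp(-a * s - b / s), 0, np.inf)
print(lhs2, 2 * (b / a)**(nu / 2) * special.kv(nu, 2 * np.sqrt(a * b)))
```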
Next, the UV behavior of the partition function will be studied. The product that appears in the light cone partition function \(z_{lc}(\tau^{\prime},\tau_{0})\) is a massive generalization of the Theta functions. The modular properties of this kind of "generalized" Theta functions were studied in [22; 23; 24]. Here, the thermalization time \(\tau_{0}\) plays a role in the modular transformations.2 Consider the following generalized modular function
Footnote 2: For the time-independent case, in references[23; 24], the modular properties are studied keeping m independent of \(\tau_{2}\), while in [22], the dependence of m on \(\tau_{2}\) is taken into account.
\[Z_{a,b}(\tau^{\prime},\tau_{0})=\left|\prod_{n=-\infty}^{\infty}(1-e^{-2\pi \tau_{2}\sqrt{m^{2}(\tau_{0})+(n+b)^{2}}+2\pi i\tau_{1}(n+b)+2\pi ia})\right|^ {8} \tag{118}\]
such that \(z_{lc}(\tau^{\prime},\tau_{0})\) can be written as
\[z_{lc}(\tau^{\prime},\tau_{0})=\frac{Z_{1/2,0}(\tau^{\prime},\tau_{0})}{Z_{0,0}(\tau^{\prime},\tau_{0})} \tag{6.19}\]
Following a strategy similar to that used in [23; 24], which in fact consists of the same steps developed in (6.12), (6.14) and (6.15), together with the Poisson resummation formula, it can be shown that
\[\frac{\ln z_{lc}(\tau^{\prime},\tau_{0})}{8} = \ln Z_{0,\frac{1}{2}}(\frac{\tau^{\prime}}{|\tau^{\prime}|^{2}}, \tau_{0}-\ln|\tau^{\prime}|)-\ln Z_{0,0}(\frac{\tau^{\prime}}{|\tau^{\prime}|^{ 2}},\tau_{0}-\ln|\tau^{\prime}|) \tag{6.20}\] \[+ 2\pi\frac{\tau_{2}}{|\tau^{\prime}|^{2}}\left[\Delta_{\frac{1}{2 }}(\frac{\tilde{\beta}fe^{-\tau_{0}}|\tau^{\prime}|}{2\pi\tau_{2}})-\Delta_{ 0}(\frac{\tilde{\beta}fe^{-\tau_{0}}|\tau^{\prime}|}{2\pi\tau_{2}})\right]\]
where \(\Delta_{\frac{1}{2}}(\frac{\tilde{\beta}fe^{-\tau_{0}}|\tau^{\prime}|}{2\pi\tau_{2}})\) and \(\Delta_{0}(\frac{\tilde{\beta}fe^{-\tau_{0}}|\tau^{\prime}|}{2\pi\tau_{2}})\) are defined by3
Footnote 3: Actually, \(\Delta_{b}(m)\) corresponds to the zero-point energy of a 2D massive complex scalar boson \(\phi\) with twisted boundary condition \(\phi(\tau,\sigma+\pi)=e^{2\pi ib}\phi(\tau,\sigma)\).
\[\Delta_{b}(m)=-\frac{1}{2\pi^{2}}\sum_{p=1}^{\infty}\cos(2\pi bp)\int_{0}^{ \infty}ds\ e^{-p^{2}s-\frac{\pi^{2}m^{2}}{s}}=-\frac{m}{\pi}\sum_{p=1}^{\infty }\frac{\cos(2\pi bp)}{p}K_{1}\left(2\pi mp\right) \tag{6.21}\]
and \(K_{1}\) is a modified Bessel function of the second kind. Using (6.20) and setting \(\tau_{1}=0\), the leading behavior of \(Z(\beta,\tau_{0})\) as \(\tau_{2}\to 0\) is
\[exp\left\{-\frac{\tilde{\beta}^{2}}{2\pi\alpha^{\prime}\tau_{2}}+\frac{16\pi} {\tau_{2}}\left[\Delta_{\frac{1}{2}}(\frac{\tilde{\beta}fe^{-\tau_{0}}|\tau|} {2\pi\tau_{2}})-\Delta_{0}(\frac{\tilde{\beta}fe^{-\tau_{0}}|\tau|}{2\pi\tau_ {2}})\right]\right\} \tag{6.22}\]
Thus, the partition function starts to diverge when the exponent above is zero. Hence, the Hagedorn temperature satisfies the following equation
\[\frac{\beta_{H}}{4\pi\alpha^{\prime}}=16\pi\left[\Delta_{\frac{1}{2}}(\frac{ \beta_{H}fe^{-\tau_{0}}}{2\pi\sqrt{2}})-\Delta_{0}(\frac{\beta_{H}fe^{-\tau_{0 }}}{2\pi\sqrt{2}})\right]. \tag{6.23}\]
In the asymptotically flat region, \(fe^{-\tau_{0}}\ll 1\), one gets
\[T_{H}=\frac{1}{2\pi\sqrt{2\alpha^{\prime}}}\left(1+2\sqrt{\alpha^{\prime}}fe^{ -\tau_{0}}+...\right), \tag{6.24}\]
where the dots denote higher-order terms in the Ramond field. The first term is just the flat-space result for the Hagedorn temperature. Note that the Hagedorn temperature increases as the thermalization time moves from the asymptotically flat region towards the singularity. It can be shown that this happens at all instants of time, and not only in the approximation used in (6.24). Let us rewrite the difference that appears in (6.23) as
\[\Delta_{\frac{1}{2}}(\frac{\beta_{H}fe^{-\tau_{0}}}{2\pi\sqrt{2}})-\Delta_{0} (\frac{\beta_{H}fe^{-\tau_{0}}}{2\pi\sqrt{2}})=\frac{1}{2\pi^{2}}\sum_{p=1}^{ \infty}\frac{[1-(-1)^{p}]}{p}\int_{0}^{\infty}ds\ e^{-p^{2}s-\frac{\beta_{H}^{ 2}f^{2}e^{-2\tau_{0}}}{8s}}, \tag{6.25}\]
so, the derivative of \(\beta_{H}\) with respect to \(-\tau_{0}\) is
\[-(2f\beta e^{-\tau_{0}})^{2}\left[\sum_{p=1}^{\infty}\frac{[1-(-1)^{p}]}{p}\int_{ 0}^{\infty}\frac{ds}{s}\ e^{-p^{2}s-\frac{\beta_{H}^{2}f^{2}e^{-2\tau_{0}}}{8s }}\right]<0 \tag{101}\]
and one concludes that the Hagedorn temperature \(T_{H}=\frac{1}{\beta_{H}}\) increases as \(\tau_{0}\) goes to \(-\infty\), that is, as the string approaches the null singularity.
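As an illustration of this monotonicity, the sketch below solves equation (6.23) numerically, taking it at face value and setting \(\alpha^{\prime}=f=1\) as illustrative units; \(\Delta_{b}\) is evaluated through its Bessel-function representation (6.21). The root \(\beta_{H}\) indeed decreases (so \(T_{H}\) increases) as \(\tau_{0}\) is lowered towards the singularity.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import kv

ALPHA_P, F_RR = 1.0, 1.0  # illustrative units: alpha' = f = 1

def delta_b(b, m, p_max=400):
    # Delta_b(m) = -(m/pi) sum_{p>=1} cos(2 pi b p)/p K_1(2 pi m p), cf. (6.21)
    p = np.arange(1, p_max + 1)
    return -(m / np.pi) * np.sum(np.cos(2 * np.pi * b * p) / p * kv(1, 2 * np.pi * m * p))

def hagedorn_condition(beta, tau0):
    # Eq. (6.23): beta/(4 pi alpha') - 16 pi [Delta_{1/2}(x) - Delta_0(x)] = 0
    x = beta * F_RR * np.exp(-tau0) / (2 * np.pi * np.sqrt(2))
    return beta / (4 * np.pi * ALPHA_P) - 16 * np.pi * (delta_b(0.5, x) - delta_b(0.0, x))

for tau0 in (2.0, 0.0, -2.0, -4.0):
    beta_h = brentq(hagedorn_condition, 1e-6, 1e3, args=(tau0,))
    print(f"tau0 = {tau0:+.1f}   beta_H = {beta_h:.4f}   T_H = {1.0 / beta_h:.4f}")
```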
The Hagedorn behavior close to the null singularity can now be clarified by analyzing the asymptotic behavior of \(\Delta_{b}(x)\) defined in (100). This can be done using the method of steepest descents (see for example chapter 12 of [25]), or one can simply use (100) to represent \(\Delta_{0}(\tau_{0})\) and \(\Delta_{1/2}(\tau_{0})\) as modified Bessel functions and then use (101). Since higher values of \(p\) are exponentially suppressed, the most relevant term in the series (101) is given by taking \(p=1\),
\[\lim_{\tau_{0}\rightarrow-\infty}\beta_{H}(\tau_{0}) = \left[32\pi\alpha^{\prime}\sqrt{\frac{f\sqrt{2}}{\pi}}\right]^{4/ 3}\lim_{\tau_{0}\rightarrow-\infty}e^{-\frac{2\tau_{0}}{3}}\exp\left(-\frac{ 4\beta_{H}fe^{-\tau_{0}}}{3\sqrt{2}}\right) \tag{102}\] \[= 0\,.\]
So, as the singularity is approached, the Hagedorn temperature is pushed towards infinity. It is tempting to conclude that there is no Hagedorn transition at any finite temperature close to the singularity. This point will be revisited in the conclusions.
## 7 Conclusions
In the present work, the LvN formulation was used to define an invariant thermal density matrix for a superstring propagating in a time-dependent plane wave background with a constant self-dual Ramond-Ramond 5-form and a linear dilaton in the light-like direction. The metric has a cosmological singularity as \(\tau\rightarrow-\infty\) and is asymptotically flat as \(\tau\rightarrow\infty\).
In the formulation used here, it is assumed that the system enters thermodynamic equilibrium adiabatically at a certain time \(\tau_{0}\). The adiabatic approximation is controlled by the Ramond field. With this assumption, it was possible to use the density matrix to calculate the Hagedorn temperature as a function of \(\tau_{0}\). It has been shown that the Hagedorn temperature increases as the string propagates from the asymptotically flat region towards the singularity. In particular, the calculation shows that the Hagedorn temperature diverges at the singularity, which could indicate that, in this background, there is no Hagedorn behavior near the singularity. However, we need to be careful in extrapolating the result found here to this region, because it is a region of strong coupling, owing to the time dependence of the dilaton. It is important to keep in mind that the time that appears in the Hagedorn temperature is the time at which thermalization occurs. So, in addition to the fact that a free string gas cannot be defined in this region, the very notion of thermodynamic equilibrium is not so simple to assume in the strong coupling limit. This is because, in non-interacting thermodynamics, one always starts with a weakly interacting gas and then adiabatically turns the coupling down until the free gas is reached. If one does not start with the interacting gas, the equilibration process cannot occur. It is clear how to picture this process in the small-dilaton region, but not in the strong coupling limit. Note that the Ramond field also plays a role here, acting as the controller of the adiabatic dynamics.
On the other hand, in string theory the perturbative string sometimes provides a window into the non-perturbative sector of the theory. This is the case, for example, of D-branes seen as boundary states in the closed string channel. Perhaps, in view of [11], [26], the entanglement entropy may play this role here. Let us clarify this point. It was shown in reference [11] that the left/right entanglement entropy is finite when evaluated at the singularity and thus it can be used to probe the null singularity. This entropy is actually the entropy related to the vacuum state as seen by asymptotic observers; this state is in effect a boundary state. Indeed, in reference [26], for the time-dependent pp wave studied in [27], it was shown that the vacuum state, as seen by asymptotic observers, actually represents a D-brane described in the closed string channel. However, for that model, the dilaton remained small near the singularity. It would be interesting to verify this point for the model studied here. It would also be interesting to calculate the Hagedorn temperature as a function of the dilaton. To this end, the finite temperature string field theory formulation developed in [28] will be extremely useful. This is work in progress.
Finally, the invariant density matrix also allowed us to calculate the thermal two-point function in real time. It was shown that the real-time thermal two-point function can be written in terms of generalized Theta functions. The modular properties of these functions can be used to study non-equilibrium thermodynamic quantities, such as transport coefficients and the scrambling time. This is left for future work.
|
2309.08022 | Empowering Visually Impaired Individuals: A Novel Use of Apple Live
Photos and Android Motion Photos | Numerous applications have been developed to assist visually impaired
individuals that employ a machine learning unit to process visual input.
However, a critical challenge with these applications is the sub-optimal
quality of images captured by the users. Given the complexity of operating a
camera for visually impaired individuals, we advocate for the use of Apple Live
Photos and Android Motion Photos technologies. In this study, we introduce a
straightforward methodology to evaluate and contrast the efficacy of
Live/Motion Photos against traditional image-based approaches. Our findings
reveal that both Live Photos and Motion Photos outperform single-frame images
in common visual assisting tasks, specifically in object classification and
VideoQA. We validate our results through extensive experiments on the ORBIT
dataset, which consists of videos collected by visually impaired individuals.
Furthermore, we conduct a series of ablation studies to delve deeper into the
impact of deblurring and longer temporal crops. | Seyedalireza Khoshsirat, Chandra Kambhamettu | 2023-09-14T20:46:35Z | http://arxiv.org/abs/2309.08022v1 | Empowering Visually Impaired Individuals: A Novel Use of Apple Live Photos and Android Motion Photos
###### Abstract
Numerous applications have been developed to assist visually impaired individuals that employ a machine learning unit to process visual input. However, a critical challenge with these applications is the sub-optimal quality of images captured by the users. Given the complexity of operating a camera for visually impaired individuals, we advocate for the use of Apple Live Photos and Android Motion Photos technologies. In this study, we introduce a straightforward methodology to evaluate and contrast the efficacy of Live/Motion Photos against traditional image-based approaches. Our findings reveal that both Live Photos and Motion Photos outperform single-frame images in common visual assisting tasks, specifically in object classification and VideoQA. We validate our results through extensive experiments on the ORBIT dataset, which consists of videos collected by visually impaired individuals. Furthermore, we conduct a series of ablation studies to delve deeper into the impact of deblurring and longer temporal crops.
**Keywords:** Live Photo, Motion Photo, Deep Learning, Visually Impaired
## 1 Introduction
_Live Photos_ and _Motion Photos_, technologies from Apple and Android, allow a single photo to function as a still image and when activated, a short video with motion and sound. These technologies leverage a background feature that continuously captures images when the Camera app is opened, regardless of whether the shutter button is pressed. When a Live/Motion Photo is taken, the device records this continuous stream of photos, capturing moments before and after the shutter press. These images are stitched into a three-second animation, complemented by optional audio recorded during the same span. Live/Motion Photos surpass video clips due to their ease of capture and standardized format. Figure 1 depicts the main three components of a Live/Motion Photo, and Figure 5 shows screenshots of the Apple iOS environment for capturing and working with Live Photos.
People with visual impairments often rely on assistive devices that provide insights about their surroundings. For instance, people with low vision often rely on magnification tools to better observe the content of interest, while those with low or no vision rely on on-demand technologies [1, 23, 13] that deliver answers to submitted visual questions. Two fundamental computer vision tasks in these aids are object classification and video question answering (VideoQA). Object classification, though basic, is a key component of more advanced methods [13]. In contrast, VideoQA accurately responds to inquiries about any video, empowering visually impaired people to access information about real-world or online videos [14].
A significant problem with current visual assisting technologies is that visually impaired users often cannot capture images of sufficient quality for them. The images taken by blind people exhibit various quality flaws, such as blurriness, excessive brightness or darkness, obstruction, and so on [12]. Image quality issues may make it difficult for humans and machine learning systems to recognize image content, causing the system to provide set responses, such as "unanswerable". Prior research has indicated that this can
be frustrating for people with visual impairments using accessible applications, requiring extra time and effort to determine what is going wrong and get an answer [2]. Figure 2 shows a recorded video by a visually impaired user where half of the frames cover only a small portion of the object.
We posit that the additional contextual information provided by Live/Motion Photos can significantly enhance the ability of the assistance systems to accurately interpret and analyze the content of the images. Not only does this approach provide multiple frames for analysis, which could increase the chances of capturing a clear shot of the subject, but it also offers temporal information that can be critical for understanding dynamic scenarios. Through the course of this paper, we will present empirical evidence demonstrating how the use of Live/Motion Photos can mitigate the challenges faced by visually impaired individuals in capturing clear images.
Our contributions are as follows:
* We introduce a straightforward approach for comparing Live/Motion Photos to images.
* We evaluate state-of-the-art methods on Live/Motion Photos and images for object classification and VideoQA tasks.
* We conduct ablation studies on the impact of deblurring and varying temporal crop lengths.
## 2 Related Work
A plethora of commercial systems have been developed to empower individuals with visual impairments. These commercial systems are categorized into two distinct types: human-in-the-loop systems and end-to-end (E2E) automated systems. Human-in-the-loop systems are designed to bridge the gap between visually impaired individuals and sighted volunteers or staff members. Through these systems, users can make inquiries or seek assistance with visual tasks. Some notable examples of human-in-the-loop platforms are BeMyEyes, BeSpecular, and Aira [1, 2]. Contrary to human-in-the-loop systems, end-to-end systems rely on artificial intelligence and cloud computing to provide visual assistance to users. These systems do not involve human intermediaries. Examples of E2E systems include TapTapSee and Microsoft's Seeing AI.
A critical factor that determines the efficacy of these systems is the clarity and relevance of the content within the images that are sent for analysis. Given that visually impaired individuals might face challenges in capturing well-composed images, ensuring that the subject matter of the image is clear and discernible is not a trivial task. In this paper, we introduce an innovative approach to alleviate this challenge by utilizing Live Photos or Motion Photos.
Figure 1: Apple Live Photo structure. A Live/Motion Photo consists of a key photo, a three-second-long video, and the optional corresponding audio. The key photo is the middle frame of the video, but it can be changed to another frame.
## 3 Method
Studying Live/Motion Photos poses a significant challenge due to the absence of existing datasets. The process of creating a comprehensive dataset solely from visually impaired users is laborious and complex [16]. To address this issue, we leverage pre-existing video datasets collected by visually impaired individuals or those containing content relevant to the daily experiences of the blind. By extracting three-second temporal crops from these videos, we simulate Live/Motion Photos for tasks such as object classification and VideoQA. This enables us to evaluate and compare the effectiveness of different methods on both simulated Live/Motion Photos and standard images.
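To make the simulation procedure explicit, the following minimal sketch (our own illustration; the function and variable names are hypothetical) extracts a random contiguous three-second crop from a list of decoded video frames, which is how Live/Motion Photos are emulated from the video datasets in the remainder of the paper.

```python
import random
from typing import List, Sequence, TypeVar

Frame = TypeVar("Frame")

def sample_live_photo_crop(frames: Sequence[Frame], fps: float,
                           clip_seconds: float = 3.0) -> List[Frame]:
    """Return a random contiguous crop whose duration matches a Live/Motion Photo."""
    clip_len = max(1, int(round(fps * clip_seconds)))
    if len(frames) <= clip_len:
        return list(frames)  # video already no longer than a Live/Motion Photo
    start = random.randint(0, len(frames) - clip_len)
    return list(frames[start:start + clip_len])

# e.g. clip = sample_live_photo_crop(video_frames, fps=30.0)
```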
### Object Classification
To demonstrate the impact of Live/Motion Photos on object classification accuracy, we conduct experiments using the ORBIT dataset [16], a collection of 3,822 videos covering 486 object categories, recorded on cell phones by 77 blind or low-vision people. Each video is meant to capture one main object, although the object may not be visible in all frames. The videos have various lengths, from one second to two minutes.
To simulate Live/Motion Photos, we create short video clips with the same length as Live/Motion Photos from ORBIT and compare the performance of different image classifiers to video classifiers on these clips. To this aim, we train each image classifier on image frames of the videos and report the average classification accuracy of the frames. To evaluate the video classifiers, we train and test each method on random temporal crops of three seconds. We choose the top-performing image and video classifiers; specifically, ResNet [11], MViTv2 [12], and EfficientNetV2 [13] for image classification, and ViViT [1] and MViTv2 [12] for video classification. We use the same hyper-parameters and setup as in the original implementations, and the input size is fixed across all the methods. Following [16], we use frame accuracy as the evaluation metric for the frame-by-frame classification and video accuracy for the holistic video classification. Frame accuracy is the average number of correct predictions per frame divided by the total number of frames in a video. Video accuracy is the number of correct video-level predictions divided by the total number of videos.
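For clarity, a small sketch of the two metrics is given below (our own illustration of the definitions above, not code from ORBIT, and assuming the per-video frame accuracies are averaged over videos): frame accuracy scores per-frame predictions within each video, while video accuracy scores one holistic prediction per video.

```python
from typing import Dict, List

def frame_accuracy(frame_preds: Dict[str, List[int]], video_labels: Dict[str, int]) -> float:
    """Correct frame predictions divided by the number of frames in each video, averaged."""
    per_video = []
    for vid, preds in frame_preds.items():
        label = video_labels[vid]
        per_video.append(sum(p == label for p in preds) / len(preds))
    return sum(per_video) / len(per_video)

def video_accuracy(video_preds: Dict[str, int], video_labels: Dict[str, int]) -> float:
    """Correct video-level predictions divided by the total number of videos."""
    correct = sum(video_preds[vid] == video_labels[vid] for vid in video_labels)
    return correct / len(video_labels)
```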
Table 1 reports the object classification accuracy. The highest accuracy using images is 70.9%, achieved by EfficientNetV2-L. The results show that video classification approaches outperform frame-by-frame classification. More specifically, for Live/Motion Photos (videos of three seconds), MViTv2 achieves an accuracy of 77.1%, an improvement of 6.2% over EfficientNetV2-L. Since MViTv2 is designed for both image and video classification, it exhibits the benefit of using video clips over images better than the other methods. Similarly, ViViT reaches an accuracy of 74.9%, which is higher than EfficientNetV2-L by a margin of 4.0%. This result strongly supports the effectiveness of Live/Motion Photos over single images.

Figure 2: A visually impaired user trying to record a video of a keyboard [16]. Adjusting the camera field of view to cover a whole object is a challenging task for blind users. The frames are uniformly sampled, and the total video length is five seconds.
### Video Question Answering
We investigate the effectiveness of Live/Motion Photos in the VideoQA task. We compare the performance of multiple VQA methods on image frames to the performance of VideoQA methods on video clips with the same length as Live/Motion Photos. While there are numerous video question answering datasets, we choose the ActivityNet-QA dataset [23] since it contains video clips similar to the day-to-day life of people with visual impairments. The ActivityNet-QA dataset adds question-answer pairs to a subset of the videos of the ActivityNet dataset [1]. It contains 5,800 videos with 58,000 human-annotated question-answer pairs, split as 3,200/1,800/800 videos for train/val/test. The dataset covers 200 different types of daily human activities, which makes it suitable for visual assisting applications.
We train image-based methods on randomly drawn frames with their corresponding question-answer pairs from the ActivityNet-QA dataset. Similarly, we train video-based methods on random temporal crops with the same length as Live/Motion Photos. We employ mPLUG [11] and BEiT-3 [27] as the image-based methods and Just Ask [27] and Singularity [10] as the video-based methods for Live/Motion Photos. These methods achieve state-of-the-art accuracy in the VQA and VideoQA tasks, and their implementation code is publicly available. For each method, we re-use the original hyper-parameters that achieve the best results.
As the evaluation criterion, we use accuracy, a commonly used measure of performance in classification tasks. For the QA pairs in the test set of size \(N\), given any testing question \(\mathbf{q}_{i}\in Q\) and its corresponding ground-truth answer \(\mathbf{y}_{i}\in Y\), we denote the predicted answer from the model by \(\mathbf{a}_{i}\). Both \(\mathbf{a}_{i}\) and \(\mathbf{y}_{i}\) correspond to sentences that can be seen as sets of words. The accuracy measure is defined as:
\[Accuracy=\frac{1}{N}\sum_{i=1}^{N}\mathbf{1}[\mathbf{a}_{i}=\mathbf{y}_{i}] \tag{1}\]
where \(\mathbf{1}[\cdot]\) is an indicator function such that its output is one only if \(\mathbf{a}_{i}\) and \(\mathbf{y}_{i}\) are identical, and zero otherwise [23]. We follow previous evaluation protocols for open-ended settings [27, 23, 24] and use a fixed vocabulary of training answers.
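A direct transcription of this metric is sketched below (our own illustration; the whitespace and case normalization is an assumption on our part, since the protocol only requires the predicted and ground-truth answers to be identical).

```python
from typing import Sequence

def exact_match_accuracy(predicted: Sequence[str], ground_truth: Sequence[str]) -> float:
    """Eq. (1): the indicator is 1 only when the predicted answer equals the ground truth."""
    assert len(predicted) == len(ground_truth)
    hits = sum(int(a.strip().lower() == y.strip().lower())
               for a, y in zip(predicted, ground_truth))
    return hits / len(ground_truth)
```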
Table 2 reveals the results of our experiments for the VideoQA task. The highest accuracy for image-based approaches is 30.1%, achieved by BEiT-3. Both VideoQA methods outperform the VQA methods. More specifically, using Live/Motion Photos, Singularity achieves the highest accuracy of 38.6%, which is more than 8% higher than that of BEiT-3. Similarly, Just Ask reaches an accuracy of 34.9%, which is 4.8% higher than BEiT-3.
The outcomes of our experiments in object classification and VideoQA confirm the benefit of using Live/Motion Photos over images.
\begin{table}
\begin{tabular}{l|c} Method & Accuracy \\ \hline ResNet-152 [11] & 69.2 \\ MViTv2-B [11] & 70.7 \\ EfficientNetV2-L [26] & 70.9 \\ \hline ViViT [2] & 74.9 \\ MViTv2-B [11] & 77.1 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of frame-by-frame to holistic object classification methods on the ORBIT test set. The top three methods use images, and the bottom two use Live/Motion Photos.
\begin{table}
\begin{tabular}{l|c} Method & Accuracy \\ \hline mPLUG [11] & 28.9 \\ BEiT-3 [27] & 30.1 \\ \hline Just Ask [27] & 34.9 \\ Singularity [10] & 38.6 \\ \hline \end{tabular}
\end{table}
Table 2: Results of image-based and video-based methods for the VideoQA task on the ActivityNet-QA test set. mPLUG and BEiT-3 use images, while Just Ask and Singularity use Live/Motion Photos.
## 4 Deblurring Impact
Blurring is a prevalent issue in images and videos captured by individuals with visual impairments [11], and this issue can adversely affect the efficacy of assistive technologies. In this section, we undertake a systematic investigation to discern the potential benefits of deblurring on the accuracy of object classification and VideoQA. For the deblurring process, we employ the FGST method [14], a state-of-the-art video deblurring algorithm that amalgamates Transformer modules with optical flow estimation. We then proceed to apply FGST on two datasets, namely ORBIT and ActivityNet-QA, to deblur the visual content. With the deblurred datasets, we replicate the experiments as outlined in Section 3.1 and Section 3.2.
The outcomes of this investigation are tabulated in Table 3. The table segregates the results into two categories - the upper portion presents the outcomes for object classification, while the lower portion provides the results for VideoQA. The empirical findings demonstrate that the maximum enhancement in accuracy is 2.6%, which is attained by the Just Ask method, whereas the minimum improvement is documented at 1.7% by the MViTv2-B method. Furthermore, for a more illustrative understanding, Figure 3 showcases a selection of frames along with the corresponding model outputs prior to and subsequent to the deblurring process. This visualization facilitates a comparison of the quality and detail in the frames. Additionally, Figure 4 presents a compilation of frames extracted from a deblurred video, providing a visual representation of the enhancements achieved through the deblurring process.
Figure 4: Ten uniformly sampled frames from a random video in ORBIT, before and after deblurring. **Top:** Original video. **Bottom:** After deblurring. Deblurring tends to provide greater benefits to frames containing smaller objects.
Figure 3: Sample video frames from ORBIT dataset with their corresponding model output. **Top:** Original frame. **Bottom:** After deblurring. Deblurring enhances the precision of model predictions.
## 5 Temporal Length Impact
Although Live/Motion Photos are limited to three seconds, other applications could implement the same technology without this capturing limitation. Therefore, in this section, we study the effect of video length on accuracy for the object classification and VideoQA tasks. To this aim, we evaluate the video-based methods on three temporal crop size ranges: the 'Short' range consists of random crops shorter than 15 seconds, the 'Medium' range of crops between 15 and 30 seconds, and the 'Long' range of crops longer than 30 seconds. Videos shorter than a targeted crop size are not included in that group; in particular, we do not use videos shorter than the required minimum length for the Medium and Long ranges. Additionally, we evaluate the methods on the whole dataset using all the available frames in the videos. We use the same setup as in Sections 3.1 and 3.2.
For object classification, we employ ViViT [1] and MViTv2-B [11] and evaluate them on the ORBIT dataset [16]. The top two methods in Table 4 report the results for object classification using different video lengths. For MViTv2-B, the lowest accuracy is 77.1%, achieved by using Live/Motion Photos, and the highest accuracy is 79.0%, achieved using the longest video crops. For both methods, adding more frames helps improve the accuracy. The accuracy of using all the frames gets slightly worse due to the addition of shorter videos. Since having more frames reveals more data about an object, the longer crops reach higher accuracies.
For VideoQA, we employ Just Ask [23] and Singularity [11] and train and test them on the ActivityNet-QA dataset [22]. The bottom two methods in Table 4 report the results for different video lengths in VideoQA. The lowest accuracy for Singularity is 38.6%, obtained with Live/Motion Photos, and the highest is 41.1%, obtained using all the frames.
The findings from our ablation study reveal that while there is a positive correlation between the length of video clips and the enhancement in accuracy, the incremental accuracy attained through longer video clips, as compared to Live/Motion Photos, is not significant in contrast to single images. This implies that Live/Motion Photos, constrained to a duration of three seconds, are capable of furnishing a substantial improvement in accuracy that is deemed sufficient for a majority of applications.
\begin{table}
\begin{tabular}{l|c c c c c} & & \multicolumn{3}{c}{Accuracy} \\ Method & Live/Motion Photo & Short & Medium & Long & \\ & =3s & \textless{}15s & 15s\textgreater{} and \textless{}30s & \textgreater{}30s & All Frames \\ \hline ViViT [1] & 74.9 & 75.8 & 76.6 & 77.1 & 76.7 \\ MViTv2-B [11] & 77.1 & 77.9 & 78.4 & 79.0 & 78.5 \\ \hline Just Ask [23] & 34.9 & 36.0 & 36.9 & 37.8 & 37.0 \\ Singularity [11] & 38.6 & 39.6 & 40.4 & 41.1 & 40.6 \\ \hline \end{tabular}
\end{table}
Table 4: The results of top-performing methods with different temporal crop lengths. **Top:** Object classification on the ORBIT test set. **Bottom:** VideoQA on the ActivityNet-QA test set. Videos shorter than a targeted crop size are not included in that group.
\begin{table}
\begin{tabular}{c|c c} \hline Method & Without Deblurring & With Deblurring \\ \hline ViViT [1] & 74.9 & 76.9 (+2.0) \\ MViTv2-B [11] & 77.1 & 78.8 (+1.7) \\ \hline Just Ask [23] & 34.9 & 37.5 (+2.6) \\ Singularity [11] & 38.6 & 40.9 (+2.3) \\ \hline \end{tabular}
\end{table}
Table 3: The impact of deblurring Live/Motion Photos. **Top:** Object classification on the ORBIT test set. **Bottom:** VideoQA on the ActivityNet-QA test set.
## 6 Conclusion and Future Directions
Despite significant recent developments, visual assistance applications are still in need of improvement. Current machine learning methods designed to help visually impaired people suffer from the low quality of the images taken by the end users.
In this paper, we made multiple contributions to improving existing methods for visual assistance. We introduced a simple way to evaluate the performance of Live/Motion Photos compared to single images. We employed this approach to show that Live/Motion Photos achieve higher accuracy in common visual assisting tasks. Our experiments revealed that Live/Motion Photos perform better than images in object classification and VideoQA tasks. In addition, we further studied the effect of longer temporal crops and showed how deblurring can improve accuracy.
In future research, it is essential to carry out user studies involving visually impaired individuals. This information will guide us in refining our method, ensuring it is not only technically robust but also practically beneficial for the intended users.
|
2301.13546 | Joint Task Offloading and Cache Placement for Energy-Efficient Mobile
Edge Computing Systems | This letter investigates a cache-enabled multiuser mobile edge computing
(MEC) system with dynamic task arrivals, taking into account the impact of
proactive cache placement on the system's overall energy consumption. We
consider that an access point (AP) schedules a wireless device (WD) to offload
computational tasks while executing the tasks of a finite library in the
\emph{task caching} phase, such that the nearby WDs with the same task request
arriving later can directly download the task results in the \emph{task arrival
and execution} phase. We aim for minimizing the system's weighted-sum energy
over a finite-time horizon, by jointly optimizing the task caching decision and
the MEC execution of the AP, and local computing as well as task offloading of
the WDs at each time slot, subject to caching capacity, task causality, and
completion deadline constraints. The formulated design problem is a
mixed-integer nonlinear program. Under the assumption of fully predicable task
arrivals, we first propose a branch-and-bound (BnB) based method to obtain the
optimal offline solution. Next, we propose two low-complexity schemes based on
convex relaxation and task-popularity, respectively. Finally, numerical results
show the benefit of the proposed schemes over existing benchmark schemes. | Jingxuan Liang, Hong Xing, Feng Wang, Vincent K. N. Lau | 2023-01-31T10:47:59Z | http://arxiv.org/abs/2301.13546v1 | # Joint Task Offloading and Cache Placement for Energy-Efficient Mobile Edge Computing Systems
###### Abstract
This letter investigates a cache-enabled multiuser mobile edge computing (MEC) system with dynamic task arrivals, taking into account the impact of proactive cache placement on the system's overall energy consumption. We consider that an access point (AP) schedules a wireless device (WD) to offload computational tasks while executing the tasks of a finite library in the _task caching_ phase, such that the nearby WDs with the same task request arriving later can directly download the task results in the _task arrival and execution_ phase. We aim for minimizing the system's weighted-sum energy over a finite-time horizon, by jointly optimizing the task caching decision and the MEC execution of the AP, and local computing as well as task offloading of the WDs at each time slot, subject to caching capacity, task causality, and completion deadline constraints. The formulated design problem is a mixed-integer nonlinear program. Under the assumption of fully predicable task arrivals, we first propose a branch-and-bound (BnB) based method to obtain the optimal offline solution. Next, we propose two low-complexity schemes based on convex relaxation and task-popularity, respectively. Finally, numerical results show the benefit of the proposed schemes over existing benchmark schemes.
Mobile edge computing, proactive cache placement, computation offloading, branch-and-bound, optimization.
## I Introduction
Various computation-intensive Internet of Things (IoT) applications (such as extended reality, auto-driving, and tactile networks) call for low-latency communication and computation [1]. By deploying dedicated edge servers at the network edge, mobile edge computing (MEC) has been recognized as an enabling technology to meet the stringent requirements of these delay-sensitive services while addressing the computation/communication resource limitations of these wireless devices (WDs) [2, 3, 4]. Leveraging the storage resources of the MEC servers to proactively cache computational tasks for possible reuse, the computation performance of the MEC system can be further enhanced.
Compared to the conventional MEC system designs without caching capabilities [2, 3, 4], cache-enabled MEC system designs encounter several new technical challenges. First, the task caching and offloading decisions need to be jointly made so as to make the best use of the limited caching and computation resources. Second, the task caching and offloading strategies need to be adaptive to task dynamics and the WDs' mobility. Finally, to improve the energy efficiency of the cache-enabled MEC system, it is imperative to jointly optimize the system's computation, caching, and communication resources. In the literature, there exist several works investigating cache-enabled MEC system designs [5, 6, 7, 8, 9]. For example, an adaptive task offloading and caching scheme was proposed to provide high-quality video services to vehicular users [5]. Based on a two-stage dynamic game strategy, [6, 7] investigated joint computation offloading and resource allocation design for cache-enabled MEC systems. The works [8] and [9] proposed the joint service caching and task offloading design in the dense cellular network and single-user scenarios, respectively. Note that most of the above existing works [5, 6, 7, 8, 9] failed to consider the benefit of proactive caching to the overall multiuser MEC systems with WDs' dynamical task arrivals over time slots.
In this letter, we investigate an energy-efficient cache-enabled multiuser MEC system with dynamic task arrivals over a finite-time horizon. The finite-time horizon consists of the _task caching_ phase and the _task arrival and execution_ phase. We consider that the MEC server selects and proactively caches the result of several tasks from a finite library in the task caching phase; at the task arrival and execution phase, the WDs can directly download the task results if their requested tasks have been cached by the MEC server, and perform local computing and task offloading otherwise. We jointly optimize the task cache placement decision and remote computing of the AP, and task offloading as well as local computing of the WDs at each time slot, so as to minimize the system's weighted-sum energy consumption over the horizon. For obtaining a lower-bound benchmark for practical design schemes with dynamic task arrivals but imperfect prediction, we assume that the computational task sequence of each WD is fully predictable. We employ the branch-and-bound (BnB) method to obtain the optimal offline solution. Next, to facilitate the cache-enabled MEC design with low computational complexity, we propose a convex-relaxation based scheme and a task-popularity based scheme, respectively. Finally, numerical results show the benefits of our proposed schemes over existing benchmarks.
## II System Model and Problem Formulation
We consider a cache-enabled multiuser MEC system, which consists of an AP (integrated with an MEC server) and a set \(\mathcal{K}\triangleq\{1,...,K\}\) of single-antenna WDs. These \(K\) WDs need to compute the randomly arrived tasks within a given completion deadline. Denote by \(\mathcal{T}\triangleq\{T_{1},T_{2},...,T_{L}\}\) the computational task set to be possibly processed by the \(K\) WDs. We consider a block-by-block cache-enabled MEC system, where each transmission block is divided into Phase I which is _MEC server's task caching_ and Phase II which is _WDs' task arrival and execution_. Phase I and phase II are, respectively, further composed of \(N_{p}\) and \(N\) equal-duration time slots each
with length \(\tau\). Without loss of generality, we focus on cache-enabled MEC design within one block as shown in Fig. 1. To guarantee a high efficiency of this cache-enabled MEC system, it is assumed that \(N_{p}<N\). For the tasks which are not cached at the AP, the WDs need to perform local computing and/or to offload the tasks to the MEC server for remote execution, i.e., task offloading.
### _Phase I: MEC Server's Task Caching_
#### I-A1 Task Cache Placement
Let \(\alpha_{\ell}\in\{0,1\}\) denote the caching decision for task \(T_{\ell}\) at the MEC server, where the task \(T_{\ell}\) is cached if \(\alpha_{\ell}=1\), and \(\alpha_{\ell}=0\) otherwise, \(\forall\ell=1,...,L\)1. By denoting \(D^{\max}\) the caching capacity of the MEC server, the MEC caching needs to satisfy
Footnote 1: The caching decision variables \(\{\alpha_{\ell}\}_{\ell=1}^{L}\) will be specified by the solution to an optimization problem (P1) detailed in Section III.
\[\sum_{\ell=1}^{L}\alpha_{\ell}D_{\ell}\leq D^{\max}, \tag{1}\]
where \(D_{\ell}\) denotes the number of input-bits for task \(T_{\ell}\).
#### I-A2 Task Offloading for Caching
For facilitating the cache-enabled multiuser MEC system design, we consider the MEC server's cached tasks are all generated and offloaded from one selected WD with the smallest pathloss for task offloading to the AP. Denote by WD-\(k_{o}\) the selected WD, where \(k_{o}\in\mathcal{K}\). In order to spare the MEC server sufficient time to execute the cached tasks in the task caching phase, WD-\(k_{o}\) needs to fully offload a number \(\sum_{\ell=1}^{L}\alpha_{\ell}D_{\ell}\) of task input-bits by the end of the \((N_{p}-1)\)th slot. Within the task caching phase, denote by \(\tilde{d}_{k_{o},i}^{\text{off}}\) the number of task input-bits offloaded from WD-\(k_{o}\) to the AP at the \(i\)th slot, where \(i=1,...,N_{p}-1\). Hence, we have
\[\sum_{i=1}^{N_{p}-1}\tilde{d}_{k_{o},i}^{\text{off}}=\sum_{\ell=1}^{L}\alpha_ {\ell}D_{\ell}. \tag{2}\]
During the task caching phase, the amount of energy consumption of WD-\(k_{o}\) due to task offloading is \(\tilde{E}_{k_{o}}^{\text{off}}=\sum_{i=1}^{N_{p}-1}\frac{\tau\sigma^{2}\big(2^{\tilde{d}_{k_{o},i}^{\text{off}}/(\tau B_{k_{o},i})}-1\big)}{|h_{k_{o},i}|^{2}}\), where \(h_{k_{o},i}\) and \(B_{k_{o},i}\) denote the complex-valued channel coefficient and system bandwidth for task offloading from WD-\(k_{o}\) to the AP at the \(i\)th slot of the task caching phase, respectively, and \(\sigma^{2}\) denotes the additive white Gaussian noise (AWGN) power at the AP receiver.
#### I-A3 Cached Task Execution
The AP executes all cached tasks to proactively obtain their results for further reuse. Due to the causality of task execution, the total number of task input-bits to be executed by the MEC server until the \(i\)th slot of the task caching phase _cannot_ exceed that offloaded by WD-\(k_{o}\) before the \((i-1)\)th slot, where \(i=1,...,N_{p}\). Denote by \(\tilde{d}_{i}^{\text{mec}}\) the number of task input-bits executed by the MEC server at the \(i\)th slot of the task caching phase. Accordingly, the task causality constraints in the task caching phase are
\[\sum_{j=1}^{i}\tilde{d}_{j}^{\text{mec}}\leq\sum_{j=1}^{i-1}\tilde{d}_{k_{o},j }^{\text{off}},\ \forall i=1,...,N_{p}, \tag{3}\]
where \(\tilde{d}_{1}^{\text{mec}}=0\) due to the fact that there exists no task to execute yet at the first slot of the task caching phase. In addition, the computation of the offloaded tasks needs to be completed by the MEC server within the task caching phase. Hence, we have the task completion constraint as
\[\sum_{j=1}^{N_{p}}\tilde{d}_{j}^{\text{mec}}=\sum_{j=1}^{N_{p}-1}\tilde{d}_{k_ {o},j}^{\text{off}}. \tag{4}\]
In addition, the amount of energy consumption of the MEC server within the task caching phase is \(\tilde{E}^{\text{mec}}=\sum_{i=1}^{N_{p}}\zeta_{0}C_{0}\tilde{d}_{i}^{\text{mec}}(\tilde{f}_{i}^{\text{mec}})^{2}=\sum_{i=1}^{N_{p}}\frac{\zeta_{0}C_{0}^{3}(\tilde{d}_{i}^{\text{mec}})^{3}}{\tau^{2}}\), where \(\tilde{f}_{i}^{\text{mec}}=\frac{C_{0}\tilde{d}_{i}^{\text{mec}}}{\tau}\) denotes the required CPU frequency for task execution by the MEC server at the \(i\)th slot of Phase I, \(C_{0}\) denotes the number of required CPU cycles per task input-bit, and \(\zeta_{0}\) denotes the CPU architecture capacitance coefficient of the MEC server.
### _Phase II: WDs' Task Arrival and Execution_
Within this phase, if the result of the task arriving at the beginning of a slot for WD-\(k\) has been cached by the MEC server during Phase I, WD-\(k\) will download the result directly2. Otherwise, this task needs to be executed by local computing at WD-\(k\) and/or task offloading to the MEC server. Let \(\mathbf{s}_{k}\triangleq\{s_{k,1},...,s_{k,N}\}\) denote the sequence of computation tasks for each WD-\(k\), where each task \(s_{k,n}\in\mathcal{T}\) arrives at WD-\(k\) at the beginning of the \(n\)th slot and \(n\in\mathcal{N}\triangleq\{1,...,N\}\).3 Since the arrived tasks are randomly sampled from the task set \(\mathcal{T}\), it is possible that some tasks in the sequence \(\mathbf{s}_{k}\) are repeated. Therefore, we need to retrieve the task-arrival set from each WD-\(k\)'s task sequence \(\mathbf{s}_{k}\).
Footnote 2: We assume that the number of task-output bits is significantly smaller than that of task-input bits, and therefore the incurred energy cost at the MEC server is negligible [1, 2, 3].
Footnote 3: We assume that the sequence of each WD’s computational tasks is fully predicted by exploiting the historical data _a priori_[4, 5, 6]. Hence, the proposed solution is offline, serving as a performance lower bound for online solutions considering (partially) unknown dynamic task arrivals.
**Definition 1** (Causality Task Set): _For each WD-\(k\), we define \(\mathcal{S}_{k,n}^{\text{CTS}}=\{s_{k,i}\in\mathcal{T}\ |\ i\in\{1,...,n\}\}\) as WD-\(k\)'s causality task set (CTS) till the \(n\)th slot. It follows that \(\mathcal{S}_{k,1}^{\text{CTS}}=\{s_{k,1}\}\) and \(\mathcal{S}_{k,i}^{\text{CTS}}\subseteq\mathcal{S}_{k,j}^{\text{CTS}}\) for \(i<j\in\mathcal{N}\)._
We consider _partial offloading_ policy [11], such that each WD-\(k\) can arbitrarily divide each task into two parts for local computing and computation offloading, respectively.
#### I-B1 Local Computing and Task Offloading of WDs
Fig. 1: Timeline of the cache-enabled MEC protocol within one block.

Let \(d_{k,n}^{\text{loc}}\geq 0\) and \(d_{k,n}^{\text{off}}\geq 0\) denote the number of task input-bits for local computing and computation offloading for each WD-\(k\) at the \(n\)th slot, respectively. For WD-\(k\), the total number of task input-bits executed by both local computing and offloading until the \(n\)th slot cannot exceed the number of (uncached) task input-bits arriving until the \(n\)th slot, where \(n\in\mathcal{N}\). Therefore, we have the task computation causality constraints as [11]
\[\sum_{j=1}^{n}d_{k,j}^{\text{loc}}+\sum_{j=1}^{n}d_{k,j}^{\text{off}}\leq\sum_{\ell=1}^{L}\mathbbm{1}_{T_{\ell}\in\mathcal{S}_{k,n}^{\text{CTS}}}(1-\alpha_{\ell})D_{\ell},\ n\in\mathcal{N}, \tag{5}\]
where \(k\in\mathcal{K}\), and \(\mathbbm{1}_{A}\) denotes the indicator function with \(\mathbbm{1}_{A}=1\) if the statement \(A\) is true, and \(\mathbbm{1}_{A}=0\) otherwise.
Note that the WDs need to obtain the computed results of the arrived tasks before the end of the \(N\)th slot. Therefore, we have the task computation deadline constraint as
\[\sum_{j=1}^{N}d_{k,j}^{\text{loc}}+\sum_{j=1}^{N}d_{k,j}^{\text{off}}=\sum_{\ell=1}^{L}\mathbbm{1}_{T_{\ell}\in\mathcal{S}_{k,N}^{\text{CTS}}}(1-\alpha_{\ell})D_{\ell}, \tag{6}\]
where \(k\in\mathcal{K}\). Note that \(d_{k,N}^{\text{off}}=0\), since bits offloaded at the \(N\)th slot would leave no time for the MEC server's remote execution.
Denote by \(C_{k}\) the number of CPU cycles for executing one task input-bit by the local computing of WD-\(k\). We consider that these CPU cycles are locally executed by WD-\(k\) using an identical CPU frequency at the \(n\)th slot, which is determined as \(f_{k,n}=\frac{C_{k}d_{k,n}^{\text{loc}}}{\tau}\), \(\forall k\in\mathcal{K}\), \(n\in\mathcal{N}\)[1, 2]. For WD-\(k\), we assume that the CPU frequency \(f_{k,n}\) is always smaller than the allowable maximum CPU frequency. Denote by \(E_{k}^{\text{loc}}\) the total amount of energy consumption of WD-\(k\) for local computing. Therefore, we have \(E_{k}^{\text{loc}}=\sum_{n=1}^{N}\zeta_{k}C_{k}d_{k,n}^{\text{loc}}f_{k,n}^{2}=\sum_{n=1}^{N}\frac{\zeta_{k}C_{k}^{3}(d_{k,n}^{\text{loc}})^{3}}{\tau^{2}}\), where \(\zeta_{k}\) denotes the CPU architecture capacitance coefficient of WD-\(k\).
Let \(p_{k,n}>0\), \(h_{k,n}\in\mathbb{C}\), and \(B_{k,n}>0\) denote the transmit power, the channel coefficient, and the system bandwidth for task offloading from WD-\(k\) to the AP at the \(n\)th slot of Phase II, respectively. The channel state information \(\{h_{k,n}\}\) is assumed to be perfectly obtained based on channel estimation methods in this letter. As WD-\(k\) needs to offload a number \(d_{k,n}^{\text{off}}\) of task input-bits to the MEC server, the data rate for offloading from WD-\(k\) to the AP at the \(n\)th slot is \(r_{k,n}=d_{k,n}^{\text{off}}/\tau\), where \(r_{k,n}\triangleq B_{k,n}\log_{2}(1+\frac{p_{k,n}|h_{k,n}|^{2}}{\sigma^{2}})\). Hence, the amount of energy consumption for WD-\(k\)'s task offloading in Phase II is given by \(E_{k}^{\text{off}}=\sum_{n=1}^{N-1}p_{k,n}\tau=\sum_{n=1}^{N-1}\frac{\tau\sigma^{2}(2^{d_{k,n}^{\text{off}}/(\tau B_{k,n})}-1)}{|h_{k,n}|^{2}}\).
As a result, the total energy consumption \(E_{k}\) of WD-\(k\) in Phase II is expressed as \(E_{k}=E_{k}^{\text{loc}}+E_{k}^{\text{off}}\), \(\forall k\in\mathcal{K}\).
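To make the per-slot energy model concrete, the short sketch below (an illustrative transcription of the formulas above) evaluates the local-computing and offloading energies of a WD for one slot; the CPU parameters loosely follow the simulation section, while the channel numbers are hypothetical.

```python
def local_energy(d_loc, C_k, zeta_k, tau):
    """E^loc for one slot: zeta_k * C_k^3 * d_loc^3 / tau^2 (CPU frequency f = C_k d_loc / tau)."""
    return zeta_k * C_k**3 * d_loc**3 / tau**2

def offload_energy(d_off, bandwidth, channel_gain, noise_power, tau):
    """E^off for one slot: tau * sigma^2 * (2^{d_off/(tau B)} - 1) / |h|^2."""
    return tau * noise_power * (2.0**(d_off / (tau * bandwidth)) - 1.0) / channel_gain

# Illustrative numbers only:
tau, zeta_k, C_k = 0.1, 1e-28, 3e3
d_loc, d_off = 2e4, 3e4                      # bits computed locally / offloaded in this slot
B, gain, sigma2 = 1e6, 1e-9, 1e-8            # hypothetical bandwidth, |h|^2 and noise power
E_slot = local_energy(d_loc, C_k, zeta_k, tau) + offload_energy(d_off, B, gain, sigma2, tau)
print(f"per-slot WD energy: {E_slot:.3e} J")
```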
#### I-B2 Task Execution of MEC Server
The MEC server needs to execute the offloaded tasks from the \(K\) WDs. Denote by \(d_{n}^{\text{me}}\) the number of task input-bits executed by the MEC server at the \(n\)th slot. Due to the task causality conditions, the total number of task input-bits executed by the MEC server until the \(n\)th slot cannot exceed those offloaded from the \(K\) WDs until the previous \((n-1)\)th slot. Therefore, the task causality constraints at the MEC server are expressed as
\[\sum_{j=1}^{n}d_{j}^{\text{mee}}\leq\sum_{j=1}^{n-1}\sum_{k=1}^{K}d_{k,j}^{ \text{off}},\ \forall n\in\mathcal{N}\setminus\{N\}. \tag{7}\]
Note that \(d_{1}^{\text{mee}}=0\), since there exist no offloaded tasks available at the MEC server at the first slot. Again, the computation of these offloaded tasks needs to be completed before the end of the \(N\)th slot of Phase II. Thus, the task computation deadline constraint at the MEC server is
\[\sum_{j=1}^{N}d_{j}^{\text{mee}}=\sum_{j=1}^{N-1}\sum_{k=1}^{K}d_{k,j}^{\text{ off}}. \tag{8}\]
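Before moving on, the Phase II constraint structure can be summarized in a short sketch (our own illustration, assuming 0-indexed arrays): it checks the WD-side constraints (5)-(6) and the MEC-side constraints (7)-(8) for candidate schedules, where `arrivals` holds the uncached input-bits that become newly available to each WD per slot (each distinct task counted once, per the CTS).

```python
import numpy as np

def feasible_phase2(d_loc, d_off, d_mec, arrivals, atol=1e-6):
    """d_loc, d_off, arrivals: (K, N) arrays; d_mec: length-N array."""
    cum_exec = np.cumsum(d_loc + d_off, axis=1)           # bits handled by each WD up to slot n
    cum_arr = np.cumsum(arrivals, axis=1)                  # uncached bits arrived up to slot n
    wd_causality = np.all(cum_exec <= cum_arr + atol)                               # (5)
    wd_deadline = np.allclose(cum_exec[:, -1], cum_arr[:, -1], atol=atol)           # (6)
    cum_mec = np.cumsum(d_mec)
    cum_off = np.cumsum(d_off.sum(axis=0))
    mec_causality = cum_mec[0] <= atol and np.all(cum_mec[1:] <= cum_off[:-1] + atol)  # (7)
    mec_deadline = abs(cum_mec[-1] - cum_off[-2]) <= atol                           # (8)
    return wd_causality and wd_deadline and mec_causality and mec_deadline
```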
Let \(f_{n}^{\text{mec}}\) denote the CPU frequency of the MEC server at the \(n\)th slot, which is determined as \(f_{n}^{\text{mec}}=\frac{C_{0}d_{n}^{\text{mec}}}{\tau}\). The amount of energy consumption for the MEC server to execute a total of \(\sum_{n=1}^{N}C_{0}d_{n}^{\text{mec}}\) CPU cycles within the \(N\) slots is expressed as \(E^{\text{mec}}=\sum_{n=1}^{N}\frac{\zeta_{0}C_{0}^{3}(d_{n}^{\text{mec}})^{3}}{\tau^{2}}\).
### _Problem Formulation_
In this letter, we are interested in minimizing the weighted-sum energy consumption of a block for the cache-enabled multiuser MEC system, subject to the MEC server's caching capacity constraint, the task causality constraints, and the task completion deadline constraints. Accordingly, by defining \(\boldsymbol{x}\triangleq(\{\alpha_{\ell}\}_{\ell=1}^{L},\{\tilde{d}_{k_{o},i}^{\text{off}},\tilde{d}_{i}^{\text{mec}}\}_{i=1}^{N_{p}},\{d_{k,n}^{\text{loc}},d_{k,n}^{\text{off}}\}_{k\in\mathcal{K},n\in\mathcal{N}},\{d_{n}^{\text{mec}}\}_{n=1}^{N})\), the cache-enabled MEC design problem is formulated as
\[\text{(P1)}:\ \underset{\boldsymbol{x}}{\text{minimize}}\ w_{0}(\tilde{E}^{\text{mec}}+E^{\text{mec}})+w_{1}(\tilde{E}_{k_{o}}^{\text{off}}+\sum_{k=1}^{K}E_{k})\] (9a) subject to \[\text{(1)--(8)},\ \alpha_{\ell}\in\{0,1\},\ \forall\ell=1,...,L \tag{9b}\] \[\tilde{d}_{k_{o},i}^{\text{off}}\geq 0,\ \tilde{d}_{i}^{\text{mec}}\geq 0,\ \forall i=1,...,N_{p}\] (9c) \[d_{k,n}^{\text{loc}}\geq 0,d_{k,n}^{\text{off}}\geq 0,d_{n}^{\text{mec}}\geq 0,\ \forall k,\forall n, \tag{9d}\]
where \(w_{0}\geq 0\) and \(w_{1}\geq 0\) denote the energy weights such that \(w_{0}+w_{1}=1\). Note that (P1) is a mixed-integer nonlinear programming (MINLP) problem, which is NP-hard [12, 13].
## III Proposed Offline Solutions to Problem (P1)
In this section, we first employ the BnB method to obtain the optimal offline solution to (P1), and then introduce two low-complexity schemes based on task-popularity and convex relaxation, respectively.
### _Optimal Offline Solution Based on BnB Algorithm_
The BnB method is an efficient and powerful tree-search algorithm that maintains a provable upper and lower bound on the optimal objective value and terminates with an \(\epsilon\)-optimal solution [13]. Hence, in order to obtain a globally optimal benchmark for practical cache-enabled MEC design schemes, we employ the BnB method to solve problem (P1) in this subsection.
To start with, we define the sets \(\mathcal{L}_{0}\subseteq\mathcal{L}\) and \(\mathcal{L}_{1}\subseteq\mathcal{L}\), where \(\mathcal{L}\triangleq\{1,...,L\}\). Consider an optimization problem as
\[\text{P}(\mathcal{L}_{0},\mathcal{L}_{1}):\ \underset{\boldsymbol{x}}{\text{minimize}}\ w_{0}(\tilde{E}^{\text{mec}}+E^{\text{mec}})+w_{1}(\tilde{E}_{k_{o}}^{\text{off}}+\sum_{k=1}^{K}E_{k})\] subject to \[\text{(1)--(8)},\text{(9c)},\text{(9d)}\] \[\alpha_{\ell}\in\{0,1\},\ \forall\ell\in\mathcal{L}\setminus(\mathcal{L}_{0}\cup\mathcal{L}_{1}),\]
where \(\alpha_{\ell}=0\) for \(\ell\in\mathcal{L}_{0}\) and \(\alpha_{\ell}=1\) for \(\ell\in\mathcal{L}_{1}\). If the sets satisfy \(\mathcal{L}_{0}\cup\mathcal{L}_{1}\neq\mathcal{L}\), then \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1})\) is a mixed Boolean convex problem [13]. Following the BnB approach, we establish a
binary tree with root as \(\text{P}(\emptyset,\emptyset)\), and \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1})\) corresponds to a node at depth \(m\) in the tree, where a number \(|\mathcal{L}_{0}|+|\mathcal{L}_{1}|=m\) of Boolean variables are specified and \(0\leq m\leq L\). Specifically, we obtain a global upper bound and a global lower bound in each iteration of the BnB method, where the optimal value of problem (P1) is guaranteed to be always within the range of the global upper and lower bounds. The detailed BnB procedure is described as follows.
* _Bounding:_ By solving \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1})\) with the Boolean variables being relaxed as continuous variables, we obtain a lower bound of the optimal value of \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1})\). By rounding \(\alpha_{\ell}\), \(\forall\ell\in\mathcal{L}\setminus(\mathcal{L}_{0}\cup\mathcal{L}_{1})\), to be zero or one, we obtain an upper bound of the optimal value of \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1})\).
* _Branching:_ By selecting one task index \(\ell\in\mathcal{L}\setminus(\mathcal{L}_{0}\cup\mathcal{L}_{1})\), we obtain two sub-problems as \(\text{P}(\mathcal{L}_{0}\cup\{\ell\},\mathcal{L}_{1})\) and \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1}\cup\{\ell\})\). Letting the Boolean variables of \(\text{P}(\mathcal{L}_{0}\cup\{\ell\},\mathcal{L}_{1})\) (or \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1}\cup\{\ell\})\)) be relaxed and fixed, respectively, we obtain a lower and an upper bound of the optimal value of \(\text{P}(\mathcal{L}_{0}\cup\{\ell\},\mathcal{L}_{1})\) (or \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1}\cup\{\ell\})\)). Then, we update the global lower and upper bounds.
* _Pruning:_ At each iteration, we remove the nodes with lower bounds larger than the current global upper bound from the tree.
The proposed BnB method maintains a provable upper and lower bound on the optimal objective value, and it returns an \(\epsilon\)-optimal solution for problem (P1) [13], where \(\epsilon>0\) denotes the tolerable error. Specifically, a number of \(\mathcal{O}\big((2^{L+2}-1)\sqrt{N_{p}+KN+N}\log(\frac{(N_{p}+KN+N)/t^{(0)}}{\epsilon})\big)\) Newton iterations in the worst case is required to solve (P1), where \(t^{(0)}>0\) denotes the initial barrier parameter of the interior-point method for obtaining the lower and upper bounds for each problem \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1})\)[12]. This is practically prohibitive in terms of computational complexity, especially when the task library size \(L\) is large. Hence, we propose in the sequel two computationally-efficient solutions by separating the caching and computation decisions.
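The tree search described above can be organized as in the following skeleton (a sketch of a generic best-first BnB loop, not the authors' implementation); the callbacks `lower_bound(L0, L1)` and `upper_bound(L0, L1)` are assumed to solve the relaxed and rounded versions of \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1})\), e.g., with an off-the-shelf convex solver.

```python
import heapq
from itertools import count

def branch_and_bound(L, lower_bound, upper_bound, eps=1e-3):
    """Best-first BnB over the Boolean caching variables alpha_1, ..., alpha_L."""
    tie = count()                                   # tie-breaker for the priority queue
    root = (frozenset(), frozenset())               # (L0, L1): indices fixed to 0 / 1
    best_ub = upper_bound(*root)
    heap = [(lower_bound(*root), next(tie), root)]
    while heap:
        lb, _, (L0, L1) = heapq.heappop(heap)
        if lb >= best_ub - eps:
            continue                                # prune: this subtree cannot improve
        free = [l for l in range(1, L + 1) if l not in L0 and l not in L1]
        if not free:
            continue                                # leaf: every alpha_l is already fixed
        l = free[0]                                 # branch on one undecided task index
        for child in ((L0 | {l}, L1), (L0, L1 | {l})):
            best_ub = min(best_ub, upper_bound(*child))
            child_lb = lower_bound(*child)
            if child_lb < best_ub - eps:
                heapq.heappush(heap, (child_lb, next(tie), child))
    return best_ub                                  # epsilon-optimal objective value
```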
### _Suboptimal Solution with Task-Popularity Caching Policy_
In this subsection, we present a task-popularity caching based design scheme. First, based on the task-popularity scores of the total \(L\) tasks and the MEC server's caching capacity, we determine the task cache placement decision for the task-caching phase. Next, given the cache-placement decisions, we jointly optimize the \(K\) WDs' task offloading decisions and local/remote CPU frequencies within Phase II.
For task \(T_{\ell}\), its task-popularity score \(t_{\ell}\) is defined as the number of occurrences in the \(K\) WDs' task sequences [9, 10], i.e., \(t_{\ell}=\sum_{k=1}^{K}\sum_{n=1}^{N}\mathbb{1}_{s_{k,n}=T_{\ell}}\), where \(s_{k,n}\in\mathcal{S}_{k,N}^{\text{CTS}}\). Based on the popularity scores, these \(L\) tasks are ordered as \(t_{\pi(1)}\geq t_{\pi(2)}\geq...\geq t_{\pi(L)}\), where \(\mathbf{\pi}=[\pi(1),...,\pi(L)]^{T}\) is a permutation of the sequence \(\{1,...,L\}\). Under the caching capacity constraint of the MEC server, we select a number of \(1\leq M\leq L\) tasks with the highest-\(M\) popularity scores4, i.e., \(\{T_{\pi(1)},...,T_{\pi(M)}\}\), to be cached in the MEC server, such that \(\sum_{m=1}^{M}D_{\pi(m)}\leq D^{\max}\) and \(\sum_{m=1}^{M+1}D_{\pi(m)}>D^{\max}\). Accordingly, the sets \(\mathcal{L}_{0}^{\text{pop}}=\{\pi(M+1),...,\pi(L)\}\) and \(\mathcal{L}_{1}^{\text{pop}}=\{\pi(1),...,\pi(M)\}\) are determined, and we have \(\alpha_{i}^{\text{pop}}=0\) for \(i\in\mathcal{L}_{0}^{\text{pop}}\) and \(\alpha_{j}^{\text{pop}}=1\) for \(j\in\mathcal{L}_{1}^{\text{pop}}\). Next, given the determined \(\mathcal{L}_{0}^{\text{pop}}\) and \(\mathcal{L}_{1}^{\text{pop}}\), we solve the convex problem \(\text{P}(\mathcal{L}_{0}^{\text{pop}},\mathcal{L}_{1}^{\text{pop}})\) to obtain its optimal solution \(((\tilde{d}_{k_{o},i}^{\text{off}})^{\text{pop}},(\tilde{d}_{i}^{\text{mec}})^{\text{pop}},(d_{k,n}^{\text{off}})^{\text{pop}},(d_{k,n}^{\text{loc}})^{\text{pop}},(d_{n}^{\text{mec}})^{\text{pop}})\). Now, the task-popularity caching based solution for (P1) is obtained as \((\alpha_{\ell}^{\text{pop}},(\tilde{d}_{k_{o},i}^{\text{off}})^{\text{pop}},(\tilde{d}_{i}^{\text{mec}})^{\text{pop}},(d_{k,n}^{\text{off}})^{\text{pop}},(d_{k,n}^{\text{loc}})^{\text{pop}},(d_{n}^{\text{mec}})^{\text{pop}})\).
Footnote 4: Note that when multiple tasks have the same popularity score, the MEC server selects the task with as large a number of task input-bits as possible for energy saving, subject to the MEC server's cache capacity constraint. In the case when the equally-popular tasks have the same number of task input-bits, the MEC server equiprobably selects one of these tasks.
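The cache-placement step of this scheme reduces to a simple greedy selection, sketched below (our own illustration; the tie-breaking follows footnote 4, and the variable names are hypothetical).

```python
from collections import Counter

def popularity_cache_placement(task_sequences, task_bits, cache_capacity):
    """task_sequences: {wd: [task index per slot]}; task_bits: {task index: D_l}."""
    scores = Counter(t for seq in task_sequences.values() for t in seq)
    # Rank by popularity; among equally popular tasks, prefer larger input sizes.
    ranked = sorted(task_bits, key=lambda l: (scores[l], task_bits[l]), reverse=True)
    cached, used = set(), 0
    for l in ranked:
        if used + task_bits[l] > cache_capacity:
            break            # the scheme caches the top-M prefix that still fits
        cached.add(l)
        used += task_bits[l]
    return cached            # indices l with alpha_l^pop = 1
```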
### _Suboptimal Solution Based on Convex Relaxation_
In this subsection, we present a convex relaxation based design scheme. Specifically, by relaxing the binary task cache decision variables \(\{\alpha_{\ell}\}_{\ell=1}^{L}\) into continuous ones (i.e., \(0\leq\alpha_{\ell}\leq 1\), \(\forall\ell=1,...,L\)), problem (P1) is transformed into a convex optimization problem, whose optimal solution can thus be efficiently obtained by off-the-shelf convex solvers, e.g., the CVX toolbox [12]. Denote by \((\alpha_{\ell}^{*},\tilde{d}_{k_{o},i}^{\text{off*}},\tilde{d}_{i}^{\text{mec*}},d_{k,n}^{\text{off*}},d_{k,n}^{\text{loc*}},d_{n}^{\text{mec*}})\) the optimal solution to the convex-relaxed problem (P1). We then determine the sets \(\mathcal{L}_{0}^{\text{rel}}=\{\ell\,|\,0\leq\alpha_{\ell}^{*}\leq 0.5,\ell\in\mathcal{L}\}\) and \(\mathcal{L}_{1}^{\text{rel}}=\{\ell\,|\,0.5<\alpha_{\ell}^{*}\leq 1,\ell\in\mathcal{L}\}\). Hence, we have \(\alpha_{i}^{\text{rel}}=0\) for \(i\in\mathcal{L}_{0}^{\text{rel}}\) and \(\alpha_{j}^{\text{rel}}=1\) for \(j\in\mathcal{L}_{1}^{\text{rel}}\), together with the corresponding solution \(((\tilde{d}_{k_{o},i}^{\text{off}})^{\text{rel}},(\tilde{d}_{i}^{\text{mec}})^{\text{rel}},(d_{k,n}^{\text{off}})^{\text{rel}},(d_{k,n}^{\text{loc}})^{\text{rel}},(d_{n}^{\text{mec}})^{\text{rel}})\) for Phase II.
## IV Numerical Results
In this section, we evaluate the effectiveness of the proposed schemes. In simulations, we set \(K=20\), \(N_{p}=5\), \(N=30\), and \(\tau=0.1\) second. The CPU architecture capacitance coefficients are set as \(\zeta_{k}=10^{-28}\) and \(\zeta_{0}=10^{-29}\); the numbers of CPU cycles for WD-\(k\)'s local computing and the MEC server's execution of one task input-bit are \(C_{k}=3\times 10^{3}\) and \(C_{0}=10^{3}\) CPU-cycles/bit, \(k\in\mathcal{K}\), respectively; the energy weights of the AP and the WDs are set as \(w_{0}=0.1\) and \(w_{1}=0.9\), respectively. Denote by \(d_{k}\in[500,1000]\) meters (m) the distance between WD-\(k\) and the AP, where \(d_{k}=500+\frac{500(k-1)}{K-1}\) m, \(k\in\mathcal{K}\). We consider the Rician fading channel model [2]: \(h_{k,n}=\sqrt{\frac{\mathcal{K}_{R}\Omega_{0}d_{k}^{-\alpha}}{1+\mathcal{K}_{R}}}h_{0}+\sqrt{\frac{\Omega_{0}d_{k}^{-\alpha}}{1+\mathcal{K}_{R}}}h\), \(\forall k,n\), where \(\mathcal{K}_{R}=3\) denotes the Rician factor, \(h_{0}=1\) is the line-of-sight (LoS) component, \(\Omega_{0}=-32\) dB corresponds to the pathloss at a reference distance of one meter, \(\alpha=3\) denotes the pathloss exponent, and \(h\sim\mathcal{CN}(0,1)\) denotes the non-line-of-sight (NLoS) small-scale fading component. We consider the following benchmark schemes for performance comparison:
* _Full local computing scheme:_ Each WD only locally executes its tasks, which corresponds to solving (P1) by setting \(d_{k,n}^{\text{eff}}=0\), \(\forall k,n\).
Fig. 2 shows the average weighted-sum energy performance versus the caching capacity \(D^{\max}\), where the noise power is \(\sigma^{2}=10^{-8}\) Watt (W). Except for the benchmark scheme in which the MEC server cannot cache tasks, the system weighted-sum energy consumption of the other five schemes decreases with \(D^{\max}\). In Fig. 2(a), compared to the _Full offloading_ scheme, the proposed _Relaxation_ scheme achieves a performance closer to that of the BnB optimal scheme for small \(D^{\max}\) values (e.g., \(D^{\max}\leq 60\) Kbits), but not for large \(D^{\max}\) values. This implies the importance of exploiting both offloading and local computing capabilities for energy saving in the case of a small caching capacity. In Fig. 2(a), the task-popularity based caching scheme performs inferiorly to both the _Relaxation_ scheme and the _Full offloading_ scheme, but outperforms the _Full local computing_ scheme. In Fig. 2(b), the _Task-popularity caching_ scheme outperforms the _Full offloading_ scheme for small caching capacity values (e.g., \(D^{\max}\leq 75\) Kbits), but not for larger caching capacity values. This shows the merit of the _Task-popularity caching_ scheme for energy saving with a large task set size \(L\). Finally, all the schemes consume more energy in Fig. 2(b) than in Fig. 2(a). This is because the computation workload grows with the task set size \(L\).
Fig. 3 shows the energy consumption of the task caching and task arrival/execution phases, respectively, where the task set size is \(L=40\). It is observed that all five schemes with MEC caching capability consume almost the same energy during the task caching phase, but not during the task arrival/execution phase. This is because the MEC server prefers to cache as many computational tasks as possible for energy saving. In Fig. 3, the _Task-popularity caching_ scheme performs inferiorly to the _Relaxation_ scheme, and a substantial performance gap is observed between the _BnB_ and _Relaxation_ schemes. The _Task-popularity caching_ scheme outperforms the _Full offloading_ scheme in Fig. 3(b), but not in Fig. 3(a). This demonstrates that the energy consumption for task offloading becomes dominant in the case of a high noise power.
## V Conclusion
In this letter, we investigated a joint task cache placement and offloading design for cache-enabled MEC systems with dynamic task arrivals. With the objective of minimizing the system's weighted-sum energy consumption in both the task caching and task arrival/execution phases, we jointly optimized the task cache placement, the MEC server's task execution, and local computing as well as task offloading of the WDs, subject to the caching capacity, task causality, and task completion deadline constraints. We first employed the BnB method to obtain the optimal offline solution to characterize a performance lower bound for online schemes considering (partially) unknown dynamic task arrivals, and then proposed two low-complexity caching strategies based on task popularity and convex relaxation, respectively. As future work, it is worth investigating the robust task offloading and caching design against prediction errors of the task sequence and reinforcement learning (RL) based joint design for scenarios of partially predictable and fully unknown task-arrival sequences, respectively.
|
2309.07311 | Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and
Simplicity Bias in MLMs | Most interpretability research in NLP focuses on understanding the behavior
and features of a fully trained model. However, certain insights into model
behavior may only be accessible by observing the trajectory of the training
process. We present a case study of syntax acquisition in masked language
models (MLMs) that demonstrates how analyzing the evolution of interpretable
artifacts throughout training deepens our understanding of emergent behavior.
In particular, we study Syntactic Attention Structure (SAS), a naturally
emerging property of MLMs wherein specific Transformer heads tend to focus on
specific syntactic relations. We identify a brief window in pretraining when
models abruptly acquire SAS, concurrent with a steep drop in loss. This
breakthrough precipitates the subsequent acquisition of linguistic
capabilities. We then examine the causal role of SAS by manipulating SAS during
training, and demonstrate that SAS is necessary for the development of
grammatical capabilities. We further find that SAS competes with other
beneficial traits during training, and that briefly suppressing SAS improves
model quality. These findings offer an interpretation of a real-world example
of both simplicity bias and breakthrough training dynamics. | Angelica Chen, Ravid Shwartz-Ziv, Kyunghyun Cho, Matthew L. Leavitt, Naomi Saphra | 2023-09-13T20:57:11Z | http://arxiv.org/abs/2309.07311v5 | # Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs
###### Abstract
Most interpretability research in NLP focuses on understanding the behavior and features of a fully trained model. However, certain insights into model behavior may only be accessible by observing the trajectory of the training process. We present a case study of syntax acquisition in masked language models (MLMs) that demonstrates how analyzing the evolution of interpretable artifacts throughout training deepens our understanding of emergent behavior. In particular, we study Syntactic Attention Structure (SAS), a naturally emerging property of MLMs wherein specific Transformer heads tend to focus on specific syntactic relations. We identify a brief window in pretraining when models abruptly acquire SAS, concurrent with a steep drop in loss. This breakthrough precipitates the subsequent acquisition of linguistic capabilities. We then examine the causal role of SAS by manipulating SAS during training, and demonstrate that SAS is necessary for the development of grammatical capabilities. We further find that SAS competes with other beneficial traits during training, and that briefly suppressing SAS improves model quality. These findings offer an interpretation of a real-world example of both simplicity bias and breakthrough training dynamics.
## 1 Introduction
While language model training usually leads to smooth improvements in loss over time (Kaplan et al., 2020), not all knowledge emerges uniformly. Instead, language models acquire different capabilities at different points in training. Some capabilities remain fixed (Press et al., 2023), while others decline (McKenzie et al., 2022), as a function of dataset size or model capacity. Certain capabilities even exhibit abrupt improvements--this paper focuses on such discontinuous dynamics, which are often called **breakthroughs**(Srivastava et al., 2022), **emergence**(Wei et al., 2022), **breaks**(Caballero et al., 2023), or **phase transitions**(Olsson et al., 2022). The interpretability literature rarely illuminates how these capabilities emerge, in part because most analyses only examine the final trained model. Instead, we consider _developmental_ analysis as a complementary explanatory lens.
To better understand the role of interpretable artifacts in model development, we analyze and manipulate these artifacts during training. We focus on a case study of **Syntactic Attention Structure** (SAS), a model behavior thought to relate to grammatical structure. By measuring and controlling the emergence of SAS, we deepen our understanding of the relationship between the internal structural traits and extrinsic capabilities of masked language models (MLMs).
SAS occurs when a model learns specialized attention heads that focus on a word's syntactic neighbors. This behavior emerges naturally during conventional MLM pre-training (Clark et al., 2019; Voita et al., 2019; Manning et al., 2020). We observe an abrupt spike in SAS at a consistent point in training, and explore its impact on MLM capabilities by manipulating SAS during training. Our observations paint a picture of how interpretability artifacts may represent simplicity biases that compete with other learning strategies during MLM training. In summary, our main contributions are:
* Monitoring latent syntactic structure (defined in Section 2.1) throughout training, we identify (Section 4.1) a precipitous loss drop composed of multiple phase transitions (defined in Section 2.3) relating to various linguistic abilities. At the onset of this stage (which we call the **structure onset**), SAS spikes. After the spike, the model starts handling complex linguistic phenomena correctly, as signaled by a break in BLiMP score (which we call the **capabilities onset**). The structure onset is associated with increasing functional complexity in the model, whereas the rest of training sees declining complexity.
* We introduce a regularizer to examine the causal role of SAS (defined in Section 2.2) and use it to show that SAS is necessary for handling complex linguistic phenomena (Section 4.2) and that SAS competes with an alternative strategy that exhibits its own break in the loss curve, which we call the **alternative strategy onset**.
* Section 4.3 shows that briefly suppressing SAS improves model quality and accelerates convergence. Suppressing past the alternative strategy onset damages performance and blocks SAS long-term, suggesting this phase transition terminates a critical learning period.
## 2 Methods
### Syntactic Attention Structure
One proposal for interpreting attention is to treat some attention weights as syntactic connections (Manning et al., 2020; Voita et al., 2019; Clark et al., 2019). Our method is based on Clark et al. (2019), who find that some specialized attention heads focus on the target word's dependency relations.
Dependency parses describe latent syntactic structure. Each word in a sentence has a word that it modifies, which is its parent in the syntax tree. Each dependency is labeled--e.g., an adjective modifies a noun through an amod relation in the Universal Dependencies annotation system (Nivre et al., 2017). For example, when an MLM predicts the word _nests_ in a phrase like "builds ugly nests", it is likely to rely heavily on its syntactic relations _builds_ and _ugly_. One head may attend to adjectival modifiers like _ugly_ while another attends to direct objects like _builds_.
Figure 1: **BERT first learns to focus on syntactic neighbors with specialized attention heads, and then exhibits grammatical capabilities in its MLM objective**. The former (internal) and the latter (external) model behaviors both emerge abruptly, at moments we respectively call the **structure onset** (\(\blacktriangle\)) and **capabilities onset** (\(\blacklozenge\)) (quantified as described in Section 2.3). We separately visualize three runs with different seeds, noting that these seeds differ in the stability of Unlabeled Attachment Score (UAS; see Section 2.1) after the structure onset, but uniformly show that SAS emerges almost entirely in a brief window of time. We show (a) MLM loss, with 95% confidence intervals across samples; (b) internal grammar structure, measured by UAS on the parse induced by the attention distributions; and (c) external grammar capabilities, measured by average BLiMP accuracy with 95% confidence intervals across tasks.
We call this tendency to form heads that specialize in specific syntactic relations Syntactic Attention Structure (SAS). To measure SAS, we follow Clark et al. (2019) in using a simple probe based off the surface-level attention patterns, detailed in Appendix A. The probe provides an implicit parse, with an accuracy measured by **unlabeled attachment score** (UAS).
### Controlling SAS
In addition to training models with \(\text{BERT}_{\text{Base}}\) parameters, we also train models where SAS is promoted or suppressed. The model with SAS promoted throughout training is called \(\text{BERT}_{\text{SAS+}}\), while the model with SAS suppressed throughout training is called \(\text{BERT}_{\text{SAS-}}\).
In order to adjust SAS for these models, we train a \(\text{BERT}_{\text{Base}}\) model through methods that are largely conventional (Section 3.1), with one difference. We add a **syntactic regularizer** that manipulates the structure of the attention distributions using a syntacticity score \(\gamma(x_{i},x_{j})\), equal to the maximum attention weight between syntactically connected words \(i\) and \(j\). We use this regularizer to penalize or reward higher attention weights on a token's syntactic neighbors by adding it to the MLM loss \(L_{\text{MLM}}\). We scale the regularizer by a constant coefficient \(\lambda\) which may be negative to promote SAS or positive to suppress SAS. If we denote \(D(x)\) as the set of all dependents of \(x\), then the new loss is:
\[L(x)=\underbrace{L_{\text{MLM}}(x)}_{\text{Original loss}}+\underbrace{\lambda \sum_{i=1}^{|x|}\sum_{x_{j}\in D(x_{i})}\gamma(x_{i},x_{j})}_{\text{ Syntactic regularization}} \tag{1}\]
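A minimal PyTorch-style sketch of the regularization term in Eq. (1) follows. How \(\gamma(x_{i},x_{j})\) aggregates over layers, heads, and attention directions is an assumption on our part (here: a maximum over all of them), as are all variable names; the paper's exact implementation may differ.

```python
import torch

def syntactic_regularizer(attentions, dep_pairs, lam):
    """Regularization term of Eq. (1) for one sentence.

    attentions : tensor [num_layers, num_heads, seq_len, seq_len] of attention
                 probabilities.
    dep_pairs  : list of (head_idx, dep_idx) token-index pairs from a silver parse.
    lam        : coefficient lambda (>0 suppresses SAS, <0 promotes it).
    """
    reg = attentions.new_zeros(())
    for i, j in dep_pairs:
        # gamma(x_i, x_j): assumed here to be the maximum attention weight between
        # the two syntactically connected positions, over layers, heads, directions.
        gamma = torch.max(attentions[..., i, j].max(), attentions[..., j, i].max())
        reg = reg + gamma
    return lam * reg

# Usage sketch: total_loss = mlm_loss + syntactic_regularizer(attn, pairs, lam=0.001)
attn = torch.rand(2, 2, 5, 5).softmax(dim=-1)
print(syntactic_regularizer(attn, [(1, 2), (3, 4)], lam=0.001))
```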
### Identifying breakthroughs
This paper studies breakthroughs: sudden changes in model behavior during a brief window of training. What do we consider to be a breakthrough, given a metric \(f\) at some distance (e.g., in timesteps) from initialization \(d\)? We are looking for break point \(d^{*}\) with the sharpest angle in the trajectory of \(f\), as determined by the slope between \(d^{*}\) and \(d^{*}\pm\Delta\) for some distance \(\Delta\). If we have no measurements at the required distance, we infer a value for \(f\) based on the available checkpoints--e.g., if \(d\) is measured in discrete timesteps, we calculate the angle of loss at 50K steps for \(\Delta=5K\) by imputing the loss from checkpoints at 45K and 55K steps to calculate slope.
\[\text{break}(f,\Delta)=\arg\max_{t}\Big(\left[f(t+\Delta)-f(t)\right]-\left[f(t)-f(t-\Delta)\right]\Big) \tag{2}\]
In other words, \(\text{break}(f,\Delta)\) is the point \(t\) that maximizes the difference between the slope from \(f(t)\) to \(f(t+\Delta)\) and the slope from \(f(t-\Delta)\) to \(f(t)\), approximating the point of maximum acceleration.
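Eq. (2) can be evaluated directly on a series of checkpointed metric values; the sketch below linearly interpolates between checkpoints to impute \(f(t\pm\Delta)\) where no measurement exists, as described above. Variable names and the toy data are illustrative only.

```python
import numpy as np

def find_break(steps, values, delta):
    """Return the step t maximizing [f(t+delta)-f(t)] - [f(t)-f(t-delta)] (Eq. 2).

    steps  : 1-D array of checkpoint positions (e.g. training steps), increasing.
    values : metric f evaluated at those checkpoints.
    delta  : offset Delta used to approximate the change in slope.
    """
    steps, values = np.asarray(steps, float), np.asarray(values, float)
    f = lambda t: np.interp(t, steps, values)   # impute f between checkpoints
    # Only consider candidates where t +/- delta stays inside the measured range.
    candidates = steps[(steps - delta >= steps[0]) & (steps + delta <= steps[-1])]
    scores = [(f(t + delta) - f(t)) - (f(t) - f(t - delta)) for t in candidates]
    return candidates[int(np.argmax(scores))]

# Toy example: an abrupt rise between 20K and 25K steps.
ts = np.arange(0, 60_000, 5_000)
ys = np.where(ts < 25_000, 0.1, 0.8) + 0.001 * (ts / 5_000)
print(find_break(ts, ys, delta=5_000))   # -> 20000.0, the onset of the jump
```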
## 3 Models and Data
### Architecture and Training
We pre-train \(\text{BERT}_{\text{Base}}\) models using largely the same training set-up and dataset as Sellam et al. (2022). We use the uncased architecture with 12 layers of 768 dimensions each and train with the AdamW optimizer (Loshchilov and Hutter, 2019) for 1M steps with a learning rate of 1e-4, 10,000 warm-up steps and training batch size of 256 on a single \(4\times\) NVIDIA A100 node. Our results only consider checkpoints that are recorded while pretraining remains numerically stable for all seeds, so we only analyze up to 300K steps.
Our training set-up departs from the original BERT set-up (Devlin et al., 2019) in that we use a fixed sequence length of 512 throughout training, which was shared by Sellam et al. (2022). We also use the same WordPiece-based tokenizer as Devlin et al. (2019) and mask tokens with 15% probability. Unless otherwise stated, all experiments are implemented with the HuggingFace transformers (v4.12.5) (Wolf et al., 2020), Huggingface datasets (v2.7.1) (Lhoest et al., 2021), and Pytorch (v1.11) (Paszke et al., 2019) libraries.
Our pre-training datasets consist of BookCorpus (Zhu et al., 2015) and English Wikipedia (Foundation, 2022). Since we do not have access to the original BERT pre-training dataset, we use a more recent Wikipedia dump from May 2020. For pre-training runs where syntactic regularization is applied, we
use spaCy (Honnibal and Montani, 2017) dependency parses on the Wikipedia portion of the dataset as our silver-standard labels.
### Finetuning and probing
**Fine-tuning on GLUE.** Our fine-tuning set-up for each GLUE task matches that of the original paper (Wang et al., 2018), with initial learning rate 1e-4, batch size of 32, and 3 total epochs.
**Evaluating on BLiMP.** BLiMP (Warstadt et al., 2020) is a benchmark of minimal pairs for evaluating knowledge of various English grammatical phenomena. We evaluate performance using the MLM scoring function from Salazar et al. (2020) to compute the pseudo-log-likelihood of the sentences in each minimal pair, and count the MLM as correct when it assigns a higher value to the acceptable sentence in the pair. Further implementation details are in Appendix D.
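A simplified sketch of this pseudo-log-likelihood scoring, using the HuggingFace API, is shown below: each token is masked in turn, the log-probability of the true token is accumulated, and the acceptable sentence of a minimal pair should receive the higher score. Batching, special-token handling, and the exact checkpoint used in the paper's evaluation are not reproduced here.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log P(token_i | rest) with position i masked, one position at a time."""
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):             # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        logits = mlm(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

good = "The cats annoy Tim."      # acceptable sentence of a minimal pair
bad = "The cats annoys Tim."      # unacceptable counterpart
# Expected to print True for a reasonably trained MLM; not guaranteed for every pair.
print(pseudo_log_likelihood(good) > pseudo_log_likelihood(bad))
```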
**Evaluating SAS dependency parsing.** We measure SAS by evaluating the model's implicit best-head attention parse (Eq. (3), Clark et al., 2019) on a random sample of 1000 documents from the Wall Street Journal portion of the Penn Treebank (Marcus et al., 1999), with silver labels provided by the Stanford Dependencies parser (Schuster and Manning, 2016). We evaluate parse quality using the **Unlabeled Attachment Score** (UAS) computed from the attention map, as described in Eq. (3).
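The following sketch illustrates the UAS computation in a stripped-down form: each word's parent is predicted as the position a given attention head attends to most, and the best single head is reported. The actual probe of Clark et al. (2019) selects a best head per dependency relation; that per-relation aggregation is deliberately simplified away here, and all names are ours.

```python
import numpy as np

def head_uas(attn, gold_heads):
    """UAS of the parse induced by one attention head.

    attn       : [seq_len, seq_len] attention matrix for one head (rows = queries).
    gold_heads : gold_heads[i] is the position of word i's syntactic parent
                 (pass -1 for the root, which is skipped).
    """
    predicted = attn.argmax(axis=-1)                  # most-attended position per word
    gold = np.array(gold_heads)
    keep = gold >= 0
    return float((predicted[keep] == gold[keep]).mean())

def best_head_uas(all_attn, gold_heads):
    """Max UAS over all layers/heads: a simplified stand-in for the
    per-relation best-head probe of Clark et al. (2019)."""
    return max(head_uas(all_attn[l, h], gold_heads)
               for l in range(all_attn.shape[0]) for h in range(all_attn.shape[1]))

# Toy example with random "attention" (illustrative only).
rng = np.random.default_rng(0)
attn = rng.random((12, 12, 8, 8)); attn /= attn.sum(-1, keepdims=True)
print(best_head_uas(attn, gold_heads=[-1, 0, 1, 1, 3, 3, 5, 5]))
```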
## 4 Results
Often, interpretable artifacts are assumed to be essential to model performance. However, evidence for the importance of SAS exists only at the instance level on a single trained model. We know that specialized heads can predict dependencies (Clark et al., 2019) and that pruning them damages performance more than pruning other heads (Voita et al., 2019). However, these results are only weak evidence that SAS is essential for modeling grammar. Passive observation of a trained model may discover artifacts that occur as a side effect of training without any effect on model capabilities. Causal methods that intervene on particular components at test time, meanwhile, may interact with the rest of the model in complex ways, spuriously implying a component to be essential for performance when it could be removed if it were not so entangled with other features. They also only address whether a component is necessary at test time, and not whether that component is necessary during _learning_. Both test-time approaches--passive observations and causal interventions--are limited.
We begin by confirming the assumption that SAS must be essential to performance. To motivate the case for skepticism of the role of SAS, we note a lack of correlation between SAS metrics and model capabilities across random pretraining seeds (Appendix E). After first strengthening the evidence for SAS as a meaningful phenomenon by taking model development into account, we then draw connections to the literature on phase transitions, simplicity bias, and model complexity.
Figure 2: Metrics during \(\text{BERT}_{\text{Base}}\) training averaged, with 95% confidence intervals, across three seeds. Structure (\(\blacktriangle\)) and capabilities (\(\blacklozenge\)) onsets are marked.
### The Syntax Acquisition Phase
Most work on scaling laws (Kaplan et al., 2020) presents test loss as a quantity that homogeneously responds to the scale of training, declining by a power law relative to the size of the corpus. In the MLM setting, we instead identify a precipitous drop in the loss curve of BERTBase (Fig. 1(a)), consistently spanning 20K-30K timesteps of training across various random seeds. We now show how this rapid learning stage can be interpreted as the composition of two distinct phase transitions.
_The MLM loss drop occurs alongside the acquisition of grammatical capabilities in two consecutive stages_, each distinguished by breaks as defined by Eq. (2). The first stage aligns with the formation of SAS--we call this break in implicit parse UAS the **structure onset**. As seen in Fig. 1(b), the UAS spikes at a consistent time during each run, in tandem with abrupt improvements in MLM loss (Fig. 1(a)) and finetuning metrics (Fig. 2(b)). Immediately following the spike, UAS plateaus, but the loss continues to drop precipitously before leveling off. The second part of this loss drop is associated with a break in the observed grammatical capabilities of the model, as measured by accuracy on BLiMP (Fig. 1(c)). We call the BLiMP break the **capabilities onset**. We show similar trajectories on the MultiBERTs (Sellam et al., 2022) reproductions (Appendix F).
By observing these phase transitions, we can see that the _internal_ representation of grammar, in the form of syntactic attention, precipitates the _external_ observation of grammatical behavior, in the form of correct language modeling judgements on linguistically challenging examples. This is not only a single breakthrough during training, but a sequence of breakthroughs that appear to be dependent on each other. We might compare this to the "checkmate in one" BIG-Bench task, a known breakthrough behavior in autoregressive language models (Srivastava et al., 2022). Only at a large scale can models accurately identify checkmate moves, but further exploration revealed that the model was progressing in a linear fashion at offering consistently valid chess moves before that point. The authors posited that the checkmate capability was dependent on the ability to make valid chess moves, and likewise it seems we have found that grammatical capabilities are dependent on a latent representation of syntactic structure in the form of SAS.
We find that the existence of these phase transitions holds even when using continuous metrics (Appendix H), in contrast to Schaeffer et al. (2023), who found that many abrupt improvements in capabilities are due to the choice of thresholded metrics like accuracy. We also find that the phase transitions hold even when setting the x-axis to some continuous alternative to discrete training timesteps, such as weight norm (Appendix G). Thus both x-axis and y-axis may use non-thresholded scales, and the phase transitions remain present.
#### 4.1.1 Complexity and Compression
According to the Information Bottleneck (IB) theory of deep learning (Shwartz-Ziv and Tishby, 2017), the generalization capabilities of Deep Neural Networks (DNNs) can be understood as a specialized form of representation compression. This theory posits that DNNs achieve generalization by selectively discarding noisy and task-irrelevant information from the input, while preserving key features (Shwartz-Ziv, 2022). Subsequent research has provided generalization bounds that support this theory (Shwartz-Ziv et al., 2018; Kawaguchi et al., 2023). Recently, similar principles have been conjectured to explain the capabilities of language models (Chiang, 2023; Cho, 2023; Sutskever, 2023). Current studies on vision tasks distinguish two phases: an initial _memorization_ phase followed by a protracted representation _compression_ phase (Shwartz-Ziv and Tishby, 2017; Ben-Shaul et al., 2023). During memorization, SGD explores the multidimensional space of possible solutions. After interpolating, the system undergoes a phase transition into a diffusion phase, marked by chaotic behavior and a reduced rate of convergence as the network learns to compress information.
To validate this theory in MLM training, we analyze various complexity metrics as proxies for the level of compression (see Fig. 2(a) for TwoNN intrinsic dimension (Facco et al., 2017), and Appendix K.2 for additional complexity and information metrics). Our results largely agree with the IB theory, showing a prevailing trend toward information compression throughout the MLM training process. However, during the acquisition of SAS, a distinct memorization phase emerges. This phase, which begins with the onset of structural complexity, allows the model to expand its capacity for handling new capabilities. A subsequent decline in complexity coincides with the onset of advanced capabilities, thereby confirming the dual-phase nature postulated by the IB theory.
### Controlling SAS
Having established the natural emergence of SAS, we use our syntacticity regularizer (Section 2.2) to evaluate whether SAS is truly necessary for handling complex grammatical phenomena. We confirm that this regularizer can suppress or accelerate the SAS phase (Fig. 3(b)). As seen in Fig. 3(a), _enhancing_ SAS behavior throughout training (BERT\({}_{\text{SAS+}}\)) leads to early improvements in MLM performance, but hurts later model quality.1 Conversely, _suppressing_ SAS (BERT\({}_{\text{SAS-}}\)) damages both early performance and long-term performance. Suppressing SAS during training prevents the emergence of linguistically complex capabilities (Fig. 3(c)). In other words, preventing the internal grammar structure onset will also prevent the external grammar capabilities onset that follows it.
Footnote 1: Note that in BERT\({}_{\text{SAS+}}\), we see the capabilities onset is after the structure onset, but before the SAS plateau, suggesting that SAS only needs to hit some threshold to precipitate the capabilities onset, and does not need to stabilize.
However, there exists an early apparent phase transition in the MLM loss (at around 6K steps), which suggests that an alternative strategy emerges that leads to improvements prior to the structure onset. We therefore refer to this early inflection as the **alternative strategy onset**. Our results suggest that SAS is crucial for effectively representing grammar, but the existence of the alternative strategy onset implies that SAS also competes with other useful traits in the network. We explore the alternative strategy onset represented by the phase transition under SAS suppression in Appendix L.
Importantly, the break in the loss curve occurs earlier in training when suppressing SAS. The implication is profound: that the alternative strategy is competing with SAS, and suppressing SAS permits the model to learn the alternative strategy more effectively and earlier. Inspired by this insight, we next ask whether there can be larger advantages to avoiding the natural SAS-based strategy early in training, thus claiming the benefits of the alternative strategy.
### Early-stage SAS Regularization
Because BERT\({}_{\text{SAS-}}\) briefly outperforms both BERT\({}_{\text{Base}}\) and BERT\({}_{\text{SAS+}}\), we have argued that suppressing SAS implicitly promotes a competing strategy. This notion of competition between features or strategies is well-documented in the literature on simplicity bias (Shah et al., 2020; Arpit et al., 2017; Hermann and Lampinen, 2020; Pezeshki et al., 2021). Achille et al. (2018) find that some patterns must be acquired early in training in order to be learned at all, so avoiding an overly simplistic strategy can have significant long-term consequences on performance. To test the hypothesis that learning SAS early allows SAS to out-compete other beneficial strategies, this section presents experiments that only suppress the early acquisition of SAS. For multistage regularized models, we first suppress SAS with \(\lambda=0.001\) and then set \(\lambda=0\) after a pre-specified timestep in training. These models are
Figure 3: Metrics over the course of training for baseline and SAS-regularized models (under both suppression and promotion of SAS). Structure (\(\blacktriangle\)) and capabilities (\(\blacklozenge\)) onsets are marked, except on BERT\({}_{\text{SAS-}}\), which does not clearly exhibit either onset. Each line is averaged over three random seeds. On y-axis: (a) MLM loss (b) Implicit parse accuracy (c) average BLiMP accuracy.
named after the timestep that SAS is suppressed until, e.g., BERT\({}^{(3k)}_{\text{SAS-}}\) is the model where \(\lambda\) is set to 0 at timestep 3000.
We find that suppressing SAS early on improves the effectiveness of training later. Specifically, BERT\({}^{(3k)}_{\text{SAS-}}\) outperforms BERT\({}_{\text{Base}}\) even well after both models pass their structure and capabilities onsets (Fig. 4(a); Table 1), although these advantages cease to be significant after longer training runs (Appendix O). Some multistage models even have more consistent SAS than BERT\({}_{\text{Base}}\) (Fig. 4(b)). We posit that certain associative patterns are learned more quickly while suppressing SAS, and these patterns not only support overall performance but even provide improved features to acquire SAS.
#### 4.3.1 When can we recover the SAS phase transition?
Inspecting the learning curves of the temporarily suppressed models, we find that briefly suppressing SAS can promote performance (Appendix M) and accelerate the structure onset (Fig. 5(a)) while augmenting it (Fig. 5(b)). However, after more prolonged suppression of SAS, it becomes impossible to hit the dramatic spike in implicit parse UAS that we see in BERT\({}_{\text{Base}}\) (Section 4.3). If the SAS phase transition is prevented, MLM performance falls significantly compared to BERT\({}_{\text{Base}}\) and we see no SAS spike (Appendix M). It appears that we must choose between phase transitions; _the model
\begin{table}
\begin{tabular}{l c c c} \hline \hline & MLM Loss \(\downarrow\) & GLUE average \(\uparrow\) & BLiMP average \(\uparrow\) \\ \hline BERT\({}_{\text{Base}}\) & \(1.77\pm 0.01\) & \(\mathbf{0.74\pm 0.01}\) & \(\mathbf{0.74\pm 0.02}\) \\ BERT\({}_{\text{SAS+}}\) & \(2.39\pm 0.03\) & \(0.59\pm 0.01\) & \(\mathbf{0.74\pm 0.01}\) \\ BERT\({}_{\text{SAS-}}\) & \(2.02\pm 0.01\) & \(0.69\pm 0.02\) & \(0.67\pm 0.03\) \\ BERT\({}^{(3k)}_{\text{SAS-}}\) & \(\mathbf{1.75\pm 0.01}\) & \(\mathbf{0.74\pm 0.00}\) & \(\mathbf{0.75\pm 0.01}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluation metrics, with standard error, after training for 100K steps (\(\sim 13\)B tokens), averaged across three random seeds for each regularizer setting. We selected BERT\({}^{(3k)}_{\text{SAS-}}\) as the best multistage hyperparameter setting based on MLM test loss at 100K steps. Bolded values significantly outperform non-bolded values in the same column under a 1-sided Welch’s \(t\)-test.
Figure 4: Metrics for the checkpoint at 100k steps, for various models with SAS suppressed early in training. The vertical line marks the BERT\({}_{\text{SAS-}}\) alternative strategy onset; note that _model quality is worst when the regularizer is changed during this phase transition_. The x-axis reflects the timestep when regularizer \(\lambda\) is changed from \(0.001\) to \(0\). To control for the length of training time without suppressing SAS, Appendix N presents the same findings measured at a checkpoint exactly 50K timesteps after releasing the regularizer. On y-axis: (a) MLM loss shown with standard error of the mean across batches; (b) Implicit parse accuracy (UAS); (c) GLUE average (Task breakdown in Appendix I); (d) BLiMP average (Task breakdown in Appendix J).
cannot undergo first the alternative strategy onset and then the structure onset_. In fact, we measure the worst model quality when we switch settings _during_ the alternative strategy transition (Fig. 4).
## 5 Discussion and Conclusions
Our work is a response to the limitations of probes that analyze only a single model checkpoint without regard to its training history (Saphra, 2023). We posit that _developmental_ explanations, which incorporate a model's training history, provide critical perspective and explanatory power. We have used this developmental approach to demonstrate the necessity of SAS for grammatical reasoning in MLMs, and have furthermore used SAS as a case study to shed light on ways of circumventing simplicity bias, the dynamics of model complexity, and the dangers of changing optimization strategies during a phase transition. Our work also guides further understanding of many deep learning phenomena, and may inspire a more rigorous approach to science of deep learning as well. Beyond this discussion, an extended literature review with connections to related work on causal interpretability, simplicity bias, and phase transitions is presented in Appendix C.
### Early dynamics and simplicity bias
Sutton (2019) introduced the machine learning world to the _Bitter Lesson_: models that use informed priors based on domain understanding will always lose to generic models trained on large quantities of data. Our work suggests that we might go further: even generically learned structure can form a disadvantageously strong prior, if that structure reflects human expert models of syntax. In other words, human interpretations of natural phenomena are so simplistic that their presence early in training can serve as a negative signal. If this observation holds in natural language--a modality that has evolved specifically to be human interpretable--how much worse might simplicity bias impact performance on other domains like scientific and physical modeling?
**Dependency and competition.** We have found evidence of multiple possible relationships between emergent behaviors. Previous work suggests that model properties can _depend_ on one another, e.g., checkmate-in-one capabilities depend on first learning valid chess moves (Srivastava et al., 2022); or _compete_ with one another, e.g., as sparse and dense representation strategies compete on arithmetic tasks (Merrill et al., 2023). Similarly, we first find evidence of a dependency relationship, based on our evidence that SAS is a prerequisite for many linguistic capabilities as indicated by BLiMP. Then, we identify a competitive relationship, based on our observations that suppressing SAS leads to an alternative strategy that prioritizes context differently. These distinct relationships shed light on how model behaviors interact during training and may suggest training improvements that delay or promote particular behaviors. Existing work in simplicity bias (Shah et al., 2020; Pezeshki et al., 2021) suggests that a preference for simple heuristics might prevent the model from acquiring a more reliable strategy. Our results appear to be evidence of this pitfall in practice.
**Pretraining.** The benefits of early training without permitting SAS bear an intriguing parallel to pretraining. Just as pretraining removes the particulars of the downstream task by training on generic
Figure 5: If SAS is suppressed only briefly, it accelerates and augments the SAS onset. However, further suppression delays and attenuates the spike in UAS, until it eventually ceases to show a clear inflection. A vertical dotted line marks the \(\text{BERT}_{\text{SAS-}}\) alternative strategy onset.
language structure, early SAS suppression removes the particulars of linguistic structure itself. In doing so, we encourage the MLM to treat the entire sequence without regard for proximity to the target word, as a bag-of-words model might. Therefore, the beginning of training is even more unstructured and generic than it would be under the baseline MLM objective.
**Curriculum learning.** We also offer some insights into why curriculum learning is rarely effective at large scales. Simple data is likely to encourage simplistic strategies, so any curriculum that homogenizes the early distribution could promote a simplistic strategy, helping early performance but harming later performance. Predictably, curricula no longer help at large scales (Wu et al., 2021).
### Phase transitions
**Instability at critical points.** Abrupt changes are rarely documented directly at the level of validation loss, but we show that they may be observed--and interpreted--in realistic settings. Smooth improvements in loss may even elide abrupt breakthroughs in specific capabilities, as discussed in Appendix L.1. Our multistage results point to a surprising effect: that the worst time to change the regularization is during a phase transition. When we release the SAS suppression well _before_ the point at which the alternative transition starts during \(\text{BERT}_{\text{SAS-}}\) training (i.e., the alternative strategy onset), we find it is possible to recover the SAS transition, preventing damage to GLUE, BLiMP, and MLM loss metrics. Likewise, although releasing the regularization _after_ the alternative transition prevents the recovery of SAS, it nonetheless incurs limited damage to model quality metrics. However, releasing the regularizer _during_ the phase transition leads to a substantially worse model under every metric. These findings suggest that, far from indicating a typical region of the loss surface, the moment of breakthrough constitutes a critical point where an optimizer misstep can damage the performance of the model, possibly even at convergence. This phenomenon may be consequential for future optimization research.
### Interpretability epistemology
While SAS was already known to emerge naturally in MLMs, there were reasons to be skeptical of its necessity. One objection is that raw attention distribution information is not a guaranteed proxy for information flow (Abnar and Zuidema, 2020; Ethayarajh and Jurafsky, 2021). Another thread questions the interpretability of attention by obfuscating the attention weights without damaging model performance (Jain and Wallace, 2019; Serrano and Smith, 2019). If the fundamentally informative nature of attention is subject to extensive debate (Bibal et al., 2022), we must also be skeptical of overstating its connection to syntax. Attention syntacticity is a microcosm of wider failures in the science of deep learning, which has been criticized for a tendency to use anecdotal observations and post-hoc explanations, rather than statistically rigorous correlational or causal tests (Forde and Paganini, 2019).
Prior evidence for the importance of SAS came in two forms, both of which operate post-hoc at the instance level on specific samples: instance-level observation in fully trained networks (Clark et al., 2019) and instance-level causal experiments in fully trained networks (Voita et al., 2019). Observational studies might discover structures that emerge as a side effect of training, rather than those crucial to the operation of the model. Traits that emerge as a side effect of a process but appear crucial to performance are called _spandrels_ in evolutionary biology; possible examples include human chins (Yong, 2016) and enjoyment of music (Pinker, 1997). While instance-level causal experiments like Voita et al. (2019) may be epistemically stronger than the observational studies, the network's failure to recover from a causal intervention does not indicate that it relies on the structure provided. Instead, the network may be more brittle to large distribution shifts on the relevant features, without truly relying on those features (Tucker et al., 2021). One possible scenario is that a behavior may develop early in training and become _vestigial_ (like a human's tailbone (Mukhopadhyay et al., 2012)) but sufficiently integrated into subnetworks that generate and cancel information that the network cannot easily recover from its removal. To support the skeptical case, we find that SAS metrics were not correlated with MLM capabilities across random seed (Fig. 6).
We provide several epistemically strong results in favor of the importance of SAS. First, we study models in development (Section 4.1), finding that the SAS phase transition directly precipitates the emergence of linguistic capabilities. This result supports that blackbox grammatical capabilities
depend on measurable internal structures. Second, we have causal interventions on development (Section 4.2), which again reveal the importance of this head specialization behavior by promoting and suppressing it. Instance-level interpretability methods, at best, offer evidence that a trait emerges and the model cannot recover from its removal; we can now say that certain capabilities depend on this trait--although the model eventually discovers alternative ways to represent some of them. |
2302.14779 | Twisted Drinfeld Centers and Framed String-Nets | We discuss a string-net construction on 2-framed surfaces, taking as
algebraic input a finite, rigid tensor category, which is neither assumed to be
pivotal nor semi-simple. It is shown that circle categories of our framed
string-net construction essentially compute Drinfeld centers twisted by powers
of the double dual functor. | Hannes Knötzele, Christoph Schweigert, Matthias Traube | 2023-02-28T17:24:32Z | http://arxiv.org/abs/2302.14779v3 | # Twisted Drinfeld Centers and Framed String-Nets
###### Abstract.
We discuss a string-net construction on 2-framed surfaces, taking as algebraic input a finite, rigid tensor category, which is neither assumed to be pivotal nor semi-simple. It is shown that circle categories of our framed string-net construction essentially compute Drinfeld centers twisted by powers of the double dual functor.
###### Contents
* 1 Introduction
* 2 Recollections on Finite Tensor Categories
* 2.1 Rigid Monoidal Categories
* 2.2 (Co-)End in Finite Tensor Categories
* 3 Twisted Drinfeld Centers and Monads
* 3.1 Monadicity of Twisted Drinfeld Centers
* 3.2 Kleisli Category and Representable Functors
* 4 Progressive Graphical Calculus for Finite Tensor Categories
* 5 Framed String-Net Construction
* 5.1 Locally Progressive Graphs
* 5.2 Framed String-Net Spaces
* 6 Circle Categories and Twisted Drinfeld-Centers
* 6.1 2-Framings of the Circle and Framed Cylinders
* 6.2 Circle Categories
* 6.3 Circle Category as a Kleisli Category
## 1. **Introduction**
Over the last few decades, topological field theories have proved to be a very fruitful research area relating concepts from topology, categorical algebra and mathematical physics. A topological field theory (TFT) in \(n\) dimensions with values in a symmetric monoidal category \(\mathcal{C}\) is a symmetric monoidal functor \(\mathcal{F}:\mathsf{Cob}^{n}\to\mathcal{C}\), where \(\mathsf{Cob}^{n}\) denotes a suitable symmetric monoidal category of cobordisms, with closed \((n-1)\)-dimensional manifolds as objects and (classes of) \(n\)-dimensional cobordisms between them as morphisms.
figure 3). In view of the results in [10], we expect that these circle categories are related to Drinfeld centers twisted by powers of the double dual functor. In fact, twisted Drinfeld centers \({}_{F}\mathsf{Z}_{G}(\mathbb{C})\) can be defined for any pair of strong-monoidal functors \(F,G:\mathbb{C}\to\mathbb{C}\): the objects of \({}_{F}\mathsf{Z}_{G}(\mathbb{C})\) are pairs \((c,\gamma_{\bullet,c})\) consisting of an object \(c\in\mathbb{C}\) together with a half-braiding \(\gamma_{c,x}:F(c)\otimes x\xrightarrow{\simeq}x\otimes G(c)\).
To identify the circle category for the cylinder \(\mathsf{C}_{n}\) with a twisted Drinfeld center, we use that the twisted Drinfeld center \({}_{F}\mathsf{Z}_{G}(\mathbb{C})\) is equivalent to the category of modules for the twisted central monad \({}_{F}T_{G}\) on \(\mathbb{C}\). We show in Theorem 6.3 that the string-net construction gives us the Kleisli category \(\mathbb{C}_{T_{n}}\) of a specific monad \(T_{n}\) where the twisting is by a power of the bidual functor (which is monoidal):
\[\mathsf{Cyl}(\mathsf{C}_{n},\mathbb{C})\simeq\mathbb{C}_{T_{n}}. \tag{1.1}\]
In Theorem 6.4 we show that the twisted Drinfeld center itself can be recovered, as a linear category by taking presheaves on the Kleisli category for which the pullback to a presheaf on \(\mathbb{C}\) is representable:
\[\mathrm{PSh}_{\mathbb{C}}(\mathsf{Cyl}_{n})\simeq\mathsf{Z}_{n}(\mathbb{C}) \tag{1.2}\]
where \(\mathsf{Z}_{n}(\mathbb{C})\) is the Drinfeld center twisted by the appropriate power of the double dual functor depending on \(n\), cf. equation (3.4). This allows us to recover twisted Drinfeld centers from framed string-nets. The comparison with [10, Corollary 3.2.3] shows complete coincidence. This provides a way to obtain twisted Drinfeld centers in the spirit of planar algebras [11]; they are closely related to tube algebras which can be formulated as the annular category [11] of a planar algebra.
This paper is organized as follows. In two preliminary sections, we recall in section 2 some facts and notation about finite tensor categories and in section 3 about twisted Drinfeld centers and monads. In this section, we show in particular in Proposition 3.6 how to obtain the Eilenberg-Moore category of a monad in terms of presheaves on the Kleisli category whose pullback is representable. While this statement is known in the literature, in particular in a general context, we include the proof for the benefit of the reader.
In section 4 we recall the graphical calculus of progressive graphs for monoidal categories that has been introduced in [11]. In section 5, we first show in subsection 5.1 how to globalize the graphical calculus from section 4 to 2-framed surfaces. This allows us to define in subsection 5.2 string-net spaces on 2-framed surfaces, see in particular Definition 5.9.
Section 6 is devoted to the study of circle categories: in subsection 6.1 we very briefly discuss framings of cylinders, before we define framed circle categories in section 6.2 and show in Theorem 6.3 that the circle categories are equivalent to Kleisli categories. Finally, Theorem 6.4 in section 6.3 contains the main result (1.2) and the extension to arbitrary framings in Remark 6.5.
**Acknowledgment:** The authors thank Gustavo Jasso, Ying Hong Tham and Yang Yang for useful discussions. CS and MT are supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under SCHW1162/6- 1; CS is also supported by the DFG under Germany's Excellence Strategy - EXC 2121 "Quantum Universe" - 390833306. HK acknowledges support by DFG under 460925688 (in the Emmy-Noether group of Sven Moller).
## 2. **Recollections on Finite Tensor Categories**
In this section, we recall some facts about finite tensor categories and at the same time fix notation. Proofs and more detailed information can be found in e.g. [1, 13, 11].
Throughout this paper, \(\mathbb{K}\) will be an algebraically closed field of characteristic zero. All monoidal categories will be assumed to be strict.
### Rigid Monoidal Categories
An abelian monoidal category \((\mathcal{C},\otimes,1)\) is \(\mathbb{K}\)-_linear_ if it is enriched in \(\mathsf{Vect}_{\mathbb{K}}\) and if \(\otimes:\mathcal{C}\times\mathcal{C}\to\mathcal{C}\) is a bilinear functor. A _linear functor_ between \(\mathbb{K}\)-linear categories is an additive functor, i.e. linear on Hom-spaces. For \(\mathbb{K}\)-linear categories \(\mathcal{A}\), \(\mathcal{B}\), we denote the category of linear functors from \(\mathcal{A}\) to \(\mathcal{B}\) by \(\mathsf{Fun}_{\mathbb{K}}(\mathcal{A},\mathcal{B})\). For a category \(\mathcal{C}\), we denote by \(\mathcal{C}^{op}\) the opposite category, i.e. \(\mathcal{C}^{op}\) has the same objects as \(\mathcal{C}\) and \(\operatorname{Hom}_{\mathcal{C}^{op}}(x,y)=\operatorname{Hom}_{\mathcal{C}}(y,x)\). For a monoidal category \((\mathcal{C},\otimes,1)\), its _opposite monoidal category_ \(\mathcal{C}^{rev}:=(\mathcal{C}^{op},\otimes^{op},1)\) is the opposite category \(\mathcal{C}^{op}\) endowed with the monoidal structure \(x\otimes^{op}y:=y\otimes x\) for \(x,y\in\mathcal{C}^{op}\).
A monoidal category \(\mathcal{C}\) has _left duals_ if for every object \(x\in\mathcal{C}\), there exists an object \({}^{\vee}x\in\mathcal{C}\), called the _left dual object_ of \(x\), together with a _left coevaluation_ \(\operatorname{coev}_{x}:1\to{}^{\vee}x\otimes x\) and a _left evaluation_ \(\operatorname{ev}_{x}:x\otimes{}^{\vee}x\to 1\) satisfying the usual two zig-zag relations. Similarly, \(\mathcal{C}\) has _right duals_ if for \(x\in\mathcal{C}\), there exists an object \(x^{\vee}\in\mathcal{C}\), called the _right dual object_, together with a _right coevaluation_ morphism \(\widetilde{\operatorname{coev}}_{x}:1\to x\otimes x^{\vee}\) and a _right evaluation_ morphism \(\widetilde{\operatorname{ev}}_{x}:x^{\vee}\otimes x\to 1\) satisfying again the appropriate two zig-zag relations. Equivalently, we could have defined a right dual object for \(x\in\mathcal{C}\) to be a left dual object for \(x\) in \(\mathcal{C}^{rev}\). A monoidal category is _rigid_ if it has both left and right duals.
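In formulas (for strict \(\mathcal{C}\)), the zig-zag relations for the left duality read
\[(\operatorname{ev}_{x}\otimes\operatorname{id}_{x})\circ(\operatorname{id}_{x}\otimes\operatorname{coev}_{x})=\operatorname{id}_{x}\,,\qquad(\operatorname{id}_{{}^{\vee}x}\otimes\operatorname{ev}_{x})\circ(\operatorname{coev}_{x}\otimes\operatorname{id}_{{}^{\vee}x})=\operatorname{id}_{{}^{\vee}x}\,,\]
and those for the right duality read
\[(\operatorname{id}_{x}\otimes\widetilde{\operatorname{ev}}_{x})\circ(\widetilde{\operatorname{coev}}_{x}\otimes\operatorname{id}_{x})=\operatorname{id}_{x}\,,\qquad(\widetilde{\operatorname{ev}}_{x}\otimes\operatorname{id}_{x^{\vee}})\circ(\operatorname{id}_{x^{\vee}}\otimes\widetilde{\operatorname{coev}}_{x})=\operatorname{id}_{x^{\vee}}\,.\]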
Left and right duality can be conveniently expressed in terms of strong monoidal functors \(\mathcal{C}^{rev}\to\mathcal{C}\). To be more precise, the _left dual functor_ is defined as
\[\begin{split}{}^{\vee}(\bullet):\mathcal{C}^{rev}&\to\mathcal{C}\\ x&\mapsto{}^{\vee}x\\ \operatorname{Hom}_{\mathcal{C}^{rev}}(x,y)\ni f&\mapsto{}^{\vee}f\in\operatorname{Hom}_{\mathcal{C}}({}^{\vee}x,{}^{\vee}y)\end{split} \tag{2.1}\]
with
\[{}^{\vee}f\coloneqq\left[{}^{\vee}x\xrightarrow{\operatorname{coev}_{y}\otimes\operatorname{id}_{{}^{\vee}x}}{}^{\vee}y\otimes y\otimes{}^{\vee}x\xrightarrow{\operatorname{id}_{{}^{\vee}y}\otimes f\otimes\operatorname{id}_{{}^{\vee}x}}{}^{\vee}y\otimes x\otimes{}^{\vee}x\xrightarrow{\operatorname{id}_{{}^{\vee}y}\otimes\operatorname{ev}_{x}}{}^{\vee}y\right]\quad. \tag{2.2}\]
Analogously, there is a _right duality functor_
\[\begin{split}(\bullet)^{\vee}:\mathcal{C}&\to\mathcal{C}^{rev}\\ x&\mapsto x^{\vee}\\ \operatorname{Hom}_{\mathcal{C}}(x,y)\ni f&\mapsto f^{\vee}\in\operatorname{Hom}_{\mathcal{C}^{rev}}(x^{\vee},y^{\vee}),\end{split} \tag{2.3}\]
where
\[f^{\vee}\coloneqq\left[y^{\vee}\xrightarrow{\operatorname{id}_{y^{\vee}}\otimes\widetilde{\operatorname{coev}}_{x}}y^{\vee}\otimes x\otimes x^{\vee}\xrightarrow{\operatorname{id}_{y^{\vee}}\otimes f\otimes\operatorname{id}_{x^{\vee}}}y^{\vee}\otimes y\otimes x^{\vee}\xrightarrow{\widetilde{\operatorname{ev}}_{y}\otimes\operatorname{id}_{x^{\vee}}}x^{\vee}\right]\quad. \tag{2.4}\]
It is not hard to show that left and right duality functors are indeed strong monoidal functors. The following coherence result allows us to assume that left and right duality functors are strict and the two functors are inverse functors:
**Lemma 2.1**.: _[_15_, Lemma 5.4]_ _For any rigid monoidal category \(\mathcal{C}\), there exists a rigid monoidal category \(\mathcal{D}\) such that_
1. \(\mathcal{C}\) _and_ \(\mathcal{D}\) _are equivalent as monoidal categories._
2. \(\mathcal{D}\) _is a strict monoidal category._
3. \({}^{\vee}(\bullet):\mathcal{D}^{rev}\to\mathcal{D}\) _is a strict monoidal functor._
4. \({}^{\vee}(\bullet)\) _and_ \((\bullet)^{\vee}\) _are inverse functors._
_Remark 2.2_.: We could have defined duality functors also with reversed directions, i.e. the left duality functor as a functor \({}^{\vee}(\bullet):\mathcal{C}\to\mathcal{C}^{rev}\) and the right duality functor \((\bullet)^{\vee}:\mathcal{C}^{rev}\to\mathcal{C}\). From the previous Lemma, we get \({}^{\vee}(x^{\vee})=x\) and \(({}^{\vee}x)^{\vee}=x\) for all \(x\in\mathcal{C}\). The double dual functors \((\bullet)^{\vee\vee}\) and \({}^{\vee\vee}(\bullet)\) are monoidal functors; in general they are _not_ naturally isomorphic to the identity functor as monoidal functors. A pivotal structure amounts to the choice of a monoidal isomorphism \(\operatorname{id}_{\mathcal{C}}\simeq(\bullet)^{\vee\vee}\); in this paper, we do not require the existence of a pivotal structure.
**Definition 2.3**.:
1. A \(\mathbb{K}\)-linear category is _finite_, if it is equivalent to the category \(A-\mathsf{Mod}\) of finite-dimensional modules over a finite-dimensional \(\mathbb{K}\)-algebra \(A\).
2. A _finite tensor category_ is a finite rigid monoidal category.
_Remark 2.4_.:
1. For an equivalent intrinsic characterization of finite linear categories, we refer to [1, section 1.8]. In particular, the morphism spaces of a finite category \(\mathcal{C}\) are finite-dimensional \(\mathbb{K}\)-vector spaces and \(\mathcal{C}\) has a finite set of isomorphism classes of simple objects.
2. A finite tensor category \(\mathcal{C}\) is, in general, neither semi-simple nor pivotal.
A linear functor \(F:\mathcal{C}\to\mathcal{D}\) between \(\mathbb{K}\)-linear categories is not necessarily exact. In case \(\mathcal{C}\) and \(\mathcal{D}\) are finite linear categories, it turns out that being left (right) exact is equivalent to admitting a left (right) adjoint.
**Theorem 2.5**.: _[_11_, Proposition 1.7]_ _A functor \(F:\mathcal{C}\to\mathcal{D}\) between finite linear categories is left (right) exact if and only if it admits a left (right) adjoint._
We note several consequences: by Lemma 2.1 the duality functors are inverses and thus adjoints. Hence both functors are exact. Due to the existence of left and right duals, the tensor product of a finite tensor category is an exact functor in both arguments. Finally, given two finite linear categories \(\mathcal{D},\mathcal{C}\), we denote the category of left exact functors from \(\mathcal{D}\) to \(\mathcal{C}\) by \(\mathsf{Lex}(\mathcal{D},\mathcal{C})\).
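A standard example: for \(x\in\mathcal{C}\), the left exact functor \(\operatorname{Hom}_{\mathcal{C}}(x,\bullet)\) from \(\mathcal{C}\) to the category \(\mathsf{vect}_{\mathbb{K}}\) of finite-dimensional vector spaces admits a left adjoint, namely the copower functor \(V\mapsto V\otimes_{\mathbb{K}}x\), characterized by the natural isomorphism
\[\operatorname{Hom}_{\mathcal{C}}(V\otimes_{\mathbb{K}}x,y)\cong\operatorname{Hom}_{\mathbb{K}}(V,\operatorname{Hom}_{\mathcal{C}}(x,y))\,.\]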
### (Co-)End in Finite Tensor Categories
Coends, monads and their module categories will be crucial for relating circle categories obtained from framed string-nets to twisted Drinfeld centers. In this subsection, we recall necessary definitions and results. Most of the results can be found in [13, Chapter VI and IX.6]. Throughout this section \(\mathcal{C}\) will be a finite tensor category. Some of the results hold in greater generality; we refer to [13, Chapter IX.6 and IX.7].
Let \(\mathcal{A}\) be an abelian \(\mathbb{K}\)-linear category, \(H:\mathcal{C}\times\mathcal{C}^{op}\to\mathcal{A}\) a bilinear bifunctor and \(a\in\mathcal{A}\) an object of \(\mathcal{A}\). A _dinatural transformation from \(H\) to \(a\)_ consists of a family of maps \(\{\psi_{c}:H(c,c)\to a\}_{c\in\mathcal{C}}\), such that \(\psi_{d}\circ H(f,\mathrm{id}_{d})=\psi_{c}\circ H(\mathrm{id}_{c},f)\) for all \(f\in\mathrm{Hom}_{\mathcal{C}}(c,d)\).
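As a familiar illustration: take \(\mathcal{C}=\mathcal{A}=\mathsf{vect}_{\mathbb{K}}\) and the bifunctor \(H(c,d)=d^{*}\otimes c\), where \(d^{*}\) denotes the dual vector space. The evaluation maps \(\psi_{c}:c^{*}\otimes c\to\mathbb{K}\) form a dinatural transformation from \(H\) to \(\mathbb{K}\): for \(f:c\to d\), both \(\psi_{d}\circ H(f,\operatorname{id}_{d})\) and \(\psi_{c}\circ H(\operatorname{id}_{c},f)\) send \(\xi\otimes v\in d^{*}\otimes c\) to \(\xi(f(v))\).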
**Definition 2.6**.: The _coend of \(H\)_ is an object \(\int^{c\in\mathcal{C}}H(c,c)\), together with a universal dinatural transformation \(\left\{\iota_{c}:H(c,c)\to\int^{c\in\mathcal{C}}H(c,c)\right\}\). This means that for any dinatural
transformation \(\{\psi_{c}:H(c,c)\to a\}\), there exists a unique morphism \(\tau\in\operatorname{Hom}_{\mathcal{A}}(\int^{c\in\mathcal{C}}H(c,c),a)\), such that the following diagram commutes
for all \((c,d)\in\mathfrak{C}\times\mathfrak{C}^{op}\) and \(f:c\to d\).
**Lemma 2.7**.: _[_1_, Corollary 5.1.8]_ _If \(H:\mathcal{C}\times\mathcal{C}^{op}\to\mathcal{A}\) is a bilinear functor exact in both arguments, the coend \(\int^{c\in\mathcal{C}}H(c,c)\) exists._
**Definition 2.8**.: _[_1_]_ _Let \(\mathcal{D}\), \(\mathcal{C}\) be finite tensor categories and \(\mathcal{A}\) a \(\mathbb{K}\)-linear category. Assume that the functor \(H:\mathcal{D}\times\mathcal{C}\times\mathcal{C}^{op}\to\mathcal{A}\) is left exact in each argument. The left exact coend of \(H\) is an object \(\oint^{c\in\mathcal{C}}H(\bullet;c,c)\) in the category \(\mathsf{Lex}(\mathcal{D},\mathcal{A})\) of left exact functors, together with a universal dinatural transformation \(\{\iota_{c}:H(\bullet;c,c)\to\oint^{c\in\mathcal{C}}H(\bullet;c,c)\}\) consisting of morphisms in \(\mathsf{Lex}(\mathcal{D},\mathcal{A})\)._
## 3. **Twisted Drinfeld Centers and Monads**
In this section, we introduce twisted Drinfeld centers of monoidal categories and review their description as Eilenberg-Moore categories over monads. String-net constructions do not directly yield Eilenberg-Moore categories; hence we develop an explicit construction of the Eilenberg-Moore category of a monad from its Kleisli category.
### Monadicity of Twisted Drinfeld Centers
As before, \(\mathcal{C}\) denotes in this section a finite tensor category.
The _Drinfeld center_ \(\mathsf{Z}(\mathcal{C})\) of a monoidal category \(\mathcal{C}\) is a categorification of the notion of the center of an algebra. It has as objects pairs \((x,\gamma_{\bullet,x})\) consisting of an object \(x\in\mathcal{C}\) and a natural isomorphism \(\gamma_{\bullet,x}:\bullet\otimes x\xrightarrow{\simeq}x\otimes\bullet\), called the _half-braiding_, such that the identity
\[\gamma_{c\otimes d,x}=(\gamma_{c,x}\otimes\operatorname{id}_{d})\circ( \operatorname{id}_{c}\otimes\gamma_{d,x})\]
holds for all \(c,d\in\mathcal{C}\). The following generalization is well-known:
**Definition 3.1**.: Let \(F,G:\mathcal{C}\to\mathcal{C}\) be strict \(\mathbb{K}\)-linear monoidal endofunctors. The _twisted Drinfeld center_ \({}_{F}\mathsf{Z}_{G}(\mathcal{C})\) is the following category:
* _Objects_ are pairs \((x,\gamma_{\bullet,x})\), where (3.1) \[\gamma_{\bullet,x}:F(\bullet)\otimes x\xrightarrow{\simeq}x\otimes G(\bullet)\] is a natural isomorphism satisfying (3.2) \[\gamma_{c\otimes d,x}=(\gamma_{c,x}\otimes\operatorname{id}_{G(d)})\circ( \operatorname{id}_{F(c)}\otimes\gamma_{d,x})\] for all \(c,d\in\mathcal{C}\).
* A _morphism_\(f:(x,\gamma_{\bullet,x})\to(y,\gamma_{\bullet,y})\) is a morphism \(f\in\operatorname{Hom}_{\mathcal{C}}(x,y)\) such that (3.3) \[\left[F(c)\otimes x\xrightarrow{\gamma_{c,x}}x\otimes G(c)\xrightarrow{f \otimes\operatorname{id}}y\otimes G(c)\right]=\left[F(c)\otimes x \xrightarrow{\operatorname{id}\otimes f}F(c)\otimes y\xrightarrow{\gamma_{c,y }}y\otimes G(c)\right]\.\]
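For orientation: if \(\mathcal{C}\) carries a braiding \(\sigma\), then every object \(x\in\mathcal{C}\) gives an object \((x,\sigma_{\bullet,x})\) of the untwisted center \({}_{\operatorname{id}}\mathsf{Z}_{\operatorname{id}}(\mathcal{C})=\mathsf{Z}(\mathcal{C})\); the hexagon axiom
\[\sigma_{c\otimes d,x}=(\sigma_{c,x}\otimes\operatorname{id}_{d})\circ(\operatorname{id}_{c}\otimes\sigma_{d,x})\]
is precisely condition (3.2) for \(F=G=\operatorname{id}_{\mathcal{C}}\).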
The monoidal functors we will be interested in are powers of the double duals. Specifically, we consider the following cases
\[\mathsf{Z}_{n}(\mathcal{C})\coloneqq\begin{cases}{}_{({}^{\vee\vee}(\bullet))^{n-1}}\mathsf{Z}_{\operatorname{id}_{\mathcal{C}}}(\mathcal{C}),&n\in\mathbb{Z}_{>0},\\ {}_{(\bullet)^{\vee\vee}}\mathsf{Z}_{\operatorname{id}_{\mathcal{C}}}(\mathcal{C}),&n=0,\\ {}_{(\bullet)^{\vee\vee}}\mathsf{Z}_{({}^{\vee\vee}(\bullet))^{-n}}(\mathcal{C}),&n\in\mathbb{Z}_{<0},\end{cases} \tag{3.4}\]
which include for \(n=1\) the usual Drinfeld center \(\mathsf{Z}(\mathcal{C})\). The category \({}_{(\bullet)^{\vee\vee}}\mathsf{Z}_{\operatorname{id}_{\mathcal{C}}}(\mathcal{C})\) obtained for \(n=0\) is known as the _trace_ of \(\mathcal{C}\), see e.g. [10, Definition 3.1.4].
These categories can be described in terms of monads on \(\mathcal{C}\).
A _monad_ on a category \(\mathcal{C}\) is a triple \((T,\mu,\eta)\) consisting of an endofunctor \(T:\mathcal{C}\to\mathcal{C}\) and natural transformations \(\mu:T^{2}\to T\), \(\eta:\operatorname{id}_{\mathcal{C}}\Rightarrow T\) such that the diagrams
commute for all \(c\in\mathcal{C}\). A _module_ for the monad \((T,\mu,\eta)\) is a pair \((d,\rho)\), consisting of an object \(d\in\mathcal{C}\) and a morphism \(\rho:Td\to d\) such that the diagrams
commute. A _morphism between two \(T\)-modules_\((d_{1},\rho)\), \((d_{2},\lambda)\) is a morphism \(f\in\operatorname{Hom}_{\mathcal{C}}(d_{1},d_{2})\) such that the diagram
commutes.
We denote the category of \(T\)-modules or Eilenberg-Moore category by \(T-\mathsf{Mod}\) or \(\mathcal{C}^{T}\). It comes with a forgetful functor \(U^{T}\) to \(\mathcal{C}\).
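Spelled out as equations, the commutativity of the diagrams above amounts to
\[\mu_{c}\circ T(\mu_{c})=\mu_{c}\circ\mu_{Tc}\,,\qquad\mu_{c}\circ T(\eta_{c})=\operatorname{id}_{Tc}=\mu_{c}\circ\eta_{Tc}\]
for the monad \((T,\mu,\eta)\), to
\[\rho\circ T(\rho)=\rho\circ\mu_{d}\,,\qquad\rho\circ\eta_{d}=\operatorname{id}_{d}\]
for a \(T\)-module \((d,\rho)\), and to \(f\circ\rho=\lambda\circ T(f)\) for a morphism of \(T\)-modules \(f:(d_{1},\rho)\to(d_{2},\lambda)\).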
Given two exact \(\mathbb{K}\)-linear strict monoidal endofunctors \(F,G\) of a finite tensor category \(\mathcal{C}\), the functor
\[\begin{split}Q:\mathcal{C}\times\mathcal{C}^{op}&\to\mathsf{Fun}(\mathcal{C},\mathcal{C})\\ (c,d)&\mapsto F(c)\otimes\bullet\otimes G({}^{\vee}d)\end{split} \tag{3.5}\]
is exact in both arguments, so that its coend exists. The resulting endofunctor
\[{}_{F}T_{G}\coloneqq\int^{c\in\mathcal{C}}F(c)\otimes\bullet\otimes G({}^{\vee}c)\]
carries a canonical structure of a monad on \(\mathcal{C}\), the _twisted central monad_. The twisted Drinfeld center is monadic over \(\mathcal{C}\):
**Proposition 3.2**.: _There is an equivalence of categories \({}_{F}T_{G}-\mathsf{Mod}\simeq{}_{F}\mathsf{Z}_{G}(\mathcal{C})\) compatible with the forgetful functors to \(\mathcal{C}\)._
In section 6.3, we need the following result.
**Lemma 3.5**.: _Let \(\mathcal{C}\) be a finite tensor category and \(F,G\in\mathsf{Fun}(\mathcal{C},\mathcal{C})\) be exact strict monoidal endofunctors. Let_
\[Q:\mathcal{C}\times\mathcal{C}^{op} \to\mathsf{Lex}(\mathcal{C}\times\mathcal{C}^{op},\mathsf{Vect}_{ \mathbb{K}})\] \[(c,d) \mapsto\operatorname{Hom}_{\mathcal{C}}(\,(\bullet)\,,F(c) \otimes(\bullet)\otimes G(^{\vee}d)). \tag{3.10}\]
_Then the left exact coend \(\oint^{c\in\mathcal{C}}Q(c,c)\) exists and there is an isomorphism_
\[\oint^{c\in\mathcal{C}}Q(c,c)(\bullet,\bullet)\simeq\operatorname{Hom}_{ \mathcal{C}}(\,(\bullet)\,,_{F}T_{G}(\bullet)). \tag{3.11}\]
Proof.: Since \({}_{F}T_{G}\) is an exact functor, \(\operatorname{Hom}_{\mathcal{C}}(\,(\bullet)\,,_{F}T_{G}(\bullet)):\mathcal{C }\times\mathcal{C}^{op}\to\mathsf{Vect}_{\mathbb{K}}\) is left exact. Therefore it suffices to show that \(\operatorname{Hom}_{\mathcal{C}}(\,(\bullet)\,,_{F}T_{G}(\bullet))\) has the universal property of the left exact coend. This can be proven along the lines of [13, Proposition 9]. Adapting the proof given there to the current situation is not hard and is left as an exercise to the reader.
### Kleisli Category and Representable Functors
The string-net construction will not directly give the twisted center \(\mathsf{Z}_{n}(\mathcal{C})\). Hence we recall that given any monad \((T,\mu,\eta)\), there are several adjunctions giving rise to the same monad. In this subsection, we review this theory for a general monad \(T\) which is not necessarily a twisted central monad; for a textbook account, we refer to [10, Chapter 5].
* As discussed in subsection 3.1, the category of \(T\)-modules \(\mathcal{C}^{T}\) has as objects pairs \((c,\rho)\) with \(c\in\mathcal{C}\) and \(\rho:Tc\to c\) a morphism in \(\mathcal{C}\). The forgetful functor \(U^{T}:\mathcal{C}^{T}\to\mathcal{C}\) assigns to a \(T\)-module \((c,\rho)\) the underlying object \(c\in\mathcal{C}\). Its left adjoint \(I^{T}:\mathcal{C}\to\mathcal{C}^{T}\) assigns to \(c\in\mathcal{C}\) the free module \(Tc\) with action \(\mu_{c}:T^{2}(c)\to Tc\). The monad \(U^{T}\circ I^{T}\) induced on \(\mathcal{C}\) by the adjunction \(I^{T}\dashv U^{T}\) is again \(T\).
* The _Kleisli category_ \(\mathcal{C}_{T}\) has as objects the objects of \(\mathcal{C}\); whenever an object \(c\in\mathcal{C}\) is seen as an object of the Kleisli category \(\mathcal{C}_{T}\), it will be denoted by \(\overline{c}\). The Hom-spaces of the Kleisli category are \(\operatorname{Hom}_{\mathcal{C}_{T}}(\overline{c},\overline{d})\coloneqq\operatorname{Hom}_{\mathcal{C}}(c,Td)\), for all \(c,d\in\mathcal{C}\). A morphism in \(\mathcal{C}_{T}\) from \(\overline{c}\) to \(\overline{d}\) will be denoted by \(\overline{c}\rightsquigarrow\overline{d}\). The composition of morphisms in the Kleisli category \(\mathcal{C}_{T}\) is (3.12) \[g\circ_{\mathcal{C}_{T}}f:=\mu_{c_{3}}\circ_{\mathcal{C}}T(g)\circ_{\mathcal{C}}f\] for \(g:\overline{c}_{2}\rightsquigarrow\overline{c}_{3}\) and \(f:\overline{c}_{1}\rightsquigarrow\overline{c}_{2}\). The identity morphism \(\overline{c}\rightsquigarrow\overline{c}\) in \(\mathcal{C}_{T}\) is, as a morphism in \(\mathcal{C}\), the component \(\eta_{c}:c\to Tc\) of the unit of \(T\). Define a functor \(I_{T}:\mathcal{C}\to\mathcal{C}_{T}\) which is the identity on objects and sends a morphism \(c_{1}\overset{f}{\to}c_{2}\) in \(\mathcal{C}\) to the morphism \(\overline{c}_{1}\rightsquigarrow\overline{c}_{2}\) given by the morphism \[I_{T}(f):\quad c_{1}\overset{f}{\to}c_{2}\overset{\eta_{c_{2}}}{\to}Tc_{2}\] in \(\mathcal{C}\). Define also a functor \(U_{T}:\mathcal{C}_{T}\to\mathcal{C}\) sending \(\overline{c}\in\mathcal{C}_{T}\) to \(Tc\in\mathcal{C}\) and a morphism \(\overline{h}:\ \overline{c}\rightsquigarrow\overline{d}\) represented by the morphism \(h:c\to Td\) in \(\mathcal{C}\) to \[U_{T}(\overline{h}):\quad Tc\overset{T(h)}{\to}T^{2}(d)\overset{\mu_{d}}{\to}Td\.\] By [10, Lemma 5.2.11], this gives a pair of adjoint functors, \(I_{T}\dashv U_{T}\), and the adjunction realizes again the monad \(T\) on \(\mathcal{C}\), i.e. \(U_{T}\circ I_{T}=T\). An illustration of the composition rule (3.12) in code is given after this list.
* It is also known [11, Proposition 5.2.12] that the Kleisli category \(\mathcal{C}_{T}\) is initial and that the Eilenberg-Moore category \(\mathcal{C}^{T}\) is final in the category of adjunctions realizing the monad \(T\) on \(\mathcal{C}\). Put differently, for any adjunction \(\mathcal{D}\xrightarrow{U}\mathcal{C}\) and \(\mathcal{C}\xrightarrow{I}\mathcal{D}\) with \(I\dashv U\) and \(U\circ I=T\), there are unique comparison functors \(K_{\mathcal{D}}:\mathcal{C}_{T}\to\mathcal{D}\) and \(K^{\mathcal{D}}:\mathcal{D}\to\mathcal{C}^{T}\) such that the diagram commutes.
* An adjunction \(I\dashv U\) that induces the monad \(T=U\circ I\) on \(\mathcal{C}\) is called _monadic_, if the comparison functor \(K^{\mathcal{D}}\) to the Eilenberg-Moore category \(\mathcal{C}^{T}\) is an equivalence of categories.
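For readers who know functional programming, the Kleisli category is the familiar pattern of monadic functions. The following Haskell sketch is purely illustrative (the names are ad hoc, and the linear and enriched structure of \(\mathcal{C}\) is of course ignored); it mirrors the composition rule (3.12) and the functor \(I_{T}\).

```haskell
import Control.Monad (join)

-- A Kleisli morphism c ~> d for a monad t is a plain function c -> t d.
-- Composition mirrors equation (3.12):  composeK g f = mu . T(g) . f,
-- where mu is 'join' and T(g) is 'fmap g'.
composeK :: Monad t => (c2 -> t c3) -> (c1 -> t c2) -> (c1 -> t c3)
composeK g f = join . fmap g . f      -- equivalently: \x -> f x >>= g

-- The identity morphism on c is the component eta_c of the unit, i.e. 'return'.
idK :: Monad t => c -> t c
idK = return

-- The functor I_T : C -> C_T sends f : c1 -> c2 to eta_{c2} . f.
inducedK :: Monad t => (c1 -> c2) -> (c1 -> t c2)
inducedK f = idK . f
```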
From the string-net construction, we will recover in Theorem 6.3 the Kleisli category of the twisted central monads as circle categories. If \(\mathcal{C}\) is semi-simple, the twisted Drinfeld center can then be recovered as a Karoubification [10] or as presheaves [11]. For non-semi-simple categories, this does not suffice. It is instructive to understand how to explicitly recover the Eilenberg-Moore category from the Kleisli category.
Recall that all categories are linear and all functors are linear functors. Denote by \(\mathcal{D}:=\operatorname{Psh}_{I_{T}}(\mathcal{C}_{T})\) the category of functors \(F:\mathcal{C}_{T}^{opp}\to\mathsf{Vect}\) such that the pullback by \(I_{T}\)
\[F\circ I_{T}^{opp}:\ \mathcal{C}^{opp}\xrightarrow{I_{T}^{opp}}\mathcal{C}_{T}^ {opp}\xrightarrow{F}\mathsf{Vect}\]
is representable by some object \(c_{F}\in\mathcal{C}\). We then say that \(F\in\mathcal{D}\) is an \(I_{T}\)-representable presheaf on the Kleisli category \(\mathcal{C}_{T}\). In this way, we obtain a functor \(U:\mathcal{D}\to\mathcal{C}\) sending the presheaf \(F\) to the \(I_{T}\)-representing object \(c_{F}\in\mathcal{C}\).
We construct its left adjoint: For \(c\in\mathcal{C}\), consider the functor \(\operatorname{Hom}_{\mathcal{C}_{T}}(-,I_{T}c):\mathcal{C}_{T}^{opp}\to\mathsf{Vect}\). The pullback of this functor along \(I_{T}\) is representable, as follows from the isomorphisms
\[\operatorname{Hom}_{\mathcal{C}_{T}}(I_{T}-,I_{T}c)\cong\operatorname{Hom}_{ \mathcal{C}}(-,U_{T}I_{T}c)\cong\operatorname{Hom}_{\mathcal{C}}(-,Tc)\.\]
Note that the \(I_{T}\)-representing object of \(\operatorname{Hom}_{\mathcal{C}_{T}}(-,I_{T}c)\) is \(Tc\in\mathcal{C}\). We thus obtain a functor
\[\begin{array}{rcl}I:\ \mathcal{C}&\to&\mathcal{D}\\ c&\mapsto&\operatorname{Hom}_{\mathcal{C}_{T}}(-,I_{T}c)\end{array}\]
We have already seen that \(U\circ I=T\). It remains to see that the functors \(I\) and \(U\) are adjoint,
\[\operatorname{Hom}_{\mathcal{D}}(Ic,F)\cong\operatorname{Hom}_{\mathcal{C}}(c,U(F))\,\]
where \(F\in\mathcal{D}\) is assumed to be \(I_{T}\)-representable by \(c_{F}\in\mathcal{C}\). Hence the right hand side is naturally isomorphic to \(\operatorname{Hom}_{\mathcal{C}}(c,c_{F})\). For the left hand side, we compute
\[\begin{array}{rcl}\operatorname{Hom}_{\mathcal{D}}(Ic,F)&=&\mathsf{Nat}( \operatorname{Hom}_{\mathcal{C}_{T}}(-,I_{T}c),F)\cong F(I_{T}c)\\ &=&\operatorname{Hom}_{\mathcal{C}}(c,c_{F})\end{array}\]
where in the first line we used the Yoneda lemma and in the second line that \(F\circ I_{T}\) is represented by \(c_{F}\in\mathcal{C}\).
We are now ready for the main result of this subsection:
**Proposition 3.6**.: _The adjunction \(I\dashv U\) with \(U:\operatorname{Psh}_{I_{T}}(\mathcal{C}_{T})\to\mathcal{C}\) and \(I:\mathcal{C}\to\operatorname{Psh}_{I_{T}}(\mathcal{C}_{T})\) is monadic. As a consequence, the comparison functor \(K:\operatorname{Psh}_{I_{T}}(\mathcal{C}_{T})\to\mathcal{C}^{T}\) is an equivalence of categories and the Eilenberg-Moore category can be identified with the category of \(I_{T}\)-representable presheaves on the Kleisli category \(\mathcal{C}_{T}\)._
In [10] Proposition 3.6 is proven in a more general setting, using bicategorical methods. The statement of Proposition 3.6 appears as a comment in [14, Exercise 5.2.vii]. For the convenience of the reader, we give an explicit proof, using the monadicity theorem [14, Theorem 5.5.1].
Proof.: Recall the short hand \(\mathcal{D}:=\operatorname{Psh}_{I_{T}}(\mathcal{C}_{T})\). We have to show that \(U:\mathcal{D}\to\mathcal{C}\) creates coequalizers of \(U\)-split pairs. Thus, consider for two \(I_{T}\)-representable functors \(F_{1},F_{2}\in\mathcal{D}\) a parallel pair
of natural transformations and assume that for \(c_{i}:=U(F_{i})\in\mathcal{C}\) and \(n_{i}:=U(\nu_{i})\) for \(i=1,2\) there is a split coequalizer in \(\mathcal{C}\) for the parallel pair \(n_{1},n_{2}\):
(3.13)
We have to find a coequalizer \(\operatorname{coeq}(\nu_{1},\nu_{2}):F_{2}\to F_{3}\) in \(\mathcal{D}\) such that \(U(F_{3})=c_{3}\) and the coequalizer is mapped by \(U\) to \(h\). The functors are linear, and the natural transformations between them form vector spaces; hence we can consider the natural transformation \(\nu:=\nu_{1}-\nu_{2}:F_{1}\to F_{2}\) and determine its cokernel in \(\mathcal{D}\). We also introduce the notation \(n:=n_{1}-n_{2}:c_{1}\to c_{2}\).
We start by defining a functor \(F_{3}:\mathcal{C}_{T}^{opp}\to\mathsf{Vect}\) on an object \(\overline{\gamma}\in\mathcal{C}_{T}^{opp}\) as the cokernel of the components of \(\nu\) in the category of vector spaces, so that we have for each \(\overline{\gamma}\in\mathcal{C}_{T}^{opp}\) an exact sequence
in vector spaces. To define the functor \(F_{3}\) on a morphism \(\overline{\gamma}_{1}\stackrel{{ f}}{{\to}}\overline{\gamma}_{2}\) in \(\mathcal{C}_{T}^{opp}\), consider the diagram
which has, by definition, exact rows. The left square commutes because of the naturality of \(\nu\). A standard diagram chase shows that there exists a unique linear map for the dashed arrow which we denote by \(F_{3}(f)\). This completes \(F_{3}\) to a functor
\(\mathcal{C}_{T}^{opp}\to\mathsf{Vect}\) and shows that the components \((q_{\overline{\gamma}})_{\overline{\gamma}\in\mathcal{C}_{T}^{opp}}\) assemble into a natural transformation \(q:F_{2}\to F_{3}\).
We have to show that the functor \(F_{3}\) is \(I_{T}\)-representable and indeed represented by the object \(c_{3}\) appearing in the split coequalizer (3.13). To this end, consider the two pullbacks
\[\tilde{F}_{i}:=F_{i}\circ I_{T}^{opp}:\;\;\mathcal{C}^{opp}\xrightarrow{I_{T}^{opp}}\mathcal{C}_{T}^{opp}\xrightarrow{F_{i}}\mathsf{Vect}\]
which come with isomorphisms
\[\phi_{i}:\quad\tilde{F}_{i}\xrightarrow{\simeq}\operatorname{Hom}_{\mathcal{C}}(-,c_{i})\]
of functors for \(i=1,2\). For each \(\gamma\in\mathfrak{C}\), we get a commuting diagram
(3.14)
The upper row is exact by construction. The lower row is exact, since \(c_{3}\) was part of a split coequalizer in \(\mathfrak{C}\) and split coequalizers are preserved by all functors. Again, a diagram chase implies the existence of a morphism \((\phi_{3})_{\gamma}:\tilde{F}_{3}(\gamma)\to\operatorname{Hom}_{\mathfrak{C}} (\gamma,c_{3})\) for the dashed arrow which by the nine lemma is an isomorphism.
To show naturality of the morphisms \((\phi_{3})_{\gamma}\), we take a morphism \(\gamma_{1}\xrightarrow{f}\gamma_{2}\) in \(\mathcal{C}^{opp}\) and consider the diagram which consists of two adjacent cubes and four more arrows:
To keep the diagram tidy, we do not provide all labels of the arrows and explain them here: diagonal arrows are labelled by applying the appropriate functor to \(f:\gamma_{1}\to\gamma_{2}\). Vertical arrows are isomorphisms labelled by \(\phi_{i}\). The front and rear squares of the two cubes are just instances of the commuting diagram (3.14) and thus commute. The squares on the top commute because \(\nu\) and \(q\) are natural; similarly, the squares on the bottom commute because \(n_{*}\) and \(h_{*}\) are natural. The left and middle diagonal walls commute because \(\phi_{1}\) and \(\phi_{2}\) are natural. A diagram chase now yields that the rightmost wall commutes as well, which is the naturality of \(\phi_{3}\).
## 4. **Progressive Graphical Calculus for Finite Tensor Categories**
It is standard to introduce a graphical calculus for computations in (strict) finite tensor categories. Following [1], morphisms in a (strict) finite tensor category \(\mathfrak{C}\) can be represented by so-called _progressive graphs_ on a standard rectangle in the \(x-y\)-plane.
A _graph_ is a \(1\)-dimensional, finite CW-complex \(\Gamma\) with a finite, closed subset \(\Gamma_{0}\subset\Gamma\), such that \(\Gamma-\Gamma_{0}\) is a \(1\)-dimensional smooth manifold without boundary. Elements of \(\Gamma_{0}\) are called _nodes_ of the graph. A node \(b\) is a _boundary node_, if for any connected open neighborhood \(b\in U\subset\Gamma\), \(U-\{b\}\) is still connected. The collection of boundary nodes is called the _boundary of \(\Gamma\)_ and is denoted by \(\partial\Gamma\). An _edge_ is a connected component \(e\subset\Gamma-\Gamma_{0}\) homeomorphic to the interval \((0,1)\). By adjoining its endpoints to \(e\), we get a closed edge \(\hat{e}\). An _oriented edge_ is an edge with an orientation. For an oriented edge \(\hat{e}\) we admit only homeomorphisms \(\hat{e}\simeq[0,1]\) preserving orientations. The endpoints of \(\hat{e}\) are then linearly ordered: the preimage of \(0\) in \(\hat{e}\), denoted by \(\hat{e}(0)\), is the source and the preimage \(\hat{e}(1)\) of \(1\) is the target. A graph where every edge is endowed with an orientation is called an oriented graph. For an oriented graph, an edge \(e\) adjacent to a node \(v\) is _incoming at \(v\)_, if \(v\) is the target of \(e\), and _outgoing_, if \(v\) is the source of \(e\). This gives two, not necessarily disjoint, subsets in\((v)\) and out\((v)\) of incoming and outgoing edges at \(v\). An oriented graph \(\Gamma\) is _polarized_, if for any \(v\in\Gamma_{0}\), in\((v)\) and out\((v)\) are linearly ordered sets.
**Definition 4.1**.: Let \((\Gamma,\Gamma_{0},\partial\Gamma)\) be a polarized graph and \((\mathcal{C},\otimes,1)\) a monoidal category. A _\(\mathcal{C}\)-coloring_ of \(\Gamma\) comprises two functions
\[\varphi_{0}:\Gamma-\Gamma_{0}\to\operatorname{ob}(\mathcal{C}),\qquad\varphi_{ 1}:\Gamma_{0}-\partial\Gamma\to\operatorname{mor}(\mathcal{C}) \tag{4.1}\]
associating to any oriented edge of \(\Gamma\) an object of \(\mathcal{C}\) and to any inner node \(v\in\Gamma_{0}-\partial\Gamma\) a morphism in \(\mathcal{C}\), with
\[\varphi_{1}(v):\varphi_{0}(e_{1})\otimes\cdots\otimes\varphi_{0}(e_{n})\to \varphi_{0}(f_{1})\otimes\cdots\otimes\varphi_{0}(f_{m}), \tag{4.2}\]
where \(e_{1}<\cdots<e_{n}\) and \(f_{1}<\cdots<f_{m}\) are the ordered elements of in\((v)\) and out\((v)\), respectively.
**Definition 4.2**.: A _planar_ graph is a graph \((\Gamma,\Gamma_{0},\partial\Gamma)\) together with a smooth embedding \(\iota:\Gamma\to\mathbb{R}^{2}\).
For a planar graph, we will not distinguish in our notation between the abstract graph \(\Gamma\) and its embedding \(\iota(\Gamma)\). Note that a graph has infinitely many realizations as a planar graph, by choosing different embeddings.
**Definition 4.3**.: Let \(a,b\in\mathbb{R}\) with \(a<b\). A _progressive graph_ in \(\mathbb{R}\times[a,b]\) is a planar graph \(\Gamma\subset\mathbb{R}\times[a,b]\) such that
1. All outer nodes are either on \(\mathbb{R}\times\{a\}\) or \(\mathbb{R}\times\{b\}\), i.e. (4.3) \[\partial\Gamma=\Gamma\cap(\mathbb{R}\times\{a,b\})\quad.\]
2. The restriction of the projection to the second component (4.4) \[\operatorname{pr}_{2}:\mathbb{R}\times[a,b]\to[a,b]\] to any connected component of \(\Gamma-\Gamma_{0}\) is an injective map.
_Remark 4.4_.: Using the injective projection to the second component, every progressive graph is oriented. In addition, it is also polarized. For any \(v\in\Gamma_{0}\), we can pick \(u\in[a,\operatorname{pr}_{2}(v))\), such that any element of in\((v)\) intersects \(\mathbb{R}\times\{u\}\). Since the graph is progressive, the intersection points are unique. The intersection points of in\((v)\) with \(\mathbb{R}\times\{u\}\) are linearly ordered by the orientation of \(\mathbb{R}\) and induce a linear order on in\((v)\). Similarly, one defines a linear order on out\((v)\) using the intersection with \(\mathbb{R}\times\{w\}\), for \(w\in(\operatorname{pr}_{2}(v),b]\).
_Remark 4.5_.: A progressive graph cannot have cups, caps or circles, since the restriction of \(\operatorname{pr}_{2}\) to these would be non-injective. This mirrors the fact that in a general non-pivotal category left and right duals for an object are not isomorphic and that there are no categorical traces. Thus we should not represent (co-)evaluation morphisms simply by oriented cups and caps, but use explicitly labelled coupons. In addition, in the absence of a categorical trace, we cannot make sense of a circle-shaped diagram.
Since a progressive graph \(\Gamma\) is always polarized, we have a notion of a \(\mathcal{C}\)-coloring for it, where \(\mathcal{C}\) is a monoidal category. Given a \(\mathcal{C}\)-coloring \(\varphi\coloneqq(\varphi_{0},\varphi_{1})\) of \(\Gamma\), we associate to every boundary node \(v\in\partial\Gamma\) the object in \(\mathcal{C}\) of its adjacent edge. The _domain_ \(\operatorname{dom}(\Gamma,\varphi)\) of \(\Gamma\) is the linearly ordered set of objects assigned to the boundary nodes in \(\mathbb{R}\times\{a\}\). Its _codomain_ \(\operatorname{codom}(\Gamma,\varphi)\) is the linearly ordered set of objects assigned to the boundary nodes in \(\mathbb{R}\times\{b\}\).
To the pair \((\Gamma,\varphi)\) of a progressive graph \(\Gamma\) with \(\mathcal{C}\)-coloring \(\varphi\) and \(\operatorname{dom}(\Gamma,\varphi)=(X_{1},\cdots,X_{n})\) and \(\operatorname{codom}(\Gamma,\varphi)=(Y_{1},\cdots,Y_{m})\), we can associate a morphism in \(\mathcal{C}\)
\[f_{\Gamma}:X_{1}\otimes\cdots\otimes X_{n}\to Y_{1}\otimes\cdots\otimes Y_{m}. \tag{4.5}\]
The full technical details of this construction can be found in [1]. We will discuss it for an example, the general procedure will then be clear.
Let \((\Gamma,\Gamma_{0},\partial\Gamma)\) be the following \(\mathcal{C}\)-colored progressive graph:
The graph has ten edges, which are colored by the objects \((X_{1},X_{2},X_{3},X_{4},Z_{1},Z_{2},Z_{3},Y_{1},Y_{2},Y_{3})\), and \(13\) nodes, \(5\) of which are inner nodes colored by morphisms \((f_{1},f_{2},f_{3},f_{4},f_{5})\). It has domain \(\operatorname{dom}(\Gamma)=(X_{1},\cdots,X_{4})\) and codomain \(\operatorname{codom}(\Gamma)=(Y_{1},Y_{2},Y_{3})\). In addition to the graph, we show eight auxiliary dashed lines:
1. Two horizontal ones at \(\mathbb{R}\times\{t_{1}\}\) and \(\mathbb{R}\times\{t_{2}\}\). These are called _regular level lines_ and their levels \(0<t_{1}<t_{2}<1\) are chosen such that \(\mathbb{R}\times\{t_{i}\}\) does not intersect the inner nodes \(\Gamma_{0}-\partial\Gamma\). Cutting \(\Gamma\) at \(\mathbb{R}\times\{t_{1}\}\) and \(\mathbb{R}\times\{t_{2}\}\), we get three consecutive progressive graphs \(\Gamma_{1}\), \(\Gamma_{2}\) and \(\Gamma_{3}\), where \(\Gamma_{1}\) is the progressive graph in \(\mathbb{R}\times[0,t_{1}]\), \(\Gamma_{2}\) is the one in \(\mathbb{R}\times[t_{1},t_{2}]\) and \(\Gamma_{3}\) is the top one in \(\mathbb{R}\times[t_{2},1]\).
2. Six vertical lines, three in \(\Gamma_{1}\), two in \(\Gamma_{2}\) and one in \(\Gamma_{3}\). Each collection of vertical lines gives a _tensor decomposition_ of \(\Gamma_{1}\), \(\Gamma_{2}\) and \(\Gamma_{3}\), respectively. E.g. the three vertical lines in \(\Gamma_{1}\) split it into a disjoint union of four graphs \(\Gamma_{1}^{i}\), \(i=1,\cdots,4\), which are linearly ordered from left to right. Each \(\Gamma_{1}^{i}\) either contains exactly one inner node or does not contain an inner node.
The \(\mathcal{C}\)-coloring of \(\Gamma\) associates to \(\Gamma_{1}^{i}\) a morphism in \(\mathcal{C}\). For the graphs \(\Gamma_{1}^{i}\) these are
\[f_{\Gamma_{1}^{1}}=\operatorname{id}_{X_{1}},\quad f_{\Gamma_{1}^{2}}=\operatorname{id}_{X_{2}},\quad f_{\Gamma_{1}^{3}}=f_{4},\quad f_{\Gamma_{1}^{4}}=\operatorname{id}_{X_{4}}, \tag{4.6}\]
with \(f_{4}\in\operatorname{Hom}_{\mathcal{C}}(X_{3},Z_{2}\otimes Z_{3})\) as in figure 1. The progressive graph \(\Gamma_{1}\) thus evaluates to the morphism
\[f_{\Gamma_{1}}\coloneqq f_{\Gamma_{1}^{1}}\otimes f_{\Gamma_{1}^{2}}\otimes f _{\Gamma_{1}^{3}}\otimes f_{\Gamma_{1}^{4}}:X_{1}\otimes X_{2}\otimes X_{3} \otimes X_{4}\to X_{1}\otimes X_{2}\otimes Z_{2}\otimes Z_{3}\otimes X_{4}, \tag{4.7}\]
i.e. \(f_{\Gamma_{1}}=\operatorname{id}_{X_{1}}\otimes\operatorname{id}_{X_{2}}\otimes f _{4}\otimes\operatorname{id}_{X_{4}}\). The morphisms \(f_{\Gamma_{2}}\) and \(f_{\Gamma_{3}}\) are defined analogously. The morphism associated to the whole progressive graph is given by
\[f_{\Gamma}\coloneqq f_{\Gamma_{3}}\circ f_{\Gamma_{2}}\circ f_{\Gamma_{1}} \tag{4.8}\]
_Remark 4.6_.: We highlight the two very different roles of the \(x\)-direction and the \(y\)-directions in the plane: The horizontal \(x\)-coordinate corresponds to the monoidal product in \(\mathcal{C}\), whereas the vertical \(y\)-direction corresponds to the composition of morphisms. In other words, the implicitly chosen standard 2-framing on the strip \(\mathbb{R}\times[0,1]\) is essential for evaluating a progressive graph \(\Gamma\) to a morphism in \(\mathcal{C}\).
By one of the main results in [1], the morphism \(f_{\Gamma}:\operatorname{dom}(\Gamma,\varphi)\to\operatorname{codom}(\Gamma,\varphi)\) constructed for a \(\mathcal{C}\)-colored progressive graph \(\Gamma\) neither depends on the choice of the regular level lines, nor on the tensor decomposition. Consider two \(\mathcal{C}\)-colored progressive graphs \((\Gamma_{1},\varphi_{1})\), \((\Gamma_{2},\varphi_{2})\) in \(\mathbb{R}\times[0,1]\). We say that \(\Gamma_{1}\) _and \(\Gamma_{2}\) are progressively isotopic_, if there exists an isotopy \(H:[0,1]\times(\mathbb{R}\times[0,1])\to\mathbb{R}\times[0,1]\) from \(\Gamma_{1}\) to \(\Gamma_{2}\), such that \(H(s,\bullet)|_{\Gamma_{1}}\) is a progressive graph for all \(s\in[0,1]\). The isotopy \(H\) is called a _progressive isotopy_. Invariance of the associated morphism for a \(\mathcal{C}\)-colored progressive graph under the auxiliary decomposition in regular levels and tensor decompositions is then linked to the invariance under progressive isotopies, i.e. if \((\Gamma_{1},\varphi_{1})\) and \((\Gamma_{2},\varphi_{2})\) are progressively isotopic, then \(f_{\Gamma_{1}}=f_{\Gamma_{2}}\).
Conversely, every morphism in \(\mathcal{C}\) can be represented by a \(\mathcal{C}\)-colored graph:
\[f:X_{1}\otimes\cdots\otimes X_{n}\to Y_{1}\otimes\cdots\otimes Y_{m}\]
Obviously, a morphism can have different realizations as a progressive graph. The graph \(\Gamma\) from figure 1 describing the morphism \(f_{\Gamma}\) is topologically very different from the graph with a single inner node colored by \(f_{\Gamma}\) in equation (4.8). As in the oriented case, identifying different graphical realizations of the same morphism will be at the heart of the framed string-net construction.
## 5. **Framed String-Net Construction**
In this section, we define string-nets on 2-framed surfaces. The algebraic input for our string-net construction is a finite tensor category; as output, it produces a vector space for any 2-framed surface. The main point of the construction is to globalize the discussion of progressive graphs from the standard framed plane in section 4 to an arbitrary framed surface.
### Locally Progressive Graphs
**Definition 5.1**.: Let \(\Sigma\) be a smooth surface. \(\Sigma\) is _\(2\)-framed_ if there exist two nowhere vanishing vector fields \(X_{1},X_{2}\in\Gamma(T\Sigma)\), such that \(((X_{1})_{p},(X_{2})_{p})\in T_{p}\Sigma\) is an ordered basis for every \(p\in\Sigma\). The pair \((X_{1},X_{2})\) is a _global ordered frame_ for the tangent bundle \(T\Sigma\) of \(\Sigma\).
To any vector field \(X\) on \(\Sigma\), we can associate its _maximal flow_\(\theta:D\to\Sigma\). The domain is a subset \(D\subset\mathbb{R}\times\Sigma\) where \(D^{(p)}\coloneqq\{t\in\mathbb{R}\,|\,(t,p)\in D\}\) is an open interval. \(D\) is called a _flow domain_. The flow \(\theta\) satisfies \(\theta(0,p)=p\) and \(\theta(t_{1},\theta(t_{2},p))=\theta(t_{1}+t_{2},p)\) for all \(p\in\Sigma\). The flow is _maximal for \(X\)_ in the sense that for all \(p\in\Sigma\), the curve
\[\theta(\bullet,p):D^{(p)}\to\Sigma \tag{5.1}\]
is the unique maximal integral curve of \(X\), i.e. \(\frac{\mathrm{d}}{\mathrm{d}t}\theta(t,p)=X_{\theta(t,p)}\) with initial value \(\theta(0,p)=p\). For \((X_{1},X_{2})\) a global frame on \(\Sigma\), we denote by \(\theta_{1}:D_{1}\to\Sigma\) and \(\theta_{2}:D_{2}\to\Sigma\) the corresponding maximal flows. The maximal integral curves for \((X_{1},X_{2})\) through a point \(p\in\Sigma\) are denoted by \(\theta_{1}^{(p)}:D_{1}^{(p)}\to\Sigma\) and \(\theta_{2}^{(p)}:D_{2}^{(p)}\to\Sigma\). Since \(X_{1},X_{2}\) are nowhere vanishing, the curves \(\theta_{1}^{(p)}\), \(\theta_{2}^{(p)}\) are smooth immersions for all \(p\in\Sigma\). Further details on maximal flows and framed manifolds can be found e.g. in [13, Chapter 9].
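The example to keep in mind is the standard \(2\)-framed plane: for \(\Sigma=\mathbb{R}^{2}\) with the global frame \((X_{1},X_{2})=(\partial_{x},\partial_{y})\), the maximal flows are
\[\theta_{1}(s,(x,y))=(x+s,y)\,,\qquad\theta_{2}(t,(x,y))=(x,y+t)\,,\]
with flow domains \(D_{1}=D_{2}=\mathbb{R}\times\Sigma\), and the integral curves through a point \(p\) are the horizontal and vertical lines through \(p\). This is the framing implicitly used for progressive graphs in section 4.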
Recall that a planar graph was defined as an abstract graph \((\Gamma,\Gamma_{0},\partial\Gamma)\) with a smooth map \(\iota:\Gamma\to\mathbb{R}^{2}\), such that \(\iota|_{\Gamma-\Gamma_{0}}\) is a smooth embedding. Similarly, for \((\Sigma,\partial\Sigma)\) a smooth surface \(\Sigma\) with boundary \(\partial\Sigma\), an _embedded graph_ is an abstract graph \((\Gamma,\Gamma_{0},\partial\Gamma)\) together with a smooth map \(\iota_{\Sigma}:\Gamma\to\Sigma\), such that \(\iota_{\Sigma}|_{\Gamma-\Gamma_{0}}\) is an embedding and \(\iota_{\Sigma}(\partial\Gamma)=\iota_{\Sigma}(\Gamma)\cap\partial\Sigma\). For an embedded graph \((\Gamma,\iota_{\Sigma})\), we usually suppress the embedding \(\iota_{\Sigma}\) from the notation.
We want to formulate the equivalent of a progressive graph for an arbitrary \(2\)-framed surface. In order to do so, we have to generalize the condition of injectivity of the projection to the second component that features in the definition of a progressive graph. The idea is to formulate a local condition on graphs at every point on the surface. Using the global frame of a \(2\)-framed surface \(\Sigma\), there is a neighborhood around every \(p\in\Sigma\) which looks like the strip \(\mathbb{R}\times[0,1]\), and the two vector fields give the two distinguished directions on the strip. The flow lines of \(X_{2}\) are then a natural analog of the vertical \(y\)-direction in the plane and we can perform a projection to \(X_{2}\)-flow lines by moving points along the flow of \(X_{1}\) (see figure 2). Given an embedded graph \(\Gamma\subset\Sigma\), we require that locally around every point, this projection, restricted to \(\Gamma\), is injective. This allows us to define a local evaluation map of an embedded \(\mathcal{C}\)-graph, which is the framed analog of the evaluation of graphs inside of disks in the oriented case.
A variant of the flow-out theorem [13, Theorem 9.20] shows that for a \(2\)-framed surface \(\Sigma\) with global frame \((X_{1},X_{2})\) and corresponding flow domains \(D_{1}\), \(D_{2}\), for every point \(p\in\Sigma\), there exist open intervals \(I_{1}^{(p)}\subset D_{1}^{(p)}\), \(I_{2}^{(p)}\subset D_{2}^{(p)}\) containing \(0\), such that
\[\begin{split}\phi^{(p)}:\overline{I}_{1}^{(p)}\times I_{2}^{(p)} &\hookrightarrow\Sigma\\ (s,t)&\mapsto\theta_{1}(s,\theta_{2}(t,p))\end{split} \tag{5.2}\]
is a smooth embedding. Let \((\Gamma,\Gamma_{0},\partial\Gamma)\) be an embedded graph in \(\Sigma\). An element \(t\in I_{2}^{(p)}\) is _regular_ with respect to \(\Gamma\), if \(\phi^{(p)}(I_{1}^{(p)}\times\{t\})\cap(\Gamma_{0}-\partial\Gamma)=\emptyset\), i.e. the flow line of \(X_{1}\) at
\(t\) inside \(\phi^{(p)}(\overline{I}_{1}^{(p)}\times I_{2}^{(p)})\) does not contain any inner nodes of \(\Gamma\). If \(t_{1}<0<t_{2}\) are regular levels, the image \(\phi^{(p)}(I_{1}^{(p)}\times[t_{1},t_{2}])\) is called a _standard rectangle_ for \(\Gamma\) at \(p\). The restriction of \(\Gamma\) to a standard rectangle at \(p\) is denoted by \((\Gamma^{(p)}[t_{1},t_{2}],\Gamma_{0}^{(p)}[t_{1},t_{2}],\partial\Gamma^{(p)} [t_{1},t_{2}])\).
**Definition 5.2**.: Let \((\Sigma,(X_{1},X_{2}))\) be a 2-framed surface and \((\Gamma,\Gamma_{0},\partial\Gamma)\) an embedded graph in \(\Sigma\). Then \(\Gamma\) is a _locally progressive graph_, if for every \(p\in\Sigma\), there exists a standard rectangle \(\phi^{(p)}(I_{1}^{(p)}\times[t_{1},t_{2}])\) for \(\Gamma\) at \(p\), such that the restriction of
\[\begin{split}\mathrm{pr}_{2}^{(p)}\coloneqq\mathrm{pr}_{2}\circ \left(\phi^{(p)}\right)^{-1}:\phi^{(p)}(I_{1}^{(p)}\times[t_{1},t_{2}])& \rightarrow[t_{1},t_{2}]\\ \phi^{(p)}(s,t)&\mapsto t\end{split} \tag{5.3}\]
to \(\Gamma^{(p)}[t_{1},t_{2}]-\Gamma_{0}^{(p)}[t_{1},t_{2}]\) is injective.
In order to understand these definitions, it is best to consider figure 2. The figure shows a small patch of a 2-framed surface \((\Sigma,(X_{1},X_{2}))\). The red horizontal lines are flow lines of \(X_{1}\) and the blue vertical line is a flow line of \(X_{2}\). In black, we show an embedded graph. Each of the dashed horizontal lines intersects an edge of the embedded graph at a unique point. Transporting this intersection point along the horizontal line until we hit the vertical blue line, defines the projection map \(\mathrm{pr}_{2}^{(p)}\) evaluated at the intersection point. For the graph shown in figure 2 the projection is obviously injective and thus, this is a locally progressive graph for the underlying 2-framed surface.
**Definition 5.3**.: Let \((\Gamma,\Gamma_{0},\partial\Gamma)\) be an embedded graph inside a framed surface \(\Sigma\) and \(\phi^{(p)}:\overline{I}_{1}^{(p)}\times I_{2}^{(p)}\hookrightarrow\Sigma\) a standard rectangle at \(p\). Given two regular levels \(t_{1}<0<t_{2}\) and \([s_{1},s_{2}]\subset\overline{I}_{1}^{(p)}\), the image \(\phi^{(p)}\left([s_{1},s_{2}]\times[t_{1},t_{2}]\right)\) is an _evaluation rectangle_ for \(\Gamma\) at \(p\), if
\[\Gamma\cap\phi^{(p)}(\{s_{1},s_{2}\}\times I_{2}^{(p)})=\emptyset \tag{5.4}\]
and
\[\Gamma_{0}\cap\phi^{(p)}\left([s_{1},s_{2}]\times\{t_{1},t_{2}\}\right)=\emptyset\quad. \tag{5.5}\]
Figure 2. In the colored version, red horizontal lines correspond to flow lines of \(X_{1}\) and the blue vertical line is a flow line of \(X_{2}\). Together they yield a standard rectangle (and even an evaluation rectangle) for the locally progressive graph shown in black.
Let now \(\mathcal{C}\) be again a finite tensor category, which is not assumed to be pivotal. An evaluation rectangle at \(p\in\Sigma\) for a \(\mathcal{C}\)-colored graph \(\Gamma\) will be denoted by \(R^{(p)}_{\Gamma}\).
Given an evaluation rectangle \(R^{(p)}_{\Gamma}=\phi^{(p)}([s_{1},s_{2}]\times[t_{1},t_{2}])\) for a locally progressive \(\mathcal{C}\)-colored graph \(\Gamma\) in \(\Sigma\), by (5.4) and (5.5), only the lower and upper horizontal flow lines \(\phi^{(p)}([s_{1},s_{2}]\times\{t_{1}\})\), \(\phi^{(p)}([s_{1},s_{2}]\times\{t_{2}\})\) intersect edges of the graph \(\Gamma\). We associate to each intersection point the corresponding \(\mathcal{C}\)-color of the edge of \(\Gamma\). Taking the tensor product of these objects according to the linear order on \([s_{1},s_{2}]\) gives the _(co-)domain of \(\Gamma\) with respect to \(R^{(p)}_{\Gamma}\)_, which will be denoted by \(\mathrm{dom}_{R}(\Gamma)\) and \(\mathrm{codom}_{R}(\Gamma)\), respectively. Note that in analogy to the (co-)domain of a progressive graph, we have \(\mathrm{dom}_{R}(\Gamma)\), \(\mathrm{codom}_{R}(\Gamma)\in\mathrm{ob}(\mathcal{C})\).
_Remark 5.4_.: From the definition of a locally progressive graph, it directly follows that the preimage of \(\Gamma\) is a progressive graph in the rectangle \([s_{1},s_{2}]\times[t_{1},t_{2}]\) for every evaluation rectangle \(\phi^{(p)}([s_{1},s_{2}]\times[t_{1},t_{2}])\). The \(\mathcal{C}\)-colored progressive graph has (co-)domain (co-)dom\({}_{R}(\Gamma)\) and yields a morphism \(f^{\Gamma}_{R}\in\mathrm{Hom}_{\mathcal{C}}(\mathrm{dom}_{R}(\Gamma),\mathrm{codom}_{R}(\Gamma))\). This defines an evaluation map \(\nu_{R}(\Gamma)\coloneqq f^{\Gamma}_{R}\).
_Remark 5.5_.: When defining the evaluation of a \(\mathcal{C}\)-colored progressive graph, we stressed the very different roles the \(x\)- and \(y\)-directions had in the plane. The former corresponds to taking tensor products in \(\mathcal{C}\), whereas the latter encodes composition of morphisms. The vector fields of a global frame have similar roles for \(\mathcal{C}\)-colored embedded graphs. As stated in Remark 5.4, the \(y\)-flow lines define domain and codomain for the morphism corresponding to a locally progressive graph, whereas going along \(x\)-flow lines corresponds to taking tensor products.
### Framed String-Net Spaces
Let \(\mathcal{C}\) be a finite tensor category and \(\Sigma\) a 2-framed surface. We now define a string-net space in terms of \(\mathcal{C}\)-graphs on \(\Sigma\), which we are going to call framed string-net space.
**Definition 5.6**.: Let \(B\coloneqq\{p_{1},\cdots,p_{n}\}\subset\partial\Sigma\) be a finite and possibly empty subset of the boundary of the surface \(\Sigma\) and \(\nu_{B}:B\to\mathrm{ob}(\mathcal{C})\) a map. The pair \((B,\nu_{B})\) is called a _boundary value_.
Let \((\Gamma,\Gamma_{0},\partial\Gamma)\) be a \(\mathcal{C}\)-colored embedded graph in \(\Sigma\). Boundary nodes of \(\Gamma\) are mapped to the boundary \(\partial\Sigma\) of the surface. This gives a finite subset \(B_{\Gamma}\) of the boundary. Defining a map \(\nu_{\Gamma}:B_{\Gamma}\to\mathrm{ob}(\mathcal{C})\) by mapping each boundary node to the \(\mathcal{C}\)-color of its adjacent edge, we obtain a boundary value \((B_{\Gamma},\nu_{\Gamma})\) for a \(\mathcal{C}\)-colored embedded graph. We call this the _boundary value_ of the graph \(\Gamma\).
**Definition 5.7**.: The set of all \(\mathcal{C}\)-colored locally progressive graphs on a 2-framed surface \(\Sigma\) with boundary value \((B,\nu_{B})\) is denoted by
\[\mathrm{Graph}(\Sigma,(B,\nu_{B}))\quad. \tag{5.6}\]
The vector space
\[\mathrm{VGraph}_{\mathbb{K}}(\Sigma,(B,\nu_{B}))\coloneqq\mathrm{span}_{ \mathbb{K}}\mathrm{Graph}(\Sigma,(B,\nu_{B})) \tag{5.7}\]
freely generated by this set is called _framed pre-string-net space_.
From now on all string-nets on 2-framed surfaces will be locally-progressive. Similar to the construction of string-net spaces on oriented surfaces, we want to identify elements of \(\operatorname{VGraph}(\Sigma,(B,\nu_{B}))\) if they locally evaluate to the same morphism in \(\mathcal{C}\). However, the additional datum of a 2-framing on \(\Sigma\) allows us to use evaluation rectangles of graphs instead of disks so that as an algebraic input we do not need a pivotal structure on \(\mathcal{C}\). By Remark 5.4 the preimage of a locally progressive graph inside every evaluation rectangle is a progressive graph. Thus, we can use the evaluation map for \(\mathcal{C}\)-colored progressive graphs we explained in section 4 to associate to every \(\mathcal{C}\)-colored locally progressive graph and evaluation rectangle \(\phi^{(p)}\left([s_{1},s_{2}]\times[t_{1},t_{2}]\right)\) at any point \(p\in\Sigma\) a morphism in \(\mathcal{C}\).
**Definition 5.8**.: Let \((B,\nu_{B})\) be a boundary value and \(\Gamma_{1},\cdots,\Gamma_{n}\in\operatorname{Graph}(\Sigma,(B,\nu_{B}))\). For \(\lambda_{1},\cdots,\lambda_{n}\in\mathbb{K}\), the element \(\Gamma\coloneqq\sum_{i=1}^{n}\lambda_{i}\Gamma_{i}\in\operatorname{VGraph}_{ \mathbb{K}}(\Sigma,(B,\nu_{B}))\) is a _null graph_, if there exists a common evaluation rectangle \(R^{(p)}\coloneqq\phi^{(p)}\left([s_{1},s_{2}]\times[t_{1},t_{2}]\right)\) for all \(\Gamma_{i}\), such that
1. (5.8) \[\Gamma_{i}\cap\phi^{(p)}([s_{1},s_{2}]\times\{t_{1},t_{2}\})=\Gamma_{j}\cap \phi^{(p)}([s_{1},s_{2}]\times\{t_{1},t_{2}\})\] for all \(i,j=1,\cdots,n\).
2. \(\operatorname{dom}_{R}(\Gamma)\coloneqq\operatorname{dom}_{R}(\Gamma_{i})=\operatorname{dom}_{R}(\Gamma_{j})\) and \(\operatorname{codom}_{R}(\Gamma)\coloneqq\operatorname{codom}_{R}(\Gamma_{i})=\operatorname{codom}_{R}(\Gamma_{j})\) for all \(i,j=1,\cdots,n\).
3. \(\Gamma_{i}|_{\Sigma-R^{(p)}}=\Gamma_{j}|_{\Sigma-R^{(p)}}\) for all \(i,j=1,\cdots,n\).
4. (5.9) \[\sum_{i=1}^{n}\lambda_{i}\nu_{R}(\Gamma_{i})=0\in\operatorname{Hom}_{\mathcal{C}}(\operatorname{dom}_{R}(\Gamma),\operatorname{codom}_{R}(\Gamma))\]
The sub-vector space spanned by all null graphs is denoted by \(\operatorname{NGraph}(\Sigma,(B,\nu_{B}))\).
**Definition 5.9**.: Let \(\Sigma\) be a framed surface, \(\mathcal{C}\) a finite tensor category and \((B,\nu_{B})\) be a boundary value in \(\mathcal{C}\). The _framed string-net space_ with boundary value \((B,\nu_{B})\) is defined as the vector space quotient
\[\operatorname{SN}^{f_{r}}(\Sigma,(B,\nu_{B}))\coloneqq\frac{\operatorname{ VGraph}(\Sigma,(B,\nu_{B}))}{\operatorname{NGraph}(\Sigma,(B,\nu_{B}))} \tag{5.10}\]
_Remark 5.10_.: Taking the quotient by null graphs also takes appropriate isotopies between locally progressive graphs into account. Recall that we defined locally progressive graphs as embedded graphs with a fixed embedding. Thus, a priori abstract \(\mathcal{C}\)-colored graphs with different embeddings yield different elements in \(\operatorname{VGraph}(\Sigma)\). By taking the above quotient, we can identify embedded graphs which differ by those isotopies such that graphs along the isotopy are all locally progressive graphs.
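To illustrate Definition 5.8: suppose \(\Gamma_{1}\) and \(\Gamma_{2}\) agree outside a common evaluation rectangle \(R^{(p)}\) and that, inside \(R^{(p)}\), \(\Gamma_{1}\) carries two consecutive coupons labeled \(g\in\operatorname{Hom}_{\mathcal{C}}(x,y)\) and \(f\in\operatorname{Hom}_{\mathcal{C}}(y,z)\) on a single locally progressive strand, while \(\Gamma_{2}\) carries a single coupon labeled \(f\circ g\) on the same strand. Then
\[\nu_{R}(\Gamma_{1})=f\circ g=\nu_{R}(\Gamma_{2})\,,\]
so \(\Gamma_{1}-\Gamma_{2}\) is a null graph; in the quotient (5.10), coupons may therefore be composed along strands.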
## 6. **Circle Categories and Twisted Drinfeld-Centers**
In this final section, we put our construction of string-nets for framed surfaces to the test and compute the relevant circle categories. We show that they are related to Drinfeld centers twisted by appropriate powers of the double dual.
### \(2\)-Framings of the Circle and Framed Cylinders
A _\(2\)-framing_ of a circle \(S^{1}\) is an isomorphism \(\lambda:TS^{1}\oplus\underline{\mathbb{R}}\xrightarrow{\simeq}\underline{\mathbb{R}^{2}}\) of vector bundles, where \(\underline{\mathbb{R}}\to S^{1}\) and \(\underline{\mathbb{R}^{2}}\to S^{1}\) are the trivial vector bundles with fibers \(\mathbb{R}\) and \(\mathbb{R}^{2}\), respectively. There is a bijection [10, section 1.1]
\[\left\{\text{Homotopy classes of $2$-framings of $S^{1}$}\right\}\simeq\mathbb{Z}\quad. \tag{6.1}\]
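The integer has a concrete description as a winding number: in the standard trivialization of \(\underline{\mathbb{R}^{2}}\), a frame performing \(n\) full counterclockwise rotations while going once around the circle is the loop
\[t\mapsto R(2\pi nt)\cdot(e_{1},e_{2})\,,\qquad R(\alpha)=\begin{pmatrix}\cos\alpha&-\sin\alpha\\ \sin\alpha&\cos\alpha\end{pmatrix}\,,\quad t\in[0,1]/(0\sim 1)\,,\]
where \((e_{1},e_{2})\) is the standard basis of \(\mathbb{R}^{2}\); its homotopy class is \(n\) times the generator of \(\pi_{1}(SO(2))\simeq\mathbb{Z}\). This is the counting used in the pictorial description below.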
The different \(2\)-framings for \(n\in\mathbb{Z}\) can be depicted as follows. We identify \(S^{1}\) as the quotient \(S^{1}\simeq[0,1]/0\sim 1\) and draw a circle as an interval, while keeping in mind that we identify the endpoints. The integer \(n\) then counts the number of full rotations in the counterclockwise direction a frame of \(\mathbb{R}^{2}\) undergoes while going around the circle. We denote the circle with \(2\)-framing corresponding to \(n\in\mathbb{Z}\) by \(S^{1}_{n}\). We can trivially continue the \(2\)-framing of \(S^{1}_{n}\) along the radial direction of a cylinder over \(S^{1}\). This gives a \(2\)-framed cylinder \(\mathsf{C}\), which can be seen as a \(2\)-framed cobordism \(\mathsf{C}:S^{1}_{n}\to S^{1}_{n}\). Possibly after a global rotation of the two vector fields, we can arrange that there is at least one point on \(S^{1}\) such that the flow line for the second vector field is radial. We fix such a point as an auxiliary datum and call the corresponding flow line the _distinguished radial line_.
We denote the cylinder with this particular \(2\)-framing corresponding to \(n\in\mathbb{Z}\) by \(\mathsf{C}_{n}\). The flow lines for \(\mathsf{C}_{-1}\), \(\mathsf{C}_{0}\) and \(\mathsf{C}_{1}\) are shown in figure 3.
### Circle Categories
Given a finite tensor category \(\mathcal{C}\) and a \(2\)-framed cylinder \(\mathsf{C}_{n}\) over a one-manifold, we construct a \(\mathsf{Vect}_{\mathbb{K}}\)-enriched category as follows.
**Definition 6.1**.: The _circle category_\(\mathsf{Cyl}(\mathsf{C}_{n},\mathcal{C})\) is defined as follows:
* the _objects_ of \(\mathsf{Cyl}(\mathsf{C}_{n},\mathcal{C})\) are the objects of \(\mathcal{C}\);
* the vector space of _morphisms_ between two objects \(X,Y\in\mathsf{Cyl}(\mathsf{C}_{n},\mathcal{C})\) is the framed string-net space (6.2) \[\operatorname{Hom}_{\mathcal{Cyl}(\mathsf{C}_{n},\mathcal{C})}(X,Y)\coloneqq \operatorname{SN}^{f_{r}}(\mathsf{C}_{n},B_{X,Y})\] where we take the boundary value \(B_{X,Y}\coloneqq\left(\left\{p_{1},p_{2}\right\},\left(X,Y\right)\right)\) with the chosen point \(p_{1}\) on \(S^{1}\times\left\{0\right\}\) and its counterpart \(p_{2}\) on \(S^{1}\times\left\{1\right\}\) in \(\mathsf{C}_{n}\).
The composition of morphisms is given by stacking cylinders and concatenating the corresponding string-nets.
We first define a functor \(I:\ \mathcal{C}\to\mathsf{Cyl}(\mathsf{C}_{n},\mathcal{C})\) which is the identity on objects. It maps a morphism \(f:c_{1}\to c_{2}\) in \(\mathcal{C}\) to the string-net which has two edges, both on the distinguished radial line, with a single node on this line, labeled by \(f\).
In the following, we consider as an example the _blackboard framed cylinder_ which is the framed surface \(\mathsf{C}_{1}\) in figure 3.
### Circle Category as a Kleisli Category
To describe the morphism spaces of the circle category purely in terms of algebraic data, we need to know that string-net constructions obey factorization. This has been discussed repeatedly in the literature, starting from [25, Section 4.4]. Other references include [10, p. 40] and [11, Section 7]. The idea is that gluing relates the left exact functors associated to a surface to a coend. The cylinder can be obtained by gluing a rectangle at two opposite boundaries; taking the insertions at the remaining boundaries into account and using the fact that for the rectangle string-net spaces give morphisms in \(\mathcal{C}\), the idea to implement factorization by a coend yields
\[\operatorname{Hom}_{\mathcal{Cyl}(\mathsf{C}_{1},\mathcal{C})}(\bullet, \bullet)\cong\oint^{c\in\mathcal{C}}\operatorname{Hom}_{\mathcal{C}}(\left( \bullet\right),c\otimes\left(\bullet\right)\otimes{}^{\vee}c)\quad. \tag{6.3}\]
**Lemma 6.2**.: _Let \(x\), \(y\in\mathcal{C}\) be two objects of a finite tensor category \(\mathcal{C}\). Then there is an isomorphism of vector spaces_
\[\operatorname{Hom}_{\mathcal{Cyl}(\mathsf{C}_{1},\mathcal{C})}(x,y)\simeq \operatorname{Hom}_{\mathcal{C}}(x,Ty) \tag{6.4}\]
_where \(T\coloneqq{}_{\operatorname{id}}T{}_{\operatorname{id}}\) is the usual central monad of \(\mathcal{C}\)._
Proof.: Recall from Lemma 3.5 that
\[\operatorname{Hom}_{\mathcal{C}}(x,Ty)=\oint^{c\in\mathcal{C}}\operatorname{ Hom}_{\mathcal{C}}(\left(\bullet\right),c\otimes\left(\bullet\right)\otimes{}^{ \vee}c)(x,y)\quad. \tag{6.5}\]
and combine it with the factorization (6.3).
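As a simple consistency check, assume the standard fact that for a semisimple category the coend reduces to a direct sum over the simple objects, and take \(\mathcal{C}=\mathsf{vect}_{\mathbb{K}}\), the category of finite-dimensional vector spaces. Its only simple object is \(\mathbb{K}\), so
\[Ty\simeq\mathbb{K}\otimes y\otimes{}^{\vee}\mathbb{K}\simeq y\,,\]
the central monad is isomorphic to the identity monad and \(\operatorname{Hom}_{\mathsf{Cyl}(\mathsf{C}_{1},\mathsf{vect}_{\mathbb{K}})}(x,y)\simeq\operatorname{Hom}_{\mathsf{vect}_{\mathbb{K}}}(x,y)\); consistently, \(\mathsf{Z}(\mathsf{vect}_{\mathbb{K}})\simeq\mathsf{vect}_{\mathbb{K}}\).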
**Theorem 6.3**.: _There is an equivalence of \(\mathsf{Vect}\)-enriched categories_
\[\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C})\cong\mathcal{C}_{T}\,. \tag{6.6}\]
Proof.: Note that the circle category \(\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C})\) and the Kleisli category \(\mathcal{C}_{T}\) have the same objects as \(\mathcal{C}\). Thus we can define a functor
\[\kappa:\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C})\to\mathcal{C}_{T} \tag{6.7}\]
which is the identity on objects and acts on morphism spaces via the isomorphism induced by Lemma 6.2. For \(\kappa\) to be a functor, we need to check that it respects identity morphisms and composition of morphisms. For \(\overline{x}\), \(\overline{y}\in\mathcal{C}_{T}\) it holds \(\operatorname{Hom}_{\mathcal{C}_{T}}(\overline{x},\overline{y})=\operatorname{Hom}_{\mathcal{C}}(x,Ty)\). Let \(\{\iota_{c}:c\otimes(\bullet)\otimes{}^{\vee}c\Rightarrow T\}_{c\in\mathcal{C}}\) be the universal dinatural transformation of the central monad and write \(\alpha_{c}\coloneqq(\iota_{c})_{y}:c\otimes y\otimes{}^{\vee}c\to Ty\) for its components. Using the null-graph relations, a string-net on \(\mathsf{C}_{1}\) with boundary value \(B_{x,y}\) can be brought into a standard form in which a single edge labeled by an object \(c\in\mathcal{C}\) runs around the cylinder and a single coupon labeled by a morphism \(h\in\operatorname{Hom}_{\mathcal{C}}(x,c\otimes y\otimes{}^{\vee}c)\) sits on the distinguished radial line; under the isomorphism of Lemma 6.2 such a string-net is mapped to \(\alpha_{c}\circ h\in\operatorname{Hom}_{\mathcal{C}}(x,Ty)\). Stacking a second string-net in standard form, given by \(d\in\mathcal{C}\) and \(g\in\operatorname{Hom}_{\mathcal{C}}(y,d\otimes z\otimes{}^{\vee}d)\), on top of the first and bringing the result again into standard form, we get
\[(\iota_{c\otimes d})_{z}\circ(\operatorname{id}\otimes g\otimes\operatorname{id})\circ h\quad. \tag{6.10}\]
There is the commutative diagram
The lower path is the composition \((\alpha_{d}\circ g)\circ_{\mathcal{C}_{T}}(\alpha_{c}\circ h)\) in \(\mathcal{C}_{T}\). By Lemma 6.2, \(\kappa\) is fully faithful and since it is essentially surjective, it is an equivalence.
Recall the functor \(I:\ \mathcal{C}\to\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C})\) introduced at the end of section 6.2. Under the equivalence between the circle category and the Kleisli category, it is mapped to the induction functor \(I_{T}:\ \mathcal{C}\to\mathcal{C}_{T}\). Combining Theorem 6.3, Proposition 3.6 and Proposition 3.2, we obtain
**Theorem 6.4**.: _Let \(\operatorname{Psh}_{I}(\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C}))\) be the category of \(I\)-representable presheaves on the circle category \(\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C})\). There is an equivalence of \(\mathbb{K}\)-linear categories_
\[\operatorname{Psh}_{I}(\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C}))\cong\mathsf{ Z}(\mathcal{C})\,, \tag{6.11}\]
_Remark 6.5_.:
1. Since \(\mathcal{C}\) is not required to be fusion, the Karoubification of the circle category \(\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C})\) does not, in general, yield the full center \(\mathsf{Z}(\mathcal{C})\). Recall that a projective module for a monad is a retract of a free module (cf. [17, Section 7.3.2]). The Karoubification of the Kleisli category only yields the subcategory of \(\mathsf{Z}(\mathcal{C})\) which has as objects the objects that under the equivalence \(T-\mathsf{Mod}\simeq\mathsf{Z}(\mathcal{C})\) correspond to projective \(T\)-modules. This was our motivation to discuss a different completion of the Kleisli category as \(I\)-representable presheaves on the Kleisli category in section 3.2.
2. For the general \(2\)-framed cylinder \(\mathsf{C}_{n}\), the \(2\)-framing forces us to add sufficiently many evaluations and coevaluations so that we get an equivalence (6.12) \[\operatorname{Psh}_{I}(\mathsf{Cyl}(\mathsf{C}_{n},\mathcal{C}))\simeq\mathsf{ Z}_{n}(\mathcal{C})\] The proof of this is in complete analogy to the case of \(\mathsf{C}_{1}\).
Our computation of circle categories for string-nets on framed cylinders \(\mathsf{C}_{n}\) is in complete accordance with the results of [13, Corollary 3.2.3, table 3]. |
2309.05506 | Singularity theory of Weyl-point creation and annihilation | Weyl points (WP) are robust spectral degeneracies, which can not be split by
small perturbations, as they are protected by their non-zero topological
charge. For larger perturbations, WPs can disappear via pairwise annihilation,
where two oppositely charged WPs merge, and the resulting neutral degeneracy
disappears. The neutral degeneracy is unstable, meaning that it requires the
fine-tuning of the perturbation. Fine-tuning of more than one parameter can
lead to more exotic WP mergers. In this work, we reveal and analyze a
fundamental connection between WP mergers and singularity theory: phase boundary
points of Weyl phase diagrams, i.e., control parameter values where Weyl point
mergers happen, can be classified according to singularity classes of maps
between manifolds of equal dimension. We demonstrate this connection on a
Weyl--Josephson circuit where the merger of 4 WPs draws a swallowtail
singularity, and in a random BdG Hamiltonian which reveals a rich pattern of
fold lines and cusp points. Our results predict universal geometrical features
of Weyl phase diagrams, and generalize naturally to creation and annihilation
of Weyl points in electronic (phononic, magnonic, photonic, etc) band-structure
models, where Weyl phase transitions can be triggered by control parameters
such as mechanical strain. | György Frank, Gergő Pintér, András Pályi | 2023-09-11T14:49:30Z | http://arxiv.org/abs/2309.05506v1 | # Singularity theory of Weyl-point creation and annihilation
###### Abstract
Weyl points (WP) are robust spectral degeneracies, which can not be split by small perturbations, as they are protected by their non-zero topological charge. For larger perturbations, WPs can disappear via pairwise annihilation, where two oppositely charged WPs merge, and the resulting neutral degeneracy disappears. The neutral degeneracy is unstable, meaning that it requires the fine-tuning of the perturbation. Fine-tuning of more than one parameter can lead to more exotic WP mergers. In this work, we reveal and analyze a fundamental connection between WP mergers and singularity theory: phase boundary points of Weyl phase diagrams, i.e., control parameter values where Weyl point mergers happen, can be classified according to singularity classes of maps between manifolds of equal dimension. We demonstrate this connection on a Weyl-Josephson circuit where the merger of 4 WPs draws a swallowtail singularity, and in a random BdG Hamiltonian which reveals a rich pattern of fold lines and cusp points. Our results predict universal geometrical features of Weyl phase diagrams, and generalize naturally to creation and annihilation of Weyl points in electronic (phononic, magnonic, photonic, etc) band-structure models, where Weyl phase transitions can be triggered by control parameters such as mechanical strain.
## Contents
* I Introduction
* II Singularity theory predicts generic and stable features of Weyl phase diagrams
* II.1 Math example for \(m=1\): fold
* II.2 Math example for \(m=2\): cusp
* II.3 Math example, \(m=3\): swallowtail
* II.4 Weyl phase diagrams
* III Swallowtail singularity in a Weyl-Josephson circuit
* III.1 Hamiltonian
* III.2 Symmetries
* III.3 Weyl points
* IV Cusp and fold singularities in superconducting systems of class D
* V Discussion
* V.1 When does the set of Weyl points form a manifold?
* V.2 Not all singularities appear on Weyl phase boundaries
* VI Conclusions
## I Introduction
Singularity theory [1] provides a classification of singularities of mappings between manifolds. An instructive and easy-to-visualise example, where the dimension of both manifolds is \(m=2\), is shown in Fig. 1. The source manifold is the curved surface embedded in the 3D space, the target manifold is a plane, and the mapping \(\pi\) is the projection of the curved surface to the plane. The singular points of this mapping are red points of the curved surface, i.e., those points that are mapped to the red points of the flat surface.
According to Whitney's theorem [2], there are two classes of singular points in this setting: the _fold_ class, exemplified by the pre-images of the values forming the two red curves, and the _cusp_ (or _pleat_) class, exemplified by the pre-image of the meeting point of the two red curves. In fact, Whitney's theorem asserts that for a _generic_ mapping between two two-dimensional (2D) manifolds, the singular points belong to one of these two classes. Further work from Mather [3] has generalised this classification for mappings between higher-dimensional manifolds. The classes of singular points (which, in technical terms, are left-right equivalence classes of map germs) are often referred to as _singularities_.
Singularity theory (sometimes referred to as catastrophe theory [4]) is strongly interlinked with physics [5; 6], e.g., via applications in optics [7; 8], seismology [9], molecular physics [10; 11], band-structure theory [12; 13; 14; 15], Hermitian and non-Hermitian quantum mechanics [16; 17; 18], and dynamical systems [19].
In particular, a recent work [17; 18] discovered and analysed, both in theory and in experiment, a new link between singularity theory and physics. That work has revealed that the swallowtail singularity, characteristic of mappings between manifolds of dimension \(m=3\), can appear as the phase diagram in the three-dimensional parameter space of the studied physical system, which is described by a parameter-dependent \(3\times 3\) non-Hermitian Hamiltonian matrix with a particular symmetry.
In this work, we show that the singularities classified by Whitney and Mather naturally appear in physical systems that are described by parameter-dependent Hermitian matrices - a ubiquitous situation in quantum mechanics. We focus on the case when the number \(n_{\rm p}=3+m\) of parameters is greater than \(3\), and the parameters can be grouped into two groups: a group with \(3\) parameters, which we call the 'configurational parameters', and another group with \(m\) parameters, which we call the 'control parameters'. This setting is relevant for many physical systems, e.g., (i) electronic (phononic, magnonic, photonic) band structure theory of 3D crystals, where the configurational space is formed by the three components of the crystal momentum, and the control space is formed by any further parameters, e.g., mechanical strain of the crystal [20; 21; 22; 23; 24; 25]; (ii) spin systems in a homogeneous magnetic field, where the three magnetic-field components form the configurational space, and further parameters are the control parameters [26; 27; 28]; (iii) multi-terminal Josephson junctions, controlled by more than three parameters such as magnetic fluxes and electric voltages, etc. [29; 30; 31].
The central object of our study is the _Weyl phase diagram_: the phase diagram in the control space that shows the number of Weyl points in the configurational space. By connecting results of singularity theory with parameter-dependent Hermitian matrices, we find that under certain conditions, Weyl phase diagrams exhibit universal geometrical features, which correspond to the known singularities of generic mappings between manifolds of equal dimension.
We exemplify this general observation using two example physical setups. First, we show that the swallowtail singularity, characteristic of mappings between manifolds of dimension \(m=3\), appear in the Weyl phase diagram of multi-terminal Weyl Josephson junctions. Second, we illustrate the universality of our observation by zero-energy Weyl phase diagrams of class D matrices with random parametrization. This latter model describes the excitation spectrum of hybrid normal-superconducting systems in the presence of spin-orbit interaction and in the absence of time-reversal symmetry, and the corresponding zero-energy Weyl points appear in a 1D configurational space. The numerically obtained zero-energy Weyl phase diagrams exhibit fold lines and cusp points, as expected from our observation that this setting is related to singularities of mappings between 2D manifolds.
The rest of this paper is structured as follows. In section II., we summarize those key concepts and results from singularity which we will use to analyse the geometrical features of Weyl phase diagrams. In section III., we showcase the appearance of the swallowtail singularity, characteristic of maps between 3D manifolds, in the Weyl phase diagram of a Weyl Josephson junction. In section IV., we visually illustrate the appearance of fold and cusp singularities in the zero-energy Weyl phase diagram of parameter-dependent class D Hamiltonians that describe hybrid normal-superconducting systems. Finally, in sections V. and VI., we extend the discussion of our results, and provide conclusions.
## II Singularity theory predicts generic and stable features of Weyl phase diagrams
In this section, we first introduce the key concepts and relations from singularity theory that are relevant for the analysis of Weyl phase diagrams. We do this via simple and instructive examples of mappings between manifolds of equal dimension, for dimensions \(m=1\), \(m=2\), and \(m=3\). Then, we outline the connection between these mathematical concepts and results, and Weyl points and Weyl phase diagrams.
### Math example for \(m=1\): fold
_The source manifold._ Consider the 1D manifold \(M^{1}:\{(3x-x^{3},x)|x\in\mathbb{R}\}\subset\mathbb{R}^{2}\), i.e., the graph of a cubic polynomial.
_The projection map._ We define the projection map \(\pi\) such that it maps each point \((t,x)\) of \(M^{1}\) to the first coordinate \(t\) of the point. That is, \(\pi\) is a \(M^{1}\rightarrow\mathbb{R}\) map, i.e., a map between two 1D manifolds.
_The counting function of pre-images._ To each point \(t\) of the codomain of the projection map \(\pi\), we can associate
Figure 1: Fold and cusp singularities of the projection of a curved 2D manifold to a flat 2D manifold. The Weyl points in the \(n_{\rm p}\)-dimensional total parameter space of a physical system described by Hermitian matrices usually form an \(m=n_{\rm p}-3\)-dimensional manifold. A minimal model of this Weyl-point manifold is illustrated here with a surface of dimension \(m=2\), parametrized by \((x,t_{1},-x^{3}-t_{1}x)\) in the three-dimensional space of \((x,t_{1},t_{2})\). Separating the total parameter space into a 1D configurational (\(x\)) and 2D control (\(t_{1}\),\(t_{2}\)) space correspond to a projection \(\pi\). The number of Weyl points in the configurational space corresponding to a control parameter set \((t_{1},t_{2})\) is the number of pre-images \(\#\pi^{-1}(t_{1},t_{2})\) of the projection. The characteristic Weyl-point merger processes correspond to the singularities (see text) of the projection.
the number of pre-images \(\#\pi^{-1}(t)\) of that point; we will use \(N:\mathbb{R}\to\mathbb{Z}_{0}^{+}\) to denote this function, and call it the 'counting function of pre-images'.
_Pre-image phase diagram._ The function \(N\) partitions the codomain of \(\pi\). There are three partitions that are regions with non-zero length; these are \(]-\infty,-2[,]-2,2[,\) and \(]2,\infty[\), and \(N\) takes the value \(1\), \(3\), and \(1\), respectively, in these regions. Furthermore, there are two isolated points, \(-2\) and \(2\), that separate the above regions. The counting function \(N\) takes the value \(2\) in these points.
The isolated points separating the extended regions are locations of pairwise 'creation' or 'annihilation' processes of pre-images. Let us follow the points of a curve in the target manifold \(\mathbb{R}\) from \(t<2\) to \(t>2\): as \(t\) increases in the range \(t<2\), there are \(3\) pre-images in the source manifold that move, two of them merge to a single point when \(t=2\), and those two pre-images disappear ('pair-wise annihilation') for \(t>2\) where the pre-image count is \(1\).
Following physics terminology, we call the extended regions 'pre-image phases' or 'phases' for short, and the isolated points separating them we term 'pre-image phase boundaries', or 'phase boundaries' for short.
_Phase boundaries are formed by the singular values of the projection map._ For the projection map \(\pi\), the points of the domain can be classified as regular or singular. Regular (singular) points are those where the derivative of the map is non-zero (zero). This classification of the points of the domain of \(\pi\) is strongly related to the pre-image phase diagram. In fact, the images of the singular points of the domain (i.e., the singular values of the map) appear in the pre-image phase diagram as phase boundaries.
_Extension from the example to generic maps._ The above picture, although described for the case of a single example, extends naturally to generic maps between 1D manifolds. Furthermore, for generic maps between 1D manifolds, the local behavior of the map in any two regular (singular) points is equivalent, in the following sense: In a regular (singular) point, in appropriately chosen coordinates, the map can be written as \(f(x)=x\) (\(f(x)=x^{2}\)). These singular points are also called 'fold points' (see Table 1). Furthermore, for generic maps, the structure of singular points is robust against small deformations of the map, which implies that the pre-image phase diagram is also robust against such small deformations.
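As a concrete illustration of this counting (added here for illustration, not part of the original work; plain Python with NumPy is assumed), the following sketch counts the real pre-images of a few values of \(t\) for the cubic example above.

```python
import numpy as np

# Pre-images of t under the projection of M^1 = {(3x - x^3, x)}: real solutions of
# 3x - x^3 = t. The counts 1, 2, 3, 2, 1 illustrate the fold points at t = +/-2
# (the boundary values are numerically delicate because of the double root).
def n_preimages(t, tol=1e-7):
    roots = np.roots([-1.0, 0.0, 3.0, -t])   # coefficients of -x^3 + 3x - t
    return int(np.sum(np.abs(roots.imag) < tol))

for t in (-3.0, -2.0, 0.0, 2.0, 3.0):
    print(t, n_preimages(t))
```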
### Math example for \(m=2\): cusp
_The source manifold._ Consider now the 2D manifold \(M^{2}:\{(t_{1},-x^{3}-t_{1}x,x)|(x,t_{1})\in\mathbb{R}^{2}\}\subset\mathbb{R}^{3}\), as shown in Fig. 1.
_The projection map._ We define the projection map \(\pi\) such that it maps each point \((t_{1},t_{2},x)\) of \(M^{2}\) to the first two coordinates \((t_{1},t_{2})\). That is, \(\pi\) is a \(M^{2}\to\mathbb{R}^{2}\) map, i.e., a map between two 2D manifolds. The projection map \(\pi\) is also illustrated in Fig. 1.
_Counting function of pre-images._ To each point \((t_{1},t_{2})\) of the codomain of the projection map \(\pi\), we can associate the number of pre-images \(\#\pi^{-1}(t_{1},t_{2})\) of that point; we will use \(N:\mathbb{R}^{2}\to\mathbb{Z}_{0}^{+}\) to denote this function. We call \(N\) the 'counting function of pre-images'.
_Pre-image phase diagram._ The function \(N\) partitions the codomain of \(\pi\), as illustrated in Fig. 1 as the patterns on the \((t_{1},t_{2})\) plane. The light gray and dark gray partitions are extended regions with non-zero area, corresponding to pre-image counts of 1 and 3, respectively.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline dim Ct & min dim Cf & Name & Canonical form \\ \hline \hline
1 & 1 & fold point & \((x^{2})\) \\ \hline
2 & 1 & fold line & \((x^{2},y)\) \\
2 & 1 & cusp point & \((x^{3}+xy,y)\) \\ \hline
3 & 1 & fold surface & \((x^{2},y,z)\) \\
3 & 1 & cusp line & \((x^{3}+xy,y,z)\) \\
3 & 1 & swallowtail point & \((x^{4}+x^{2}y+xz,y,z)\) \\ \hline
4 & 1 & fold hypersurface & \((x^{2},y,z,w)\) \\
4 & 1 & cusp surface & \((x^{3}+xy,y,z,w)\) \\
4 & 1 & swallowtail line & \((x^{4}+x^{2}y+xz,y,z,w)\) \\
4 & 1 & butterfly point & \((x^{5}+x^{3}y+x^{2}z+xw,y,z,w)\) \\
4 & 2 & elliptic umbilic point & \((x^{2}-y^{2}+xz+yw,xy,z,w)\) \\
4 & 2 & hyperbolic umbilic point & \((x^{2}+y^{2}+xz+yw,xy,z,w)\) \\ \hline \end{tabular}
\end{table}
Table 1: Singularities of mappings between manifolds of equal dimension \(m\leq 4\). ‘Name’ and ‘Canonical form’ originate from singularity theory. A given singularity can appear on a system’s Weyl phase diagram if the system has dim Ct control parameters and at least \(\min\dim\text{Cf}\) configurational parameters. For example, the fold point singularity can appear in the Weyl phase diagram if the system is described by a parameter-dependent Hermitian matrix (implying a 3D configurational space) with a single control parameter. In contrast, the elliptic and hyperbolic umbilic points cannot appear on the zero-energy Weyl phase diagram of class-D matrices (1D configurational space) with 4 control parameters, since the configurational space dimension is less than \(\min\dim\text{Cf}=2\).
The red curves separating the grey regions correspond to a pre-image count of 2, except for the cusp point where the two curves meet, which corresponds to a pre-image count of 1.
The curve-type boundaries correspond to pairwise 'creation' or 'annihilation' processes of pre-images. In fact, the left (right) curve boundary corresponds to the creation or annihilation of the upper (lower) two pre-images. The cusp point is a location of a three-point process [32; 33], where the number of pre-images changes from 1 to 3 such that the two newborn pre-images are created at the position of the original single pre-image. Analogously to the \(m=1\) case, we call the extended regions with non-zero area 'pre-image phases' or 'phases' for short, and the boundaries separating these regions we call 'phase boundaries'.
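This counting can be made explicit for the present example: the pre-images of \((t_{1},t_{2})\) are the real solutions of \(x^{3}+t_{1}x+t_{2}=0\), so the phase boundary is the zero set of the cubic discriminant. The short sketch below (an illustration added here, not part of the original work; Python with NumPy assumed) evaluates the counting function on a grid.

```python
import numpy as np

# Phases of the m = 2 example: x^3 + t1*x + t2 = 0 has three real solutions when the
# discriminant -4*t1**3 - 27*t2**2 is positive and one real solution otherwise; the
# boundary (two fold lines meeting at the cusp (0, 0)) is the zero set of the discriminant.
t1, t2 = np.meshgrid(np.linspace(-2, 2, 401), np.linspace(-2, 2, 401))
disc = -4.0 * t1**3 - 27.0 * t2**2
N = np.where(disc > 0, 3, 1)
print(np.unique(N, return_counts=True))   # sizes of the 1- and 3-pre-image regions
```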
_Phase boundaries are formed by the singular values of the projection map._ For the projection map \(\pi\), the points of its domain can be classified as regular or singular. Regular (singular) points are those where the Jacobian of the map has a non-vanishing (vanishing) determinant. Singular points can be further classified, as fold points or cusp points. This classification of the points of the domain of \(\pi\) is strongly related to the pre-image phase diagram shown in the \((t_{1},t_{2})\) plane of Fig. 1: The images of the fold points of \(\pi\) form the curved phase boundary lines, whereas the image of the single cusp point of \(\pi\) is the meeting point of the curved phase boundary lines.
_Extension from the example to generic maps._ The above picture extends naturally to generic maps between 2D manifolds. According to Whitney's theorem, singular points of such generic mappings are either cusp points, or fold points forming lines ('fold lines') (see Table 1). Furthermore, for generic maps between 2D manifolds, the local behavior of the map in all regular [fold] [[cusp]] points is equivalent, in the sense that in appropriately chosen coordinates, the map can be written in the canonical form \(f(x,y)=(x,y)\) [\(f(x,y)=(x^{2},y)\)] [[\(f(x,y)=(x^{3}+xy,y)\)]]. Furthermore, for generic maps, the structure of singular points is robust against small deformations of the map, which implies that the pre-image phase diagram is also robust against such deformations.
### Math example, \(m=3\): swallowtail
_The source manifold._ Consider now the 3D manifold \(M^{3}:\{(t_{1},t_{2},-x^{4}-t_{1}x^{2}-t_{2}x,x)|(t_{1},t_{2},x)\in\mathbb{R}^{3}\}\subset\mathbb{R}^{4}\).
_The projection map._ We define the projection map \(\pi\) such that it maps each point \((t_{1},t_{2},t_{3},x)\) of \(M^{3}\) to the first three coordinates \((t_{1},t_{2},t_{3})\). That is, \(\pi\) is a \(M^{3}\to\mathbb{R}^{3}\) map, i.e., a map between two 3D manifolds.
_Counting function of pre-images._ To each point \((t_{1},t_{2},t_{3})\) of the codomain of the projection map \(\pi\), we can associate the number of pre-images \(\#\pi^{-1}(t_{1},t_{2},t_{3})\) of that point; we will use \(N:\mathbb{R}^{3}\to\mathbb{Z}_{0}^{+}\) to denote this function, and call it the 'counting function of pre-images'.
_Pre-image phase diagram._ The function \(N\) partitions the codomain of \(\pi\). This partitioning is shown in Fig. 2b. As illustrated there, there are extended partitions of non-zero volume, there are surfaces (fold surfaces) that separate the regions, there are two curves (cusp lines) that separate the surfaces, there is an intersection curve of the fold surfaces, and there is a single point (swallowtail point), where the curves meet. The counting function takes the values 0, 2, and 4, in the bottom, top, and middle regions of the figure. Along the fold surfaces, the pre-image count is 3. Along the intersection curve of the two fold surfaces, it is 2. Along the cusp lines, it is also 2. In the swallowtail point, it is 1.
The fold surface phase boundaries correspond to pairwise creation or annihilation of pre-images. The cusp lines correspond to 'three-point processes' [32; 33], where three pre-images merge into a single one. The intersection curve of the fold surfaces corresponds to 'simultaneous two-point processes', where the four pre-images merge and annihilate in two pairs, simultaneously. The swallowtail point corresponds to a 'four-point process', where the four pre-images merge in a single location and annihilate. We call the extended regions with non-zero volume 'phases', and the boundaries separating these regions 'phase boundaries'.
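For concreteness, the counting function of this example can again be evaluated numerically; the sketch below (an added illustration, not taken from the original work; Python with NumPy assumed) counts the real roots of the depressed quartic on a 2D cut of the control space, reproducing the structure of the cuts in Fig. 2c-e.

```python
import numpy as np

# Counting function N(t1, t2, t3) of the m = 3 example: number of real solutions of
# x^4 + t1*x^2 + t2*x + t3 = 0. A 2D cut at fixed t1 < 0 shows the triangular
# 4-root region bounded by fold lines that meet in cusp points.
def n_real_roots(t1, t2, t3, tol=1e-7):
    r = np.roots([1.0, 0.0, t1, t2, t3])
    return int(np.sum(np.abs(r.imag) < tol))

t1 = -1.0
t2s = np.linspace(-1.0, 1.0, 201)
t3s = np.linspace(-0.5, 0.5, 201)
N = np.array([[n_real_roots(t1, t2, t3) for t2 in t2s] for t3 in t3s])
print(np.unique(N, return_counts=True))   # regions with 0, 2 and 4 real roots
```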
_Phase boundaries are formed by the singular values of the projection map._ For the projection map \(\pi\), the points of its domain can be classified as regular or singular. Regular (singular) points are those where the Jacobian of the map has a non-vanishing (vanishing) determinant. Singular points can be further classified, as fold points, cusp points, or swallowtail points. This classification of the points of the domain of \(\pi\) is strongly related to the pre-image phase diagram shown in Fig. 2b: The image of the surface formed by the fold points of \(\pi\) form the fold surfaces in Fig. 2b, the images of the curves of the cusp points of \(\pi\) form the cusp lines in Fig. 2b, and the image of the single swallowtail point of \(\pi\) is the swallowtail point in Fig. 2b.
_Extension from the example to generic maps._ The above picture extends naturally to generic maps between 3D manifolds. Singular points of such maps are either swallowtail points, or cusp points forming lines ('cusp lines'), or fold points forming surfaces ('fold surfaces'), see Table 1. Furthermore, for generic maps between 3D manifolds, the local behavior of the map in any two regular [fold] [[cusp]] [[[swallowtail]]] points is equivalent, in the sense that in appropriately chosen coordinates, the map can be written in the canonical form \(f(x,y,z)=(x,y,z)\) [\(f(x,y,z)=(x^{2},y,z)\)] [[\(f(x,y,z)=(x^{3}+xy,y,z)\)]] [[[\(f(x,y,z)=(x^{4}+x^{2}y+xz,y,z)\)]]]. Furthermore, for generic maps, the structure of singular points is robust against small deformations of the map, which implies that the pre-image phase diagram is also robust against such deformations.
### Weyl phase diagrams
In this work, we focus on physical systems that are described by parameter-dependent Hamiltonians, i.e., Hermitian matrices. In particular, we assume that the number of parameters \(n_{\rm p}\) is at least 4, and the parameters are naturally grouped into two groups, of size 3 (configurational parameters) and \(m=n_{\rm p}-3\) (control parameters). We denote the configurational space as \({\rm Cf}^{3}\) and the control space as \({\rm Ct}^{m}\).
For a fixed set of the control parameters, the energy eigenvalues as functions of configurational parameters ('energy bands') might exhibit generic twofold degeneracies (Weyl points) or more exotic degeneracy patterns [34; 35]. Let us focus our attention on degeneracies between two specific bands - without loss of generality, let us choose the bands of the ground state and the first excited state. As the control parameters are varied continuously, the degeneracy points 'evolve': generically, the Weyl points of the two lowest-energy bands move in the configurational space, and for special values of the control parameters, Weyl points can merge and annihilate, or Weyl points can be created. Control parameter values where Weyl points are created or annihilated are regarded as 'phase boundaries', separating different regions ('phases') in the control space characterized by different numbers of Weyl points. We call this partitioning of the control parameter space a 'Weyl phase diagram'.
Next, we argue that the Weyl phase diagram is actually a special case of a pre-image phase diagram, introduced in the previous subsections for \(m=1,2,3\). What is the corresponding source manifold, projection map, and target manifold? The source manifold is the 'surface' \({\rm W}^{m}\subset{\rm Cf}^{3}\times{\rm Ct}^{m}\) drawn by the Weyl points in the product of the configuration space and the control space. Recall that Weyl points are isolated points (i.e., zero-dimensional objects) in the configurational space, and the product of the configuration space and the control space is \((3+m)\)-dimensional, hence the Weyl points draw an \(m\)-dimensional manifold \({\rm W}^{m}\) in the product space.
The projection map \(\pi:{\rm W}^{m}\rightarrow{\rm Ct}^{m},(k,t)\mapsto t\) is defined as the projection from the \(m\)-manifold of Weyl points on the control space. The counting function of pre-images \(N\), defined in the previous subsections, can be also defined for this projection map \(\pi\), and the corresponding pre-image phase diagram provides the Weyl phase diagram.
To conclude, we found that under generic conditions, a Weyl phase diagram of dimension \(m\) is a pre-image phase diagram of a specific projection map, and hence its geometric features are universal: the phase diagram consists of extended regions (phases) where the number of Weyl points is constant, these phases are separated by phase boundaries formed by the singular values of the projection map, and these phase-boundary points carry universal geometrical characteristics determined by their singularity class. In particular, for \(m=2\), the phase boundary consists of fold lines that may meet at cusp points, and for \(m=3\), the phase boundary consists of fold surfaces, that may meet in cusp lines, that may meet in swallowtail points. We note that the list of singularities is enriched further as \(m\) increases above 3, as exemplified in the lowest block of Table 1.
## III Swallowtail singularity in a Weyl-Josephson circuit
To demonstrate the Weyl-point singularities in a concrete physical system, we consider the Weyl Josephson circuit, originally proposed in Fig. 1 of [30]. The circuit consists of superconductor islands connected by Josephson junctions which form loops (Fig. 2a). In this setup, Weyl points are defined in a 3D configurational space (fluxes), the 3D control space consists of gate-voltage parameters, and the singularities (fold surfaces, cusp lines, and the swallowtail point) appear in the 3D Weyl phase diagram defined in the control (gate-voltage) parameter space.
### Hamiltonian
The Hamiltonian of the circuit reads
\[\hat{H}(\mathbf{\varphi},\mathbf{n}_{\rm g}) = E_{\rm C}\left(\hat{\mathbf{n}}-\mathbf{n}_{\rm g}\right)\cdot c^{-1} \left(\hat{\mathbf{n}}-\mathbf{n}_{\rm g}\right)\] \[- \sum_{\begin{subarray}{c}\alpha,\beta=0\\ \alpha<\beta\end{subarray}}^{3}E_{\rm J,\alpha\beta}\cos\left[\hat{\varphi}_ {\alpha}-\hat{\varphi}_{\beta}+\gamma_{\alpha\beta}(\varphi_{x},\varphi_{y}, \varphi_{z})\right].\]
The first term in the Hamiltonian is the charging energy term where the charging energy scale \(E_{\rm C}=(2e)^{2}/(2C_{0})\approx 77.5\) GHz is set by the capacitance scale \(C_{0}=1\) fF, and \(c=C/C_{0}\) is the dimensionless capacitance matrix defined from the capacitance matrix [36]\(C\) of the circuit. The elements of the vector \(\hat{\mathbf{n}}=(\hat{n}_{1},\hat{n}_{2},\hat{n}_{3})\) are the number operators \(\hat{n}_{\alpha}\) counting the Cooper pairs on the islands \(\alpha\in\{1,2,3\}\). The gate voltage \(V_{\rm g,\alpha}\) coupled to the \(\alpha\)th island through the capacitance \(C_{\rm g,\alpha}\) shifts the number operator in the Hamiltonian by the effective offset charge \(n_{\rm g,\alpha}=C_{\rm g,\alpha}V_{\rm g,\alpha}/(2e)\).
The second term in the Hamiltonian is the tunneling term with the Josephson energies \(E_{\rm J,\alpha\beta}\) of the junctions between island \(\alpha\) and \(\beta\), with the phase operators \(\hat{\varphi}_{i}\) canonically conjugated to the number operators \(\hat{n}_{i}\). The control angles \(\gamma_{\alpha\beta}\) are given by \(\gamma_{0\beta}=0\), \(\gamma_{12}=\varphi_{x}\), \(\gamma_{13}=-\varphi_{z}\), and \(\gamma_{23}=\varphi_{y}\) with the magnetic fluxes \(\varphi_{i}=\pi\Phi_{i}/\Phi_{0}\) of the loops. The Josephson energies and capacitances are given in Table 2.
The Hamiltonian is truncated to the 8-dimensional subspace spanned by the number operator eigenstates \(|n_{1},n_{2},n_{3}\rangle\) with \(n_{i}\in\{0,1\}\). Degeneracies between the ground state and the first excited state are investigated. The magnetic fluxes and offset charges give an \(n_{\rm p}=6\)-dimensional total parameter space divided into \(3+3\),
where we choose the magnetic fluxes to be the configurational parameters hosting the Weyl points.
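A minimal numerical sketch of this construction is given below (Python with NumPy assumed). It is not the authors' code: the assembly of the dimensionless capacitance matrix from the junction and gate capacitances, and the sign and phase conventions of the Cooper-pair transfer terms, are our assumptions; the parameter values are taken from Table 2.

```python
import itertools
import numpy as np

EC = 77.5                                   # charging-energy scale in GHz (C0 = 1 fF)
EJ = {(0, 1): 2.0, (0, 2): 4.0, (0, 3): 6.0, (1, 2): 3.0, (1, 3): 3.0, (2, 3): 6.0}
CJ = {(0, 1): 2.0, (0, 2): 1.0, (0, 3): 2.0, (1, 2): 3.0, (1, 3): 4.0, (2, 3): 3.0}
Cg = [0.1, 0.1, 0.1]                        # gate capacitances of islands 1..3 (fF)

# Dimensionless island capacitance matrix c = C / C0 (ground node 0 eliminated);
# this standard circuit-theory construction is an assumption, not quoted from the paper.
c = np.diag(Cg).astype(float)
for (a, b), cap in CJ.items():
    for i in (a, b):
        if i > 0:
            c[i - 1, i - 1] += cap
    if a > 0 and b > 0:
        c[a - 1, b - 1] -= cap
        c[b - 1, a - 1] -= cap
cinv = np.linalg.inv(c)

basis = list(itertools.product((0, 1), repeat=3))   # truncated charge basis |n1 n2 n3>
index = {s: k for k, s in enumerate(basis)}

def hamiltonian(phi, ng):
    """8x8 truncated circuit Hamiltonian for fluxes phi = (phi_x, phi_y, phi_z)."""
    phi_x, phi_y, phi_z = phi
    gamma = {(0, 1): 0.0, (0, 2): 0.0, (0, 3): 0.0,
             (1, 2): phi_x, (1, 3): -phi_z, (2, 3): phi_y}
    H = np.zeros((8, 8), dtype=complex)
    for s in basis:                                  # charging term (diagonal)
        dn = np.array(s, dtype=float) - np.asarray(ng, dtype=float)
        H[index[s], index[s]] = EC * dn @ cinv @ dn
    for (a, b), ej in EJ.items():                    # Josephson tunneling terms
        for s in basis:
            t = list(s)                              # transfer one Cooper pair b -> a
            if t[b - 1] == 0:
                continue
            t[b - 1] -= 1
            if a > 0:
                if t[a - 1] == 1:
                    continue
                t[a - 1] += 1
            amp = -0.5 * ej * np.exp(1j * gamma[(a, b)])
            H[index[tuple(t)], index[s]] += amp
            H[index[s], index[tuple(t)]] += np.conj(amp)
    return H

E = np.linalg.eigvalsh(hamiltonian((0.3, 0.1, 0.2), (0.45, 0.50, 0.55)))
print(E[1] - E[0])                                   # gap between the two lowest levels
```

Scanning \(\mathbf{\varphi}\) for closings of this gap at fixed \(\mathbf{n}_{\rm g}\) then locates the Weyl points discussed below.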
### Symmetries
The Hamiltonian has an effective time-reversal and inversion symmetry
\[H(-\mathbf{\varphi},\mathbf{n}_{\rm g}) = H^{*}(\mathbf{\varphi},\mathbf{n}_{\rm g}), \tag{2}\] \[H(-\mathbf{\varphi},1-\mathbf{n}_{\rm g}) = PH(\mathbf{\varphi},\mathbf{n}_{\rm g})P^{-1}, \tag{3}\]
with \(P\left|n_{1},n_{2},n_{3}\right\rangle=|1-n_{1},1-n_{2},1-n_{3}\rangle\). The consequence of Eq. (2) is that \(H(\mathbf{\varphi},\mathbf{n}_{\rm g})\) and \(H(-\mathbf{\varphi},\mathbf{n}_{\rm g})\) have the same spectrum, meaning that a Weyl point located at \(\mathbf{\varphi}_{\rm WP}\) has a time-reversal partner with the same chirality at \(-\mathbf{\varphi}_{\rm WP}\) for any \(\mathbf{n}_{\rm g}\). The two symmetries together imply that \(H(\mathbf{\varphi},\mathbf{1}/\mathbf{2})=PH^{*}(\mathbf{\varphi},\mathbf{1}/\mathbf{2})P^{-1}\) with \(\mathbf{1}/\mathbf{2}:=(1/2,1/2,1/2)\) for any \(\mathbf{\varphi}\), meaning that it is possible to do a constant (not depending on \(\mathbf{\varphi}\)) basis transformation so that \(UH(\mathbf{\varphi},\mathbf{1}/\mathbf{2})U^{-1}\) is a real-valued matrix. This lowers the codimension of the band crossings to 2 at the special control point \(\mathbf{n}_{\rm g}=\mathbf{1}/\mathbf{2}\), meaning that the general degeneracy pattern in the 3-dimensional configurational space is a 1-dimensional nodal loop [30; 31].
Due to the periodicity of the configurational (flux) parameter space, the total topological charge, i.e., the sum of topological charges of all the Weyl points is zero [37]. Therefore, the number of Weyl points must be even. Due to the additional conditions that (1) Weyl points come in time-reversal pairs, and (2) the two Weyl points of a time-reversed pair carry the same topological charge, the number of Weyl points must be a multiple of 4.
### Weyl points
To investigate exotic Weyl-point merging processes one needs as many Weyl points in the configurational space as possible. To achieve this we search for the Weyl points in the vicinity of the nodal-loop control parameter point \(\mathbf{n}_{\rm g,loop}=\mathbf{1}/\mathbf{2}\). This is advantageous as the nodal loop can be used as a source of Weyl points. Upon a small perturbation \(\mathbf{n}_{\rm g,loop}+\delta\mathbf{n}_{\rm g}\) the nodal loop breaks into multiple Weyl points. The perturbation \(\mathbf{n}_{\rm g,loop}+t\mathbf{e}\) in the direction \(\mathbf{e}=(-4,1,9)/\sqrt{98}\) results in 8 Weyl points (4 time-reversal symmetric Weyl point pairs) for sufficiently small \(t\). For larger \(t\) the 8-point region curves away from the straight line.
Fig. 2c-e show 2D cuts of the Weyl phase diagram which reveal the characteristic shape of a swallowtail singularity corresponding to the interaction of 4 Weyl points with alternating topological charges (their time-reversal partners are far away). In Fig. 2c, for \(n_{\rm g,3}=0.6\), the 8-point region (yellow) appears with a triangular shape with 3 different boundaries; in Fig. 2d this triangle shrinks, and in Fig. 2e it is absent. This corresponds to the 2D \((t_{2},t_{3})\) cuts of the swallowtail shown in Fig. 2b. The boundaries are fold lines, which correspond to the merger and annihilation of 2 oppositely charged Weyl points (see Fig. 2f-h). This can happen between the 2 leftmost, between the 2 middle, or between the 2 rightmost points. The mergers of the 2 leftmost and of the 2 rightmost Weyl points are independent, hence these fold lines intersect at a point where the two mergers coincide (Fig. 2i). The merger of the 2 middle points and the merger of the 2 leftmost (rightmost) points are not independent: their fold lines touch each other at a cusp point, which corresponds to the merger of the 3 leftmost (rightmost) points, see Fig. 2j. The Weyl phase diagram is actually three-dimensional with fold surfaces and cusp lines. The two cusp lines touch each other at the swallowtail point where the 4 Weyl points merge at a single point. This is illustrated in Fig. 2d, where the triangular 8-point region has almost disappeared, and in Fig. 2k, where the corresponding Weyl point configuration at the '+' marker shows 4 Weyl points close together. We found that the actual swallowtail point is at \(\mathbf{n}_{\rm g,swallowtail}=(0.418,0.481,0.735)\).
## IV Cusp and fold singularities in superconducting systems of class d
In the preceding sections, we focused on parameter-dependent \(n\times n\) Hermitian matrices with a 3D configurational space and an \(m\)-dimensional control space. Our considerations above, concerning the Weyl points of these matrices, hold for any pair of neighboring bands \((j,j+1)\), where \(1\leq j\leq n-1\). For these Hermitian matrices, the Weyl points are zero-dimensional objects in the 3D configuration space.
There are many quantum-mechanical models where the Hamiltonian is not a generic Hermitian matrix, but a constrained one. In particular, the tenfold-way classification of Altland and Zirnbauer defines Hamiltonian classes constrained by various combinations of time-reversal, particle-hole, and chiral symmetries [38]. In this section, we present our results corresponding to the Altland-Zirnbauer class D, which represents Hamiltonians (also called Bogoliubov-de Gennes Hamiltonians or BdG Hamiltonians) describing excitations in superconductors or normal-superconductor hybrid systems. A typical setup modelled by matrices of class D is a (possibly multi-terminal) Josephson junction in the presence of spin-orbit interaction and time-reversal-breaking magnetic fields, and in the absence of charging effects [39; 40; 41]. Non-interacting models of one-dimensional topological superconductors hosting Majorana zero modes also fall into class D [42].
\begin{table}
\begin{tabular}{|c|c c c c c c|} \hline \(\alpha\beta\) & 01 & 02 & 03 & 12 & 13 & 23 \\ \hline \(E_{\rm J,\alpha\beta}\) (GHz) & 2 & 4 & 6 & 3 & 3 & 6 \\ \(C_{\alpha\beta}\) (fF) & 2 & 1 & 2 & 3 & 4 & 3 \\ \hline \end{tabular}
\end{table}
Table 2: Weyl–Josephson circuit parameters used in the numerical calculations yielding Fig. 2c-k. Gate capacitances are set to \(C_{\rm g,1}=C_{\rm g,2}=C_{\rm g,3}=0.1\,\rm{fF}\).
Studying the properties of parameter-dependent class-D matrices is motivated by the intense experimental efforts on superconducting devices modelled by such matrices. However, here we focus on class D also because certain aspects of the singularity-theory analysis of their Weyl points can be visualised in a particularly straightforward manner using surface plots. To appreciate this, we first note that class D matrices have even dimension, i.e., \(n=2n_{s}\) with \(n_{s}\) being a positive integer. Furthermore, the eigenvalue spectrum of class D matrices is symmetric with respect to zero. Finally, the generic eigenvalue degeneracies between bands \(n_{s}\) and \(n_{s}+1\), which necessarily happen at zero energy, and sometimes referred to as 'parity switches', are special in the sense that they appear for _single-parameter_ families of matrices, as opposed to Weyl points of Hermitian matrices which require three parameters to be varied. In what follows, we will use the term 'zero-energy Weyl points' for parity switches of single-parameter class D matrix families.
Consider now a physical system described by a parameter-dependent class-D matrix, where the number of parameters is \(n_{\rm p}=3\), which are grouped into a single-parameter group defining the 1D configurational space and the two other parameters forming the control space of dimension \(m=2\). We might be interested in the number of zero-energy Weyl points in the configurational space, and how that number changes as the control parameters are varied. This dependence is characterized by the zero-energy Weyl phase diagram. This zero-energy Weyl phase diagram has certain universal geometric properties, which follows from Whitney's theorem describing the singularities of mappings between 2D manifolds. Namely, the zero-energy Weyl phase diagram consists of extended regions of finite area where the number of zero-energy Weyl points is constant, and phase boundaries constructed from fold lines that might meet in cusp points.
We now illustrate these universal geometric properties using a random-matrix approach. The BdG Hamiltonian
Figure 2: Swallowtail singularity in the Weyl–Josephson circuit. (a) The layout of the Weyl–Josephson circuit. The system can be tuned by changing the magnetic fluxes \(\mathbf{\varphi}\) through the loops or the voltage differences (offset charges \(\mathbf{n}_{\rm g}\)) between the superconducting islands. All other parameters, such as the Josephson energies and capacitances, are held constant. (b) Swallowtail singularity illustrated with the roots of the depressed quartic equation \(x^{4}+t_{1}x^{2}+t_{2}x+t_{3}=0\). The mergers of (real-valued) roots correspond to the characteristic self-intersecting surface in the 3D control parameter space of coefficients. (c-e) 2D cuts of the Weyl phase diagram in the control parameters, showing the number of Weyl points in the configurational parameter space and revealing a similar structure. The triangular 8-point region disappears as \(n_{\rm g3}\) is increased. The generic boundaries between the regions are fold lines, which can cross each other and also touch at cusp points. These cusp points form two cusp lines in the 3D Weyl phase diagram, which touch at the swallowtail point. The Weyl-point mergers in the configurational space corresponding to the points marked in the Weyl phase diagram are shown in panels (f-k). The points are colored by their topological charge: red (blue) points have charge +1 (-1).
depends on 3 parameters in the following way:
\[H(\alpha,\beta,\gamma) = H_{1}\cos(\alpha)\cos(\beta)\cos(\gamma) \tag{4}\] \[+ H_{2}\cos(\alpha)\cos(\beta)\sin(\gamma)\] \[+ H_{3}\cos(\alpha)\sin(\beta)\cos(\gamma)\] \[+ H_{4}\cos(\alpha)\sin(\beta)\sin(\gamma)\] \[+ H_{5}\sin(\alpha)\cos(\beta)\cos(\gamma)\] \[+ H_{6}\sin(\alpha)\cos(\beta)\sin(\gamma)\] \[+ H_{7}\sin(\alpha)\sin(\beta)\cos(\gamma)\] \[+ H_{8}\sin(\alpha)\sin(\beta)\sin(\gamma),\]
where \(H_{n}\) are random matrices with the structure
\[H_{n}=\begin{pmatrix}H_{0,n}&\Delta_{n}\\ -\Delta_{n}^{*}&-H_{0,n}^{*}\end{pmatrix}, \tag{5}\]
where \(\Delta_{n}\) is a skew-symmetric complex matrix and \(H_{0,n}\) is Hermitian. We constructed these matrices with \(\Delta_{n}=d_{n}-d_{n}^{\rm T}\) and \(H_{0,n}=h_{n}+h_{n}^{\dagger}\) where the entries are pseudo-random numbers between -1/2 and 1/2, defined via
\[\operatorname{Re}d_{n,kl} = \Big{\{}\sqrt{2}k+\sqrt{3}l+\sqrt{5}n\Big{\}}-\frac{1}{2} \tag{6}\] \[\operatorname{Im}d_{n,kl} = \Big{\{}\sqrt{6}k+\sqrt{7}l+\sqrt{10}n\Big{\}}-\frac{1}{2}\] (7) \[\operatorname{Re}h_{n,kl} = \Big{\{}\sqrt{11}k+\sqrt{13}l+\sqrt{14}n\Big{\}}-\frac{1}{2}\] (8) \[\operatorname{Im}h_{n,kl} = \Big{\{}\sqrt{15}k+\sqrt{17}l+\sqrt{19}n\Big{\}}-\frac{1}{2}, \tag{9}\]
where \(\{x\}\) denotes the fractional part of \(x\). In this example the full BdG Hamiltonian is a \(12\times 12\) matrix.
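A compact numerical sketch of this construction (added here for illustration, Python with NumPy assumed; it is not the authors' code, and the 1-based indexing of \(k\), \(l\) is an assumption) builds the eight pseudo-random blocks of Eqs. (5)-(9), assembles \(H(\alpha,\beta,\gamma)\) of Eq. (4), and checks the class-D particle-hole symmetry \(\tau_{x}H^{*}\tau_{x}=-H\) that makes the spectrum symmetric.

```python
import numpy as np

def frac(x):
    return x - np.floor(x)

def bdg_block(n, size=6):
    """Pseudo-random BdG matrix H_n of Eq. (5) with entries from Eqs. (6)-(9)."""
    k = np.arange(1, size + 1)[:, None]
    l = np.arange(1, size + 1)[None, :]
    d = (frac(np.sqrt(2) * k + np.sqrt(3) * l + np.sqrt(5) * n) - 0.5) \
        + 1j * (frac(np.sqrt(6) * k + np.sqrt(7) * l + np.sqrt(10) * n) - 0.5)
    h = (frac(np.sqrt(11) * k + np.sqrt(13) * l + np.sqrt(14) * n) - 0.5) \
        + 1j * (frac(np.sqrt(15) * k + np.sqrt(17) * l + np.sqrt(19) * n) - 0.5)
    Delta = d - d.T                      # skew-symmetric pairing block
    H0 = h + h.conj().T                  # Hermitian normal-state block
    return np.block([[H0, Delta], [-Delta.conj(), -H0.conj()]])

H_list = [bdg_block(n) for n in range(1, 9)]

def H(alpha, beta, gamma):
    """Eq. (4): trigonometric combination of the eight random BdG matrices."""
    weights = [f(alpha) * g(beta) * k(gamma)
               for f in (np.cos, np.sin)
               for g in (np.cos, np.sin)
               for k in (np.cos, np.sin)]
    return sum(w * Hn for w, Hn in zip(weights, H_list))

tau_x = np.block([[np.zeros((6, 6)), np.eye(6)], [np.eye(6), np.zeros((6, 6))]])
M = H(0.3, 1.1, -0.4)
assert np.allclose(tau_x @ M.conj() @ tau_x, -M)       # particle-hole symmetry
print(np.round(np.linalg.eigvalsh(M), 3))              # eigenvalues come in +/- pairs
```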
The BdG Hamiltonian is skew-symmetric in the Majorana basis, resulting in a symmetric spectrum. It also has a so-called Pfaffian, for which \(\operatorname{pf}(H)^{2}=\det(H)\). The Pfaffian is a polynomial in the entries of the matrix. It changes sign when two energy levels cross at zero. Therefore, zero-energy degeneracies appear with the fine-tuning of only 1 parameter. In a 3D parameter space they generally form a 2D manifold.
Fig. 3a shows the zero-energy degeneracy surface of the pseudo-random BdG Hamiltonian in the total parameter space. The figure is produced by calculating the Pfaffian on a \(100\times 100\times 100\) grid and highlighting points where the Pfaffian changes sign. We divided the total parameter space into the configurational parameter \(\gamma\) and the control parameters \((\alpha,\beta)\). For a fixed \(\alpha\) and \(\beta\), we counted the sign changes of the Pfaffian along the \(\gamma\) axis; plotting these counts as a function of \(\alpha\) and \(\beta\) provides the zero-energy Weyl phase diagram shown in Fig. 3b. We created this phase diagram using a \(200\times 200\times 200\) grid.
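Building on the previous sketch (it reuses `H` and NumPy from there), the following hedged example counts sign changes of the Pfaffian along the \(\gamma\) axis for fixed \((\alpha,\beta)\). It is not the authors' code: the Pfaffian sign is evaluated in a Majorana basis via the real Schur form of the resulting real antisymmetric matrix, and the choice of basis transformation and of the sampled \(\gamma\) range (one period, \([0,\pi)\)) are assumptions.

```python
import numpy as np
from scipy.linalg import schur

def pfaffian_sign(Hbdg):
    """Sign of the Pfaffian of the BdG matrix, evaluated in a Majorana basis."""
    m = Hbdg.shape[0] // 2
    I = np.eye(m)
    Omega = np.block([[I, I], [-1j * I, 1j * I]]) / np.sqrt(2)
    A = np.real(-1j * Omega @ Hbdg @ Omega.conj().T)      # real antisymmetric matrix
    T, Z = schur(A, output='real')                        # 2x2 blocks [[0, b], [-b, 0]]
    pf = np.linalg.det(Z) * np.prod(T[np.arange(0, 2 * m, 2), np.arange(1, 2 * m, 2)])
    return np.sign(pf)

def n_zero_energy_weyl_points(alpha, beta, n_gamma=200):
    gammas = np.linspace(0.0, np.pi, n_gamma, endpoint=False)   # one period in gamma
    signs = np.array([pfaffian_sign(H(alpha, beta, g)) for g in gammas])
    return int(np.sum(signs != np.roll(signs, -1)))             # number of sign changes

print(n_zero_energy_weyl_points(0.4, -0.7))
```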
The parameter dependence of the Hamiltonian in Eq. (4) is such that shifting any angle by \(\pi\) results in the negative of the Hamiltonian, e.g., \(H(\alpha,\beta,\gamma+\pi)=-H(\alpha,\beta,\gamma)\). Because \(H\) and \(-H\) have the same spectrum, the Weyl phase diagram is \(\pi\)-periodic, and at generic points the Weyl-point number is divisible by 4.
The zero-energy Weyl phase diagram of the BdG Hamiltonian shows a rich structure with the stable singularities of 2D manifolds: fold lines meeting in cusp points and crossing each other. Fig. 3 highlights the interval \(-1\leq\alpha,\beta\leq 1/2\) which resembles a 2D cut of the phase diagram of a swallowtail singularity. An additional angle parameter might complete the swallowtail singularity if the two cusp lines meet upon changing the new parameter. The total phase diagram is crowded with the singularities. This structure of singularities becomes more complicated upon increasing the dimension of the Hilbert space (not shown) because this also increases the number of zero-energy Weyl points in the configurational space, leading to more possible mergers between them.
## V Discussion
### When does the set of Weyl points form a manifold?
In Sec. II.4, we have argued that the set of Weyl points in the total parameter space \(\operatorname{Cf}^{3}\times\operatorname{Ct}^{m}\) forms a manifold. Based on this precondition, we highlighted and exploited a strong connection between the Weyl-point merging processes and the stable mappings between manifolds of equal dimension. We discuss this precondition further in this subsection.
The \(n\times n\) Hermitian matrices form an \(n^{2}\)-dimensional real vector space. The subset of two-fold degenerate matrices is an \((n^{2}-3)\)-dimensional (codimension-3) submanifold [34; 35]. Furthermore, the set of matrices with two-fold degeneracy between the \(i\)-th and \((i+1)\)-th
Figure 3: Fold lines and cusp points as singularities in the zero-energy Weyl phase diagram of a random BdG Hamiltonian. (a) Due to the particle-hole symmetry, zero-energy degeneracies appear with the fine-tuning of a single parameter. Therefore, zero-energy degeneracies appear as surfaces in a 3D parameter space. (b) Zero-energy Weyl phase diagram corresponding to the vertical projection of the surface on (a), i.e., obtained by counting the degeneracy points along the vertical direction. The phase diagram exhibits the generic and robust singularities in 2D: fold lines and cusp points.
eigenvalues and those with two-fold degeneracy between the \((i+1)\)-th and \((i+2)\)-th eigenvalues meet at points with a three-fold degeneracy with dimension \(n^{2}-8\). In the following we denote the two-fold degeneracy set between the \(i\)-th and \((i+1)\)-th eigenvalues by \(\Sigma\). Note that our arguments remain true for the whole two-fold degeneracy set.
The Hamiltonian of a physical system is a map \(H:\mathrm{Cf}^{3}\times\mathrm{Ct}^{m}\rightarrow\mathrm{Herm}(n)\) from the total parameter space to the space of \(n\times n\) Hermitian matrices. The set of Weyl points corresponding to the two-fold degeneracy set between the \(i\)-th and \((i+1)\)-th eigenvalues is the pre-image \(H^{-1}(\Sigma)\). According to the transversality theorem [43], a generic Hamiltonian map \(H\) is transverse to \(\Sigma\) (intuitively, 'non-tangential') and the pre-image \(H^{-1}(\Sigma)\) is a submanifold in the total parameter space of codimension \(3\).
Based on the above considerations, we can envision situations when the set of Weyl points is _not_ a manifold. For example, this is the case when the image of the Hamiltonian map is tangential to the two-fold degeneracy set \(\Sigma\), i.e., the intersection is non-generic; or if the image of the Hamiltonian map intersects a multi-fold degeneracy set. The former case might arise in case of fine tuning or symmetries, i.e., it does not arise when the mapping is generic. The latter case is also non-generic if \(n_{\mathrm{p}}<8\), e.g., in the case \(n_{\mathrm{p}}=6\) studied in Sec. III. However, for \(n_{\mathrm{p}}\geq 8\), stable intersections of the image of the Hamiltonian map and the multi-fold degeneracy sets can arise, and the whole degeneracy set is not a manifold. In this case, our argument is still valid _locally_, in a small neighbourhood of a two-fold degeneracy with a finite gap from the other levels.
Note also that for our argument a further condition must hold, namely, the projection \(\pi\) has to be generic. Even though we assumed that \(H\) is generic, \(\pi\) is not necessarily a generic map. Without providing a full analysis of this condition, we note that if \(x\mapsto H(x,t)\) is generic as a deformation of \(x\mapsto H(x,t_{0})\) for every \(t_{0}\), then the condition is satisfied.
### Not all singularities appear on Weyl phase boundaries
In Secs. II, III, and IV, we have argued and illustrated that Weyl phase diagrams are pre-image phase diagrams of mappings between manifolds of equal dimension, and that each point of a phase boundary on a Weyl phase diagram belongs to a singularity type. This result raises the following natural question: are all singularity types realised as phase boundary points of Weyl phase diagrams? No -- as we show in this subsection.
Sec. IV shows an example where the Hamiltonian has a symmetry which lowers the codimension of a two-fold degeneracy at zero energy to be \(1\). The corresponding configurational parameter space is therefore \(1\)-dimensional with \(1\)D Weyl points. Similarly, for \(\mathcal{PT}\)-symmetric Hamiltonians the codimension of a two-fold degeneracy is \(2\), thus, the configurational space is \(2\)-dimensional with \(2\)D Weyl points. We denote the codimension of the two-fold degeneracy with \(0<l\leq 3\).
The \(n_{\mathrm{p}}=l+m\)-dimensional total parameter space has an \(m\)-dimensional Weyl submanifold \(\mathrm{W}^{m}\). We defined the projection \(\pi:\mathrm{W}^{m}\rightarrow\mathrm{Ct}^{m}\) as the map that erases the first \(l\) configurational coordinates of the points of the Weyl manifold. Defining the 'total projection' \(\Pi:\mathrm{Cf}^{l}\times\mathrm{Ct}^{m}\rightarrow\mathrm{Ct}^{m}\) with the same definition \(\Pi(x,t)=t\) for the total parameter space, we get a mapping with corank \(l\), everywhere in the domain of \(\Pi\). Restricting the total projection \(\Pi\) to \(\mathrm{W}^{m}\) results in \(\pi=\Pi|_{\mathrm{W}^{m}}\). Therefore, the mapping \(\pi\) has a corank smaller than or equal to \(l\). Recall that the corank of a map \(f\) at a point \(w\) of the domain is defined as the corank of the Jacobian matrix \(\mathrm{Jac}_{w}(f)\) of \(f\) at \(w\). Clearly the corank of \(\pi\) at a point \(w\in W^{m}\) is exactly the dimension of the intersection of the \(m\)-dimensional tangent space \(T_{w}W^{m}\) of \(\mathrm{W}^{m}\) at \(w\) and the \(l\)-dimensional kernel of the Jacobian \(\mathrm{Jac}_{w}(\Pi)\) of \(\Pi\) at \(w\). Since this corank is at most \(l\), in a Weyl-point merging process only those singularities appear whose corank is less than or equal to the dimension \(l\) of the configurational parameter space.
Concrete examples of'missing singularities' are the elliptic umbilic point and the hyperbolic umbilic point, listed as the last two entries in Table 1, which cannot appear as stable features in zero-energy Weyl phase diagrams of class-D systems.
As seen from Table 1, these singularities are characteristic of generic maps between manifolds of dimension \(m=4\), hence it is plausible to search for them in zero-energy Weyl phase diagrams of class-D systems controlled by \(5\) parameters (\(l=1\) configurational, \(m=4\) control). However, as seen from the corresponding canonical forms in Table 1, the corank of these singularities is \(2\). The consideration of the previous paragraph, on the other hand, implies that the corank of the projection map \(\pi\) is at most \(l\), which is \(1\) in this case. As a consequence, umbilic points do not appear on zero-energy Weyl phase diagrams of class D systems.
## VI Conclusions
To conclude, we have argued that singularities of maps between manifolds of equal dimension naturally appear in Weyl phase diagrams of parameter-dependent Hermitian matrices, and illustrated this by numerical results revealing the swallowtail singularity in the gate-voltage control parameter space of a Weyl Josephson junction. We have also illustrated singularities (fold and cusp) on the zero-energy Weyl phase diagram of parameter-dependent class-D Hermitian matrices, which describe superconducting nanostructures. Based on our arguments, we expect that the results generalise to a broad range of systems; for example, Weyl phase diagrams representing Weyl-point creation and annihilation in electron (phonon, magnon, photon) band structures show similar universal geometrical features characterised by singularities.
###### Acknowledgements.
We thank Z. Guba for useful discussions. This research was supported by the Ministry of Culture and Innovation and the National Research, Development and Innovation Office (NKFIH) within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-0004), and by NKFIH via the OTKA Grant No. 132146. Supported by the UNKP-22-3-II-BME-6 New National Excellence Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund.
|
2309.03177 | 3D Object Positioning Using Differentiable Multimodal Learning | This article describes a multi-modal method using simulated Lidar data via
ray tracing and image pixel loss with differentiable rendering to optimize an
object's position with respect to an observer or some referential objects in a
computer graphics scene. Object position optimization is completed using
gradient descent with the loss function being influenced by both modalities.
Typical object placement optimization is done using image pixel loss with
differentiable rendering only, this work shows the use of a second modality
(Lidar) leads to faster convergence. This method of fusing sensor input
presents a potential usefulness for autonomous vehicles, as these methods can
be used to establish the locations of multiple actors in a scene. This article
also presents a method for the simulation of multiple types of data to be used
in the training of autonomous vehicles. | Sean Zanyk-McLean, Krishna Kumar, Paul Navratil | 2023-09-06T17:30:26Z | http://arxiv.org/abs/2309.03177v1 | # 3D Object Positioning Using Differentiable Multimodal Learning
###### Abstract
This article describes a multi-modal method using simulated Lidar data via ray tracing and image pixel loss with differentiable rendering to optimize an object's position with respect to an observer or some referential objects in a computer graphics scene. Object position optimization is completed using gradient descent with the loss function being influenced by both modalities. Typical object placement optimization is done using image pixel loss with differentiable rendering only; this work shows that the use of a second modality (Lidar) leads to faster convergence. This method of fusing sensor input is potentially useful for autonomous vehicles, as these methods can be used to establish the locations of multiple actors in a scene. This article also presents a method for the simulation of multiple types of data to be used in the training of autonomous vehicles.
Differentiable Rendering, Inverse Rendering, Lidar, Object Position Optimization, Gradient Descent, Sensor Fusion
## I Introduction
Differentiable rendering is an emerging technique in computer graphics that enables the calculation of gradients with respect to parameters in a 3D rendering pipeline. Recent advances in physics-based differentiable rendering allow researchers to generate realistic images by accurately representing light propagation through a scene. Differentiable rendering enables solving complex inverse rendering problems such as the optimization of 3D scene parameters via gradient descent [1].
Simultaneously, the integration of data from multiple sensor modalities, termed multi-modal sensor fusion, has captured researchers' attention due to its significance in the development of autonomous vehicles [2]. For instance, researchers fuse vision systems and Lidar (light detection and ranging) to enhance autonomous driving capabilities. Lidar is a remote sensing technology that uses laser pulses to measure distances to objects and create precise 3D representations of the surrounding environment. It works by emitting a laser beam that bounces off objects and returns to the sensor, allowing the creation of high-resolution 3D maps of the environment.
Lidar technology can be used with 3D object detection algorithms to identify and classify objects in the environment. 3D object detection algorithms analyze the Lidar data to identify the location, size, and shape of objects in the environment, such as cars or pedestrians. Lidar data can be combined with other sensor data, such as vision systems and radar, to provide a more complete and accurate picture of the environment and to help identify and track objects in real time. This technology has numerous applications, including self-driving cars, robotics, and urban planning.
We introduce a method for optimizing the placement of objects in a 3D graphics scene relative to a specified viewpoint leveraging multi-modal sensor fusion and differentiable rendering. In this inverse rendering setup, an object starts in an initial position within a 3D graphics scene. The goal is to transform the object's position to a predefined target location through optimization. The optimization hinges on an image loss comparison between the current object's rendered position (in terms of pixel values) and the target position's rendered image.
We employ differentiable rendering, which moves the object from its starting point to its target by using gradients of the rendered image relative to scene parameters. Our approach augments conventional differentiable rendering by incorporating Lidar data, allowing for 3D object detection and distance measurements from the sensor to the object. This depth sensing enhances the optimization by conveying object distances and positions relative to the viewpoint. The optimized object can be any visible element in the scene, such as a car or light. Alternatively, we can optimize for the observer location (camera) while visible objects remain fixed. The multi-modal fusion of vision and Lidar sensing facilitates precise 3D object positioning to match desired viewpoints.
## II Related Works
Our multimodal differentiable rendering method builds upon prior work on inverse graphics and sensor fusion. Mitsuba [3] and PyTorch3D [4] support gradient-based optimization of inverse rendering with ray tracing. These tools are enhanced and made possible by many advances in gradient-based methods for rendering, including Differentiable Monte Carlo Ray Tracing through Edge Sampling [5].
Differentiable rendering has been used in many ways; for example, Zakharov et al. [6] used it to predict shapes and poses from image patches. Their pipeline combines vision and geometry losses for multimodal optimization.
There is a rich literature on sensor fusion for 3D understanding. For instance, Perception-Aware Multi-Sensor Fusion for 3D Lidar Semantic Segmentation [7] fuses appearance information from RGB images and spatial-depth information from point clouds to improve semantic segmentation.
Our method integrates insights from prior work to address multimodal inverse rendering tasks. The flexibility of differentiable rendering enables joint optimization over multiple data sources and loss functions.
## III Methods
At the core of our optimization is Mitsuba, a research-oriented differentiable renderer. Mitsuba leverages automatic differentiation via Dr.Jit [8] to enable gradient-based inverse rendering. We use Mitsuba to simulate both RGB images and Lidar data from 3D scenes. The built-in optimizers allow joint training over multimodal losses. To render a scene in Mitsuba, users specify parameters including integrator, max depth, samples per pixel, and sampler type. We use path tracing with a bidirectional path tracer integrator. The number of samples per pixel is set to 16 to reduce Monte Carlo noise. The gradients from the renderer are used to iteratively refine scene parameters like camera pose, lighting, and materials using the Adam optimizer.
We use Mitsuba's Reparameterized Path Replay Backpropagation integrator [9] for differentiable rendering. This technique performs integration over high-dimensional lighting and material spaces. It provides an efficient path tracer that handles discontinuities in 3D scenes. We set the max path depth to 3 for our experiments and used 16 samples per pixel. These values balance computation time and rendering quality for our purposes. Using more samples and a higher max depth yields more photorealistic results at increased cost. We set the sampler type to independent, which draws uncorrelated samples. Experiment resources and code can be found on Github.1.
Footnote 1: [https://github.com/szanykmclean/differentiable_multimodal_learning](https://github.com/szanykmclean/differentiable_multimodal_learning)
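A minimal sketch of this rendering configuration, using the Mitsuba 3 Python API, is shown below; the scene file name is a placeholder, and the integrator, sampler, and sample-count settings simply mirror the values reported above.

```python
import mitsuba as mi

# A differentiable variant is required so that gradients can flow through the renderer.
mi.set_variant("llvm_ad_rgb")

# Reparameterized Path Replay Backpropagation integrator with a max path depth of 3.
integrator = mi.load_dict({"type": "prb_reparam", "max_depth": 3})

# Placeholder scene file: the car model on a homogeneous background.
scene = mi.load_file("car_scene.xml")

# Render with 16 independent samples per pixel to limit Monte Carlo noise.
image = mi.render(scene, integrator=integrator, spp=16)

# Differentiable scene parameters (camera pose, lighting, materials, ...).
params = mi.traverse(scene)
```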
We use a simple scene containing a car model on a homogeneous background for our experiments. This setup is motivated by autonomous driving applications, where 3D detectors are often trained to locate cars. The objective is to optimize the camera pose with respect to the stationary car object. We fix the car's position and orientation and optimize the camera location and orientation to match target renderings. This inverse rendering approach could also be applied to optimize object poses given fixed camera intrinsic and extrinsic parameters. While simple, this scene captures key challenges in multimodal inverse rendering. Optimizing camera pose requires reasoning about viewpoint, visibility, lighting, and materials. The homogeneous background enables the isolation of the car as the primary focus. More complex scenes could incorporate detailed environments and multiple objects. The differentiable rendering approach provides a principled methodology to handle complex scenarios with multiple objects, occlusion, and background clutter. Overall, this controlled setup provides a strong testbed for multifaceted inverse rendering of a central 3D object.
### _Lidar Data_
In order to generate Lidar data, we use the built-in ray-tracing functionality of the Reparameterized Path Replay Backpropagation Integrator. During rendering, the ray intersections at a depth of 0 are recorded and written to a text file. Each ray intersection is an instance of light bouncing off an object in the scene, and an \((x,y,z)\) coordinate is recorded. All the intersection points are used together to create a simple point cloud without intensity data. This point cloud effectively simulates Lidar data, a common data modality in autonomous driving. Lidar data allows for distance estimation; here, it is passed to a pre-trained 3D object detection network, which lets the system obtain distance measurements from the camera to the car.
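A minimal sketch of the point-cloud export step is shown below, assuming the depth-0 ray intersections have already been collected as (x, y, z) triples during rendering; the file name and the synthetic points are illustrative.

```python
import numpy as np

def write_lidar_pointcloud(hit_points, path="lidar_points.txt"):
    """Write depth-0 ray-intersection points to a plain-text point cloud.

    hit_points: iterable of (x, y, z) surface intersections recorded while
    rendering; intensity is not simulated, so only coordinates are stored.
    """
    points = np.asarray(hit_points, dtype=np.float32).reshape(-1, 3)
    np.savetxt(path, points, fmt="%.6f", delimiter=" ")
    return path

# Synthetic intersections standing in for the rays recorded during rendering.
demo_hits = np.random.uniform(-5.0, 5.0, size=(1024, 3))
write_lidar_pointcloud(demo_hits)
```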
### _3D Object Detection_
After the Lidar data is generated, a 3D object detection algorithm is used to generate a bounding box for the car located in the scene. The detection algorithm used is PointPillars [10] which is an encoder that utilizes PointNets to learn a representation of point clouds organized in vertical columns (pillars). The algorithm is pre-trained on the KITTI 3D object detection dataset [11], a large and commonly used autonomous vehicle dataset made available by the Intelligent Systems Lab Organization [12]. This pre-training allows the algorithm to detect objects during inference, specifically cars, buses, and pedestrians. Point Pillars inference is applied to the text file containing Lidar data in \((x,y,z)\) format generated via the previous section.
The algorithm detects a car object in the scene and generates a bounding box around it. Detection quality varies with the target camera location and the resulting Lidar point cloud, and the algorithm will not detect the car on every inference. This makes intuitive sense: a car that is far away or at an unusual location with respect to the camera produces a Lidar point cloud that may become unrecognizable to the detector. In practice, the system works well in simple scenes, and the presented method uses the bounding box with the highest confidence score for the predicted car class. This establishes the \((x,y,z)\) location of the car in the scene and the distance from the camera location to the car.
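The selection of the highest-confidence car detection can be sketched as follows; the array layout and the car class index are assumptions made for illustration and do not reproduce the PointPillars output format.

```python
import numpy as np

def select_best_car_box(boxes, scores, labels, car_label=0):
    """Keep the bounding box with the highest confidence among car detections.

    boxes: (N, 7) array of (x, y, z, dx, dy, dz, yaw) predictions.
    scores: (N,) confidence scores.  labels: (N,) predicted class indices.
    """
    car_mask = labels == car_label
    if not np.any(car_mask):
        return None                       # the detector missed the car entirely
    car_scores = np.where(car_mask, scores, -np.inf)
    return boxes[int(np.argmax(car_scores))]

# Illustrative predictions: the second detection is the most confident car.
boxes = np.array([[4.0, 1.0, 0.5, 4.5, 1.9, 1.6, 0.0],
                  [3.5, 2.0, 1.0, 4.4, 1.8, 1.5, 0.1],
                  [9.0, 7.0, 0.0, 0.8, 0.8, 1.8, 0.0]])
best = select_best_car_box(boxes, np.array([0.4, 0.9, 0.7]), np.array([0, 0, 1]))
box_center = best[:3] if best is not None else None   # (x_b, y_b, z_b) reference point
```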
### _Initial and Target Camera Locations_
The experiment's goal is position optimization, which involves moving an object from an initial position to a target
Fig. 1: Lidar data generated via Mitsuba ray-tracing. The variation in the colors of the points is used to improve the visualization of the Lidar data.
position with respect to the objects and observers in the scene. An initial and a target object (camera) location were used with the simple 3D computer graphics scene containing one car object to demonstrate the utility of multi-modal data for object position optimization. The initial camera location has \((x,y,z)\) coordinates of \((20,13,23)\) and the target camera location is \((8,5,14)\); the initial location is obtained by translating the target camera location by \((12,8,9)\) and is used at the start of the optimization loop. The values are unitless as this is a simulated computer vision scene. In the images below, one can see that the initial camera location is much further away from the car and that the car is not centered in the camera view. The translation from the target to the initial location therefore moves the camera further away from the car in the scene. The target camera location is 16.93 units away from the center of the established bounding box for the car, whereas the initial location is 32.51 units away, almost twice the target distance. This initial location was chosen to show clearly how the distance loss computed from Lidar data can improve the optimization. No rotations were applied to the target camera location in order to keep the optimization simple and allow for convergence.
### _Object Position Optimization_
In order to move the object to the target position, the system utilizes Adam [13], a first-order gradient-based optimization method, with a learning rate of 0.15. This learning rate was selected because it gave the best convergence within the desired iteration limit. The experiments use 30 iterations of gradient descent, and at each step the object's position is updated based on the gradients of the loss function. In each iteration, the transformation computed from the previous gradients is first applied to the object's position, and a new image is rendered at this position using Mitsuba. The new \((x,y,z)\) position is then compared to the car bounding-box location obtained from 3D object detection: the distance from the current object position to the car is computed and compared to the distance from the target object position to the car. A loss function compares these two distances, and this loss is combined with a pixel-wise image loss between the target image and the image rendered at the current position. The two losses together steer the object toward the target at each gradient-descent step, weighting the optimization so that it considers not only the gradients of the pixels with respect to the scene parameters but also the current distance of the object from the car, obtained from the simulated Lidar data. This helps the object avoid moving in the wrong direction during the optimization, since the camera can otherwise lose the car off-screen while searching for a lower image loss.
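Below is a minimal sketch of this optimization loop in the style of the public Mitsuba 3 pose-optimization examples. The scene file, the camera parameter key, the bounding-box center, and the way the two gradient contributions are accumulated are assumptions made for illustration; in practice the parameter key must be taken from `mi.traverse(scene)`, and the joint loss follows the definition in the next subsection.

```python
import drjit as dr
import mitsuba as mi

mi.set_variant("llvm_ad_rgb")

scene = mi.load_file("car_scene.xml")               # placeholder scene file
params = mi.traverse(scene)
cam_key = "sensor.to_world"                         # assumed key; check mi.traverse(scene)
init_to_world = mi.Transform4f(params[cam_key])     # camera pose of the target rendering

target_img = mi.render(scene, params, spp=16)       # reference image at the target pose
target_cam = mi.Point3f(8.0, 5.0, 14.0)             # target camera location
box_center = mi.Point3f(3.5, 2.0, 1.0)              # car box center from the detector (placeholder)
d_target = 16.93                                    # target camera-to-car distance
alpha = 10.0

opt = mi.ad.Adam(lr=0.15)
opt["trans"] = mi.Point3f(12.0, 8.0, 9.0)           # offset defining the initial camera pose

for it in range(30):
    # Apply the current translation to the camera pose and re-render the scene.
    params[cam_key] = mi.Transform4f.translate(opt["trans"]) @ init_to_world
    params.update()
    img = mi.render(scene, params, spp=16, seed=it)

    # Image term: pixel-wise MSE between the current and target renderings.
    loss_img = dr.mean(dr.sqr(img - target_img))

    # Lidar-informed distance term: alpha * |d_c - d_t|.
    cam_pos = target_cam + opt["trans"]
    loss_dist = alpha * dr.abs(dr.norm(cam_pos - box_center) - d_target)

    # Gradients from both terms accumulate on opt["trans"]; one Adam step follows.
    dr.backward(loss_img)
    dr.backward(loss_dist)
    opt.step()
```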
### _Joint Loss Function_
One component of the joint loss function is the distance loss, derived from the Euclidean distance formula. Two distances are calculated and compared. The first, denoted \(d_{c}\), is from the current object location \((x_{c},y_{c},z_{c})\) to the center of the bounding box \((x_{b},y_{b},z_{b})\) detected in the previous object detection stage. The second, denoted \(d_{t}\), is from the target object location \((x_{t},y_{t},z_{t})\) to the same bounding box and is computed with the same formula:
\[d_{c}=\sqrt{\left(x_{c}-x_{b}\right)^{2}+\left(y_{c}-y_{b}\right)^{2}+\left(z _{c}-z_{b}\right)^{2}}\]
Fig. 3: Initial camera location is shown in the top image. The target camera location is shown in the bottom image.
Fig. 2: PointPillars object detection using Lidar data. The bounding box and arrow show a detected car object oriented towards the direction of the arrow.
These two distances are compared using the Root Mean Squared Error (RMSE), scaled by a scalar value \(\alpha\), to obtain the loss \(L_{d}\); for a single pair of distances this reduces to a scaled absolute difference. The other component of the joint loss function is the image loss. The number of pixels in an image rendered during the optimization is denoted \(N\) and defined as \(N=l\cdot w\), where \(l\) is the length of the image and \(w\) is the width; in the experiments, \(l=200\) and \(w=300\). At each step of the optimization, the image loss is computed by comparing the currently rendered image to the target image at each corresponding pixel value and outputs a single scalar, the Mean Squared Error (MSE). In the loss function, \(x_{j}\) is the current image pixel value at index \(j\) and \(\hat{x}_{j}\) is the target image pixel value at index \(j\). The resulting image loss is denoted \(L_{i}\). Both loss components are defined below:
\[L_{d}=\alpha\cdot\sqrt{(d_{c}-d_{t})^{2}}\]
\[L_{i}=\frac{\sum_{j=1}^{N}(x_{j}-\hat{x}_{j})^{2}}{N}\]
The scalar \(\alpha\) is used to help weigh the importance of distance during the optimization. Finally, the joint loss function is defined as:
\[L=L_{i}+L_{d}\]
This loss function is used in multiple experiments to assess the usefulness of the multi-modal method and is compared against baseline optimization methods. One experiment uses the joint loss \(L\) during the entire optimization. Another experiment, shown in Fig. 4, uses a two-stage loss: the joint loss guides the optimization while the object (camera) is more than a user-defined threshold away from the target distance, after which the optimization switches to a second stage driven by the image loss alone to refine the location of the car in the image.
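A plain-NumPy sketch of these loss terms and of the two-stage switch is given below; the image shape, \(\alpha=10\), and the 2.0-unit threshold follow the experiments, while the positions and helper names are purely illustrative.

```python
import numpy as np

def distance_loss(cam_pos, box_center, d_target, alpha=10.0):
    """L_d = alpha * sqrt((d_c - d_t)^2), with d_c the current camera-to-box distance."""
    d_c = float(np.linalg.norm(np.asarray(cam_pos) - np.asarray(box_center)))
    return alpha * np.sqrt((d_c - d_target) ** 2), d_c

def image_loss(img, target_img):
    """L_i: mean squared error over all pixel values of the rendered vs. target image."""
    diff = np.asarray(img, dtype=np.float64) - np.asarray(target_img, dtype=np.float64)
    return float(np.mean(diff ** 2))

def two_stage_alpha(d_c, d_target, threshold=2.0, alpha=10.0):
    """Stage 1 uses the joint loss; once |d_c - d_t| < threshold, alpha is set to 0."""
    return 0.0 if abs(d_c - d_target) < threshold else alpha

# Illustrative 200x300 renderings and positions (the box center is a placeholder).
img, target = np.random.rand(200, 300, 3), np.random.rand(200, 300, 3)
cam, box, d_t = (20.0, 13.0, 23.0), (3.5, 2.0, 1.0), 16.93
d_c = float(np.linalg.norm(np.array(cam) - np.array(box)))
L_d, _ = distance_loss(cam, box, d_t, alpha=two_stage_alpha(d_c, d_t))
L = image_loss(img, target) + L_d          # joint loss used during gradient descent
```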
## IV Results
Experiments were conducted using the previously described methods as well as baseline methods for comparison and to establish performance. Of the four experiments with different loss functions, the two-stage loss method converges in the fewest iterations and to the best location. This experiment is described below and uses the presented method of multi-modal optimization.
### _Image Loss_
One experiment used only image loss for the inverse rendering problem. This is a common out-of-the-box method and establishes a baseline performance. The results clearly show that the optimization at first moves further away from the target object location, driven by the gradients of the image with respect to the scene parameters, which is clearly sub-optimal behavior. Towards the end of the optimization the camera moves in the correct direction, but it takes many iterations before it begins to converge.
### _Distance Loss_
Another experiment used only distance loss for the inverse rendering problem. If the translation from the target location to the initial location were simply an equally scaled
Fig. 4: Two-stage joint loss diagram.
Fig. 5: Image loss optimization. The final distance from the car bounding box to the camera was 33.7 compared to a target distance of 16.9.
move in every \((x,y,z)\) direction, then, in theory, this method would work very efficiently and be optimal. However, the results show that it is clearly sub-optimal. The optimization uses distance as the only guiding metric for the object position, so the location and pixels of the car are ignored: the object moves towards the target only to match the desired distance from the car and cannot find the target location itself.
### _Joint Loss_
To test the presented method of camera optimization, an experiment using the joint loss \(L\), which takes both image loss and distance loss into account, is presented here. Using both losses to optimize the object position leads to lower image loss and a better camera location than the previous two methods, and it also converges faster. One issue with this method is that the distance loss works well as a guide while the camera is relatively far from the target distance; once the camera is already close to the target distance, this part of the joint loss pushes the optimization out of the correct location. The distance term prevents the system from making the small \((x,y,z)\) adjustments that may temporarily increase the distance loss but would allow the image loss to find the correct location. This is evident from the car being slightly out of frame in the optimized image in Fig. 7. The value of \(\alpha\) used in this optimization is 10, chosen by experimentation to balance the two components of the loss function. The image comparison heat map shows some overlap between the initial and target car locations, but there is still room for improvement.
### _Two-Stage Joint Loss_
To solve the issues with the joint loss optimization, a new experiment was conducted using the two-stage loss. The first part of the optimization uses the joint loss. Once the camera's distance to the car is within a user-defined threshold of the target distance, \(\alpha\) is set to 0 and the optimization is then guided only by the image loss from the first experiment; before the threshold is reached, \(\alpha\) is set to 10. The threshold is also user-selected and was set to 2.0 units for this experiment, after testing values in the range of 1.0 to 5.0 and finding that this value gave the best performance. Setting the threshold too high led to slower convergence, and setting it too low led to performance similar to the joint loss method. This approach avoids the issues of using distance loss when the camera is already close to the target distance and allows for fast optimization with very low image loss and therefore a near-optimal camera location. It is clear
Fig. 6: Distance loss optimization. The final distance from the car bounding box to the camera was 17.2 compared to a target distance of 16.9.
Fig. 7: Joint loss optimization. The final distance from the car bounding box to the camera was 17.1 compared to a target distance of 16.9.
from the results that this method is the most effective for differentiable camera optimization. The image comparison heat map shown in Fig. 8 illustrates the strong performance of this method: the cars nearly perfectly overlap, achieving the best result of any of the experiments.
## V Discussion
The results in the previous section clearly show that there are advantages to using a joint loss function that considers not only image pixel values but also a Euclidean distance metric. The results are, however, highly sensitive to scene selection, parameters, and object detection quality. For instance, when testing this method with more realistic images and scenes, the object position optimization would often fail to converge at all. Non-uniform backgrounds and images make it extremely hard to find optimal solutions; we suspect that comparing pixel values against non-homogeneous backgrounds produces image losses that vary strongly and can increase significantly even when the optimization is moving towards the correct location. In other words, complicated and realistic scenes make the data very noisy and deprive the optimization of a sufficient signal.
### _Scene Selection_
The scene used in the experiments converts the target object position to the initial object position by a translation only; no rotation was applied to the initial scene. This was purposely left out to limit the complexity of the object optimization. Adding rotation as a camera parameter to optimize can cause the optimization to fail where it would otherwise have converged. Optimizing many scene parameters at once is difficult, and this is an area that could be explored further.
One important finding during experimentation is that object position optimization using only image data, and even using a multi-modal method, is more difficult in scenes with non-homogeneous backgrounds as well as scenes that lack sufficient lighting. These issues appear to stem from using the MSE of pixel data, which cannot establish useful loss values when most pixels are relatively dark or non-homogeneous and have similar values. For instance, the same car scene, given a much more realistic look with a background object and lighting, had difficulty converging during experimentation, and the optimization often lost the reference object entirely. This highlights the importance of image processing and background masking with segmentation before object position optimization. Pre-processing of images is therefore a potential area for further exploration when applying this method to realistic scenes.
### _Hyperparameter Selection_
Results are also heavily affected by the hyperparameters. User-selected hyperparameters for these experiments include a learning rate of 0.15, 16 samples per pixel, \(\alpha=10\), and a threshold of 2.0 for the two-stage loss. These hyperparameters were tuned by running many experiments and establishing baseline performance results. A further improvement related to this method would be to establish rules and metrics for choosing effective \(\alpha\) and threshold values in the two-stage method. Finding the right balance between the image and distance losses is important for good convergence: if one term is overweighted, the solution converges to a result similar to using either the image loss or the distance loss alone. One difficulty in selecting \(\alpha\) is that image loss values can vary heavily from scene to scene.
Fig. 8: Two-stage joint loss optimization. The final distance from the car bounding box to the camera was 18.4 compared to a target distance of 16.9.
### _3D Object Detection_
PointPillars was used for object detection and bounding box generation, which is a very important part of this system. Establishing accurate bounding boxes on the simulated Lidar data matters because an inaccurate localization of the car leads to inaccurate distance measurements and to convergence at a potentially incorrect location. For the purposes of the experiment, both the initial and target camera locations used the bounding-box center generated from the target camera location. This was done to avoid issues with incorrect 3D object detection when moving to the initial location. The PointPillars algorithm is heavily dependent on the point cloud data it is given: changing location and generating point clouds from the new location can cause the algorithm to miss objects, such as the car in the experiments.
The system could also work by using 3D object detection and bounding-box establishment from both locations; it would, however, be subject to more noise and to potential differences in the locations or objects detected. Furthermore, the PointPillars algorithm was pre-trained on the KITTI dataset, and it is clear from Fig. 2 that the bounding box does not perfectly enclose the car object. This is most likely due to differences between the training data and the Lidar data given at inference time. To further improve this portion of the system, more robust 3D object detection algorithms could be tested and used. Another issue with these algorithms is that they can detect multiple instances of the same object where only one exists, which was observed during testing. To offset this, the reference object was established as the detection with the highest confidence and the correct classification.
## VI Conclusion
This paper presents a novel method for performing differentiable multi-modal object position optimization. The method utilizes both image data and synthesized Lidar data to inform the gradients during the optimization and leads to better convergence to the target object position in the experiments when compared with baseline methods. This method furthers the performance of inverse rendering techniques and displays ways to fuse multiple modalities to improve performance. Applications of this technology could include autonomous driving systems and robotics. These methods could improve state estimation and scene understanding for multiple vehicles in proximity to each other, especially if optimization can be completed in a fast and computationally efficient manner on embedded devices.
|
2309.04618 | Simulation-driven engineering for the management of harmful algal and
cyanobacterial blooms | Harmful Algal and Cyanobacterial Blooms (HABs), occurring in inland and
maritime waters, pose threats to natural environments by producing toxins that
affect human and animal health. In the past, HABs have been assessed mainly by
the manual collection and subsequent analysis of water samples and occasionally
by automatic instruments that acquire information from fixed locations. These
procedures do not provide data with the desirable spatial and temporal
resolution to anticipate the formation of HABs. Hence, new tools and
technologies are needed to efficiently detect, characterize and respond to HABs
that threaten water quality. It is essential nowadays when the world's water
supply is under tremendous pressure because of climate change,
overexploitation, and pollution. This paper introduces DEVS-BLOOM, a novel
framework for real-time monitoring and management of HABs. Its purpose is to
support high-performance hazard detection with Model Based Systems Engineering
(MBSE) and Cyber-Physical Systems (CPS) infrastructure for dynamic
environments. | José L. Risco-Martín, Segundo Esteban, Jesús Chacón, Gonzalo Carazo-Barbero, Eva Besada-Portas, José A. López-Orozco | 2023-09-08T22:13:48Z | http://arxiv.org/abs/2309.04618v1 | # Simulation-driven engineering for the management of harmful algal and cyanobacterial blooms
###### Abstract
Harmful Algal and Cyanobacterial Blooms (HABs), occurring in inland and maritime waters, pose threats to natural environments by producing toxins that affect human and animal health. In the past, HABs have been assessed mainly by the manual collection and subsequent analysis of water samples and occasionally by automatic instruments that acquire information from fixed locations. These procedures do not provide data with the desirable spatial and temporal resolution to anticipate the formation of HABs. Hence, new tools and technologies are needed to efficiently detect, characterize and respond to HABs that threaten water quality. It is essential nowadays when the world's water supply is under tremendous pressure because of climate change, overexploitation, and pollution. This paper introduces DEVS-BLOOM, a novel framework for real-time monitoring and management of HABs. Its purpose is to support high-performance hazard detection with Model Based Systems Engineering (MBSE) and Cyber-Physical Systems (CPS) infrastructure for dynamic environments.
Harmful Algal and Cyanobacterial Bloom, Modeling and Simulation, Cyber-Physical System, Internet of Things, Digital Twin, Discrete Event System Specification
## 1 Introduction
Harmful Algal and Cyanobacterial Blooms (HABs) constitute an especially relevant public health hazard and ecological risk, due to their frequent production of toxic secondary metabolites. Exposure to cyanotoxins, for instance, can cause severe health effects in humans and animals, as well as significant economic losses in local communities.
HABs typically emerge in a variety of freshwater ecosystems like reservoirs, lakes, and rivers [1]. Their intensity and frequency have increased globally during the last decade, mainly due to the current vulnerability of water resources to environmental changes, such as global warming, population growth, and eutrophication. For example, in 2014, a Microcystis HAB at the water treatment plant intake for Toledo (Ohio, USA) caused the distribution of non-potable water for more than 400,000 people during multiple days [2]. The danger is not limited to the closest water environment since extracellular material from freshwater HABs has been observed in the water and the atmosphere at locations far beyond their edges.
During the last 30 years, the data needed to estimate the health of a water body and the possible existence of HABs have been obtained by specialized personnel through manual collection of water samples and subsequent analysis in the laboratory, and, in the best cases, by automatic instruments placed at fixed locations, that acquire data and, in very few cases, samples. Financial and personnel resource restrictions reduce the manual collection to the moments of the year when HABs are more likely to appear at a few geographical points and with minimal frequencies. The delay suffered by analytical results and the limited capacity to interpret the current scenario reduces the reaction (prediction, prevention, and mitigation) capability of the authorities responsible for the distribution of drinking water and its recreational uses [3]. This is critical when deploying Early-Warning Systems (EWSs), whose essential work is to collect water samples and identify the cyanobacterial cell or algae density as soon as possible. Hence, it is crucial to develop new cost-effective monitoring and early detection systems capable of predicting and anticipating when and where HABs form and produce toxins to provide support to water managers/authorities for guiding their policies and protecting the public health through the deployment of effective EWSs.
In this context, Modeling and Simulation (M&S) can be used to clarify the dynamics of HABs, as it has historically done in similar areas [4]. Numerical-based and data-driven machine learning models have been extensively used to simulate HABs in aquatic systems [5, 6]. These techniques try to reach accurate predictions through what we call _base models_. These models have been integrated into more generic software tools like the EE Modeling System (EEMS) [7]. Based on these models and tools, various countries have
attempted to build EWSs with the support of predictive systems [8].
Our vision is, however, oriented to a system of systems architecture, a more holistic and _integrative model_ that includes not only the use of the aforementioned _base models_ but also the infrastructure of the EWS. Figure 1 shows our conception of the simulation framework, tightly coupled to the represented Cyber-Physical Systems (CPS). As Figure 1 illustrates, our framework follows an Internet of Things (IoT)-based architecture through the use of Digital Twins (DTs). Water bodies are monitored in the edge layer by a set of sensors, including those onboard automated boats, hereafter called Unmanned Surface Vehicles (USVs), that continuously send data to the server at the nearest Ground Control Station (GCS) in the fog layer. There, domain experts can analyze the data, run models and tests, or plan the USV trajectories. The framework supports horizontal scalability, being able to add more water bodies with the support of a cloud layer, where authorities can compare different reports and make high-level decisions.
To simulate and operate this complex model, in this paper we propose DEVS-BLOOM, a novel M&S framework to enable real-time monitoring and hazard prediction of HABs. Our approach is based on the principles of Model Based Systems Engineering (MBSE): (i) model-based since MBSE is based on the use of models to represent and manage information about a system, (ii) system-centric, focusing on the system as a whole, (iii) iterative and incremental process, which involves the development of models over time, (iv) collaboration between stakeholders, including system engineers, domain experts, etc., (v) traceability between requirements, design, and implementation, (vi) reuse of models, components, and other artifacts to improve efficiency and reduce the risk of errors, and (vii) verification and validation to ensure that the system meets its requirements and that it operates as intended [9]. At the same time, we aim to provide high-performance real-time services, such as detecting outliers or executing complex forecasting methods. All this is achieved through the implementation of model-driven technologies and infrastructure based on the IoT and DTs paradigms. As a result, we address three main topics in the sustainable management of water resources under the umbrella of model-driven technologies: (i) provide a robust interface to design intelligent HABs management system prototypes, (ii) provide vertical scalability, modeling the whole pyramidal structure, from the sensors to the authorities, and (iii) provide horizontal scalability, being able of adding more sensors and water bodies with the support of well-grounded M&S methodologies.
The main contributions of this work can be summarized as follows:
* We present a framework where we can model the water body, the infrastructure needed to monitor and manage HABs like sensors or USVs, the computing resources needed to control that infrastructure like workstations or cloud servers, and the actions performed by the human team like operators, domain experts, or water authorities.
* The model can be simulated in virtual mode to analyze the viability of the whole system; in hybrid mode, where some components are virtual, and others like actual sensors are co-simulated to test or calibrate these sensors; or in real mode, where the framework is not a simulator but a fully operational toolkit, where all the components are real.
* The framework supports horizontal scalability, allowing us to incorporate more water bodies, or vertical scalability, allowing us to elaborate more complex models. This is possible with the parallel or distributed execution of the framework, which the internal libraries automatically provide.
DEVS-BLOOM has been developed through the Discrete Event System Specification (DEVS) [10], a well known M&S formalism. To prove the feasibility of each scenario, the framework uses formal models. It can be fed with authentic or synthetic data. Following the MBSE methodology, DEVS-BLOOM has been designed with the main objective that any virtual component is built as a DT and can be replaced by its real-world counterpart [11].
The paper is organized as follows. In the following, we introduce the related work, focused on EWSs, models of HABs behavior, USVs trajectory planning, IoT simulators and all the elements required by the proposed framework. Next, we present the architecture of our framework based on a well-known M&S formalism. Then we illustrate the simulations performed to test our hypotheses and show the results obtained under different initial conditions. Finally, we draw some conclusions and introduce future lines of research.
## 2 Related work
As stated above, HABs pose severe threats to natural environments. To properly detect, assess, and mitigate these threats in inland waters, it is essential to envision water management from the perspective of an integrative IoT-based early warning system. HAB-centric automated EWSs can effectively help to monitor and treat water bodies since, once deployed, mitigation techniques tailored to those systems can be better designed.
Current EWSs are supported by a comprehensive set of accurate _base models_ that describe the behavior of different elements, such as the dynamics of the water (due to currents and wind) and of the cyanobacteria (due to biological growth, their vertical displacements, and the water dynamics). There exists significant variability among base models. Eulerian models, for instance, have been used since 1970 to simulate eutrophication, water quality, and biogeochemical processes [12]. These models are composed of differential equations that simulate community dynamics in space. Lagrangian models introduce the possibility of adding different classes of particles with individualized properties, although conducting Lagrangian simulations with a large number of particles is a computer-intensive process [13]. Machine learning techniques can also be used to clarify the dynamics of HABs. Based on studies from 2008 to 2019, Chen _et al._ show in [14] numerous applications of machine learning models for predicting various water quality variables, such as salinity, pH, electrical conductivity, dissolved oxygen, ammonium nitrogen, etc. Finally, we may also find mechanistic or process-oriented aquatic models
based on knowledge of how target species respond to various ecosystem drivers like nutrient availability, thermal stratification, life cycle characteristics of species, etc. [15]. These models can be more appropriate than statistically based models for future predictions. However, they can be challenging because the incomplete knowledge introduced inside the models forces the incorporation of complex Bayesian networks, adding even more uncertainty to the models.
The previous base models are usually integrated inside more generic software tools with advanced Graphical User Interfaces (GUIs). For instance, EEMS [7] is a GUI that provides a broad range of pre-processing and post-processing tools to assist in developing, calibrating, and analyzing hydrodynamic, sediment-contaminant, and eutrophication models. MIKE Powered by DHI is a range of software products that enable us to accurately analyze, model and simulate any type of challenge in water environments [16]. Delft3D is a set of open source software tools that facilitates modeling subsystems like the hydrodynamic, morphodynamic, waves, water quality, or particle-based subsystems [17].
Finally, the aforementioned base models along with the GUIs are used in EWSs as forecasting tools [18, 19], helping GCS operators to make critical decisions. An example close to the authors of this paper is the Spanish Automatic Water Quality Information System, which is a network of nearly 200 automatic alert stations deployed in critical locations of the Spanish hydrographic system to (i) obtain frequent measurements of representative parameters such as water temperature, pH and dissolved oxygen; (ii) provide valuable information about the general quality of the water; and (iii) alert in real time about pollution episodes [20]. More examples of EWSs can be found in other places and settings. The Southeast Environmental Research Center Water Quality Monitoring Network, property of Florida International University, focuses on coastal monitoring of the southern tip of the Florida peninsula and includes some automatic measuring stations that are rotated between the different sampling sites [21]. United States Geological Survey's National Water Quality Monitoring Network combines data sources and techniques from 110 sites to monitor the U.S. inland waters [22]. Environment and Climate Change Canada, in collaboration with the provincial and territorial governments, runs the Freshwater Quality Monitoring and Surveillance program, which encompasses some manual and automatic monitoring networks distributed through the national territory [23].
The conception, design, and deployment of an EWS can present complex engineering and systems challenges. To properly monitor and foresee the formation of HABs, EWSs must cover large geographical areas, remain functional over long periods, and include a large variety of sensors, USVs, and data in general. Prediction of HABs also involves the use of a plethora of base models. A model-driven approach to designing such a complex and heterogeneous infrastructure would help researchers, domain experts, and water authorities to meet design requirements. It also
Figure 1: Conceptual model of the proposed framework.
would enable a model-driven control, reducing costs while increasing performance and scalability, and in general, all the benefits derived from applying a MBSE approach. There exist cases of success in other areas of research like flood detection [24], water treatment [25], or healthcare [26]. However, to our knowledge, this is the first research related to developing integrative model-driven solutions for HAB management. As mentioned above, our approach is integrative because we do not simulate only the water body but also combine the use of _base models_ with the help of models of the infrastructure like sensors, USVs, GCSs, the cloud layer, and even the operator's behavior through a simulation file, which is the main novelty with respect to other approaches in the literature.
## 3 System architecture and design
DEVS-BLOOM's model divides the HAB management system into the three classical IoT layers: edge, fog, and cloud. The _edge_ layer includes all the devices connected to the internet and can generate data. These devices can be sensors, wearables, and other smart devices deployed in the field. The edge layer collects and processes data locally and then sends it to the next layer for further processing. The _fog_ layer is an intermediate layer between the edge and the cloud. This layer includes devices with computing power and storage capabilities to perform basic data processing and analysis. The fog layer is responsible for processing data in real time and reducing the amount of data that needs to be sent to the cloud for further processing. The _cloud_ layer includes cloud servers and data centers that can store and process large amounts of data. The cloud layer performs complex data analytics and machine learning tasks that require significant computing power and storage capacity [27].
Figure 1 has already illustrated the general picture of the framework architecture. Our M&S framework is fed with data that may come from the actual water body or from a database, which can, in turn, store authentic or synthetic data. The virtual/real duality built into some components, modeled as DTs, allows DEVS-BLOOM to work in virtual, real, or hybrid modes. The framework works in virtual/simulation mode when data come entirely from the database. Real/controller mode is when data come from the real water body, with actual sensors and USVs deployed and the system fed with real data. Currently, DEVS-BLOOM is mostly used for infrastructure analysis and design. Thus, data usually come from the database; therefore, DEVS-BLOOM works in virtual/simulation mode. However, sometimes a prototype sensor or USV is tested for validation in the field, and then DEVS-BLOOM works in hybrid mode, where the virtual components are simulated and the actual ones are controlled.
To clarify the specifics of the DEVS nomenclature, we first describe the basic principles of the formalism. Next, the DEVS-BLOOM system architecture is explained.
### The DEVS formalism
Parallel DEVS is a modular and hierarchical formalism for modeling discrete event systems based on set theory [10]. It includes two types of models, atomic and coupled, that have an interface consisting of input (\(X\)) and output (\(Y\)) ports to communicate with other models. Additionally, in atomic models, every model state (\(S\)) is associated with the time advance function \(ta\), which determines the duration in which the state remains unchanged.
Once the time assigned to the state has passed, an internal transition is triggered and the corresponding function (\(\delta_{\mathrm{int}}:S\to S\)) is invoked, producing a local state change (\(\delta_{\mathrm{int}}(s)=s^{\prime}\)). At that time, the results of the model execution are spread through the output ports of the model by activating an output function (\(\lambda\)).
Furthermore, external input events (received from other models) are collected in the input ports. An external transition function (\(\delta_{\mathrm{ext}}:S\times e\times X\to S\)) specifies how to react to those inputs, using the current state (\(s\)), the elapsed time since the last event (\(e\)) and the input value (\(x\)) (\(\delta_{\mathrm{ext}}((s,e),x)=s^{\prime}\)). Parallel DEVS introduces a confluent function (\(\delta_{\mathrm{con}}((s,ta(s)),x)=s^{\prime}\)), which decides the next state in cases of collision between external and internal transitions.
Coupled models are the aggregation/composition of two or more models (atomic and/or coupled), connected by explicit couplings. This makes DEVS closed under coupling and allows us to use networks of systems as components in larger coupled models, leading to hierarchical and modular designs.
Overall, DEVS provides a framework for information modeling that has several advantages in the analysis and design of complex systems: completeness, verifiability, extensibility, and maintainability.
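To make these definitions concrete, the following is a minimal, self-contained sketch of an atomic model written in plain Python; the class layout mirrors the DEVS functions (\(ta\), \(\delta_{\mathrm{int}}\), \(\delta_{\mathrm{ext}}\), \(\delta_{\mathrm{con}}\), \(\lambda\)) but is not intended to reproduce the exact xDEVS/Python API used by DEVS-BLOOM.

```python
class PeriodicSensor:
    """Minimal DEVS-style atomic model: it emits a reading every `period` time
    units and can be re-configured by an external event carrying a new period."""

    def __init__(self, period=1.0):
        self.period = period
        self.sigma = period              # time remaining in the current state
        self.reading = 0

    def ta(self):                        # time advance function
        return self.sigma

    def delta_int(self):                 # internal transition: schedule next sample
        self.reading += 1
        self.sigma = self.period

    def delta_ext(self, e, x):           # external transition: a new period arrives
        self.period = float(x)
        self.sigma = max(self.sigma - e, 0.0)

    def delta_con(self, x):              # confluent transition: resolve collisions
        self.delta_int()
        self.delta_ext(0.0, x)

    def lambd(self):                     # output function, invoked before delta_int
        return {"sample_id": self.reading}

# A hand-rolled loop standing in for the abstract DEVS simulator.
model, t, outputs = PeriodicSensor(period=0.5), 0.0, []
for _ in range(3):
    t += model.ta()
    outputs.append((t, model.lambd()))
    model.delta_int()
```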
Once a system is described according to DEVS theory, it can be easily implemented using one of the many DEVS M&S engines available [28].
DEVS-BLOOM is implemented and executed using xDEVS, a cross-platform DEVS simulator. This library includes a set of C, C++, C#, Go, Java, Python, and Rust repositories that provide equivalent DEVS interfaces. The project's final goal is to elaborate the fastest DEVS simulation interface with the capacity to simulate models in virtual and real-time and to run simulations in sequential (single-threaded), parallel (multi-threaded), and distributed (not shared memory) architectures. In particular, DEVS-BLOOM uses the xDEVS/Python module of the project. As in xDEVS, our framework can use virtual or real-time. It can run sequential or parallel simulations without modifying a single line of code in the underlying simulation model.
### Devs-Bloom
The DEVS-BLOOM root coupled model is depicted in Figure 2. The components included in this coupled model are: sensors and USVs at the edge layer, the fog coupled model, and the cloud atomic model.
There exists one singular atomic model, labeled as _Simulation file_ in Figure 2. It is just a source that reads from a text file all the events that will be injected into the simulation process through its output port. The output and explicit connections related to this atomic model are not represented in Figure 2 for simplicity, because this atomic model is connected to all the components of DEVS-BLOOM. Each entry in the simulation file represents an input event composed of: a time mark indicating the virtual instant at which the event will be triggered, the command type associated with the event, and the arguments each
command needs. As a result, this file replicates the set of external events that could happen in a real-world scenario. As the excerpt of Figure 2 illustrates, it always begins and ends with the triggering of the initialization and finalization of the simulation experiment (see START and STOP commands). Some services can be triggered in the middle, like outliers detection or HAB prediction. The simulation file is a pure virtual element, which does not have an exact match in the real world. In the following sections, we describe the rest of the components included in DEVS-BLOOM.
#### Edge layer
The atomic models in this layer represent edge devices such as environmental sensors, cameras placed at stationary positions, and USVs. Particularly, sensors are implemented as DTs and can process data from the actual sensor or the database mentioned above. A representation of an atomic sensor model is illustrated in Figure 2, labeled as _Digital Twin_. Data from its real counterpart is received through the \(d_{i}\) input port. In this case, data is just propagated without extra delays to the corresponding output port \(e_{i}\). On the other hand, data from the database is received by the \(d_{i}\) input port. Here the virtual sensor imitates the behavior of the actual sensor, introducing corresponding delays, noise, saturation errors, aging, etc. All these optional parameters are defined through a configuration file. Like most DEVS-BLOOM components, this is a passive atomic model, which is awakened when it receives a START event from the simulation file.
Each DT transmits, at its discretion, events that follow a predefined and generic structure that encapsulates the measurements, commands, or any other relevant information. That generic event structure, represented in Figure 3, carries a timestamp with the actual event time, a source and id that respectively identify the source and the cause of the event, and a payload which contains a set of key-value pairs with the actual measurements (e.g. 'Lat': 47.0, 'Lon': -122.0, 'Depth':-0.2, 'TEM': 19.0). Finally, any time an event is generated, it is transmitted through the corresponding output port \(e_{i}\), which in this case is connected to the fog coupled model of the water body, where the data will be curated and stored in the local fog database.
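A minimal sketch of this event structure as a Python dataclass is shown below; the field names follow the description above, the payload values are the illustrative ones quoted in the text, and the timestamp and source are placeholders.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict

@dataclass
class Event:
    """Generic DEVS-BLOOM event emitted by a digital twin through its e_i port."""
    timestamp: datetime                          # actual event time
    source: str                                  # emitting device (sensor, USV, ...)
    id: str                                      # cause of the event (measurement, command, ...)
    payload: Dict[str, Any] = field(default_factory=dict)

# Example measurement reported by a USV probe.
evt = Event(
    timestamp=datetime(2021, 8, 1, 12, 0, 0),
    source="USV_1",
    id="measurement",
    payload={"Lat": 47.0, "Lon": -122.0, "Depth": -0.2, "TEM": 19.0},
)
```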
#### Fog layer
The fog layer is modeled through the _fog_ coupled model, which mainly represents the GCS associated with the water body. Here, operators and domain experts analyze data, make decisions, and take action. It is worthwhile to mention that DEVS-BLOOM can predict the bloom appearance, automatically guide USVs to the zone of interest, or take measurements. Still, all these actions must be validated or complemented by the operators. There can be as many fog coupled models as water bodies being analyzed by the same cloud infrastructure. Figure 2 represents the first of them. As the Figure shows, the fog coupled model has several input ports that receive the events sent by the DTs located at the edge layer (sensors and USVs). It also has two output ports, \(d_{1}\) and \(\hat{d_{1}}\), that send to the cloud the raw data collected by the sensors and the sensor data augmented or fixed by the outliers detection and data analysis services, respectively. To reduce visual clutter, Figure 2 does not explicitly represent the coupling relations between fog and cloud, since drawing them would be redundant and make the figure unnecessarily large. Basically, \(d_{1}\) and \(\hat{d_{1}}\) are connected through two additional external output couplings (from GCS\({}_{1}\) to Fog\({}_{1}\)) and two internal couplings (from Fog\({}_{1}\) to Cloud). The fog coupled model contains several atomic models, detailed below.
The _GCS atomic model_ represents the core of the computing infrastructure of the control station. It is usually a static workstation or laptop connected to the local network. This simplified DT receives simulation commands from the simulation file atomic model, which tell the computer when to start reading data, execute an outliers detection service, an inference over the HAB predictive models, USVs path planning, etc. When the simulation starts, sensor data are received through the \(e_{i}\) input ports and stored in the local database. These data are sent through the \(d_{1}\) fog output port, which is connected to the \(d_{1}\) cloud input port. On the other hand, when a service request is received from the simulation file, it is propagated through the output port \(req_{i}\), which is connected to the corresponding atomic model. This port is drawn in bold in Figure 2 because it represents a set of output ports. Fixed or predicted data are also stored in the local database and regularly sent through the \(\hat{d_{1}}\) output port, connected to the \(\hat{d_{1}}\) cloud input port.
The fog coupled model also has a set of atomic models in charge of executing services. They are currently part of the GCS\({}_{1}\) atomic model in the real system. Still, we have decided to implement them as external atomic models to separate the services, models, or functions that they incorporate. These atomic models receive commands from the _in_ input port and send the results through the _out_ output ports. These output ports are connected back to the GCS or the USV atomic models, controlling the navigation system of the USVs. We have currently deployed four services: one to detect and fix outliers, labeled as _Outliers services_ in Figure 2; another one to perform inference and compute the probability of HAB formation and location in the water body, labeled as _Inference service_; a third one to carry out data analysis over the database and generate reports, named _Data analysis service_; and the last one is the USVs path planner, as labeled in Figure 2, which, taking the probabilities computed by the inference service, calculates and sends the waypoints and trajectories that the USVs must follow.
#### Cloud layer
Finally, the _cloud atomic model_ is located in the cloud layer. It receives all the data from different water bodies (raw and estimated, i.e., fixed or predicted) and stores them in the central cloud database. As in the fog coupled model, the cloud atomic model can run different services but is highly scaled to handle one or several water bodies. These services include executing big data analyses involving all the data stored in the central database or running training services to update current inference models located at the fog-coupled models. In any case, these actions are always triggered by the simulation file. We have not included dedicated atomic models to run services because they are always processes installed in docker containers, i.e., they have a distributed architecture. They do not need to be encapsulated as DEVS models, i.e., the cloud layer is viewed as a centralized entity.
Figure 4 shows an instance of the root coupled model presented in Figure 2, used to monitor a water body corresponding to an area of Lake Washington.
We provide more details of each atomic model instance included in Figure 4 throughout each use case.
### Monitoring use case
The monitoring scenario is relevant for operators and domain experts in charge of the GCS and local operative decisions, who monitor the HAB state and evolution through the use of a USV. It shows how DEVS-BLOOM is used to predict the next location of the HAB and to automatically control the USV to follow that position and confirm the prediction.
In this case, the whole water body dataset is synthetic and generated with the EEMS tool, which incorporates an accurate model of Lake Washington and allows the artificial generation of HABs. As a result, DEVS-BLOOM receives EEMS input data (see Figure 4) that includes water speed, water temperature, oxygen and nitrates densities, and, for validation of our framework, algae concentration.
Additionally, as Figure 4 shows, we have included a virtual irradiance sensor, which generates synthetic irradiance data taken from PVGIS.7 Neither EEMS nor PVGIS give stochastic data, so there is no need to proceed with Monte Carlo simulations.
Footnote 7: [https://re.jrc.ec.europa.eu/prg_tools](https://re.jrc.ec.europa.eu/prg_tools)
Our scenario has at the edge layer a USV that must monitor the water and transmit data to the fog and cloud layers. As Figure 4 depicts, the USV is instrumented with several sensors and units. Some of them take data from the water body to continuously monitor the state of the bloom and feed the inference model, and others from internal component models:
* Temperature sensor: is in charge of measuring the water temperature. This signal influences the calibration of other sensors and the growth dynamics of the bloom.
* Power unit: includes solar panels, chargers, and batteries in charge of recharging the boat's batteries when it receives solar radiation. For this scenario, we have included the following base model (see the sketch after this list): \[prop=K_{p}\cdot\sqrt{e_{lat}^{2}+e_{lon}^{2}}\] \[power=K_{e}+K_{s}\cdot sun-prop\] where \(K_{p}=30\) is the propulsion constant, \(K_{e}=-0.003\) represents the electronic power consumption, \(K_{s}=0.04\) is the sun power factor, \(prop\) is the resultant propulsion, \(e_{lat}\) and \(e_{lon}\) are the latitude and longitude errors of the USV with respect to the HAB position, computed by the USV planner atomic model, \(power\) is the battery energy level, and \(sun\) is the normalized irradiance value.
* Flow meter: measures the speed and direction of the water with respect to the ship. We may infer the water's speed and direction by discounting the ship's speed.
* Positioning unit: allows us to measure the position and speed of the ship, following these two equations (also covered in the sketch after this list): \[lat_{usv}=e_{lat}+K_{2d}\cdot wfv\] \[lon_{usv}=e_{lon}+K_{2d}\cdot wfu\] where \(K_{2d}=0.01\) is the 2D USV displacement constant, and \((wfv,wfu)\) is the water speed (north and east components).
* Dissolved oxygen probe: is in charge of measuring the dissolved oxygen density in the water. If there are high levels of oxygen, there may be a bloom of algae that produces oxygen by photosynthesis.
* Nitrogen probe: measures the density of dissolved nitrates in the water. Nitrate is the main food for algae. Therefore, the inference service uses this signal to predict the bloom's growth.
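As referenced in the list above, a short sketch of the power-unit and positioning-unit base models is given below, using the constants stated there; the input values in the example step are illustrative.

```python
K_P, K_E, K_S = 30.0, -0.003, 0.04   # propulsion, electronics and sun-power constants
K_2D = 0.01                          # 2D USV displacement constant

def power_unit(e_lat, e_lon, sun):
    """Battery base model: propulsion effort drains power, solar irradiance recharges it."""
    prop = K_P * (e_lat ** 2 + e_lon ** 2) ** 0.5
    power = K_E + K_S * sun - prop
    return prop, power

def positioning_unit(e_lat, e_lon, wfv, wfu):
    """USV position driven by the tracking error and the water speed (north/east)."""
    lat_usv = e_lat + K_2D * wfv
    lon_usv = e_lon + K_2D * wfu
    return lat_usv, lon_usv

# One illustrative step: a small tracking error, full sun and a mild current.
prop, power = power_unit(e_lat=0.002, e_lon=-0.001, sun=1.0)
lat_usv, lon_usv = positioning_unit(0.002, -0.001, wfv=0.3, wfu=0.1)
```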
During the simulation, irradiance and USV sensors capture measurements and send them to the fog layer. We utilize the inference service in this layer, shown in Figure 4. It has a predictive model based on differential equations that, using water speed, temperature, coordinates, oxygen and nitrates densities, and solar irradiance, anticipates the emergence and displacement of HABs as follows:
\[\frac{dr(t)}{dt}=K_{1}\cdot photo(t)+K_{2}\cdot breath(t)-K_{3}\cdot\left(r(t)-r(0)\right)\]
\[\frac{dlat_{bloom}(t)}{dt}=K_{v}\cdot wfv(t)\]
\[\frac{dlon_{bloom}(t)}{dt}=K_{v}\cdot wfu(t)\]
\[photo(t)=sun(t)\cdot nox(t)\]
\[breath(t)=dox(t)\cdot nox(t)\]
In the previous equations, \(r\) represents the bloom density, while \(photo\) and \(breath\) represent photosynthesis and respiration, respectively. Besides, \((lat,lon)\) are the coordinates (latitude and longitude) of the position of the bloom at a given height, whereas \((wfv,wfu)\) is the water velocity at the same coordinates. \(nox\) and \(dox\) are the nitrogen and oxygen concentrations, respectively (mg/l). Regarding the constants, \(K_{1}=5.0\) and \(K_{2}=0.05\) represent the HAB growth constants, whereas \(K_{3}=0.17\) is the decay constant. \(K_{v}=0.0167\) represents the percentage of the water velocity transferred to the HAB. The values of the constants are initially obtained by training the system with the least squares method.
Then the USVs planner in Figure 4 generates track points for the USV. In this preliminary version, the planner computes the error between USV and HAB positions as follows:
\[e_{lat} = lat_{bloom}-lat_{usv}\] \[e_{lon} = lon_{bloom}-lon_{usv}\]
To close the loop, the USV navigates to the track point and retakes measurements. During the simulation, all the data is saved into the fog and cloud databases, which can be plotted and analyzed in real time. The Data Analysis Service depicted in Figure 4 can be activated to automate this process. This atomic model executes a set of functions
to create all the figures and videos of interest for the operator or the domain expert. Details about implementing these automatically generated reports can be found in [30].
In the following, we show the simulation results. Figure 5 shows the lake area where HABs are forming. The lower part of the image shows how a channel flows into the lake in a shallow area. Such areas are known as incubators because they provide ideal conditions for forming blooms and accumulations of nitrates in areas with solar radiation. The inference model is initialized near the incubator at the beginning of the day. It is very likely that the bloom is born in this area, then grows with solar radiation, moves with the water currents, and disperses throughout the rest of the lake.
Figure 6 illustrates the simulation state while tracking a HAB. As mentioned above and depicted at the bottom of Figure 4, at this stage of the project, all the measured data from the water body are from EEMS, except for the irradiance values that are taken from PVGIS since EEMS does not include these. The rest of the data (USVs battery status, bloom displacement prediction, etc.) come from our models. Next, we describe each plot in Figure 6:
* The upper left graph shows the signals measured by the USV and the irradiance sensor as a function of the time of day: sun radiation (blue), water temperature (red), and ship's electric power (black). At the time of the simulation, Figure 6 shows that the solar panels have fully charged the ship batteries.
* The lower left graph shows the map of water direction and velocity in the surface layer. The ship measures this signal at its position and reports it to the fog layer to estimate the bloom displacement. The simulator also uses the information from this map to perturb the ship dynamics.
* The top center graph shows the map of the dissolved oxygen density in the surface layer. The USV takes this measurement, and the inference model uses it to decide whether there is a bloom or not.
* The bottom middle graph shows the map of nitrate density on the surface. The inference model takes this measurement obtained by the USV to estimate the bloom growth.
* The right graph shows the HAB density map in the surface layer, the inferred bloom (red circle), and the USV position. The HAB density map is data directly taken from EEMS to validate that the inference model is correctly predicting the HAB dynamic.
The full simulation video can be found in [31].
As mentioned above, all the data used in this simulation are synthetic. Consequently, all the sensors work in virtual mode, as DTs. When a sensor must take a measurement, it searches the database (the EEMS file or the irradiance database), modifies the signal according to its technical characteristics, and generates a message with the signal value. The fog layer receives these signals to perform different calculations like the model inference and periodically uploads them to the cloud layer. Figure 7 shows the signal values recorded by all the sensors of this use case after several (virtual) days of simulation.

Figure 4: DEVS-BLOOM root coupled model of the use case.
Figure 8 shows the evolution of the HAB inference model. The first plot shows a boolean value indicating whether the bloom has been detected or not. The second plot shows the estimated bloom density. The third and fourth plots show the displacement estimation: longitude and latitude. Figure 8 shows how blooms are detected and monitored almost every day. Some of these blooms have significant densities and move around significantly, requiring dynamic monitoring.
Finally, Figure 9 depicts the status of the USV model. The first graph shows the status of the power unit. The second plot shows the velocity of the USV. The third and fourth graphs show the position: longitude and latitude. On August 30, the figure shows that the USV runs out of battery, since it has been tracking blooms at distant points for four consecutive days.
Figure 5: Lake Washington area.
Figure 6: Frame of bloom tracking simulation: (upper-left) USV measured signals, water temperature, and solar irradiance, (lower-left) water speed. (top-center) oxygen density, (bottom-middle) nitrate density, (right) HAB EEMS given density for validation, HAB prediction as a red circle and ship position as a star.
Figure 8: Bloom Inference model.
Figure 7: Sensors’ signals.
### Prediction use case
The second use case is relevant for water authorities. It consists of predicting HABs in the coming days based on weather forecasts. At the end of the day, the GCS in Figure 4 uploads all this information to the cloud layer. All the data history is available in this layer, allowing us to use the predictive model to analyze medium or long-term events.
To predict future blooms, a _Prediction Service_ atomic model has been implemented in the cloud layer. This service is responsible for predicting the occurrence of upcoming HABs and their evolution from weather forecasts. These predictions are highly dependent on local conditions, so they must be designed ad hoc. In our case, in this area of the lake, there is a source of nitrates or dissolved sediments, which is activated by rainfall. At ideal water temperatures, these dissolved sediments and the sunlight are the main precursors of HABs. From these precursors, bloom growth can be predicted. On the other hand, surface water currents can be inferred from wind forecasts, which can be used to predict the HAB displacement.
Firstly, the state of water and dissolved sediments are inferred from wind, sun, and rainfall forecasts. Figure 10 shows the results of this inference, comparing it with the results generated with EEMS. The first plot shows the rainfall forecast and the inference of dissolved sediments, which follows a simple exponential model. The second plot shows the bloom precursor signal, Sun-Nitrates, the values generated by EEMS and those inferred by the service. The third plot shows the wind forecast, and the fourth plot shows the inferred values for the water speed.
Next, the _Prediction Service_ atomic model computes the HAB state from the previous results. Figure 11 shows the final output, comparing it to the results simulated with EEMS. The plot on the left shows the HAB density generated by EEMS versus the density predicted by the atomic model. It can be seen that it correctly predicts 60% of the bloom cases. The graph on the right shows the trajectory of these HABs, accurately predicting where the bloom will move in most cases.
### Integration of real sensors and USV design
DEVS-BLOOM uses the xDEVS/Python library. xDEVS/Python can simulate models in real-time [28]. A scaling factor can be provided, transforming hours into minutes, minutes into seconds, etc. This is important when incorporating hardware in the loop to the virtual framework [32] since, for instance, the previous use case handles periods of 30 minutes, but we may want to perform tests with sensors sending data every minute. Additionally, xDEVS can interrupt the real-time simulation with the arrival of data sent by an external hardware device. To do this, the root coupled model must have an input port to inject data, and an atomic model must handle the arrival of this data through its external transition function.
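The following plain-Python sketch only illustrates the two mechanisms described above (a real-time scale factor and an external transition triggered by injected data). It deliberately does not use the actual xDEVS/Python classes or their signatures, and all names in it are our own assumptions.

```
class RealTimeSensorDT:
    # Digital twin of a sensor that normally runs in virtual mode but can be
    # interrupted by data injected from a real device through an input port.
    def __init__(self, scale=30.0):
        self.scale = scale        # e.g. 30 simulated minutes per wall-clock minute
        self.last_reading = None

    def external_transition(self, value, sim_time):
        # Called by the software handler when the hardware injects a measurement.
        self.last_reading = (sim_time, value)

    def internal_transition(self, sim_time, database):
        # Virtual mode: look the value up in the stored dataset instead.
        self.last_reading = (sim_time, database.get(sim_time))
        return self.last_reading
```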
To demonstrate the ability of DEVS-BLOOM to integrate actual sensors, we have used the xDEVS characteristics mentioned above with the irradiance sensor. Figure 12a depicts schematically how the real sensor is connected to the original atomic model shown in Figure 4. To this end, we use the input port \(d_{i}\) explained in Figure 2, adding an input \(d_{i}\) port to the root coupled model. xDEVS/Python automatically manages the communication between the sensor and DEVS-BLOOM through a software handler. The procedure is relatively straightforward since the external transition function of the sensor DT is automatically triggered when the actual sensor injects data.

Figure 9: USV model.
On the other hand, Figure 12b shows a picture of a real-time execution, where data received by the actual sensor is correctly logged by DEVS-BLOOM. This procedure also allows us to validate the virtual sensor model, tuning its parameters (delay, precision, noise, etc.) if necessary. The predictive algorithms automatically manage failures in sensors. There is an outliers detection phase before the prediction, where outliers and missing data are replaced by regression. An alarm is triggered in case of failure, and the domain expert can take action if necessary. The parallel DEVS formalism is of great help when dealing with these issues.
New sensors are acquired and tested through our framework as the project evolves. Currently, the most challenging part is the USV design. Figure 12c shows our first USV prototype with all the sensors embedded, and Figure 12d depicts one of the controlled tests to validate the navigation system. As the USV evolves, the DEVS-BLOOM virtual model does the same to match the behavior of the real counterpart [33].

Figure 10: Water and dissolved sediments state inferred from wind, sun and rainfall forecasts.

Figure 11: Bloom prediction.
As it can be seen, DEVS-BLOOM can help us to design an integral EWS considering different elements and exploring all the alternatives. Our M&S framework facilitates the elaboration of sustainable and efficient HAB management systems while saving costs with well-dimensioned instruments, USVs, and GCSs.
## Conclusion and future work
HABs induce severe threats to water quality. To properly detect, assess, and mitigate these threats to water infrastructures, it is necessary to envision well-structured and robust methods to perform continuous monitoring and to deploy efficient infrastructure and proactive strategies to reduce their adverse effects. CPS integrative M&S is crucial to reaching these objectives since it provides sustainable mechanisms to analyze algorithms and the infrastructure we may need to deploy such systems. However, current approaches do not combine the analysis of _base_ models and algorithms with the infrastructure.
In this paper, we have introduced DEVS-BLOOM, a novel M&S framework to enable real-time monitoring and hazard prediction of HABs while analyzing the effectiveness of infrastructure deployment. Our framework can automatically manage the design of advanced EWSs and propose decisions over the evolution of HABs. Our approach is based on solid principles of MBSE and the DEVS M&S formalism. Furthermore, the entire infrastructure can be modeled upon
Figure 12: Integration of real sensors and USV design.
the IoT and DT paradigms. DEVS-BLOOM allows an incremental design, assuring reliability and scalability to multiple water bodies and minimizing costs in the conception of the final installations. Additionally, all the predictive models designed in the M&S phase can be later used in the real infrastructure. Our framework also allows different resolution views, for the interpretation of a domain expert at the fog layer and the interpretation of water authorities at the cloud layer, following the IoT nomenclature.
Future work includes, on the one hand, the inclusion of new models (e.g., related to the USVs dynamics) into DEVS-BLOOM, the improvement of its visualization tools, or the validation of the current HAB models against a real scenario. On the other hand, we plan to incrementally replace all the elements in the simulated model with those in a real-world use case, complementing the virtual representation of the system introduced in this paper with its final deployment.
Finally, we want to highlight that having a scientific framework to predict HABs formation and to take management actions also provides an organizing principle for fundamental research. This framework will serve and benefit the engagement of theory with M&S foundations. Complementary HAB research on mathematical models or systems engineering can be easily integrated into our DEVS-BLOOM framework. It will improve the scientific exploitation of discoveries and support the development of new bases for forecasting future effects on water quality and other sustainable water ecological challenges such as wastewater recycling or smart agriculture.
## Acknowledgements
The authors would like to thank Mr. Giordy Alexander Andrade Aimara, who implemented the integration of actual sensors into DEVS-BLOOM as part of his master's thesis. This work has been supported by the Research Projects IA-GES-BLOOM-CM (Y2020/TCS-6420) of the Synergic program of the Comunidad Autonoma de Madrid, SMART-BLOOMS (TED2021-130123B-I00) funded by MCIN/AEI/10.1303/501100011033 and the European Union NextGenerationEU/PRTR, and INSERTION (PID2021-127648OB-C33) of the Knowledge Generation Projects program of the Spanish Ministry of Science and Innovation.
|
2301.01624 | Pattern Recognition Experiments on Mathematical Expressions | We provide the results of pattern recognition experiments on mathematical
expressions.
We give a few examples of conjectured results. None of which was thoroughly
checked for novelty. We did not attempt to prove all the relations found and
focused on their generation. | David Naccache, Ofer Yifrach-Stav | 2022-12-21T10:53:32Z | http://arxiv.org/abs/2301.01624v1 | # Pattern Recognition Experiments on Mathematical Expressions
###### Abstract
We provide the results of pattern recognition experiments on mathematical expressions.
We give a few examples of conjectured results. None of which was thoroughly checked for novelty. We did not attempt to prove all the relations found and focused on their generation.
## 1 Introduction
Pattern recognition is a process that involves identifying rules in data and matching them with particular case information. Pattern recognition can be seen as a type of machine learning, as it uses machine learning algorithms to recognize patterns in data. This process is characterized by the ability to learn from data, recognize familiar patterns, and recognize patterns even if they are partially visible.
Very schematically, there are three main types of pattern recognition heuristics: statistical pattern recognition, syntactic pattern recognition, and neural pattern recognition.
* Statistical pattern recognition involves using particular case data to learn from examples and generalize rules to new observations.
* Syntactic pattern recognition (a.k.a structural pattern recognition), involves identifying patterns based on simpler sub-patterns called primitives. For example, opcodes can be seen as primitives that connect to form programs.
* Neural pattern recognition relies on artificial neural networks, which are made up of many simple processors and their connections. These networks can learn complex nonlinear input-output relationships and adapt to data through sequential training procedures.

Most pattern recognition heuristics proceed in two steps:
* An Explorative Stage that seeks to identify patterns
* A Descriptive Stage that categorizes patterns found during exploration
In this work we provide the results of the explorative stage of syntactic pattern recognition on mathematical expressions. Given the nature of the objects we work on (conjectures) the descriptive stage is left to a human.
We give a few examples of conjectured results. None of which was thoroughly checked for novelty. We did not attempt to prove all the relations found and focused on their generation.
## 2 The Pattern Recognition Algorithm
The pattern recognition algorithm has two components called the generalizer and the identifier.
The generalizer departs from a known continued fraction or a mathematical expression (a particular case) and automatically parameterizes parts of it. The parameterized parts are target ingredients tagged by the user. For each set of particular parameter values (taken over search space), approximated values of the formula are collected for later analysis.
Target ingredients are replaced by progressions, denoted by \(\mu_{\mathbf{u}}(i)\), which can be constant, (alternating) arithmetic, geometric, harmonic or exponential depending on the parameter choices. Those are captured by the general formula:
\[\mu_{\mathbf{u}}(i)=u_{4}i^{u_{5}}+(u_{0}+iu_{1})^{u_{3}}u_{2}^{i}\]
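For concreteness, the progression \(\mu_{\mathbf{u}}\) is straightforward to evaluate programmatically. The short Python sketch below is our own illustration of the formula above; it is not part of the authors' Mathematica tooling.

```
def mu(u, i):
    # mu_u(i) = u4*i^u5 + (u0 + i*u1)^u3 * u2^i, with u = (u0, u1, u2, u3, u4, u5).
    u0, u1, u2, u3, u4, u5 = u
    return u4 * i ** u5 + (u0 + i * u1) ** u3 * u2 ** i

# Example: u = (1, 2, 1, 1, 0, 0) gives the arithmetic progression 1, 3, 5, 7, 9.
print([mu((1, 2, 1, 1, 0, 0), i) for i in range(5)])
```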
For instance, the Ramanujan Machine Project [2, 4, 5] re-discovered an already known relation involving \(e^{\pi}\). Namely, that the continued fraction defined by \(b_{n}=n^{2}+4\) and \(a_{n}=2n+1\) converges to:
\[\frac{2\left(e^{\pi}+1\right)}{e^{\pi}-1}=1+\frac{1^{2}+4}{3+\frac{2^{2}+4}{5 +\frac{3^{2}+4}{7+\frac{4^{2}+4}{9+\ddots}}}}\]
A natural tagging query of this identity for search by the user might hence be:
\[Q(\mathbf{u})=\mu_{\mathbf{u}}(0)+\frac{\mu_{\mathbf{v}}(0)}{\mu_{ \mathbf{u}}(1)+\frac{\mu_{\mathbf{v}}(1)}{\mu_{\mathbf{u}}(2)+\frac{\mu_{ \mathbf{v}}(2)}{\mu_{\mathbf{u}}(3)+\frac{\mu_{\mathbf{v}}(3)}{\mu_{\mathbf{u}}( 4)+\ddots}}}}\]
With
\[\mathbf{u}=\{\mathbb{Q},\mathbb{Q},1,1,0,0\}\ \ \text{and}\ \ \mathbf{v}=\{ \mathbb{Z},0,1,1,\mathbb{Q},\mathbb{N}\}\]
That is:
\[\mu_{\mathbf{u}}(i)=(\mathbb{Q}+i\mathbb{Q})\ \ \text{and}\ \ \mu_{\mathbf{v}}(i)=\mathbb{Q}i^{\mathbb{N}}+\mathbb{Z}\]
When this is done, the program varies the progressions' parameters over the chosen search spaces and collects sequences of resulting values. The tests that we list here are of course non limitative and many other variants can be added to the proposed heuristic.
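Each candidate \(Q(\mathbf{u})\) is evaluated numerically by truncating the continued fraction at a large depth, as the Mathematica snippets later in the paper do with `Fold`. The following Python sketch (our own illustration, relying only on mpmath) reproduces that evaluation for the \(e^{\pi}\) identity above.

```
from mpmath import mp, mpf, e, pi

mp.dps = 30

def contfrac(a, b, depth):
    # a(0) + b(1)/(a(1) + b(2)/(a(2) + ...)), truncated at `depth`, folded bottom-up.
    x = mpf(0)
    for n in range(depth, 0, -1):
        x = b(n) / (a(n) + x)
    return a(0) + x

# a_n = 2n + 1 and b_n = n^2 + 4 reproduce 2(e^pi + 1)/(e^pi - 1).
approx = contfrac(lambda n: 2 * n + 1, lambda n: n * n + 4, 20000)
print(approx, 2 * (e ** pi + 1) / (e ** pi - 1))
```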
Remark 1: Obviously, we are quickly limited by the increasing complexity due to nested loops running over the parameters of the expressions (i.e. the \(u_{i}\)s).
Remark 2: At the risk of overlooking some gold nuggets, when we explore \(\mathbb{Q}\) we start by exploring \(\mathbb{N}\) and if the search is conclusive, we refine it by increments of \(1/6\) which have the advantage of exploring units, halves and thirds at the cost of a small multiplicative factor of \(6\). If interesting results are found with increments of \(1/6\), the step is refined to \(1/30\) and then to Farey sequences.
The sequences obtained by varying those parameters are fed into the identifier for possible recognition. To detect conjectures the identifier performs a number of tests on the obtained sequences. Tests belong to two categories: morphological tests and serial tests. Morphological tests are applied to very few individual results and try to spot their characteristics. Serial tests are applied to more results and seek to discover relationships between them.
**Algebraic number identification (ANI)**: Collect 10 convergence limits \(Q_{0},Q_{1},\ldots Q_{9}\) and, using LLL [3], check if any of those \(Q_{i}\)s is the root of a small degree (\(\leq 10\)) polynomial. If so, check that RNI failed before returning true to avoid multiple alerts as rationals are also algebraic. This
is a morphological test. The degree 10 was chosen arbitrarily and can be changed at wish (provided that the precision is matched to the degree).
**Rational number identification (RNI)**: Collect 10 convergence limits \(Q_{0},Q_{1},\ldots Q_{9}\) and, using LLL, check if any of those \(Q_{i}\)s is a good approximation of a rational number having a (abnormally) small numerator and a small denominator. This is a morphological test.
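A cheap stand-in for the LLL step of the RNI test is continued-fraction rational reconstruction, available in Python as `Fraction.limit_denominator`. The sketch below is ours and only illustrates the flavour of the test; the thresholds are arbitrary.

```
from fractions import Fraction

def rni(q, max_den=10**4, tol=1e-12):
    # Flag q if it is abnormally close to a fraction with a small denominator.
    f = Fraction(q).limit_denominator(max_den)
    return f if abs(q - f) < tol else None

# 209/225 is one of the rational parts appearing in Table 4 below.
print(rni(209 / 225 + 1e-13))   # -> Fraction(209, 225)
print(rni(0.915965594177219))   # Catalan's constant: no small fraction, -> None
```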
**Constant presence identification (CPI)**: Collect 10 convergence limits \(Q_{0},Q_{1},\ldots Q_{9}\). Consider the 45 pairs \(P_{1},P_{2}\) formed from those \(Q_{i}\)s. Using LLL, check the assumption that there is at least one pair of the form:
\[P_{1}=\frac{a_{1}+b_{1}U}{c_{1}+d_{1}U}\ \ \mbox{and}\ \ P_{2}=\frac{a_{2}+b_{2}U}{c_{2}+d _{2}U}\]
Where \(U\not\in\mathbb{Q}\) and \(a_{1},b_{1},c_{1},d_{1},a_{2},b_{2},c_{2},d_{2}\in\mathbb{Z}\).
Solving for \(U\) and equating we get:
\[a_{2}b_{1}-a_{1}b_{2}+(b_{2}c_{1}-a_{2}d_{1})P_{1}+(a_{1}d_{2}-b_{1}c_{2})P_{2 }+(c_{2}d_{1}-c_{1}d_{2})P_{1}P_{2}=0\]
Hence, when called on input \(1,P_{1},P_{2},P_{1}P_{2}\), LLL will return an abnormally short vector if the coefficients are small (as is usually the case in remarkable identities). This is a morphological test.
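In practice any integer-relation routine can play the role of LLL here. The Python sketch below (our own, not the authors' code) uses mpmath's PSLQ implementation to detect a relation among \(1,P_{1},P_{2},P_{1}P_{2}\) for two values built from the same hidden constant.

```
from mpmath import mp, mpf, pslq, pi

mp.dps = 50

def cpi_pair(P1, P2, maxcoeff=10**6):
    # Look for integers (c0, c1, c2, c3) with c0 + c1*P1 + c2*P2 + c3*P1*P2 = 0.
    return pslq([mpf(1), P1, P2, P1 * P2], maxcoeff=maxcoeff, maxsteps=10**4)

# Toy usage: P1 and P2 are Moebius images of the same constant U = pi,
# so a small relation (here proportional to (-5, 5, -2, 11)) should be reported.
U = pi
P1 = (1 + 2 * U) / (3 + U)
P2 = (2 - U) / (1 + 4 * U)
print(cpi_pair(P1, P2))
```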
**Constant to exponent identification (CEI)**: Collect 10 convergence limits \(Q_{0},Q_{1},\ldots Q_{9}\). Consider the 7 quadruples \(P_{1},P_{2},P_{3},P_{4}\) formed by successive \(Q_{i}\)s.
Footnote 1: namely: \(\{0,1,2,3\}\),\(\{1,2,3,4\}\),\(\{2,3,4,5\}\),\(\{3,4,5,6\}\),\(\{4,5,6,7\}\),\(\{5,6,7,8\}\),\(\{6,7,8,9\}\)
Here we assume that at successive ranks the limits are of the form:
\[P_{k}=\frac{a_{k}+b_{k}U^{k}}{c_{k}+d_{k}U^{k}}\]
Which implies that:
\[U^{k}=\frac{a_{k}-c_{k}P_{k}}{d_{k}P_{k}-b_{k}}\]
It follows that:
\[U=\frac{(a_{k+1}-c_{k+1}P_{k+1})(d_{k}P_{k}-b_{k})}{(d_{k+1}Q_{k+1}-b_{k+1})(a _{k}-c_{k}Q_{k})}\]
\[\frac{(a_{k+3}-c_{k+3}P_{k+3})(d_{k+2}P_{k+2}-b_{k+2})}{(d_{k+3}P_{k+3}-b_{k+3 })(a_{k+2}-c_{k+2}P_{k+2})}=\frac{(a_{k+1}-c_{k+1}P_{k+1})(d_{k}P_{k}-b_{k})}{ (d_{k+1}P_{k+1}-b_{k+1})(a_{k}-c_{k}P_{k})}\]
Let:
\[S_{1}=\{P_{k},P_{k+1},P_{k+2},P_{k+3}\}\]
\[S_{2}=\{P_{k}P_{k+1},P_{k}P_{k+2},P_{k+1}P_{k+2},P_{k}P_{k+3},P_{k+1}P_{k+3},P_{k +2}P_{k+3}\}\]
\[S_{3}=\{P_{k}P_{k+1}P_{k+2},P_{k}P_{k+1}P_{k+3},P_{k}P_{k+2}P_{k+3},P_{k+1}P_{k+2 }P_{k+3}\}\]
\[S=S_{1}\cup S_{2}\cup S_{3}\cup\{1,P_{k}P_{k+1}P_{k+2}P_{k+3}\}\]
When called on input \(S\), LLL will return an abnormally short vector (as is usually the case in remarkable identities). This is a morphological test.
Remark 3: Both CPI and CEI can be generalized to detect the presence of multiple unknown constants in an expression (i.e. \(U_{1},U_{2},\ldots\)) or even the presence of common constants in different continued fractions. We did not implement this generalization. Following those tests we can compute a numerical approximation of \(U\) and attempt to look it up.
**Known constant identification (KCI)**: Let \(L\) be the following set of usual constants:
\[L=\{1,\sqrt{\pi},\pi,\pi^{2},\pi^{3},\zeta(3),\zeta(5),\zeta(7),\sqrt{e},e,e^ {2},e^{3},\phi^{2},\gamma,G,\ln 2,\ln 3,\ln 5\}\]
Collect 10 convergence limits \(Q_{0},Q_{1},\ldots Q_{9}\). Check using LLL if any of the \(Q_{i}\) is a number of the form:
\[Q_{i}\sum_{j}a_{j}L_{j}=\sum_{j}b_{j}L_{j}\ \ \mathrm{for}\ \ a_{1},a_{2},\ldots,b_{1},b_{2},\ldots\in \mathbb{Z}\]
If the solution only involves 1, a false is returned. Note that as \(L\) increases the required precision must also be increased to prevent spotting artefacts. In practice we (manually) select only a subset of \(L\) before running the KCI test according to the nature of the constants appearing in the particular case. Note that KCI and CPI can have overlapping responses.
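Again PSLQ can stand in for LLL. The sketch below (ours, with an arbitrarily chosen subset of \(L\)) looks for an integer relation among \(QL_{1},\ldots,QL_{k},L_{1},\ldots,L_{k}\); as a toy input we use \(Q=6\ln 2/\pi\), the limit appearing in the by-product identity of Section 3.

```
from mpmath import mp, mpf, pslq, pi, e, catalan, log

mp.dps = 50

# A small subset of the constant list L used by the KCI test.
L = [mpf(1), pi, e, catalan, log(2)]

def kci(Q, maxcoeff=10**3):
    # An integer relation among [Q*L_1, ..., Q*L_k, L_1, ..., L_k] encodes
    # Q * sum_j a_j L_j = sum_j b_j L_j.
    vec = [Q * c for c in L] + list(L)
    return pslq(vec, maxcoeff=maxcoeff, maxsteps=10**4)

print(kci(6 * log(2) / pi))   # should encode the relation Q*pi - 6*log(2) = 0
```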
**Rational fraction progression (RFP)**: In this test we seek to see if when all \(u_{i}\) except one (say \(\bar{u}\)) are kept constant, the continued fraction's limit \(Q(\bar{u})\) is a ratio of two polynomials in \(\bar{u}\) with integer coefficients. This is done by a non linear model fit. The fit residuals serve as a measure of the verdict's likelihood. This is a serial test.
**Exponential function progression (EFP)**: In this test we seek to see if when all \(u_{i}\) except one (say \(\bar{u}\)) are kept constant, the continued fraction's limit \(Q(\bar{u})\) is a function of the form \(ba^{\bar{u}}\) with rational coefficients. This is done by a non linear model fit and rationality detection on \(a,b\). The fit residuals serve as a measure of the verdict's likelihood. If \(ab=0\) return false to avoid reporting the same result as the RFP. This is a serial test.
**Inverse exponential progression (IEP)**: In this test we seek to see if when all \(u_{i}\) except one (say \(\bar{u}\)) are kept constant, the continued fraction's limit \(Q(\bar{u})\) is a function of the form \(ba^{1/\bar{u}}\) with rational coefficients. This is done by a non linear model fit and rationality detection on \(a,b\). The fit residuals serve as a measure of the verdict's likelihood. If \(ab=0\) return false to avoid reporting the same result as the RFP. This is a serial test.
**Power plus constant progression (PCP)**: In this test we seek to see if when all \(u_{i}\) except one (say \(\bar{u}\)) are kept constant, the continued fraction's limit \(Q(\bar{u})\) is a function of the form \(b\bar{u}^{a}+c\) with rational coefficients. This is done by a non linear model fit and rationality detection on \(a,b,c\). The fit residuals serve as a measure of the verdict's likelihood. If \(b=0\) return false to avoid reporting the same result as the RFP. This is a serial test.
**Root plus constant progression (RCP)**: In this test we seek to see if when all \(u_{i}\) except one (say \(\bar{u}\)) are kept constant, the continued fraction's limit \(Q(\bar{u})\) is a function of the form \(b\sqrt[a]{\bar{u}}+c\) with rational coefficients. This is done by a non linear model fit and rationality detection on \(a,b,c\). The fit residuals serve as a measure of the verdict's likelihood. If \(ab=0\) return false to avoid reporting the same result as the RFP. This is a serial test.
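The serial tests are ordinary least-squares model fits followed by a rationality check on the fitted parameters. A minimal Python sketch of the EFP test, using SciPy's `curve_fit` as the fitting engine (our choice, not the authors'), could look as follows.

```
import numpy as np
from scipy.optimize import curve_fit
from fractions import Fraction

def efp(us, qs, max_den=100):
    # Fit Q(u) ~ b * a**u and report (a, b) rounded to nearby small rationals.
    popt, _ = curve_fit(lambda u, a, b: b * np.power(a, u), us, qs, p0=(1.0, 1.0))
    a, b = (float(x) for x in popt)
    return Fraction(a).limit_denominator(max_den), Fraction(b).limit_denominator(max_den)

# Toy usage on data generated from Q(u) = (3/2) * 2**u.
us = np.arange(1, 8, dtype=float)
print(efp(us, 1.5 * 2.0 ** us))   # -> (Fraction(2, 1), Fraction(3, 2))
```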
## 3 Continued Fractions Converging to \(2u(e^{u\pi}+1)/(e^{u\pi}-1)\)
It appears that the relation:
\[\frac{2\left(e^{\pi}+1\right)}{e^{\pi}-1}=1+\frac{1^{2}+4}{3+\frac{2^{2}+4}{5+ \frac{3^{2}+4}{7+\frac{4^{2}+4}{9+\ddots}}}}\]
is the first in an infinite family: \[\frac{2u\left(e^{u\pi}+1\right)}{e^{u\pi}-1}=1+\frac{1^{2}+4u^{2}}{3+\frac{2^{2}+4u^{2}}{5+\frac{3^{2}+4u^{2}}{7+\frac{4^{2}+4u^{2}}{9+\ddots}}}}\]
Indeed, (RCP) linear variations in \(u\) cause identifiable \(O(\sqrt{u})\) variations in the limit. This is because very quickly:
\[\lim_{u\rightarrow\infty}\frac{e^{u\pi}+1}{e^{u\pi}-1}=1\]
This has the somewhat adverse effect of making the RNI positive very quickly as well.
The final form is detected thanks to the CEI test.
By-product:Because this holds for \(u\in\mathbb{C}^{*}\), we get a few seemingly "mysterious" corollary identities such as:
\[\frac{2\left(e+1\right)}{\pi(e-1)}=1+\frac{1^{2}+4/\pi^{2}}{3+\frac{2^{2}+4/ \pi^{2}}{5+\frac{3^{2}+4/\pi^{2}}{7+\frac{4^{2}+4/\pi^{2}}{9+\ddots}}}}\]
\[\frac{6\ln 2}{\pi}=1+\frac{1^{2}+4\ln^{2}2/\pi^{2}}{3+\frac{2^{2}+4\ln^{2}2/ \pi^{2}}{5+\frac{3^{2}+4\ln^{2}2/\pi^{2}}{7+\frac{4^{2}+4\ln^{2}2/ \pi^{2}}{9+\ddots}}}}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline ANI & RNI & CPI & CEI & KCI & RFP & EFP & IEP & PCP & RCP \\ \hline \hline ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ \\ \hline \end{tabular}
\end{table}
Table 1: Test Results
## Implementation
```
f[x_, {m_, d_}] := m/(d + x);
For[t = 0, t <= 5,
 den = Table[2 n + 1, {n, 1, 20000}];
 num = Table[n^2 + (2 t)^2, {n, 1, 20000}];
 r = 1 + (Fold[f, Last@num/Last@den, Reverse@Most@Transpose@{num, den}]);
 e = 2 t (1 + (E^Pi)^t)/((E^Pi)^t - 1);
 Print[{e, 2 n + 1, n^2 + (2 t)^2, N[{r, e}, 20]}];
 t += 1/2];
```
## 4 Continued Fractions Converging to Polynomial Roots
It is very well known that:
\[\frac{\sqrt{5}-1}{2}=\frac{1}{1}+\frac{1}{1}+\frac{1}{1}+\frac{1}{1}+\frac{1}{1 }+\frac{1}{1}+\frac{1}{1}+\dots\]
We tag:
Footnote 3: Adding a 1+ for convenience, which does not change anything about the infinite convergence.
\[Q(\mathbf{u})=1+\frac{\mu_{\mathbf{u}}(0)}{\mu_{\mathbf{u}}(0)}+\frac{\mu_{ \mathbf{u}}(0)}{\mu_{\mathbf{u}}(0)}+\frac{\mu_{\mathbf{u}}(0)}{\mu_{\mathbf{u }}(0)}+\frac{\mu_{\mathbf{u}}(0)}{\mu_{\mathbf{u}}(0)}+\frac{\mu_{\mathbf{u}}( 0)}{\mu_{\mathbf{u}}(0)}+\frac{\mu_{\mathbf{u}}(0)}{\mu_{\mathbf{u}}(0)}+ \frac{\mu_{\mathbf{u}}(0)}{\mu_{\mathbf{u}}(0)}+\frac{\mu_{\mathbf{u}}(0)}{ \mu_{\mathbf{u}}(0)}+\dots\]
With:
\[\mathbf{u}=\{\mathbb{Q},0,1,1,0,0\}\Rightarrow\mu_{\mathbf{u}}(i)=\mathbb{Q}\]
It appears that for \(u\in\mathbb{Q}/[-4,0]\) LLL identifies that the limit is a root of a second degree polynomial, namely:
\[Q(u)=1+\frac{u}{u}+\frac{u}{u}+\frac{u}{u}+\frac{u}{u}+\frac{u}{u}+\frac{u}{u }+\frac{u}{u}+\dots\]
\[Q(u)^{2}+u(Q(u)-1)=0\]
Which is trivial to prove by pushing the \(u\) into the continued fraction.
The CPI is positive because for \(u=1\) and \(u=5\) the respective values of \(Q(u)\) comprise the common value \(\sqrt{5}\).
## 5 Continued Fractions Converging to \(e^{2/\kappa}\)
The following relations are well-known:
Footnote 4: [https://link.springer.com/content/pdf/bbm:978-94-91216-37-4/1.pdf](https://link.springer.com/content/pdf/bbm:978-94-91216-37-4/1.pdf)
\[e=2+\frac{1}{1}+\frac{1}{2}+\frac{1}{1}+\frac{1}{1}+\frac{1}{4}+\frac{1}{1}+\frac {1}{1}+\frac{1}{6}+\dots\]
\[\sqrt{e}=1+\frac{1}{1}+\frac{1}{1}+\frac{1}{5}+\frac{1}{1}+\frac{1}{1}+\frac{1 }{9}+\frac{1}{1}+\frac{1}{1}+\frac{1}{13}+\dots\]
\[\sqrt[3]{e}=1+\frac{1}{2}+\frac{1}{1}+\frac{1}{1}+\frac{1}{8}+\frac{1}{1}+ \frac{1}{1}+\frac{1}{14}+\frac{1}{1}+\frac{1}{1}+\frac{1}{20}+\dots\]
We hence tag the ones as constants, the progression as arithmetic and let the algorithm monitor the evolution of the limits.
Let \(b_{n}=1\). Define \(\mu(u)=\kappa(u+1/2)-1\) for \(\kappa\in\mathbb{R}\) and:
\[a_{n}=\begin{cases}\mu(n/3)=\frac{\kappa(2n+3)}{6}-1&\text{ if }n\bmod 3\equiv 0 \\ 1&\text{ otherwise.}\end{cases}\]
In other words, \(a_{n}\) is the sequence:
\[a_{n}=\{\mu(0),1,1,\mu(1),1,1,\mu(2),1,1,\mu(3),1,1,\mu(4),1,1,\cdots\}\]
Then we detect that the continued fraction generated by \(a_{n},b_{n}\) converges to \(e^{2/\kappa}\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline ANI & RNI & CPI & CEI & KCI & RFP & EFP & IEP & PCP & RCP \\ \hline \hline ✓ & ✗ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline \end{tabular}
\end{table}
Table 2: Test Results
The CEI is positive because, for instance, \((e^{2/\kappa})^{2}=e^{2/\kappa^{\prime}}\) implies that \(\kappa^{\prime}=\kappa/2\), which is satisfied for several pairs of integer values.
**Implementation**
```
1f[x_,{m_,d_}]:=m/(d+x);
2For[k=-10,k<=10,
3phi=Table[kn+k/2-1,{n,0,2000-1}];
4num=Table[1,{n,1,2000}];
5den=Take[
6Flatten[Table[{phi[[i]],{1,1}},{i,1,Floor[2000/3]+1}]],{1,
72000}];
8r=1+(Fold[f,Last@num/Last@den,Reverse@Most@Transpose@{num,den}]);
9v=E^(2/k);
10Print[{k,v,N[{r,v},20]}];
11k+=1/2];
```
## 6 Continued Fractions Involving Catalan's Constant
It is well known that:
\[2G=2-\frac{1^{2}}{3}+\frac{2^{2}}{1}+\frac{2^{2}}{3}+\frac{4^{2}}{1}+\frac{4^{2} }{3}+\frac{6^{2}}{1}+\frac{6^{2}}{3}+\frac{8^{2}}{1}+\frac{8^{2}}{3}+\ldots\]
We define:
\[\Delta(u,v)=\frac{1}{2v}\times\left(\frac{1^{2}}{u}+\frac{2^{2}}{v}+\frac{2^{2 }}{u}+\frac{4^{2}}{v}+\frac{4^{2}}{u}+\frac{6^{2}}{v}+\frac{6^{2}}{u}+\frac{8 ^{2}}{v}+\frac{8^{2}}{u}+\ldots\right)\]
For all the following we observe that \(\Delta(u,v)=\Delta(v,u)\).
### For \(u=1\)
An exploration for \(\mathbf{u}=\{0,\mathbb{N},\mathbb{N},\mathbb{N},\mathbb{Z},0\}\) reveals that for \(u_{0}=0,u_{1}=2,u_{2}=1,u_{3}=2,u_{4}=-1,u_{5}=0\) we get identities when \(v=4i^{2}-1\) with the convergence values given in Table 4:
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline ANI & RNI & CPI & CEI & KCI & RFP & EFP & IEP & PCP & RCP \\ \hline \hline ✗ & ✗ & ✗ & ✓ & ✗ & ✗ & ✓ & ✗ & ✗ \\ \hline \end{tabular}
\end{table}
Table 3: Test Results
Where the general formula for \(i>1\) is:
\[\Delta(1,4i^{2}-1)=(-1)^{i+1}\left(\sum_{k=0}^{i-1}\frac{(-1)^{k}}{(2k+1)^{2}}-G\right)\]
_Remark 4_.: Note that the denominators of the numbers:
\[\eta(i)=\sum_{k=0}^{i-1}\frac{(-1)^{k}}{(2k+1)^{2}}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(u\) & \(i\) & \(v=4i^{2}-1\) & \(\Delta(u,4i^{2}-1)=\Delta(1,4i^{2}-1)\) \\ \hline \hline
1 & 0 & -1 & \(1-G\) \\ \hline
1 & 1 & 3 & \(-8/9+G\) \\ \hline
1 & 2 & 15 & \(209/225-G\) \\ \hline
1 & 3 & 35 & \(-10016/11025+G\) \\ \hline
1 & 4 & 63 & \(91369/99225-G\) \\ \hline
1 & 5 & 99 & \(-10956424/12006225+G\) \\ \hline
1 & 6 & 143 & \(1863641881/2029052025-G\) \\ \hline \end{tabular}
\end{table}
Table 4: The first convergence values for \(u=1\)
are interesting in their own right. At first sight they might seem to be perfect squares but in reality some may contain very small prime factors to an odd power.
### For \(u=3\)
The exploration in this section is interesting. It was done manually but we would have never had the idea to probe in that specific direction without the insight for the case \(u=1\) produced in the previous section.
The sequence \(f(i)\) is nearly the absolute value of the OEIS sequence A006309:
Footnote 5: [https://oeis.org/A006309](https://oeis.org/A006309).
\[1,5,21,33,65,85,133,161,261,341,481,533,645,705,901,{\tt 12803},1281,\] \[1541,1633,1825,{\tt 14615},{\tt 11537},2581,3201,3333\ldots\]
An unexplained phenomenon occurs for the "abnormally larger" OEIS sequence A006309 values 12803, 14615, 11537, which remain unmatched by any \(\eta(i)\) value. We do not have an explanation for this phenomenon, which requires further research.
#### Implementation
The following implementation was purposely left unoptimized for the sake of clarity. We start by generating the target values for \(u=3\) and store them in an array. Then we re-generate the values for \(u=1\) and match the array's contents.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(u\) & \(i\) & \(f(i)\) & \(\Delta(3,f(i))\) \\ \hline \hline \(3\) & \(0\) & \(1\) & \(\Delta(1,\,-1)\) \\ \hline \(3\) & \(1\) & \(5\) & \(\Delta(1,\,\,\,\,3)\) \\ \hline \(3\) & \(2\) & \(21\) & \(\Delta(1,\,\,\,35)\) \\ \hline \(3\) & \(3\) & \(33\) & \(\Delta(1,\,\,\,63)\) \\ \hline \(3\) & \(4\) & \(65\) & \(\Delta(1,143)\) \\ \hline \(3\) & \(5\) & \(85\) & \(\Delta(1,255)\) \\ \hline \end{tabular}
\end{table}
Table 6: The first convergence values for \(u=3\)
```
(* Generate the target values Delta(3, f(i)) and store them in t. *)
AbsA06309 =
  Abs[{1, 5, -21, 33, -65, 85, -133, 161, 261, -341, -481, 533, -645,
    705, 901, -12803, -1281, -1541, 1633, -1825}];
t = {};
f[x_, {m_, d_}] := m/(d + x);
For[i = 1, i <= Length[AbsA06309],
 {u, v} = {3, AbsA06309[[i]]};
 num = Take[
   Prepend[Flatten[Table[{(2 n)^2, (2 n)^2}, {n, 1, 40000}]], 1],
   40000];
 den = Flatten[Table[{u, v}, {n, 1, 40000/2}]];
 r = Fold[f, Last@num/Last@den, Reverse@Most@Transpose@{num, den}]/2/v;
 AppendTo[t, {AbsA06309[[i]], N[r, 30]}];
 i++];

(* Match each stored value against Delta(1, 4 i^2 - 1). *)
For[j = 1, j <= Length[AbsA06309],
 If[t[[j, 1]] == 12803,
  Print["Exception, the value 12803 is skipped."],
  For[i = 1, i <= 1000000,
   {u, v} = {1, 4 i^2 - 1};
   den = Flatten[Table[{u, v}, {n, 1, 40000/2}]];
   r = Fold[f, Last@num/Last@den, Reverse@Most@Transpose@{num, den}]/2/v;
   val = (-1)^(i + 1) (Sum[(-1)^k/(2 k + 1)^2, {k, 0, i - 1}] - Catalan);
   If[Abs[t[[j, 2]] - r] < 10^(-6),
    Print[{i, N[r, 30], N[t[[j, 2]], 30]}, " Entry ", j, ": ", val,
     " matched with Delta[3,", t[[j, 1]], "]"];
    i = Infinity];
   i++]];
 j++];
```
### Subsequent \(u\) values.
Table 7 provides some additional examples for various \(u,v\) combinations.
### Variations in the numerator.
Let, for instance, \((u,v)=(1,3)\). Removing the \(1/(2v)\) factor in \(\Delta\) and replacing the \((2n)^{2}\) by \((n-i)^{2}\) we get convergence to:
\[1,\frac{4}{5},\frac{31}{51},\frac{16}{33},\frac{355}{883},\frac{11524}{33599}, \frac{171887}{575075},\frac{10147688}{3832636},\ldots\]
The limits are quickly reached after a constant number of terms in the continued fraction.
## 7 Generalized Cloitre Series
In an unpublished note [1], Benoit Cloitre gives a beautiful BBP formula for \(\pi^{2}\) based on the identity:
\[\sum_{k=1}^{\infty}\frac{\cos(ik\pi)\left(2\cos(j\pi)\right)^{k}}{k^{2}}=(\ell \pi)^{2}\]
Here are some \(i,j,\ell\) combinations detected automatically:
A simple rule that allows generating many identities consists in fixing a fractional step \(1/u\), letting \(i=\kappa/u\) for \(\pi/3\leq i\leq 2\pi/3\) and calculating the limit for \(\{i,j\}=\{\kappa u,2-\kappa u\}\) (e.g. Table 8). However, limits for which \(i+j\neq 2\) exist as well (e.g. Table 9).
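The identity is easy to probe numerically because \(|2\cos(j\pi)|<1\) in the relevant range, so the series converges geometrically. The following Python sketch (ours, based on mpmath) checks one row of Table 8.

```
from mpmath import mp, mpf, cos, pi

mp.dps = 30

def cloitre_lhs(i, j, terms=4000):
    # Partial sum of  sum_{k>=1} cos(i*k*pi) * (2*cos(j*pi))**k / k**2.
    w = 2 * cos(j * pi)
    return sum(cos(i * k * pi) * w ** k / k ** 2 for k in range(1, terms + 1))

# Row (i/l, j/l, 1/l) = (11, 5, 8) of Table 8, i.e. i = 11/8, j = 5/8, l = 1/8.
i, j, l = mpf(11) / 8, mpf(5) / 8, mpf(1) / 8
print(cloitre_lhs(i, j), (l * pi) ** 2)   # the two values should agree up to numerical error
```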
## 8 Conclusion & further research
The results given in this paper show that pattern matching can obviously be of help in detecting new mathematical conjectures. The very basic processes described in the previous sections can be improved and generalized
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(u\) & \(v\) & \(\Delta(u,v)\) \\ \hline \hline
5 & 7 & \(\Delta(1,\ 15)\) \\ \hline
5 & 39 & \(\Delta(1,143)\) \\ \hline
5 & 51 & \(\Delta(1,255)\) \\ \hline
7 & 9 & \(\Delta(1,\ 35)\) \\ \hline
9 & 11 & \(\Delta(1,\ 63)\) \\ \hline
11 & 13 & \(\Delta(1,\ 99)\) \\ \hline
13 & 15 & \(\Delta(1,143)\) \\ \hline \end{tabular}
\end{table}
Table 7: Other convergence values.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline ANI & RNI & CPI & CEI & KCI & RFP & EFP & IEP & PCP & RCP \\ \hline \hline ✗ & ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline \end{tabular}
\end{table}
Table 10: Test Results
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(i/\ell\) & \(j/\ell\) & \(1/\ell\) \\ \hline \hline
76 & 16 & 30 \\ \hline
46 & 10 & 18 \\ \hline
41 & 9 & 16 \\ \hline
26 & 6 & 10 \\ \hline
21 & 5 & 8 \\ \hline \end{tabular}
\end{table}
Table 9: Example relations for which \(i+j\neq 2\)
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(i/\ell\) & \(j/\ell\) & \(1/\ell\) \\ \hline \hline
11 & 5 & 8 \\ \hline
14 & 6 & 10 \\ \hline
23 & 9 & 16 \\ \hline
26 & 10 & 18 \\ \hline
22/5 & 8/5 & 3 \\ \hline
31 & 9 & 20 \\ \hline
28 & 8 & 18 \\ \hline
19 & 5 & 12 \\ \hline
16 & 4 & 10 \\ \hline
13 & 3 & 8 \\ \hline \end{tabular}
\end{table}
Table 8: Example relations for which \(i+j=2\)
in a number of ways. The first is obviously an enrichment of the collection of tests. The second is deeper exploration, which is highly dependent on the computational capabilities at hand. Finally, the interpretation of results and the early pruning of less probable branches in the potential conjecture tree can also bring efficiency and pertinence to the discovered relations.
|
2308.16653 | Sketches, moves and partitions: counting regions of deformations of
reflection arrangements | The collection of reflecting hyperplanes of a finite Coxeter group is called
a reflection arrangement and it appears in many subareas of combinatorics and
representation theory. We focus on the problem of counting regions of
reflection arrangements and their deformations. Inspired by the recent work of
Bernardi, we show that the notion of moves and sketches can be used to provide
a uniform and explicit bijection between regions of (the Catalan deformation
of) a reflection arrangement and certain non-nesting partitions. We then use
the exponential formula to describe a statistic on these partitions such that
distribution is given by the coefficients of the characteristic polynomial.
Finally, we consider a sub-arrangement of type C arrangement called the
threshold arrangement and its Catalan and Shi deformations. | Priyavrat Deshpande, Krishna Menon | 2023-08-31T11:51:53Z | http://arxiv.org/abs/2308.16653v1 | # Sketches, moves and partitions: counting regions of deformations of reflection arrangements
###### Abstract.
The collection of reflecting hyperplanes of a finite Coxeter group is called a reflection arrangement and it appears in many subareas of combinatorics and representation theory. We focus on the problem of counting regions of reflection arrangements and their deformations. Inspired by the recent work of Bernardi, we show that the notion of moves and sketches can be used to provide a uniform and explicit bijection between regions of (the Catalan deformation of) a reflection arrangement and certain non-nesting partitions. We then use the exponential formula to describe a statistic on these partitions such that distribution is given by the coefficients of the characteristic polynomial. Finally, we consider a sub-arrangement of type C arrangement called the threshold arrangement and its Catalan and Shi deformations.
## 1. Introduction
A _hyperplane arrangement_\(\mathcal{A}\) is a finite collection of affine hyperplanes (i.e., codimension \(1\) subspaces and their translates) in \(\mathbb{R}^{n}\). A _flat_ of \(\mathcal{A}\) is a nonempty intersection of some of the hyperplanes in \(\mathcal{A}\); the ambient vector space is a flat since it is an intersection of no hyperplanes. Flats are naturally ordered by reverse set inclusion; the resulting poset is called the _intersection poset_ and is denoted by \(\operatorname{L}(\mathcal{A})\). The _rank_ of \(\mathcal{A}\) is the dimension of the span of the normal vectors to the hyperplanes. An arrangement in \(\mathbb{R}^{n}\) is called _essential_ if its rank is \(n\). A _region_ of \(\mathcal{A}\) is a connected component of \(\mathbb{R}^{n}\setminus\bigcup\mathcal{A}\). A region is said to be _bounded_ if its intersection with the subspace spanned by the normal vectors to the hyperplanes is bounded. Counting the number of regions of arrangements using diverse combinatorial methods is an active area of research.
The _characteristic polynomial_ of \(\mathcal{A}\) is defined as \(\chi_{\mathcal{A}}(t):=\sum\mu(\hat{0},x)\,t^{\dim(x)}\) where \(x\) runs over all flats in \(\operatorname{L}(\mathcal{A})\), \(\mu\) is its the Mobius function and \(\hat{0}\) corresponds to the flat \(\mathbb{R}^{n}\). Using the fact that every interval of the intersection poset of an arrangement is a geometric lattice, we have
\[\chi_{\mathcal{A}}(t)=\sum_{i=0}^{n}(-1)^{n-i}c_{i}t^{i} \tag{1}\]
where \(c_{i}\) is a non-negative integer for all \(0\leq i\leq n\)[18, Corollary 3.4]. The characteristic polynomial is a fundamental combinatorial and topological invariant of the arrangement and plays a significant role throughout the theory of hyperplane arrangements.
In this article, our focus is on the enumerative aspects of (rational) arrangements in \(\mathbb{R}^{n}\). In that direction we have the following seminal result by Zaslavsky.
**Theorem 1.1** ([20]).: _Let \(\mathcal{A}\) be an arrangement in \(\mathbb{R}^{n}\). Then the number of regions of \(\mathcal{A}\) is given by_
\[r(\mathcal{A})=(-1)^{n}\chi_{\mathcal{A}}(-1)=\sum_{i=0}^{n}c_{i}\]
as bounded regions of these arrangements. Bijective proofs for the number of regions of the type \(C\) Catalan arrangement have already been established in [10] and [13]. However, the proofs we present for the other arrangements seem to be new.
The idea used for the bijections is fairly simple but effective. This was used by Bernardi in [6, Section 8] to obtain bijections for the regions of several deformations of the braid arrangement. This idea, that we call'sketches and moves', is to consider an arrangement \(\mathcal{B}\) whose regions we wish to count as a sub-arrangement of an arrangement \(\mathcal{A}\). This is done in such a way that the regions of \(\mathcal{A}\) are well-understood and are usually total orders on certain symbols. These total orders are what we call _sketches_. Since \(\mathcal{B}\subseteq\mathcal{A}\), the regions of \(\mathcal{B}\) partition the regions of \(\mathcal{A}\) and hence define an equivalence on sketches. We define operations called _moves_ on sketches to describe the equivalence classes. In regions of \(\mathcal{A}\), moves correspond to crossing hyperplanes in \(\mathcal{A}\setminus\mathcal{B}\).
Apart from Bernardi's results, the results in [4] and [15] can also be viewed as applications of the sketches and moves idea to count regions of hyperplane arrangements.
When studying an arrangement, another interesting question is whether the coefficients of its characteristic polynomial can be combinatorially interpreted. By Theorem 1.1, we know that the sum of the absolute values of the coefficients is the number of regions. Hence, one could ask if there is a statistic on the regions whose distribution is given by the coefficients of the characteristic polynomial. The characteristic polynomial of the braid arrangement in \(\mathbb{R}^{n}\) is \(t(t-1)\cdots(t-n+1)\)[18, Corollary 2.2]. Hence, the coefficients are the Stirling numbers of the first kind. Consequently, the distribution of the statistic 'number of cycles' on the set of permutations of \([n]\) (which correspond to the regions of the arrangement) is given by the coefficients of the characteristic polynomial.
The paper is structured as follows: In Section 2, we describe the sketches and moves idea mentioned above. We also use it to study the regions of some simple arrangements in Section 3. In Section 4, we reprove the results in [13] about the type \(C\) Catalan arrangement with a modification inspired by [2]. We then use the sketches and moves idea in Section 5 to obtain bijections for the regions of the Catalan arrangements of other types. In Section 6, we describe statistics on the regions of the arrangements we have studied whose distribution is given by the corresponding characteristic polynomials. Finally, in Section 7, we use similar techniques to study an interesting arrangement called the threshold arrangement as well as some of its deformations.
## 2. Sketches, moves and trees: a quick overview of Bernardi's bijection
In his paper [6], Bernardi describes a method to count the regions of any deformation of the braid arrangement using certain objects called _boxed trees_. He also obtains explicit bijections with certain trees for several deformations. The general strategy to establish the bijection is to consider an arrangement \(\mathcal{B}\) whose regions we wish to count as a sub-arrangement of an arrangement \(\mathcal{A}\) whose regions are well-understood. The regions of \(\mathcal{B}\) then define an equivalence on the regions of \(\mathcal{A}\). This is done by declaring two regions of \(\mathcal{A}\) to be equivalent if they lie inside the same region of \(\mathcal{B}\). Now counting the number of regions of \(\mathcal{B}\) is the same as counting the number of equivalence classes of this equivalence on the regions of \(\mathcal{A}\). This is usually done by choosing a canonical representative for each equivalence class, which also gives a bijection between the regions of \(\mathcal{B}\) and certain regions of \(\mathcal{A}\).
In particular, a (transitive) deformation of the braid arrangement is a sub-arrangement of the (extended or) \(m\)-Catalan arrangement (for some large \(m\)) in \(\mathbb{R}^{n}\), whose hyperplanes are
\[\{x_{i}-x_{j}=k\mid 1\leq i<j\leq n,k\in[-m,m]\}.\]
The regions of these arrangements are known to correspond to labeled \((m+1)\)-ary trees with \(n\) nodes (see [6, Section 8.1]). Using the idea mentioned above, one can show that the regions of a deformation correspond to certain trees. We should mention that while he obtains direct combinatorial arguments to describe this bijection for some transitive deformations (see [6, Section 8.2]), the proof for the general bijection uses much stronger results (see [6, Section 8.3]).
We now come back to the general strategy, which we aim to generalize in order to apply it to deformations of other types. It is clear that any two equivalent regions of \(\mathcal{A}\) have to be on the same side of each hyperplane of \(\mathcal{B}\). However, it turns out that this equivalence is the transitive closure of a simpler relation. This follows from the fact that one can reach a region in an arrangement from another by crossing exactly one hyperplane at a time with respect to which the regions lie on opposite sides. We now prove this result, for which we require the following definition.
**Definition 2.1**.: _Let \(R\) be a region of an arrangement \(\mathcal{A}\). \(A\) determining set of \(R\) is a sub-arrangement \(\mathcal{D}\subseteq\mathcal{A}\) such that the region of the arrangement \(\mathcal{D}\) containing \(R\), denoted \(R_{\mathcal{D}}\), is equal to \(R\)._
Note that a region of \(\mathcal{A}\) always has the entire arrangement \(\mathcal{A}\) as a determining set. Also, if a region \(R^{\prime}\) is on the same side as a region \(R\) for each hyperplane in a determining set of \(R\), then we must have \(R=R^{\prime}\).
Before going forward, we explicitly describe regions of an arrangement. First note that any hyperplane \(H\) in \(\mathbb{R}^{n}\) is a set of the form
\[\{\mathbf{x}\in\mathbb{R}^{n}\mid P_{H}(\mathbf{x})=0\}\]
Figure 1. Bold lines form \(\mathcal{B}\) and the dotted lines form \(\mathcal{A}\setminus\mathcal{B}\). Equivalent \(\mathcal{A}\) regions can be connected by changing one \(\mathcal{A}\setminus\mathcal{B}\) inequality at a time.
where \(P_{H}(\mathbf{x})=a_{1}x_{1}+a_{2}x_{2}+\cdots+a_{n}x_{n}+c\) for some constants \(a_{1},\ldots,a_{n},c\in\mathbb{R}\). Also, the regions of an arrangement \(\mathcal{A}\) are precisely the non-empty intersections of sets of the form
\[\{\mathbf{x}\in\mathbb{R}^{n}\mid P_{H}(\mathbf{x})>0\}\text{ or }\{\mathbf{x}\in\mathbb{R}^{n} \mid P_{H}(\mathbf{x})<0\}\]
where we have one set for each \(H\in\mathcal{A}\). Hence, crossing exactly one hyperplane \(H\) in an arrangement corresponds to changing the inequality chosen for \(H\) in this description of the region.
**Theorem 2.2**.: _If \(\mathcal{D}\) is a minimal determining set of a region \(R\) of an arrangement \(\mathcal{A}\), then changing the inequality in the definition of \(R\) of exactly one \(H\in\mathcal{D}\), and keeping all other inequalities of hyperplanes in \(\mathcal{A}\) the same, describes a non-empty region of \(\mathcal{A}\)._
Before proving this, we will see how it proves the fact mentioned above. Start with two distinct regions \(R\) and \(R^{\prime}\) of an arrangement \(\mathcal{A}\). We want to get from \(R\) to \(R^{\prime}\) by crossing exactly one hyperplane at a time with respect to which the regions lie on opposite sides.
1. Let \(\mathcal{D}\) be a minimal determining set of \(R\).
2. Since \(R\neq R^{\prime}\) there is some \(H\in\mathcal{D}\) for which \(R^{\prime}\) is on the opposite side as \(R\).
3. Change the inequality corresponding to \(H\) in \(R\), call this new region \(R^{\prime\prime}\).
4. The number of hyperplanes in \(\mathcal{A}\) for which \(R^{\prime\prime}\) and \(R^{\prime}\) lie on opposite sides is less than that for \(R\) and \(R^{\prime}\).
5. Repeat this process to get to \(R^{\prime}\) by changing one inequality at a time.
Proof of Theorem 2.2.: Let \(H\in\mathcal{D}\). Since \(\mathcal{D}\) is a minimal determining set, \(\mathcal{E}=\mathcal{D}\setminus\{H\}\) is not a determining set. So \(R\) is strictly contained in \(R_{\mathcal{E}}\). This means that the hyperplane \(H\) intersects \(R_{\mathcal{E}}\) and splits it into two open convex sets, one of which is \(R\).
So we can choose a point \(p\in H\) that lies inside \(R_{\mathcal{E}}\) and an \(n\)-ball centered at \(p\) that does not touch any other hyperplanes of \(\mathcal{A}\) (since \(\mathcal{A}\) is finite). One half of the ball lies in \(R\) and the other half lies in a region \(R^{\prime}\) of \(\mathcal{A}\). Since \(R^{\prime}\) can be reached from \(R\) by just crossing the hyperplane \(H\), we get the required result.
To sum up, we start with an arrangement \(\mathcal{B}\subseteq\mathcal{A}\). We know the regions of \(\mathcal{A}\) and usually represent them by combinatorial objects we call'sketches'. We then define'moves' on these sketches that correspond to changing exactly one inequality of a hyperplane in \(\mathcal{A}\setminus\mathcal{B}\). We define sketches to be equivalent if one can be obtained from another through a series of moves. We then count the number of equivalence classes to obtain the number of regions of \(\mathcal{B}\). Before using this method to study the Catalan arrangements of various types, we first look at some simpler arrangements.
## 3. Counting regions of reflection arrangements
In this section, as a warmup exercise, we illustrate the'sketches-moves' idea to study sub-arrangements of the type \(C\) arrangement. Hence, in the spirit of Bernardi [6], we will define certain sketches corresponding to the region of the type \(C\) arrangement and for any sub-arrangement, we choose a canonical sketch from each region.
### The type C arrangement
This arrangement in \(\mathbb{R}^{n}\) is the set of reflecting hyperplanes of the root system \(C_{n}\). The defining equations of hyperplanes are
\[2x_{i} =0\] \[x_{i}+x_{j} =0\] \[x_{i}-x_{j} =0\]
for \(1\leq i<j\leq n\). Though we could write \(x_{i}=0\) for the first type of hyperplanes, we think of them as \(x_{i}+x_{i}=0\) to define sketches.
We can write the hyperplanes of the type \(C\) arrangement as follows:
\[x_{i} =x_{j},\qquad 1\leq i<j\leq n\] \[x_{i} =-x_{j},\quad i,j\in[n].\]
Hence, any region of the arrangement is given by a _valid_ total order on
\[x_{1},\dots,x_{n},-x_{1},\dots,-x_{n}.\]
A total order is said to be valid if there is some point in \(\mathbb{R}^{n}\) that satisfies it. We will represent \(x_{i}\) by \(\overset{+}{i}\) and \(-x_{i}\) by \(\overset{-}{i}\) for all \(i\in[n]\).
**Example 3.1**.: _The region \(-x_{2}<x_{3}<x_{1}<-x_{1}<-x_{3}<x_{2}\) is represented as \(\overset{-}{2}\overset{+}{3}\overset{+}{1}\overset{-}{1}\overset{-}{3} \overset{+}{2}\)._
It can be shown that words of the form
\[\overset{w_{1}}{i_{1}}\overset{w_{2}}{i_{2}}\quad\cdots\overset{w_{n}}{i_{n} }\overset{-w_{n}}{i_{n}}\quad\cdots\overset{-w_{2}}{i_{2}}\overset{-w_{1}}{i_{ 1}}\]
where \(\{i_{1},\dots,i_{n}\}=[n]\) are the ones that correspond to regions. Such orders are the only ones that can correspond to regions since negatives reverse order. Also, choosing \(n\) distinct negative numbers, it is easy to construct a point satisfying the inequalities specified by such a word. Hence the number of regions of the type \(C\) arrangement is \(2^{n}n!\). We will call such words _sketches_ (which are basically signed permutations). We will draw a line after the first \(n\) symbols to denote the reflection and call the part of the sketch before the line its first half and similarly define the second half.
**Example 3.2**.: \(\overset{+}{3}\overset{-}{1}\overset{-}{2}\overset{+}{4}\overset{-}{4} \overset{+}{2}\overset{+}{1}\overset{-}{3}\) _is a sketch._
We now study some sub-arrangements of the type \(C\) arrangement. For each such arrangement, we will define the moves that we can apply to the sketches (which represent changing exactly one inequality corresponding to a hyperplane not in the arrangement) and then choose a canonical representative from each equivalence class. By Theorem 2.2, this gives a bijection between these canonical sketches and the regions of the sub-arrangement.
### The Boolean arrangement
One of first examples one encounters when studying hyperplane arrangements is the Boolean arrangement. The Boolean arrangement in \(\mathbb{R}^{n}\) has hyperplanes \(x_{i}=0\) for all \(i\in[n]\). It is fairly straightforward to see that the number of regions is \(2^{n}\). We will do this using the idea of moves on sketches.
The hyperplanes missing from the type \(C\) arrangement in the Boolean arrangement are
\[x_{i}+x_{j} =0\] \[x_{i}-x_{j} =0\]
for \(1\leq i<j\leq n\). Hence, the Boolean moves are as follows:
1. Swapping adjacent \(\overset{+}{i}\) and \(\overset{-}{j}\) as well as \(\overset{+}{j}\) and \(\overset{-}{i}\) for distinct \(i,j\in[n]\).
2. Swapping adjacent \(\overset{+}{i}\) and \(\overset{+}{j}\) as well as \(\overset{-}{j}\) and \(\overset{-}{i}\) for distinct \(i,j\in[n]\).
The first kind of move corresponds to changing inequality corresponding to the hyperplane \(x_{i}+x_{j}=0\) and keeping all the other inequalities the same. Similarly, the second kind of move corresponds to changing only the inequality corresponding to \(x_{i}-x_{j}=0\).
**Example 3.3**.: _We can use a series of Boolean moves on a sketch as follows:_
\[\overset{-}{4}\overset{+}{1}\overset{+}{2}\overset{-}{3}\overset{+}{3}\overset{-}{2}\overset{-}{1}\overset{+}{4}\longrightarrow\overset{-}{4}\overset{+}{2}\overset{+}{1}\overset{-}{3}\overset{+}{3}\overset{-}{1}\overset{-}{2}\overset{+}{4}\longrightarrow\overset{-}{4}\overset{+}{2}\overset{-}{3}\overset{+}{1}\overset{-}{1}\overset{+}{3}\overset{-}{2}\overset{+}{4}\longrightarrow\overset{-}{4}\overset{-}{3}\overset{+}{2}\overset{+}{1}\overset{-}{1}\overset{-}{2}\overset{+}{3}\overset{+}{4}\]
It can be shown that for any sketch, we can use Boolean moves to convert it to a sketch where the order of absolute values in the second half is \(1,2,\ldots,n\) (since adjacent transpositions generate the symmetric group). Also, since the signs of the numbers in the second half do not change, there is exactly one such sketch in each equivalence class. Hence the number of Boolean regions is the number of ways of assigning signs to the numbers \(1,2,\ldots,n\), which is \(2^{n}\).
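The normalization argument can be phrased algorithmically: the short Python function below (using our own encoding of sketches as tuples of signed indices) bubble-sorts the second half by absolute value using only Boolean moves, always performing the mirrored swap in the first half, and thus computes the canonical representative of a Boolean equivalence class.

```python
def boolean_canonical(sketch):
    """Apply Boolean moves (adjacent swaps of letters with distinct absolute
    values, together with the mirrored swap in the other half) until the
    absolute values in the second half appear in the order 1, 2, ..., n.
    The input is assumed to be a valid sketch."""
    w = list(sketch)
    n = len(w) // 2
    changed = True
    while changed:
        changed = False
        for p in range(n, 2 * n - 1):        # adjacent positions in the second half
            if abs(w[p]) > abs(w[p + 1]):    # distinct absolute values, so a Boolean move
                w[p], w[p + 1] = w[p + 1], w[p]
                q = 2 * n - 1 - p            # mirrored (conjugate) positions in the first half
                w[q - 1], w[q] = w[q], w[q - 1]
                changed = True
    return tuple(w)

# The sketch of Example 3.2 normalizes to (4, 3, -2, -1, 1, 2, -3, -4),
# whose second half has absolute values 1, 2, 3, 4 in order.
print(boolean_canonical((3, -1, -2, 4, -4, 2, 1, -3)))
```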
### The type D arrangement
The type \(D\) arrangement in \(\mathbb{R}^{n}\) has the hyperplanes
\[x_{i}+x_{j} =0\] \[x_{i}-x_{j} =0\]
for \(1\leq i<j\leq n\). The hyperplanes of the type \(C\) arrangement missing from the type \(D\) arrangement are
\[2x_{i}=0\]
for all \(i\in[n]\). Hence a type \(D\) move, which we call a \(D\) move, is swapping adjacent \(\overset{+}{i}\) and \(\overset{-}{i}\) for any \(i\in[n]\).
**Example 3.4**.: \(\overset{+}{4}\overset{+}{1}\overset{-}{3}\overset{+}{2}\overset{-}{2}\overset{+}{3}\overset{-}{1}\overset{-}{4}\ \xrightarrow{D\text{ move}}\ \overset{+}{4}\overset{+}{1}\overset{-}{3}\overset{-}{2}\overset{+}{2}\overset{+}{3}\overset{-}{1}\overset{-}{4}\)
In a sketch, the only such pair is the last term of the first half and the first term of the second half. Hence \(D\) moves define an involution on the set of sketches, and the number of regions of the type \(D\) arrangement is \(2^{n-1}n!\). We could also choose the canonical sketch in each type \(D\) region to be the one where the first term of the second half is positive.
### The braid arrangement
The braid arrangement in \(\mathbb{R}^{n}\) has hyperplanes
\[x_{i}-x_{j} =0\]
for \(1\leq i<j\leq n\). The hyperplanes of the type \(C\) arrangement missing from the braid arrangement are
\[2x_{i} =0\] \[x_{i}+x_{j} =0\]
for all \(i\in[n]\) and \(1\leq i<j\leq n\) respectively. Hence the braid moves are as follows:
1. (\(D\) move) Swapping adjacent \(\overset{+}{i}\) and \(\overset{-}{i}\) for any \(i\in[n]\).
2. Swapping adjacent \(\overset{+}{i}\) and \(\overset{-}{j}\) as well as \(\overset{+}{j}\) and \(\overset{-}{i}\) for distinct \(i,j\in[n]\).
**Example 3.5**.: _We can use a series of braid moves on a sketch as follows:_
\[\overset{+}{3}\overset{-}{1}\overset{+}{2}\overset{-}{2}\overset{+}{1}\overset{-}{3}\longrightarrow\overset{+}{3}\overset{-}{1}\overset{-}{2}\overset{+}{2}\overset{+}{1}\overset{-}{3}\longrightarrow\overset{-}{1}\overset{+}{3}\overset{-}{2}\overset{+}{2}\overset{-}{3}\overset{+}{1}\]

It can be checked that each braid equivalence class contains exactly \(2^{n}\) sketches, and hence the braid arrangement has \(n!\) regions.
## 4. The type \(C\) Catalan arrangement

The type \(C\) Catalan arrangement in \(\mathbb{R}^{n}\) has the hyperplanes \(2X_{i}=-1,0,1\), \(X_{i}+X_{j}=-1,0,1\) and \(X_{i}-X_{j}=-1,0,1\) for all \(i\in[n]\) and \(1\leq i<j\leq n\). Translating it by setting \(X_{i}=x_{i}+\frac{1}{2}\) for all \(i\in[n]\), we get the arrangement \(\mathcal{C}_{n}\) with hyperplanes

\[2x_{i}=-2,-1,0\qquad x_{i}+x_{j}=-2,-1,0\qquad x_{i}-x_{j}=-1,0,1\]

for all \(i\in[n]\) and \(1\leq i<j\leq n\). The regions of \(\mathcal{C}_{n}\) are given by valid total orders on

\[\{x_{i}+s\mid i\in[n],\ s\in\{0,1\}\}\cup\{-x_{i}-s\mid i\in[n],\ s\in\{0,1\}\}.\]

Such orders will be represented by using the symbol \(\alpha_{i}^{(s)}\) for \(x_{i}+s\) and \(\alpha_{-i}^{(-s)}\) for \(-x_{i}-s\) for all \(i\in[n]\) and \(s\in\{0,1\}\). Let \(C(n)\) be the set
\[\{\alpha_{i}^{(s)}\mid i\in[n],\ s\in\{0,1\}\}\cup\{\alpha_{i}^{(s)}\mid-i\in[ n],\ s\in\{-1,0\}\}.\]
Hence, we use orders on the letters of \(C(n)\) to represent regions of \(\mathcal{C}_{n}\).
**Example 4.1**.: _The total order_
\[x_{1}<-x_{2}-1<x_{1}+1<x_{2}<-x_{2}<-x_{1}-1<x_{2}+1<-x_{1}\]
_is represented as \(\alpha_{1}^{(0)}\ \alpha_{-2}^{(-1)}\ \alpha_{1}^{(1)}\ \alpha_{2}^{(0)}\ \alpha_{-2}^{(0)}\ \alpha_{-1}^{(-1)}\ \alpha_{2}^{(1)}\ \alpha_{-1}^{(0)}\)._
Considering \(-x_{i}\) as \(x_{-i}\), the letter \(\alpha_{i}^{(s)}\) represents \(x_{i}+s\) for any \(\alpha_{i}^{(s)}\in C(n)\). For any \(\alpha_{i}^{(s)}\in C(n)\), we use \(\overline{\alpha_{i}^{(s)}}\) to represent the letter \(\alpha_{-i}^{(-s)}\), which we call the _conjugate_ of \(\alpha_{i}^{(s)}\).
**Definition 4.2**.: _A symmetric sketch is an order on the letters in \(C(n)\) such that the following hold for any \(\alpha_{i}^{(s)},\alpha_{j}^{(t)}\in C(n)\):_
1. _If_ \(\alpha_{i}^{(s)}\) _appears before_ \(\alpha_{j}^{(t)}\)_, then_ \(\overline{\alpha_{j}^{(t)}}\) _appears before_ \(\overline{\alpha_{i}^{(s)}}\)_._
2. _If_ \(\alpha_{i}^{(s-1)}\) _appears before_ \(\alpha_{j}^{(t-1)}\)_, then_ \(\alpha_{i}^{(s)}\) _appears before_ \(\alpha_{j}^{(t)}\)_._
3. \(\alpha_{i}^{(s-1)}\) _appears before_ \(\alpha_{i}^{(s)}\)_._
**Proposition 4.3**.: _An order on the letters of \(C(n)\) corresponds to a region of \(\mathcal{C}_{n}\) if and only if it is a symmetric sketch._
Proof.: The idea of the proof is the same as that of [2, Lemma 5.2]. It is clear that any order that corresponds to a region must satisfy the properties in Definition 4.2 and hence be a symmetric sketch. For the converse, we show that there is a point in \(\mathbb{R}^{n}\) satisfying the inequalities given by a symmetric sketch.
We prove this using induction on \(n\), the case \(n=1\) being clear. Let \(n\geq 2\) and \(w\) be a symmetric sketch. Without loss of generality, we can assume that the first letter of \(w\) is \(\alpha_{n}^{(0)}\). Deleting the letters with subscript \(n\) and \(-n\) from \(w\) gives a symmetric sketch \(w^{\prime}\) in the letters \(C(n-1)\). Using the induction hypothesis, we can choose a point \(\mathbf{x}^{\prime}\in\mathbb{R}^{n-1}\) satisfying the inequalities given by \(w^{\prime}\). Suppose the letter before \(\alpha_{n}^{(1)}\) in \(w\) is \(\alpha_{i}^{(s)}\) and the letter after it is \(\alpha_{j}^{(t)}\). We choose \(x_{n}\neq-1\) such that \(x_{i}^{\prime}+s<x_{n}+1<x_{j}^{\prime}+t\) in such a way that \(x_{n}+1\) is also in the correct position with respect to \(0\), as specified by \(w\). This is possible since \(\mathbf{x}^{\prime}\) satisfies \(w^{\prime}\).
We show that \((x_{1}^{\prime},\ldots,x_{n-1}^{\prime},x_{n})\) satisfies the inequalities given by \(w\). We only have to check that \(x_{n}\) and \((x_{n}+1)\) are in the correct relative position with respect to the other letters since property (1) of Definition 4.2 will then show that \(-x_{n}\) and \(-x_{n}-1\) are also in the correct relative position. By the choice of \(x_{n}\), we see that \(x_{n}+1\) is in the correct position. We have to show that \(x_{n}\) is less than \(\pm x_{i^{\prime}}^{\prime}\) and \(\pm(x_{i^{\prime}}^{\prime}+1)\) for all \(i^{\prime}\in[n-1]\). If \(x_{n}>x_{1}^{\prime}\), then \(x_{n}+1>x_{1}^{\prime}+1\) and since \(x_{n}+1\) satisfies the inequalities specified by \(w\), \(\alpha_{1}^{(1)}\) must be before \(\alpha_{n}^{(1)}\) in \(w\). But by property (2) of Definition 4.2, this means that \(\alpha_{1}^{(0)}\) must be before \(\alpha_{n}^{(0)}\) in \(w\), which is a contradiction. The same logic can be used to show that \(x_{n}\) satisfies the other inequalities given by \(w\).
We now derive some properties of symmetric sketches. A symmetric sketch has \(4n\) letters, so we call the word made by the first \(2n\) letters its first half. Similarly we define its second half.
**Lemma 4.4**.: _The second half of a symmetric sketch is completely specified by its first half. In fact, it is the 'mirror' of the first half, i.e., it is the reverse of the first half with each letter replaced with its conjugate._
Proof.: For any symmetric sketch, the letter \(\alpha_{i}^{(s)}\) is in the first half if and only if the letter \(\overline{\alpha_{i}^{(s)}}\) is in the second half. This property can be proved as follows: Suppose there is a pair of conjugates in the first half of a symmetric sketch. Since conjugate pairs partition \(C(n)\), this means that there is a pair of conjugates in the second half as well. But this would contradict property (1) of a symmetric sketch in Definition 4.2.
Hence, the set of letters in the second half are the conjugates of the letters in the first half. The order in which they appear is forced by property (1) of Definition 4.2, that is, the conjugates appear in the opposite order as the corresponding letters in the first half. So if the first half of a symmetric sketch is \(a_{1}\cdots a_{2n}\) where \(a_{i}\in C(n)\) for all \(i\in[2n]\), the sketch is
\[a_{1}\quad a_{2}\quad\cdots\quad a_{2n}\quad\overline{a_{2n}}\quad\cdots\quad \overline{a_{2}}\quad\overline{a_{1}}.\]
We draw a vertical line between the \(2n^{th}\) and \((2n+1)^{th}\) letter in a symmetric sketch to indicate both the mirroring and the change in sign (note that if the \(2n^{th}\) letter is \(\alpha_{i}^{(s)}\), we have \(x_{i}+s<0<-x_{i}-s\) in the corresponding region).
**Example 4.5**.: \(\alpha_{-3}^{(-1)}\ \alpha_{-3}^{(0)}\ \alpha_{1}^{(0)}\ \alpha_{-2}^{(-1)}\ \alpha_{1}^{(1)}\ \alpha_{2}^{(0)}\ |\ \alpha_{-2}^{(0)}\ \alpha_{-1}^{(-1)}\ \alpha_{2}^{(1)}\ \alpha_{-1}^{(0)}\ \alpha_{3}^{(0)}\ \alpha_{3}^{(1)}.\)__
A letter in \(C(n)\) is called an \(\alpha\)-_letter_ if it is of the form \(\alpha_{i}^{(0)}\) or \(\alpha_{-i}^{(-1)}\) where \(i\in[n]\). The other letters are called \(\beta\)-_letters_. The \(\beta\)-letter 'corresponding' to an \(\alpha\)-letter is the one with the same subscript. Hence, in a symmetric sketch, an \(\alpha\)-letter always appears before its corresponding \(\beta\)-letter by property (3) in Definition 4.2. The order in which the subscripts of the \(\alpha\)-letters appear is the same as the order in which the subscripts of the \(\beta\)-letters appear by property (2) of Definition 4.2. The proof of the following lemma is very similar to that of the previous lemma.
**Lemma 4.6**.: _The order in which the subscripts of the \(\alpha\)-letters in a symmetric sketch appear is of the form_
\[\begin{matrix}i_{1}&i_{2}&\cdots&i_{n}&-i_{n}&\cdots&-i_{2}&-i_{1}\end{matrix}\]
_where \(\{|i_{1}|,\ldots,|i_{n}|\}=[n]\)._
Using Lemmas 4.4 and 4.6, to specify the sketch, we only need to specify the following:
1. The \(\alpha,\beta\)-word corresponding to the first half.
2. The signed permutation given by the first \(n\)\(\alpha\)-letters.
The \(\alpha,\beta\)-word corresponding to the first half is a word of length \(2n\) in the letters \(\{\alpha,\beta\}\) such that the \(i^{th}\) letter is an \(\alpha\) if and only if the \(i^{th}\) letter of the symmetric sketch is an \(\alpha\)-letter.
There is at most one sketch corresponding to a pair of an \(\alpha,\beta\)-word and a signed permutation. This is because the signed permutation tells us, by Lemma 4.6, the order in which the subscripts of the \(\alpha\)-letters (and hence \(\beta\)-letters) appears. Using this and the \(\alpha,\beta\)-word, we can construct the first half and, by Lemma 4.4, the entire sketch.
**Example 4.7**.: _To the symmetric sketch_
\[\alpha_{-3}^{(-1)}\ \alpha_{-3}^{(0)}\ \alpha_{1}^{(0)}\ \alpha_{-2}^{(-1)}\ \alpha_{1}^{(1)}\ \alpha_{2}^{(0)}\ |\ \alpha_{-2}^{(0)}\ \alpha_{-1}^{(-1)}\ \alpha_{2}^{(1)}\ \alpha_{-1}^{(0)}\ \alpha_{3}^{(0)}\ \alpha_{3}^{(1)}\]
_we associate the pair consisting of the following:_
1. \(\alpha,\beta\)_-word:_ \(\alpha\beta\alpha\alpha\beta\alpha\)_._
2. _Signed permutation:_ \(-3\quad 1\quad-2\)_._
_If we are given the \(\alpha,\beta\)-word and signed permutation above, the unique sketch corresponding to it is the one given above._
The next proposition characterizes the pairs of \(\alpha,\beta\)-words and signed permutations that correspond to symmetric sketches.
**Proposition 4.8**.: _A pair consisting of_
1. _an_ \(\alpha,\beta\)_-word of length_ \(2n\) _such that any prefix of the word has at least as many_ \(\alpha\)_-letters as_ \(\beta\)_-letters and_
2. _any signed permutation_
_corresponds to a symmetric sketch and all symmetric sketches correspond to such pairs._
Proof.: By property (3) of Definition 4.2, any \(\alpha,\beta\)-word corresponding to the first half of a sketch should have at least as many \(\alpha\)-letters as \(\beta\)-letters in any prefix.
We now prove that given such a pair, there is a symmetric sketch corresponding to it. If the given \(\alpha,\beta\)-word is \(l_{1}l_{2}\cdots l_{2n}\) and the given signed permutation is \(i_{1}i_{2}\cdots i_{n}\), we construct the symmetric sketch as follows:
1. Extend the \(\alpha,\beta\)-word to the one of length \(4n\) given by \[l_{1}\quad l_{2}\quad\cdots\quad l_{2n}\quad\overline{l_{2n}}\quad\cdots \quad\overline{l_{2}}\quad\overline{l_{1}}\] where \(\overline{l_{i}}=\alpha\) if and only if \(l_{i}=\beta\) for all \(i\in[2n]\).
2. Extend the signed permutation to the sequence of length \(2n\) given by \[i_{1}\quad i_{2}\quad\cdots\quad i_{n}\quad-i_{n}\quad\cdots\quad-i_{2}\quad- i_{1}.\]
3. Label the subscripts of the \(\alpha\)-letters of the extended \(\alpha,\beta\)-word in the order given by the extended signed permutation and similarly label the \(\beta\)-letters.
If we show that the word constructed is a symmetric sketch, it is clear that it will correspond to the given \(\alpha,\beta\)-word and signed permutation. We have to check that the constructed word satisfies the properties in Definition 4.2.
The way the word was constructed, we see that it is of the form
\[a_{1}\quad a_{2}\quad\cdots\quad a_{2n}\quad\overline{a_{2n}}\quad\cdots \quad\overline{a_{2}}\quad\overline{a_{1}}\]
where \(a_{i}\in C(n)\) for all \(i\in[2n]\). Since the conjugate of the \(i^{th}\)\(\alpha\) is the \((2n-i+1)^{th}\)\(\beta\) and vice-versa, the first half of the word cannot have a pair of conjugates. Hence the word has all letters of \(C(n)\). This shows that property (1) of Definition 4.2 holds. Property (2) is taken care of since, by construction, the subscripts of the \(\alpha\)-letters appear in the same order as those of the \(\beta\)-letters.
To show that property (3) holds, it suffices to show that any prefix of the word has at least as many \(\alpha\)-letters as \(\beta\)-letters. This is already true for the first half. To show that this is true for the entire word, we consider \(\alpha\) as \(+1\) and \(\beta\) as \(-1\). Hence, the condition is that any prefix has a non-negative sum. Since any prefix of size greater than \(2n\) is of the form
\[l_{1}\quad l_{2}\quad\cdots\quad l_{2n}\quad\overline{l_{2n}}\quad\cdots \quad\overline{l_{k}}\]
for some \(k\in[2n]\), and since \(\overline{l_{j}}=-l_{j}\) for all \(j\), the sum is \(l_{1}+\cdots+l_{k-1}\geq 0\). So property (3) holds as well and hence the constructed word is a symmetric sketch.
We use this description to count symmetric sketches.
**Lemma 4.9**.: _The number of \(\alpha,\beta\)-words of length \(2n\) having at least as many \(\alpha\)-letters as \(\beta\)-letters in any prefix is \(\binom{2n}{n}\)._
Proof.: We consider these \(\alpha,\beta\)-words as lattice paths. Using the step \(U=(1,1)\) for \(\alpha\) and the step \(D=(1,-1)\) for \(\beta\), we have to count those lattice paths with each step \(U\) or \(D\) that start at the origin, have \(2n\) steps, and never fall below the \(x\)-axis.
Using the reflection principle (for example, see [11]), we get that the number of such lattice paths that end at \((2n,2k)\) for \(k\in[0,n]\) is given by
\[\binom{2n}{n+k}-\binom{2n}{n+k+1}.\]
The (telescoping) sum over \(k\in[0,n]\) gives the required result.
The above lemma and Proposition 4.8 immediately give the following.
**Theorem 4.10**.: _The number of symmetric sketches and hence regions of \(\mathcal{C}_{n}\) is_
\[2^{n}n!\binom{2n}{n}.\]
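This count can be verified directly for small \(n\): the following brute-force Python check (our own encoding of letters as pairs \((i,s)\)) tests the three properties of Definition 4.2 on every permutation of \(C(2)\) and finds exactly \(48=2^{2}\cdot 2!\binom{4}{2}\) symmetric sketches. It takes a moment to run.

```python
from itertools import permutations
from math import comb, factorial

def letters(n):
    L = [(i, s) for i in range(1, n + 1) for s in (0, 1)]
    L += [(-i, s) for i in range(1, n + 1) for s in (-1, 0)]
    return L

def is_symmetric_sketch(word):
    pos = {a: p for p, a in enumerate(word)}
    for (i, s) in word:
        # property (3): the letter with the smaller shift comes first
        if (i, s - 1) in pos and pos[(i, s - 1)] > pos[(i, s)]:
            return False
        for (j, t) in word:
            # property (1): conjugates appear in the reverse order
            if pos[(i, s)] < pos[(j, t)] and pos[(-j, -t)] > pos[(-i, -s)]:
                return False
            # property (2): shifting both letters by one preserves the order
            if (i, s - 1) in pos and (j, t - 1) in pos:
                if pos[(i, s - 1)] < pos[(j, t - 1)] and pos[(i, s)] > pos[(j, t)]:
                    return False
    return True

n = 2
count = sum(is_symmetric_sketch(w) for w in permutations(letters(n)))
print(count, 2 ** n * factorial(n) * comb(2 * n, n))   # both equal 48
```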
In [2], Athanasiadis obtains bijections between several classes of non-nesting partitions and regions of certain arrangements. We will mention the one for the arrangement \(\mathcal{C}_{n}\), which gives a bijection between the \(\alpha,\beta\)-words associated to symmetric sketches and certain non-nesting partitions.
**Definition 4.11**.: _A symmetric non-nesting partition is a partition of \([-2n,2n]\setminus\{0\}\) such that the following hold:_
1. _Each block is of size_ \(2\)_._
2. _If_ \(B=\{a,b\}\) _is a block, so is_ \(-B=\{-a,-b\}\)_._
3. _If_ \(\{a,b\}\) _is a block and_ \(c,d\in[-2n,2n]\setminus\{0\}\) _are such that_ \(a<c<d<b\)_, then_ \(\{c,d\}\) _is not a block._
Symmetric non-nesting partitions are usually represented using arc-diagrams. This is done by using \(4n\) dots to represent the numbers in \([-2n,2n]\setminus\{0\}\) in order and joining dots in the same block using an arc. The properties of these partitions imply that there are no nesting arcs and that the diagram is symmetric, which we represent by drawing a line after \(2n\) dots.
**Example 4.12**.: _The arc diagram associated to the symmetric non-nesting partition of \([-6,6]\setminus\{0\}\)_
\[\{-6,-3\},\{-5,-1\},\{-4,2\},\{-2,4\},\{1,5\},\{3,6\}\]
_is given in Figure 2._
Figure 2. The symmetric non-nesting partition of Example 4.12.
It can also be seen that there are exactly \(n\) pairs of blocks of the form \(\{B,-B\}\) with no block containing both a number and its negative. Also, among the first \(n\) blocks, read in increasing order of their smallest elements, no two form a pair of the form \(\{B,-B\}\). Hence, we can label the first \(n\) blocks with a signed permutation and label the block \(-B\) with the negative of the label of \(B\) to obtain a labeling of all blocks. We call such objects _labeled symmetric non-nesting partitions._ In the arc diagram, the labeling is done by replacing the dots representing the elements in a block with its label.
We can obtain a labeled symmetric non-nesting partition from a symmetric sketch by joining the letters \(\alpha_{i}^{(0)}\) and \(\alpha_{i}^{(1)}\) and similarly \(\alpha_{-i}^{(-1)}\) and \(\alpha_{-i}^{(0)}\) with arcs and replacing each letter in the sketch with its subscript. It can be shown that this construction is a bijection between symmetric sketches and labeled symmetric non-nesting partitions. In particular, the \(\alpha,\beta\)-words associated with symmetric sketches are in bijection with symmetric non-nesting partitions.
**Example 4.13**.: _To the symmetric sketch_
\[\alpha_{3}^{(0)}\alpha_{2}^{(0)}\alpha_{-1}^{(-1)}\alpha_{3}^{(1)}\alpha_{1} ^{(0)}\alpha_{2}^{(1)}|\alpha_{-2}^{(-1)}\alpha_{-1}^{(0)}\alpha_{-3}^{(-1)} \alpha_{1}^{(1)}\alpha_{-2}^{(0)}\alpha_{-3}^{(0)}\]
_we associate the labeled symmetric non-nesting partition in Figure 3._
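The construction of the labeled partition from a sketch is easy to mechanize; the short script below (letters encoded as (subscript, shift) pairs, our own convention) recovers the labeled blocks of Figure 3 from the sketch of Example 4.13.

```python
# The sketch of Example 4.13, encoded as (subscript, shift) pairs.
sketch = [(3, 0), (2, 0), (-1, -1), (3, 1), (1, 0), (2, 1),
          (-2, -1), (-1, 0), (-3, -1), (1, 1), (-2, 0), (-3, 0)]
n = 3
# The 4n positions of the sketch represent the numbers -2n, ..., -1, 1, ..., 2n in order.
numbers = list(range(-2 * n, 0)) + list(range(1, 2 * n + 1))

blocks = {}
for pos, (i, s) in enumerate(sketch):
    blocks.setdefault(i, []).append(numbers[pos])

# Print the blocks in order of their smallest elements, with their labels.
for label, block in sorted(blocks.items(), key=lambda kv: min(kv[1])):
    print(f"block {{{min(block)}, {max(block)}}} labelled {label}")
```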
We now describe another way to represent the regions. We have already seen that a sketch corresponds to a pair consisting of an \(\alpha,\beta\)-word and a signed permutation. We represent the \(\alpha,\beta\)-word as a lattice path just as we did in the proof of Lemma 4.9. We specify the signed permutation by labeling the first \(n\) up-steps of the lattice path.
**Example 4.14**.: _The lattice path associated to the symmetric sketch_
\[\alpha_{-3}^{(-1)}\ \alpha_{-3}^{(0)}\ \alpha_{1}^{(0)}\ \alpha_{-2}^{(-1)}\ \alpha_{1}^{(1)}\ \alpha_{2}^{(0)}\ |\ \alpha_{-2}^{(0)}\ \alpha_{-1}^{(-1)}\ \alpha_{2}^{(1)}\ \alpha_{-1}^{(0)}\ \alpha_{3}^{(0)}\ \alpha_{3}^{(1)}\]
_is given in Figure 4._
These representations for the regions of \(\mathcal{C}_{n}\) also allow us to determine and count which regions are bounded.
Figure 4. Lattice path associated to the symmetric sketch in Example 4.14.
Figure 3. Arc diagram associated to the symmetric sketch in Example 4.13.
**Theorem 4.15**.: _The number of bounded regions of the arrangement \(\mathcal{C}_{n}\) is_
\[2^{n}n!\binom{2n-1}{n}.\]
Proof.: First note that the arrangement \(\mathcal{C}_{n}\) has rank \(n\) and is hence essential. From the bijection defined above, it can be seen that the arc diagram associated to any region \(R\) of \(\mathcal{C}_{n}\) can be obtained by plotting a point \((x_{1},\ldots,x_{n})\in R\) on the real line. This is done by marking \(x_{i}\) and \(x_{i}+1\) on the real line using \(i\) for all \(i\in[n]\) and then joining them with an arc and similarly marking \(-x_{i}-1\) and \(-x_{i}\) using \(-i\) and joining them with an arc.
This can be used to show that a region of \(\mathcal{C}_{n}\) is bounded if and only if the arc diagram is 'interlinked'. For example, Figure 3 shows an arc diagram that is interlinked and Figure 5 shows one that is not. In terms of lattice paths, the bounded regions are those whose corresponding lattice path never touches the \(x\)-axis except at the origin.
This shows that the number of bounded regions of \(\mathcal{C}_{n}\) is \(2^{n}n!\) times the number of unlabeled lattice paths of length \(2n\) that never touch the \(x\)-axis apart from at the origin. Deleting the first step (which is necessarily an up-step) gives a bijection between such paths and those of length \(2n-1\) that never fall below the \(x\)-axis. Using the same idea as in the proof of Lemma 4.9, it can be checked that the number of such paths is \(\binom{2n-1}{n}\). This proves the required result.
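The lattice-path count used here is easy to confirm numerically; the snippet below enumerates all \(\pm 1\) paths of length \(2n\) whose partial sums stay positive and compares the total with \(\binom{2n-1}{n}\).

```python
from math import comb

def strictly_positive_paths(n):
    """Count +-1 paths of length 2n all of whose partial sums are positive,
    i.e. lattice paths that never return to the x-axis after the origin."""
    count = 0
    for mask in range(1 << (2 * n)):
        h = 0
        for k in range(2 * n):
            h += 1 if (mask >> k) & 1 else -1
            if h <= 0:
                break
        else:
            count += 1
    return count

for n in range(1, 7):
    print(n, strictly_positive_paths(n), comb(2 * n - 1, n))
```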
**Remark 4.16**.: _In [13], the authors study the type \(C\) Catalan arrangement directly, i.e., without using the translation \(\mathcal{C}_{n}\) mentioned above. Hence, using the same logic, they use orders on the letters_
\[\{\alpha_{i}^{(s)}\mid i\in[-n,n]\setminus\{0\},\ s\in\{0,1\}\}\]
_to represent the regions of the type \(C\) Catalan arrangement. They claim that these orders are those such that the following hold for any \(i,j\in[-n,n]\setminus\{0\}\) and \(s\in\{0,1\}\):_
1. _If_ \(\alpha_{i}^{(0)}\) _appears before_ \(\alpha_{j}^{(0)}\)_, then_ \(\alpha_{i}^{(1)}\) _appears before_ \(\alpha_{j}^{(1)}\)_._
2. \(\alpha_{i}^{(0)}\) _appears before_ \(\alpha_{i}^{(1)}\)_._
3. _If_ \(\alpha_{i}^{(0)}\) _appears before_ \(\alpha_{j}^{(s)}\)_, then_ \(\alpha_{-j}^{(0)}\) _appears before_ \(\alpha_{-i}^{(s)}\)_._
_Though this can be shown to be true, the method used in [13] to construct a point satisfying the inequalities given by such an order does not seem to work in general. We describe their method and then exhibit a case where it does not work._
_Let \(w=w_{1}\cdots w_{4n}\) be an order satisfying the properties given above. Then construct \(\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\) as follows: Let \(z_{0}=0\) (or pick \(z_{0}\) arbitrarily). Then define \(z_{p}\) for \(p=1,2,\ldots,4n\) in order as follows: If \(w_{p}=\alpha_{i}^{(0)}\) then set \(z_{p}=z_{p-1}+\frac{1}{2n+1}\) and \(x_{i}=z_{p}\), and if \(w_{p}=\alpha_{i}^{(1)}\) then set \(z_{p}=x_{i}+1\). Here we consider \(x_{-i}=-x_{i}\) for any \(i\in[n]\). Then \(\mathbf{x}\) satisfies the inequalities given by \(w\)._
Figure 5. Arc diagram associated to the symmetric sketch of Example 4.7.
_The following example shows that this method does not always work; in fact \(\mathbf{x}\) is not always well-defined. Consider the order \(w=\alpha_{-2}^{(0)}\alpha_{1}^{(0)}\alpha_{-2}^{(1)}\alpha_{1}^{(1)}\alpha_{-1}^ {(0)}\alpha_{2}^{(0)}\alpha_{-1}^{(1)}\alpha_{2}^{(1)}\). Following the above procedure, we would get that \(x_{1}\) is both \(\frac{2}{5}\) as well as \(-1-\frac{3}{5}\)._
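The failure is easy to reproduce; the following short script (with the letters encoded as (subscript, superscript) pairs, our own convention) runs the procedure described above on the order \(w\) of the remark and reports the clash, printing that \(x_{1}\) would have to be both \(\frac{2}{5}\) and \(-\frac{8}{5}=-1-\frac{3}{5}\).

```python
from fractions import Fraction

# The order w from the remark, encoded as (subscript, superscript) pairs, read left to right.
w = [(-2, 0), (1, 0), (-2, 1), (1, 1), (-1, 0), (2, 0), (-1, 1), (2, 1)]
n = 2

z = Fraction(0)
x = {}                                    # tentative values of x_1, ..., x_n
for (i, s) in w:
    if s == 0:                            # letter alpha_i^{(0)}: advance z and set x_i = z
        z += Fraction(1, 2 * n + 1)
        val = z if i > 0 else -z          # x_{-i} is read as -x_i
        if abs(i) in x and x[abs(i)] != val:
            print(f"x_{abs(i)} would have to be both {x[abs(i)]} and {val}")
        x[abs(i)] = val
    else:                                 # letter alpha_i^{(1)}: set z = x_i + 1
        z = (x[abs(i)] if i > 0 else -x[abs(i)]) + 1
```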
### Extended type C Catalan
Fix \(m,n\geq 1\). The type \(C\)\(m\)-Catalan arrangement in \(\mathbb{R}^{n}\) has hyperplanes
\[2X_{i} =0,\pm 1,\pm 2,\ldots,\pm m\] \[X_{i}+X_{j} =0,\pm 1,\pm 2,\ldots,\pm m\] \[X_{i}-X_{j} =0,\pm 1,\pm 2,\ldots,\pm m\]
for all \(1\leq i<j\leq n\). We will study the arrangement obtained by performing the translation \(X_{i}=x_{i}+\frac{m}{2}\) for all \(i\in[n]\). The translated arrangement, which we call \(\mathcal{C}_{n}^{(m)}\), has hyperplanes
\[2x_{i} =-2m,-2m+1,\ldots,0\] \[x_{i}+x_{j} =-2m,-2m+1,\ldots,0\] \[x_{i}-x_{j} =0,\pm 1,\pm 2,\ldots,\pm m\]
for all \(1\leq i<j\leq n\). Note that \(\mathcal{C}_{n}=\mathcal{C}_{n}^{(1)}\). The regions of \(\mathcal{C}_{n}^{(m)}\) are given by valid total orders on
\[\{x_{i}+s\mid i\in[n],\ s\in[0,m]\}\cup\{-x_{i}-s\mid i\in[n],\ s\in[0,m]\}.\]
Just as we did for \(\mathcal{C}_{n}\), such orders will be represented by using the symbol \(\alpha_{i}^{(s)}\) for \(x_{i}+s\) and \(\alpha_{-i}^{(-s)}\) for \(-x_{i}-s\) for all \(i\in[n]\) and \(s\in[0,m]\). Let \(C^{(m)}(n)\) be the set
\[\{\alpha_{i}^{(s)}\mid i\in[n],\ s\in[0,m]\}\cup\{\alpha_{i}^{(s)}\mid-i\in[n],\ s\in[-m,0]\}.\]
For any \(\alpha_{i}^{(s)}\in C^{(m)}(n)\), \(\overline{\alpha_{i}^{(s)}}\) represents \(\alpha_{-i}^{(-s)}\) and is called the conjugate of \(\alpha_{i}^{(s)}\). Letters of the form \(\alpha_{i}^{(0)}\) or \(\alpha_{-i}^{(-m)}\) for any \(i\in[n]\) are called \(\alpha\)-letters. The others are called \(\beta\)-letters.
**Definition 4.17**.: _An order on the letters in \(C^{(m)}(n)\) is called a symmetric \(m\)-sketch if the following hold for all \(\alpha_{i}^{(s)},\alpha_{j}^{(t)}\in C^{(m)}(n)\):_
1. _If_ \(\alpha_{i}^{(s)}\) _appears before_ \(\alpha_{j}^{(t)}\)_, then_ \(\overline{\alpha_{j}^{(t)}}\) _appears before_ \(\overline{\alpha_{i}^{(s)}}\)_._
2. _If_ \(\alpha_{i}^{(s-1)}\) _appears before_ \(\alpha_{j}^{(t-1)}\)_, then_ \(\alpha_{i}^{(s)}\) _appears before_ \(\alpha_{j}^{(t)}\)_._
3. \(\alpha_{i}^{(s-1)}\) _appears before_ \(\alpha_{i}^{(s)}\)_._
The following result can be proved just as Proposition 4.3.
**Proposition 4.18**.: _An order on the letters in \(C^{(m)}(n)\) corresponds to a region of \(\mathcal{C}_{n}^{(m)}\) if and only if it is a symmetric \(m\)-sketch._
Similar to Lemma 4.6, it can be shown that the order in which the subscripts of the \(\alpha\)-letters appear in a symmetric \(m\)-sketch is of the form
\[i_{1}\quad i_{2}\quad\cdots\quad i_{n}\quad-i_{n}\quad\cdots\quad-i_{2}\quad-i _{1}\]
where \(\{|i_{1}|,\ldots,|i_{n}|\}=[n]\). Just as in the case of symmetric sketches, we associate an \(\alpha,\beta\)-word and signed permutation to a symmetric \(m\)-sketch which completely determines it.
**Example 4.19**.: _To the symmetric 2-sketch_
\[\alpha_{2}^{(0)}\alpha_{-1}^{(-2)}\alpha_{2}^{(1)}\alpha_{-1}^{(-1)}\alpha_{ 1}^{(0)}\alpha_{-2}^{(-2)}\mid\alpha_{2}^{(2)}\alpha_{-1}^{(0)}\alpha_{1}^{(1) }\alpha_{-2}^{(-1)}\alpha_{1}^{(2)}\alpha_{-2}^{(0)}\]
_we associate the pair consisting of the following:_
1. \(\alpha,\beta\)_-word:_ \(\alpha\alpha\beta\beta\alpha\alpha\)_._
2. _Signed permutation:_ \(2\)__\(-1\)_._
The set of \(\alpha,\beta\)-words associated to symmetric \(m\)-sketches for \(m>1\) does not seem to have a simple characterization like those for symmetric sketches (see Proposition 4.8). However, looking at symmetric \(m\)-sketches as labeled non-nesting partitions as done in [2], we see that such objects have already been counted bijectively (see [10]).
**Definition 4.20**.: _A symmetric \(m\)-non-nesting partition is a partition of \([-(m+1)n,(m+1)n]\setminus\{0\}\) such that the following hold:_
1. _Each block is of size_ \((m+1)\)_._
2. _If_ \(B\) _is a block, so is_ \(-B\)_._
3. _If_ \(a,b\) _are consecutive elements of some block_ \(B\) _with_ \(a<b\)_, then no two numbers_ \(c,d\) _with_ \(a<c<d<b\) _lie in the same block._
Just as we did for the \(m=1\) case, we can obtain a labeled symmetric \(m\)-non-nesting partition from a symmetric \(m\)-sketch by joining the letters \(\alpha_{i}^{(0)},\alpha_{i}^{(1)},\ldots,\alpha_{i}^{(m)}\) and similarly \(\alpha_{-i}^{(-m)},\alpha_{-i}^{(-m+1)},\ldots,\alpha_{-i}^{(0)}\) with arcs and labeling each such chain with the subscript of the letters being joined.
**Example 4.21**.: _To the symmetric 2-sketch in Example 4.19, we associate the labeled 2-non-nesting partition of Figure 6._
Various classes of non-nesting partitions have been counted bijectively. In terms of [10] or [2], the symmetric \(m\)-non-nesting partitions defined above are called type \(C\) partitions of size \((m+1)n\) of type \((m+1,\ldots,m+1)\), where this is an \(n\)-tuple recording the sizes of the (nonzero) block pairs \(\{B,-B\}\). The number of such partitions is
\[\binom{(m+1)n}{n}.\]
Hence we get the following theorem.
**Theorem 4.22**.: _The number of symmetric \(m\)-sketches, which is the number of regions of \(\mathcal{C}_{n}^{(m)}\) is_
\[2^{n}n!\binom{(m+1)n}{n}.\]
Figure 6. A labeled 2-non-nesting partition

## 5. Catalan deformations of other types

We will now use 'sketches and moves', as in [6], to count the regions of Catalan arrangements of other types. Depending on the context, we represent the regions of arrangements using sketches, arc diagrams, or lattice paths and frequently make use of the bijections identifying them. We usually use sketches to define moves and use arc diagrams and lattice paths to count regions as well as bounded regions.
### Type D Catalan
Fix \(n\geq 2\). The type \(D\) Catalan arrangement in \(\mathbb{R}^{n}\) has hyperplanes
\[X_{i}+X_{j} =-1,0,1\] \[X_{i}-X_{j} =-1,0,1\]
for \(1\leq i<j\leq n\). Translating this arrangement by setting \(X_{i}=x_{i}+\frac{1}{2}\) for all \(i\in[n]\), we get the arrangement \(\mathcal{D}_{n}\) with hyperplanes
\[x_{i}+x_{j} =-2,-1,0\] \[x_{i}-x_{j} =-1,0,1\]
for \(1\leq i<j\leq n\). Figure 7 shows \(\mathcal{D}_{2}\) as a sub-arrangement of \(\mathcal{C}_{2}\). It also shows how the regions of \(\mathcal{D}_{2}\) partition the regions of \(\mathcal{C}_{2}\).
Figure 7. The arrangement \(\mathcal{C}_{2}\) with the hyperplanes in \(\mathcal{D}_{2}\) in bold. Two regions of \(\mathcal{C}_{2}\) are labeled with their symmetric labeled non-nesting partition.

We use the idea of moves to count the regions of \(\mathcal{D}_{n}\) by considering it as a sub-arrangement of \(\mathcal{C}_{n}\). The hyperplanes from \(\mathcal{C}_{n}\) that are missing in \(\mathcal{D}_{n}\) are

\[2x_{i}=-2,-1,0\]

for all \(i\in[n]\). Hence, the type \(D\) Catalan moves on symmetric sketches (regions of \(\mathcal{C}_{n}\)), which we call \(\mathcal{D}\) moves, are as follows:
1. Swapping the \(2n^{th}\) and \((2n+1)^{th}\) letter.
2. Swapping the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters if they are adjacent, along with the \(n^{th}\) and \((n+1)^{th}\)\(\beta\)-letters.
The first move covers the inequalities corresponding to the hyperplanes \(x_{i}+1=-x_{i}-1\) and \(x_{i}=-x_{i}\) for all \(i\in[n]\) since the only conjugates that are adjacent, by Lemma 4.4, are the \(2n^{th}\) and \((2n+1)^{th}\) letter.
The second move covers the inequalities corresponding to the hyperplanes \(x_{i}=-x_{i}-1\) (equivalently, \(x_{i}+1=-x_{i}\)) for all \(i\in[n]\). This is due to the fact that the only way \(\alpha_{i}^{(0)}\) and \(\alpha_{-i}^{(-1)}\) as well as \(\alpha_{i}^{(1)}\) and \(\alpha_{-i}^{(0)}\) can be adjacent is, by Lemma 4.6, when the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters are adjacent. Also, by Lemma 4.4, the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters are adjacent if and only if the \(n^{th}\) and \((n+1)^{th}\)\(\beta\)-letters are adjacent.
**Example 5.1**.: \(A\) _series of \(\mathcal{D}\) moves applied to a symmetric sketch is given below:_
\[\alpha_{-1}^{(-1)}\alpha_{2}^{(0)}\alpha_{-2}^{(-1)}\alpha_{-1}^ {(0)} \mid\alpha_{1}^{(0)}\alpha_{2}^{(1)}\alpha_{-2}^{(0)}\alpha_{1}^{(1)}\] \[\xrightarrow{\mathcal{D}\;\text{move}}\alpha_{-1}^{(-1)}\alpha_ {2}^{(0)}\alpha_{-2}^{(-1)}\alpha_{1}^{(0)} \mid\alpha_{-1}^{(0)}\alpha_{2}^{(1)}\alpha_{-2}^{(0)}\alpha_{1}^{(1)}\] \[\xrightarrow{\mathcal{D}\;\text{move}}\alpha_{-1}^{(-1)}\alpha_ {-2}^{(-1)}\alpha_{2}^{(0)}\alpha_{1}^{(0)} \mid\alpha_{-1}^{(0)}\alpha_{-2}^{(0)}\alpha_{2}^{(1)}\alpha_{1}^{(1)}\] \[\xrightarrow{\mathcal{D}\;\text{move}}\alpha_{-1}^{(-1)}\alpha_ {-2}^{(-1)}\alpha_{2}^{(0)}\alpha_{-1}^{(0)} \mid\alpha_{1}^{(0)}\alpha_{-2}^{(0)}\alpha_{2}^{(1)}\alpha_{1}^{(1)}\]
To count the regions of \(\mathcal{D}_{n}\), we have to count the number of equivalence classes of symmetric sketches where two sketches are equivalent if one can be obtained from the other via a series of \(\mathcal{D}\) moves. In Figure 7, the two labeled regions of \(\mathcal{C}_{2}\) are adjacent and lie in the same region of \(\mathcal{D}_{2}\). They are related by swapping of the fourth and fifth letters of their sketches, which is a \(\mathcal{D}\) move.
The fact about these moves that will help with the count is that a series of \(\mathcal{D}\) moves does not change the sketch too much. Hence we can list the sketches that are \(\mathcal{D}\) equivalent to a given sketch.
First, consider the case when the \(n^{th}\)\(\alpha\)-letter of the symmetric sketch is not in the \((2n-1)^{th}\) position. In this case, the \(n^{th}\)\(\alpha\)-letter is far enough from the \(2n^{th}\) letter that a \(\mathcal{D}\) move of the first kind (swapping the \(2n^{th}\) and \((2n+1)^{th}\) letter) will not affect the letter after the \(n^{th}\)\(\alpha\)-letter. Hence it does not change whether the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters are adjacent.
Let \(w\) be a sketch where the \(n^{th}\)\(\alpha\)-letter is not in the \((2n-1)^{th}\) position. The number of sketches \(\mathcal{D}\) equivalent to \(w\) is \(4\) when the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters are adjacent. They are illustrated below:
\[\cdots\alpha_{-i}^{(-1)}\alpha_{i}^{(0)}\cdots\alpha_{j}^{(s)}\mid\alpha_{-j}^{(-s)}\cdots\alpha_{-i}^{(0)}\alpha_{i}^{(1)}\cdots\]
\[\cdots\alpha_{-i}^{(-1)}\alpha_{i}^{(0)}\cdots\alpha_{-j}^{(-s)}\mid\alpha_{j}^{(s)}\cdots\alpha_{-i}^{(0)}\alpha_{i}^{(1)}\cdots\]
\[\cdots\alpha_{i}^{(0)}\alpha_{-i}^{(-1)}\cdots\alpha_{j}^{(s)}\mid\alpha_{-j}^{(-s)}\cdots\alpha_{i}^{(1)}\alpha_{-i}^{(0)}\cdots\]
\[\cdots\alpha_{i}^{(0)}\alpha_{-i}^{(-1)}\cdots\alpha_{-j}^{(-s)}\mid\alpha_{j}^{(s)}\cdots\alpha_{i}^{(1)}\alpha_{-i}^{(0)}\cdots\]
The number of sketches \(\mathcal{D}\) equivalent to \(w\) is \(2\) when the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letter are not adjacent. They are illustrated below:
\[\cdots\alpha_{j}^{(s)}\mid\alpha_{-j}^{(-s)}\cdots\quad\cdots\alpha_{-j}^{(-s)} \mid\alpha_{j}^{(s)}\cdots\]
Notice also that the equivalent sketches also satisfy the same properties (\(n^{th}\)\(\alpha\)-letter not being in the \((2n-1)^{th}\) position and whether the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters are adjacent).
In case the \(n^{th}\)\(\alpha\)-letter is in the \((2n-1)^{th}\) position of the symmetric sketch, it can be checked that it has exactly 4 equivalent sketches all of which also have the \(n^{th}\)\(\alpha\)-letter in the \((2n-1)^{th}\) position:
\[\cdots\alpha_{i}^{(0)}\alpha_{i}^{(1)}\mid\alpha_{-i}^{(-1)}\alpha_{-i}^{(0)}\cdots\]
\[\cdots\alpha_{i}^{(0)}\alpha_{-i}^{(-1)}\mid\alpha_{i}^{(1)}\alpha_{-i}^{(0)}\cdots\]
\[\cdots\alpha_{-i}^{(-1)}\alpha_{i}^{(0)}\mid\alpha_{-i}^{(0)}\alpha_{i}^{(1)}\cdots\]
\[\cdots\ \alpha_{-i}^{(-1)}\alpha_{-i}^{(0)}\mid\alpha_{i}^{(0)}\alpha_{i}^{(1)}\cdots\]
Figure 7 shows that each region of \(\mathcal{D}_{2}\) contains exactly 2 or 4 regions of \(\mathcal{C}_{2}\), as expected from the above observations.
**Theorem 5.2**.: _The number of \(\mathcal{D}\) equivalence classes on symmetric sketches and hence the number of regions of \(\mathcal{D}_{n}\) is_
\[2^{n-1}\cdot\frac{(2n-2)!}{(n-1)!}\cdot(3n-2).\]
Proof.: By the observations made above, the number of sketches equivalent to a given sketch only depends on its \(\alpha,\beta\)-word (see Proposition 4.8). So, we need to count the number of \(\alpha,\beta\)-words of length \(2n\) with any prefix having at least as many \(\alpha\)-letters as \(\beta\)-letters that are of the following types:
1. The \(n^{th}\) \(\alpha\)-letter is not in the \((2n-1)^{th}\) position and (a) the letter after the \(n^{th}\) \(\alpha\)-letter is an \(\alpha\), or (b) the letter after the \(n^{th}\) \(\alpha\)-letter is a \(\beta\).
2. The \(n^{th}\)\(\alpha\)-letter is in the \((2n-1)^{th}\) position.
We first count the second type of \(\alpha,\beta\)-words. If the \(n^{th}\)\(\alpha\)-letter is in the \((2n-1)^{th}\) position, the first \((2n-2)\) letters have \((n-1)\)\(\alpha\)-letters and \((n-1)\)\(\beta\)-letters and hence form a ballot sequence. This means that there is no restriction on the \(2n^{th}\) letter; it can be \(\alpha\) or \(\beta\). So, the total number of such \(\alpha,\beta\)-words is
\[2\cdot\frac{1}{n}\binom{2n-2}{n-1}.\]
The numbers of \(\alpha,\beta\)-words of the types 1(a) and 1(b) mentioned above are the same. This is because changing the letter after the \(n^{th}\) \(\alpha\)-letter is an involution on the set of \(\alpha,\beta\)-words of length \(2n\) with any prefix having at least as many \(\alpha\)-letters as \(\beta\)-letters. We have just counted such words that have the \(n^{th}\) \(\alpha\)-letter in the \((2n-1)^{th}\) position. Hence, using Lemma 4.9, we get that the number of words of type 1(a) and 1(b) are both equal to
\[\frac{1}{2}\cdot\left[\binom{2n}{n}-\frac{2}{n}\binom{2n-2}{n-1}\right].\]
Combining the observations made above, we get that the number of regions of \(\mathcal{D}_{n}\) is
\[2^{n}n!\cdot\left(\frac{1}{4}\cdot\left[\frac{2}{n}\binom{2n-2}{n-1}+\frac{1} {2}\cdot\left[\binom{2n}{n}-\frac{2}{n}\binom{2n-2}{n-1}\right]\right]+\frac{ 1}{2}\cdot\left[\frac{1}{2}\cdot\left[\binom{2n}{n}-\frac{2}{n}\binom{2n-2}{ n-1}\right]\right]\right)\]
which simplifies to the required formula.
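As a sanity check on the arithmetic, the following script recomputes the class count above with exact rationals and compares it with the closed form of Theorem 5.2 for several values of \(n\).

```python
from fractions import Fraction
from math import comb, factorial

def regions_D(n):
    # alpha,beta-words with the n-th alpha-letter in the (2n-1)-th position
    A = Fraction(2, n) * comb(2 * n - 2, n - 1)
    rest = comb(2 * n, n) - A
    # classes of 4 sketches come from A + rest/2 words, classes of 2 sketches from rest/2 words
    classes = (A + rest / 2) / 4 + (rest / 2) / 2
    return 2 ** n * factorial(n) * classes

def closed_form(n):
    return 2 ** (n - 1) * factorial(2 * n - 2) // factorial(n - 1) * (3 * n - 2)

for n in range(2, 9):
    assert regions_D(n) == closed_form(n)
print("Theorem 5.2 formula agrees for n = 2, ..., 8")
```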
Just as we did for \(\mathcal{C}_{n}\), we can describe and count which regions of \(\mathcal{D}_{n}\) are bounded.
**Theorem 5.3**.: _The number of bounded regions of \(\mathcal{D}_{n}\) is_
\[2^{n-1}\cdot\frac{(2n-3)!}{(n-2)!}\cdot(3n-4).\]
Proof.: For \(n\geq 2\), both \(\mathcal{C}_{n}\) and \(\mathcal{D}_{n}\) have rank \(n\). Hence, a region of \(\mathcal{D}_{n}\) is bounded exactly when all the regions of \(\mathcal{C}_{n}\) it contains are bounded.
We have already seen in Theorem 4.15 that a region of \(\mathcal{C}_{n}\) is bounded exactly when its corresponding lattice path does not touch the \(x\)-axis except at the origin. Such regions are not closed under \(\mathcal{D}\) moves. However, if we include regions whose corresponding lattice paths touch the \(x\)-axis only at the origin and \((2n,0)\), this set of regions, which we call \(S\), is closed under the action of \(\mathcal{D}\) moves because such lattice paths are closed under the action of changing the \(2n^{th}\) step. Denote by \(S_{\mathcal{D}}\) the set of equivalence classes that \(\mathcal{D}\) moves partition \(S\) into, i.e., \(S_{\mathcal{D}}\) is the set of regions of \(\mathcal{D}_{n}\) that contain regions of \(S\).
Just as in the proof of Theorem 5.2, one can check that the set \(S\) is closed under the action of changing the letter after the \(n^{th}\)\(\alpha\)-letter. Also, note that the lattice paths in \(S\) do not touch the \(x\)-axis at \((2n-2,0)\), and hence the \(n^{th}\)\(\alpha\)-letter cannot be in the \((2n-1)^{th}\) position. Using the above observations and the same method to count regions of \(\mathcal{D}_{n}\) as in the proof of Theorem 5.2, we get the number of regions in \(S_{\mathcal{D}}\) is
\[2^{n}n!\cdot\frac{3}{8}\left(\binom{2n-1}{n}+\frac{1}{n}\binom{2n-2}{n-1}\right).\]
It can also be checked that each unbounded region in \(S\) is \(\mathcal{D}\) equivalent to exactly one other region of \(S\), and this region is bounded. This is because the lattice paths corresponding to these unbounded regions touch the \(x\)-axis at \((2n,0)\). Hence, they cannot have the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters being adjacent and changing the \(2n^{th}\) letter to an \(\alpha\) gives a bounded region. Since the unbounded regions in \(S\) correspond to Dyck paths of length \((2n-2)\) (by deleting the first and last step), we get that the number of unbounded regions in \(S_{\mathcal{D}}\) is
\[2^{n}n!\cdot\frac{1}{n}\binom{2n-2}{n-1}.\]
Combining the above results, we get that the number of bounded regions of \(\mathcal{D}_{n}\) is
\[2^{n}n!\left(\frac{3}{8}\left(\binom{2n-1}{n}+\frac{1}{n}\binom{2n-2}{n-1} \right)-\frac{1}{n}\binom{2n-2}{n-1}\right).\]
This simplifies to give our required result.
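The same kind of arithmetic check works for the bounded count.

```python
from fractions import Fraction
from math import comb, factorial

def bounded_D(n):
    cat = Fraction(comb(2 * n - 2, n - 1), n)          # Dyck paths of length 2n-2
    inner = Fraction(3, 8) * (comb(2 * n - 1, n) + cat) - cat
    return 2 ** n * factorial(n) * inner

def closed_form(n):
    return 2 ** (n - 1) * factorial(2 * n - 3) // factorial(n - 2) * (3 * n - 4)

for n in range(2, 9):
    assert bounded_D(n) == closed_form(n)
print("Theorem 5.3 formula agrees for n = 2, ..., 8")
```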
As mentioned earlier, we can choose a specific sketch from each \(\mathcal{D}\) equivalence class to represent the regions of \(\mathcal{D}_{n}\). It can be checked that symmetric sketches that satisfy the following are in bijection with regions of \(\mathcal{D}_{n}\):
1. The last letter of the first half is a \(\beta\)-letter.
2. The \(n^{th}\)\(\alpha\)-letter must have a negative label if the letter following it is an \(\alpha\)-letter or the \(n^{th}\)\(\beta\)-letter.
We will call such sketches type \(D\) sketches. They will be used in Section 6 to interpret the coefficients of \(\chi_{\mathcal{D}_{n}}\). Note that the type \(D\) sketches that correspond to bounded regions of \(\mathcal{D}_{n}\) are those that, when converted to a lattice path, do not touch the \(x\)-axis apart from at the origin.
### Pointed type C Catalan
The type \(B\) and type \(BC\) Catalan arrangements we are going to consider now are not sub-arrangements of the type \(C\) Catalan arrangement. While it is possible to consider these arrangements as sub-arrangements of the type \(C\)\(2\)-Catalan arrangement (see Section 4.1), this would add many extra hyperplanes. This would make defining moves and counting equivalence classes difficult. Also, we do not have a simple characterization of \(\alpha,\beta\)-words associated to symmetric \(2\)-sketches, as we do for symmetric sketches (see Proposition 4.8).
We instead consider them as sub-arrangements of the arrangement \(\mathcal{P}_{n}\) in \(\mathbb{R}^{n}\) that has hyperplanes
\[x_{i} =-\frac{5}{2},-\frac{3}{2},-1,-\frac{1}{2},0,\frac{1}{2},\frac{3}{2}\] \[x_{i}+x_{j} =-2,-1,0\] \[x_{i}-x_{j} =-1,0,1\]
for all \(1\leq i<j\leq n\). It can be checked that the regions of \(\mathcal{P}_{n}\) are given by valid total orders on
\[\{x_{i}+s\mid i\in[n],\ s\in\{0,1\}\}\cup\{-x_{i}-s\mid i\in[n],\ s\in\{0,1\} \}\cup\{-\frac{3}{2},-\frac{1}{2},\frac{1}{2},\frac{3}{2}\}.\]
**Remark 5.4**.: _The arrangement \(\mathcal{P}_{n}\) is the arrangement \(\mathcal{C}_{n}(\lambda)\) defined in [2, Equation (4)] with \(\lambda_{i}=2\) for all \(i\in[n]\) and \(m=2\)._
We now define sketches that represent such orders. Just as before, we represent \(x_{i}+s\) as \(\alpha_{i}^{(s)}\) and \(-x_{i}-s\) as \(\alpha_{-i}^{(-s)}\) for any \(i\in[n]\) and \(s\in\{0,1\}\). The numbers \(-\frac{3}{2},-\frac{1}{2},\frac{1}{2},\frac{3}{2}\) will be represented as \(\alpha_{-}^{(-1.5)}\), \(\alpha_{-}^{(-0.5)}\), \(\alpha_{+}^{(0.5)}\), \(\alpha_{+}^{(1.5)}\) respectively.
**Example 5.5**.: _The total order_
\[-\frac{3}{2}<x_{2}<-x_{1}-1<-\frac{1}{2}<x_{1}<x_{2}+1<-x_{2}-1<-x_{1}<\frac{ 1}{2}<x_{1}+1<-x_{2}<\frac{3}{2}\]
_is represented as \(\alpha_{-}^{(-1.5)}\ \alpha_{2}^{(0)}\ \alpha_{-1}^{(-1)}\ \alpha_{-}^{(-0.5)}\ \alpha_{1}^{(0)}\ \alpha_{2}^{(1)}\ \alpha_{-2}^{(-1)}\ \alpha_{-1}^{(0)}\ \alpha_{+}^{(0.5)}\ \alpha_{1}^{(1)}\ \alpha_{-2}^{(0)}\ \alpha_{+}^{(1.5)}\)._
Set \(B(n)\) to be the set
\[\{\alpha_{i}^{(s)}\mid i\in[n],\ s\in\{0,1\}\}\cup\{\alpha_{i}^{(s)}\mid-i\in [n],\ s\in\{-1,0\}\}\cup\{\alpha_{-}^{(-1.5)},\alpha_{-}^{(-0.5)},\alpha_{+}^{ (0.5)},\alpha_{+}^{(1.5)}\}.\]
We define _pointed symmetric sketches_ to be the words in \(B(n)\) that correspond to regions of \(\mathcal{P}_{n}\) (this terminology will become clear soon). Denote by \(\overline{\alpha_{x}^{(s)}}\) the letter \(\alpha_{-x}^{(-s)}\) for any \(\alpha_{x}^{(s)}\in B(n)\). We have the following characterization of pointed symmetric sketches:
**Proposition 5.6**.: _A word in the letters \(B(n)\) is a pointed symmetric sketch if and only if the following hold for any \(\alpha_{x}^{(s)},\alpha_{y}^{(t)}\in B(n)\):_
1. _If_ \(\alpha_{x}^{(s)}\) _appears before_ \(\alpha_{y}^{(t)}\) _then_ \(\overline{\alpha_{y}^{(t)}}\) _appears before_ \(\overline{\alpha_{x}^{(s)}}\)_._
2. _If_ \(\alpha_{x}^{(s-1)}\) _appears before_ \(\alpha_{y}^{(t-1)}\) _then_ \(\alpha_{x}^{(s)}\) _appears before_ \(\alpha_{y}^{(t)}\)_._
3. \(\alpha_{x}^{(s-1)}\) _appears before_ \(\alpha_{x}^{(s)}\)_._
4. _Each letter of_ \(B(n)\) _appears exactly once._
Just as was done in the proof of Proposition 4.3, we can inductively construct a point in \(\mathbb{R}^{n}\) satisfying the inequalities specified by a pointed sketch. Also, just as for type \(C\) sketches, it can be shown that these sketches are symmetric about the center. We also represent such
sketches using arc diagrams in a similar manner. Note that in this case we also include an arc between \(\alpha_{-}^{(-0.5)}\) and \(\alpha_{+}^{(0.5)}\).
**Example 5.7**.: _To the pointed sketch given below, we associate the arc diagram in Figure 8._
\[\alpha_{-}^{(-1.5)}\;\alpha_{2}^{(0)}\;\alpha_{-1}^{(-1)}\;\alpha_{-}^{(-0.5)}\; \alpha_{1}^{(0)}\;\alpha_{2}^{(1)}\;|\;\alpha_{-2}^{(-1)}\;\alpha_{-1}^{(0)}\; \alpha_{+}^{(0.5)}\;\alpha_{1}^{(1)}\;\alpha_{-2}^{(0)}\;\alpha_{+}^{(1.5)}\]
To a pointed symmetric sketch, we can associate a pointed \(\alpha,\beta\)-word of length \((2n+2)\) and a signed permutation as follows:
1. For the letters in the first half of the pointed sketch of the form \(\alpha_{i}^{(0)}\), \(\alpha_{-i}^{(-1)}\) or \(\alpha_{-}^{(-1.5)}\), we write \(\alpha\) and for the others we write \(\beta\) (\(\alpha\) corresponds to 'openers' in the arc diagram and \(\beta\) to 'closers'). The \(\beta\) corresponding to \(\alpha_{-}^{(-0.5)}\) is the one pointed to.
2. The subscripts of the first \(n\) \(\alpha\)-letters other than \(\alpha_{-}^{(-1.5)}\) give us the signed permutation.
**Example 5.8**.: _To the pointed sketch in Example 5.7, we associate the following pair:_
1. _Pointed_ \(\alpha,\beta\)_-word:_ \(\alpha\alpha\alpha\beta\alpha\beta\)_._
2. _Signed permutation:_ \(2\)__\(-1\)_._
As was done for symmetric sketches, we can see that the method given above to get a signed permutation does actually give a signed permutation. Also, such a pair has at most one pointed sketch associated to it. We now characterize the pointed \(\alpha,\beta\)-words and signed permutations associated to pointed sketches.
**Proposition 5.9**.: _A pair consisting of_
1. _a pointed_ \(\alpha,\beta\)_-word of length_ \((2n+2)\) _satisfying the property that in any prefix, there are at least as many_ \(\alpha\)_-letters as_ \(\beta\)_-letters and that the number of_ \(\alpha\)_-letters before the pointed_ \(\beta\) _is_ \((n+1)\)_, and_
2. _any signed permutation_
_corresponds to a pointed symmetric sketch and all pointed sketches correspond to such pairs._
Proof.: Most of the proof is the same as that for type \(C\) sketches. The main difference is pointing to the \(\beta\)-letter corresponding to \(\alpha_{-}^{(-0.5)}\). The property we have to take care of is that there is no nesting in the arc joining \(\alpha_{-}^{(-0.5)}\) to \(\alpha_{+}^{(0.5)}\). This is the same as specifying when an arc drawn from a \(\beta\)-letter in the first half to its mirror image in the second half does not cause any nesting.
Figure 8. Arc diagram associated to the pointed symmetric sketch in Example 5.7.
Denote by \(N_{\alpha,b}\) the number of \(\alpha\)-letters before the \(\beta\) under consideration, \(N_{\alpha,a}\) the number of \(\alpha\)-letters in the first half after the \(\beta\) and similarly define \(N_{\beta,b}\) and \(N_{\beta,a}\). The condition that we do not want an arc inside the one joining the \(\beta\) to its mirror is given by
\[N_{\alpha,b}\geq N_{\beta,b}+1+N_{\beta,a}+N_{\alpha,a}.\]
This is because of the symmetry of the arc diagram and the fact that we want any \(\beta\)-letter between the pointed \(\beta\) and its mirror to have its corresponding \(\alpha\) before the pointed \(\beta\). Similarly, the condition that we do not want the arc joining the \(\beta\) to its mirror to be contained in any arc is given by
\[N_{\alpha,b}\leq N_{\beta,b}+1+N_{\beta,a}+N_{\alpha,a}.\]
This is because of the symmetry of the arc diagram and the fact that we want any \(\alpha\)-letter before the pointed \(\beta\) to have its corresponding \(\beta\) before the mirror of the pointed \(\beta\).
Combining the above observations, we get
\[N_{\alpha,b}=N_{\beta,b}+1+N_{\beta,a}+N_{\alpha,a}.\]
But this says that the number of \(\alpha\)-letters before the pointed \(\beta\) should be equal to the number of remaining letters in the first half. Since the total number of letters in the first half is \((2n+2)\), we get that the arc joining a \(\beta\) in the first half to its mirror does not cause nesting problems if and only if the number of \(\alpha\)-letters before it is \((n+1)\).
Just as we used lattice paths for symmetric sketches, we use pointed lattice paths to represent pointed symmetric sketches. The one corresponding to the sketch in Example 5.7 is given in Figure 10.
**Theorem 5.10**.: _The number of pointed symmetric sketches, which is the number of regions of \(\mathcal{P}_{n}\), is_
\[2^{n}n!\binom{2n+2}{n}.\]
Figure 10. Pointed lattice path corresponding to the pointed sketch in Example 5.7.
Figure 9. Arc from \(\beta\) to its mirror image.
Proof.: Since there is no condition on the signed permutations, we just have to count the \(\alpha,\beta\)-words of the form mentioned in Proposition 5.9. We show that these words are in bijection with \(\alpha,\beta\)-words of length \((2n+2)\) with any prefix having at least as many \(\alpha\)-letters as \(\beta\)-letters that have at least \((n+2)\)\(\alpha\)-letters. This means that their corresponding lattice paths do not end on the \(x\)-axis. This will prove the required result since the number of such words, using Lemma 4.9 and the fact that Catalan numbers count Dyck paths, is
\[\binom{2n+2}{n+1}-\frac{1}{n+2}\binom{2n+2}{n+1}=\binom{2n+2}{n}.\]
Given a pointed \(\alpha,\beta\)-word, we replace the pointed \(\beta\)-letter with an \(\alpha\)-letter to obtain an \(\alpha,\beta\)-word of the type described above. Starting with an \(\alpha,\beta\)-word with at least \((n+2)\)\(\alpha\)-letters, changing the \((n+2)^{th}\)\(\alpha\)-letter to a \(\beta\) and pointing to it gives a pointed \(\alpha,\beta\)-word. This gives us the required bijection.
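The count in Theorem 5.10 can also be confirmed by brute force; the snippet below enumerates all \(\alpha,\beta\)-words of length \(2n+2\), applies the two conditions of Proposition 5.9 (ballot prefixes and exactly \(n+1\) \(\alpha\)-letters before the pointed \(\beta\)), and compares the total with \(\binom{2n+2}{n}\).

```python
from itertools import product
from math import comb

def pointed_words(n):
    count = 0
    for word in product("ab", repeat=2 * n + 2):     # 'a' = alpha, 'b' = beta
        # ballot condition: every prefix has at least as many alphas as betas
        h, ok = 0, True
        for c in word:
            h += 1 if c == "a" else -1
            if h < 0:
                ok = False
                break
        if not ok:
            continue
        # every beta with exactly n+1 alphas before it can serve as the pointed beta
        for p, c in enumerate(word):
            if c == "b" and word[:p].count("a") == n + 1:
                count += 1
    return count

for n in range(1, 8):
    print(n, pointed_words(n), comb(2 * n + 2, n))
```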
**Theorem 5.11**.: _The number of bounded regions of \(\mathcal{P}_{n}\) is_
\[2^{n}n!\binom{2n+1}{n+1}.\]
Proof.: Just as for type \(C\) regions, the region corresponding to a pointed sketch is bounded if and only if its arc diagram is interlinked. Also, the signed permutation does not play a role in determining if a region is bounded. Note that in this case, there is an arc joining a \(\beta\)-letter between the \((n+1)^{th}\) and \((n+2)^{th}\)\(\alpha\)-letter to its mirror image. If the arc diagram obtained by deleting this arc from the pointed \(\beta\)-letter is interlinked, then clearly so was the initial arc diagram. However, even if the arc diagram consists of two interlinked pieces when the arc from the pointed \(\beta\)-letter is removed (one on either side of the reflecting line), the corresponding region would still be bounded. Examining the bijection between arc diagrams and lattice paths, it can be checked that this means that pointed lattice paths corresponding to bounded regions are those that never touch the \(x\)-axis after the origin except maybe at \((2n+2,0)\).
Using the bijection mentioned in the proof of Theorem 5.10, we can see that the pointed \(\alpha,\beta\)-words corresponding to bounded regions are in bijection with \(\alpha,\beta\)-words whose lattice paths never touch the \(x\)-axis after the origin. We have already counted such paths in Theorem 4.15 and their number is
\[\binom{2n+1}{n+1}.\]
This gives the required result.
### Type B Catalan
Fix \(n\geq 1\). The type \(B\) Catalan arrangement in \(\mathbb{R}^{n}\) has the hyperplanes
\[X_{i} =-1,0,1\] \[X_{i}+X_{j} =-1,0,1\] \[X_{i}-X_{j} =-1,0,1\]
for all \(1\leq i<j\leq n\). Translating this arrangement by setting \(X_{i}=x_{i}+\frac{1}{2}\), we get the arrangement \(\mathcal{B}_{n}\) with hyperplanes
\[x_{i} =-\frac{3}{2},-\frac{1}{2},\frac{1}{2}\] \[x_{i}+x_{j} =-2,-1,0\] \[x_{i}-x_{j} =-1,0,1\]
for all \(1\leq i<j\leq n\). We consider \(\mathcal{B}_{n}\) as a sub-arrangement of \(\mathcal{P}_{n}\). The hyperplanes of \(\mathcal{P}_{n}\) that are missing from \(\mathcal{B}_{n}\) are
\[x_{i}=-\frac{5}{2},-1,0,\frac{3}{2}\]
for all \(i\in[n]\). Hence the moves on pointed sketches corresponding to changing one of the inequalities associated to these hyperplanes are as follows:
1. Corresponding to \(x_{i}=0\), \(x_{i}=-1\): Swapping the \((2n+2)^{th}\) and \((2n+3)^{th}\) letters if they are not \(\alpha_{-}^{(-0.5)}\) and \(\alpha_{+}^{(0.5)}\).
2. Corresponding to \(x_{i}=-\frac{5}{2}\), \(x_{i}=\frac{3}{2}\): Swapping \(\alpha_{-}^{(-1.5)}\) and a \(\beta\)-letter immediately before or after it (and making the corresponding change in the second half).
We can see that such moves change the sketch by at most swapping the two central letters (which changes the last letter of the associated pointed \(\alpha,\beta\)-word) or sliding \(\alpha_{-}^{(-1.5)}\) past adjacent \(\beta\)-letters. It can be checked that if we force that the last letter of the first half has to be a \(\beta\)-letter and that the \(\beta\)-letter immediately after the \((n+1)^{th}\) \(\alpha\)-letter has to be pointed to, we get a canonical sketch in each equivalence class. We will call such sketches type \(B\) sketches.
**Theorem 5.12**.: _The number of type \(B\) sketches, which is the number of regions of \(\mathcal{B}_{n}\), is_
\[2^{n}n!\binom{2n}{n}.\]
Proof.: Since there is no condition on the signed permutation, we count the \(\alpha,\beta\)-words associated to type \(B\) sketches. From Proposition 5.9, we can see that the \(\alpha,\beta\)-words we need to count are those that satisfy the following properties:
1. Length of the word is \((2n+2)\).
2. In any prefix, there are at least as many \(\alpha\)-letters as \(\beta\)-letters.
3. The letter immediately after the \((n+1)^{th}\)\(\alpha\)-letter is a \(\beta\) (pointed \(\beta\)).
4. The last letter is a \(\beta\).
We exhibit a bijection between these words and \(\alpha,\beta\)-words of length \(2n\) that satisfy property 2. We already know, from Lemma 4.9, that the number of such words is \(\binom{2n}{n}\) and so this will prove the required result.
If the \((n+1)^{th}\)\(\alpha\)-letter is at the \((2n+1)^{th}\) position, deleting the last two letters gives us an \(\alpha,\beta\)-word of length \(2n\) with \(n\)\(\alpha\)-letters that satisfies property 2. If the \((n+1)^{th}\)\(\alpha\)-letter is not at the \((2n+1)^{th}\) position, we delete the \(\beta\)-letter after it as well as the last letter of the word. This gives us an \(\alpha,\beta\)-word of length \(2n\) with more than \(n\)\(\alpha\)-letters that satisfies property 2. The process described gives us the required bijection.
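Again, the word count can be checked by brute force for small \(n\): the script below enumerates the \(\alpha,\beta\)-words satisfying the four properties above and compares the total with \(\binom{2n}{n}\).

```python
from itertools import product
from math import comb

def type_B_words(n):
    """Ballot alpha,beta-words of length 2n+2 whose letter right after the
    (n+1)-th alpha is a beta and whose last letter is a beta."""
    count = 0
    for word in product("ab", repeat=2 * n + 2):
        h, alphas, pos_after, ok = 0, 0, None, True
        for p, c in enumerate(word):
            h += 1 if c == "a" else -1
            if h < 0:
                ok = False
                break
            if c == "a":
                alphas += 1
                if alphas == n + 1:
                    pos_after = p + 1          # position right after the (n+1)-th alpha
        if not ok or word[-1] != "b":
            continue
        if pos_after is not None and pos_after < len(word) and word[pos_after] == "b":
            count += 1
    return count

for n in range(1, 7):
    print(n, type_B_words(n), comb(2 * n, n))
```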
**Theorem 5.13**.: _The number of bounded regions of \(\mathcal{B}_{n}\) is_
\[2^{n}n!\binom{2n-1}{n}.\]
Proof.: Both \(\mathcal{B}_{n}\) and \(\mathcal{P}_{n}\) have rank \(n\). Hence a region of \(\mathcal{B}_{n}\) is bounded if and only if all regions of \(\mathcal{P}_{n}\) that it contains are bounded.
In the proof of Theorem 5.11 we have characterized the pointed \(\alpha,\beta\)-words associated to bounded regions of \(\mathcal{P}_{n}\). These are the pointed lattice paths of length \((2n+2)\) that satisfy the following properties (irrespective of the position of the pointed \(\beta\)):
1. The step after the \((n+1)^{th}\) up-step is a down step (for there to exist a pointed \(\beta\)).
2. The path never touches the \(x\)-axis after the origin except maybe at \((2n+2,0)\).
We noted in Theorem 5.3 that lattice paths satisfying property 2 are closed under the action of changing the letter after the \((n+1)^{th}\) up-step as well as the action of changing the last step. This shows that the regions of \(\mathcal{P}_{n}\) that lie inside a region of \(\mathcal{B}_{n}\) are either all bounded or all unbounded. Hence the number of bounded regions of \(\mathcal{B}_{n}\) is just the number of type \(B\) sketches whose corresponding lattice path satisfies properties 1 and 2, which is
\[2^{n}n!\cdot\frac{1}{4}\cdot\left(\binom{2n+1}{n+1}+\frac{1}{n+1}\binom{2n}{n} \right).\]
This simplifies to give the required result.
### Type BC Catalan
The type \(BC\) Catalan arrangement in \(\mathbb{R}^{n}\) has hyperplanes
\[X_{i} =-1,0,1\] \[2X_{i} =-1,0,1\] \[X_{i}+X_{j} =-1,0,1\] \[X_{i}-X_{j} =-1,0,1\]
for all \(1\leq i<j\leq n\). Translating this arrangement by setting \(X_{i}=x_{i}+\frac{1}{2}\), we get the arrangement \(\mathcal{BC}_{n}\) with hyperplanes
\[x_{i} =-\frac{3}{2},-1,-\frac{1}{2},0,\frac{1}{2}\] \[x_{i}+x_{j} =-2,-1,0\] \[x_{i}-x_{j} =-1,0,1\]
for all \(1\leq i<j\leq n\). Again, we consider this arrangement as a sub-arrangement of \(\mathcal{P}_{n}\). To define moves on pointed sketches, note that the hyperplanes of \(\mathcal{P}_{n}\) that are missing from \(\mathcal{BC}_{n}\) are
\[x_{i}=-\frac{5}{2},\frac{3}{2}\]
for all \(i\in[n]\). Hence, the moves on pointed sketches corresponding to changing the inequalities associated to these hyperplanes are of the following form: Swapping \(\alpha_{-}^{(-1.5)}\) and a \(\beta\)-letter immediately before or after it (and making the corresponding change in the second half).

We can see that such moves only slide \(\alpha_{-}^{(-1.5)}\) past adjacent \(\beta\)-letters in the first half. It can be checked that if we force that the \(\beta\)-letter immediately after the \((n+1)^{th}\) \(\alpha\)-letter has to be pointed to, we get a canonical sketch in each equivalence class. We will call such sketches type \(BC\) sketches.
**Theorem 5.14**.: _The number of type \(BC\) sketches, which is the number of regions of \(\mathcal{BC}_{n}\), is_
\[2^{n-1}n!\binom{2n+2}{n+1}.\]
Proof.: Since there is no condition on the signed permutation for type \(BC\) sketches, we count the number of \(\alpha,\beta\)-words that satisfy the following properties:
1. Length of the word is \((2n+2)\).
2. In any prefix, there are at least as many \(\alpha\)-letters as \(\beta\)-letters.
3. The letter immediately after the \((n+1)^{th}\)\(\alpha\)-letter is a \(\beta\) (pointed \(\beta\)).
Using the involution on the set of words satisfying properties 1 and 2 of changing the letter immediately after the \((n+1)^{th}\)\(\alpha\)-letter and the fact that there are \(\binom{2n+2}{n+1}\) words satisfying properties 1 and 2, we get that the number of words satisfying the required properties is
\[\frac{1}{2}\cdot\binom{2n+2}{n+1}.\]
This gives the required result.
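The two counts used in this argument can be checked by brute force for small \(n\): the number of \(\alpha,\beta\)-words of length \(2n+2\) satisfying properties 1 and 2 is \(\binom{2n+2}{n+1}\), and exactly half of them also satisfy property 3. The following Python sketch is an illustration only (the helper names are ours, not part of the text).

```python
from itertools import product
from math import comb

def prefix_dominated(word):
    # property 2: every prefix has at least as many alpha-letters as beta-letters
    balance = 0
    for letter in word:
        balance += 1 if letter == 'a' else -1
        if balance < 0:
            return False
    return True

def pointed(word, n):
    # property 3: the letter right after the (n+1)-th alpha-letter is a beta-letter
    alphas = 0
    for i, letter in enumerate(word):
        if letter == 'a':
            alphas += 1
            if alphas == n + 1:
                return i + 1 < len(word) and word[i + 1] == 'b'
    return False

for n in range(5):
    words = [w for w in product('ab', repeat=2 * n + 2) if prefix_dominated(w)]
    assert len(words) == comb(2 * n + 2, n + 1)
    assert sum(pointed(w, n) for w in words) == comb(2 * n + 2, n + 1) // 2
print("word counts agree for n = 0,...,4")
```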
**Theorem 5.15**.: _The number of bounded regions of \(\mathcal{BC}_{n}\) is_
\[2^{n}n!\binom{2n}{n}.\]
Proof.: The proof of this result is very similar to that of Theorem 5.13. Since type \(BC\) sketches don't have the condition that the \(2n^{th}\) letter should be a \(\beta\)-letter, the number of bounded regions of \(\mathcal{BC}_{n}\) is
\[2^{n}n!\cdot\frac{1}{2}\cdot\left(\binom{2n+1}{n+1}+\frac{1}{n+1}\binom{2n}{n} \right).\]
This simplifies to give the required result.
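The simplification here (and the analogous one in the proof of Theorem 5.13) rests on the binomial identity \(\binom{2n+1}{n+1}+\frac{1}{n+1}\binom{2n}{n}=2\binom{2n}{n}\), which can be confirmed numerically; a minimal Python check:

```python
from math import comb

# Identity behind the simplification:
#   C(2n+1, n+1) + C(2n, n)/(n+1) = 2 * C(2n, n)
for n in range(1, 13):
    assert comb(2 * n + 1, n + 1) + comb(2 * n, n) // (n + 1) == 2 * comb(2 * n, n)
print("binomial identity verified for n = 1,...,12")
```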
## 6. Statistics on regions via generating functions
As mentioned in Section 1, the characteristic polynomial of an arrangement \(\mathcal{A}\) in \(\mathbb{R}^{n}\) is of the form
\[\chi_{\mathcal{A}}(t)=\sum_{i=0}^{n}(-1)^{n-i}c_{i}t^{i}\]
where \(c_{i}\) is a non-negative integer for all \(0\leq i\leq n\) and Zaslavsky's theorem tells us that
\[r(\mathcal{A}) =(-1)^{n}\chi_{\mathcal{A}}(-1)\] \[=\sum_{i=0}^{n}c_{i}.\]
In this section, we interpret the coefficients of the characteristic polynomials of the arrangements we have studied. More precisely, for each arrangement we have studied, we first define a statistic on the objects that we have seen correspond to its regions. We then show that the distribution of this statistic is given by the coefficients of the characteristic polynomial.
We do this by giving combinatorial meaning to the exponential generating functions for the characteristic polynomials of the arrangements we have studied. To obtain these generating functions, we use [18, Exercise 5.10], which we state and prove for convenience.
**Definition 6.1**.: _A sequence of arrangements \((\mathcal{A}_{1},\mathcal{A}_{2},\ldots)\) is called a Generalized Exponential Sequence of Arrangements (GESA) if_
* \(\mathcal{A}_{n}\) _is an arrangement in_ \(\mathbb{R}^{n}\) _such that every hyperplane is parallel to one of the form_ \(x_{i}=cx_{j}\) _for some_ \(c\in\mathbb{R}\)_._
* _For any_ \(k\)_-subset_ \(I\) _of_ \([n]\)_, the arrangement_ \[\mathcal{A}_{n}^{I}=\{H\in\mathcal{A}_{n}\mid H\text{ is parallel to }x_{i}=cx_{j}\text{ for some }i,j\in I\text{ and some }c\in\mathbb{R}\}\] _satisfies_ \(\operatorname{L}(\mathcal{A}_{n}^{I})\cong\operatorname{L}(\mathcal{A}_{k})\) _(isomorphic as posets)._
Note that all the arrangements we have studied are GESAs.
**Proposition 6.2**.: _Let \((\mathcal{A}_{1},\mathcal{A}_{2},\ldots)\) be a GESA, and define_
\[F(x) =\sum_{n\geq 0}(-1)^{n}r(\mathcal{A}_{n})\frac{x^{n}}{n!}\] \[G(x) =\sum_{n\geq 0}(-1)^{\operatorname{rank}(\mathcal{A}_{n})}b( \mathcal{A}_{n})\frac{x^{n}}{n!}.\]
_Then, we have_
\[\sum_{n\geq 0}\chi_{\mathcal{A}_{n}}(t)\frac{x^{n}}{n!}=\frac{G(x)^{(t+1)/ 2}}{F(x)^{(t-1)/2}}.\]
Proof.: The idea of the proof is the same as that of [18, Theorem 5.17]. By Whitney's Theorem [18, Theorem 2.4], we have for all \(n\),
\[\chi_{\mathcal{A}_{n}}(t)=\sum_{\mathcal{B}\subseteq\mathcal{A}_{n},\;\bigcap \mathcal{B}\neq\phi}(-1)^{\#\mathcal{B}}t^{n-\operatorname{rank}(\mathcal{B})}.\]
To each \(\mathcal{B}\subseteq\mathcal{A}_{n}\), such that \(\bigcap\mathcal{B}\neq\phi\), we associate a graph \(G(\mathcal{B})\) on the vertex set \([n]\) where there is an edge between the vertices \(i\) and \(j\) if there is a hyperplane in \(\mathcal{B}\) parallel to a hyperplane of the form \(x_{i}=cx_{j}\) for some \(c\in\mathbb{R}\).
Using [17, Corollary 5.1.6], we get
\[\sum_{n\geq 0}\chi_{\mathcal{A}_{n}}(t)\frac{x^{n}}{n!}=\exp\sum_{n \geq 1}\tilde{\chi}_{\mathcal{A}_{n}}(t)\frac{x^{n}}{n!}\]
where for any \(n\) we define
\[\tilde{\chi}_{\mathcal{A}_{n}}(t)=\sum_{\begin{subarray}{c}\mathcal{B}\subseteq \mathcal{A}_{n},\;\bigcap\mathcal{B}\neq\phi\\ G(\mathcal{B})\;\text{connected}\end{subarray}}(-1)^{\#\mathcal{B}}t^{n- \operatorname{rank}(\mathcal{B})}.\]
Note that if \(G(\mathcal{B})\) is connected, then any point in \(\bigcap\mathcal{B}\) is determined by any one of its coordinates, say \(x_{1}\). This is because any path from the vertex \(1\) to a vertex \(i\) in \(G(\mathcal{B})\) can be used to determine \(x_{i}\). This shows us that \(\operatorname{rank}(\mathcal{B})\) is either \(n\) or \(n-1\). Hence,
\(\tilde{\chi}_{\mathcal{A}_{n}}(t)\) is of the form \(c_{n}t+d_{n}\) for some \(c_{n},d_{n}\in\mathbb{Z}\). Setting
\[\exp\sum_{n\geq 1}c_{n}\frac{x^{n}}{n!} =\sum_{n\geq 0}b_{n}\frac{x^{n}}{n!}\] \[\exp\sum_{n\geq 1}d_{n}\frac{x^{n}}{n!} =\sum_{n\geq 0}a_{n}\frac{x^{n}}{n!}\]
we get
\[\sum_{n\geq 0}\chi_{\mathcal{A}_{n}}(t)\frac{x^{n}}{n!}=\left(\sum_{n\geq 0}b_{ n}\frac{x^{n}}{n!}\right)^{t}\left(\sum_{n\geq 0}a_{n}\frac{x^{n}}{n!}\right).\]
Substituting \(t=1\) and \(t=-1\) and using Theorem 1.1, we obtain expressions for the exponential generating functions of \(\{a_{n}\}\) and \(\{b_{n}\}\) and this gives us the required result.
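As a sanity check of Proposition 6.2 (not needed for the proof), one can test it on the type \(C\) family, using the well-known factorization \(\chi_{\mathcal{A}_{n}}(t)=(t-1)(t-3)\cdots(t-(2n-1))\) together with \(F(x)=1/(1+2x)\) and \(G(x)=1\) (these are the data recorded in Section 6.1.1 below). The following sympy sketch compares both sides order by order in \(x\); it is illustrative only.

```python
import sympy as sp

t, x = sp.symbols('t x')
N = 5  # compare the coefficients of x^0, ..., x^4

def chi_C(n):
    # assumed (well-known) factorization of the type C characteristic polynomial
    p = sp.Integer(1)
    for k in range(n):
        p *= t - (2 * k + 1)
    return sp.expand(p)

# Left-hand side of Proposition 6.2, truncated at order N
lhs = sum(chi_C(n) * x**n / sp.factorial(n) for n in range(N))

# Right-hand side: with G(x) = 1 and F(x) = 1/(1 + 2x) this is (1 + 2x)^((t-1)/2)
rhs = (1 + 2 * x) ** (sp.Rational(1, 2) * (t - 1))

diff = sp.expand(rhs.series(x, 0, N).removeO() - lhs)
assert sp.simplify(diff) == 0
print("Proposition 6.2 verified for the type C family up to order", N - 1)
```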
Before looking at the characteristic polynomials of these arrangements, we recall a few results from [17]. Suppose that \(c:\mathbb{N}\to\mathbb{N}\) is a function and for each \(n,j\in\mathbb{N}\), we define
\[c_{j}(n)=\sum_{\{B_{1},\ldots,B_{j}\}\in\Pi_{n}}c(|B_{1}|)\cdots c(|B_{j}|)\]
where \(\Pi_{n}\) is the set of partitions of \([n]\). Define for each \(n\in\mathbb{N}\),
\[h(n)=\sum_{j=0}^{n}c_{j}(n).\]
From [17, Example 5.2.2], we know that in such a situation,
\[\sum_{n,j\geq 0}c_{j}(n)t^{j}\frac{x^{n}}{n!}=\left(\sum_{n\geq 0}h(n)\frac{x^{n} }{n!}\right)^{t}.\]
Informally, we consider \(h(n)\) to be the number of "structures" that can be placed on an \(n\)-set where each structure can be uniquely broken up into a disjoint union of "connected sub-structures". Here \(c(n)\) denotes the number of connected structures on an \(n\)-set and \(c_{j}(n)\) denotes the number of structures on an \(n\)-set with exactly \(j\) connected sub-structures. We will call such structures _exponential structures_.
In fact, in most of the computations below, we will be dealing with generating functions of the form
\[\left(\sum_{n\geq 0}h(n)\frac{x^{n}}{n!}\right)^{\frac{t+1}{2}}. \tag{3}\]
We can interpret such a generating function as follows. Suppose that there are two types of connected structures, say positive and negative connected structures. Also, suppose that the number of positive connected structures on \([n]\) is the same as the number of negative ones, i.e., \(c(n)/2\). Then the coefficient of \(t^{j}\frac{x^{n}}{n!}\) in the generating function given above is the number of structures on \([n]\) that have \(j\) positive connected sub-structures.
Also, note that since the coefficients of the characteristic polynomial alternate in sign, the distribution of any appropriate statistic we define would be
\[\sum_{n\geq 0}\chi_{\mathcal{A}_{n}}(-t)\frac{(-x)^{n}}{n!}.\]
### Reflection arrangements
Before defining statistics for the Catalan arrangements, we first do so for the reflection arrangements we studied in Section 3. As we will see, the same statistic we define for sketches (regions of the type \(C\) arrangement) works for the canonical sketches we have chosen for the other arrangements as well.
#### 6.1.1. The type \(C\) arrangement
We have seen that the regions of the type \(C\) arrangement in \(\mathbb{R}^{n}\) correspond to sketches (Section 3.1) of length \(2n\). We use the second half of the sketch to represent the regions, and call them signed permutations on \([n]\).
A statistic on signed permutations whose distribution is given by the coefficients of the characteristic polynomial is given in [9, Section 2]. We define a similar statistic. First break the signed permutation into _compartments_ using right-to-left minima as follows: Ignoring the signs, draw a line before the permutation and then repeatedly draw a line immediately following the least number after the last line drawn. This is repeated until a line is drawn at the end of the permutation. It can be checked that compartments give signed permutations an exponential structure. A _positive compartment_ of a signed permutation is one where the last term is positive.
**Example 6.3**.: _The signed permutation given by_
\[\overset{+}{3}\overset{+}{1}\overset{-}{6}\overset{-}{7}\overset{-}{5}\overset {+}{2}\overset{-}{4}\]
_is split into compartments as_
\[|\overset{+}{3}\overset{+}{1}|\overset{-}{6}\overset{-}{7}\overset{-}{5}\overset {+}{2}|\overset{-}{4}|\]
_and hence has \(3\) compartments, \(2\) of which are positive._
By the combinatorial interpretation of (3), the distribution of the statistic 'number of positive compartments' on signed permutations is given by
\[\left(\frac{1}{1-2x}\right)^{\frac{t+1}{2}}.\]
Note that for the type \(C\) arrangement, in terms of Proposition 6.2, we have
\[F(x) =\left(\frac{1}{1+2x}\right),\] \[G(x) =1.\]
Hence, we get that the distribution of the statistic 'number of positive compartments' on signed permutations is given by the coefficients of the characteristic polynomial.
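This can also be checked by brute force for small \(n\). The following Python sketch (illustrative only; the helper names are ours) splits each signed permutation into compartments using right-to-left minima, tabulates the number of positive compartments, and compares the result with the coefficients of the type \(C\) characteristic polynomial, assuming the standard factorization \(\chi_{\mathcal{A}_{n}}(t)=(t-1)(t-3)\cdots(t-(2n-1))\).

```python
from itertools import permutations, product
from collections import Counter
import sympy as sp

def positive_compartments(perm, signs):
    # Split the signed permutation into compartments using right-to-left
    # minima of the underlying permutation (ignoring signs), and count the
    # compartments whose last entry carries a positive sign.
    n, count, start = len(perm), 0, 0
    while start < n:
        end = min(range(start, n), key=lambda i: perm[i])  # compartment ends at the minimum
        if signs[end] > 0:
            count += 1
        start = end + 1
    return count

t = sp.Symbol('t')
for n in range(1, 5):
    dist = Counter()
    for perm in permutations(range(1, n + 1)):
        for signs in product([1, -1], repeat=n):
            dist[positive_compartments(perm, signs)] += 1
    chi = sp.expand(sp.Mul(*[t - (2 * k + 1) for k in range(n)]))  # assumed factorization
    assert all(dist[j] == abs(chi.coeff(t, j)) for j in range(n + 1))
print("compartment statistic matches the type C characteristic polynomial for n = 1,...,4")
```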
For the arrangements that follow, we have described canonical sketches and hence signed permutations that correspond to regions in Section 3. For each arrangement, we will show that the distribution of the statistic 'number of positive compartments' on these canonical signed permutations is given by the characteristic polynomial.
#### 6.1.2. The Boolean arrangement
The signed permutations that correspond to the Boolean arrangement in \(\mathbb{R}^{n}\) (Section 3.2) are those that have all compartments of size \(1\), i.e., the underlying permutation is \(1\ 2\ \cdots\ n\). Just as before, it can be seen that the distribution of the statistic 'number of positive compartments' on such signed permutations is given by
\[(e^{2x})^{\frac{t+1}{2}}.\]
This agrees with the generating function for the characteristic polynomial we get from Proposition 6.2 using \(F(x)=e^{-2x}\) and \(G(x)=1\).
#### 6.1.3. The type \(D\) arrangement
From Section 3.3, we can see that the regions of the type \(D\) arrangement in \(\mathbb{R}^{n}\) correspond to signed permutations on \([n]\) where the first sign is positive. Given \(i\in[n]\) and a signed permutation \(\sigma\) of \([n]\setminus\{i\}\), the signed permutation of \([n]\) obtained by appending \(\bar{i}\) to the start of \(\sigma\) has the same number of positive compartments as \(\sigma\). This shows that the distribution of the statistic on signed permutations whose first term is positive is
\[(1-x)\left(\frac{1}{1-2x}\right)^{\frac{t+1}{2}}.\]
This agrees with the generating function for the characteristic polynomial we get from Proposition 6.2 since we have
\[F(x) =\left(\frac{1+x}{1+2x}\right),\] \[G(x) =1+x.\]
Note that the expression for \(G(x)\) is due to the fact that the type \(D\) arrangement in \(\mathbb{R}^{1}\) is empty.
#### 6.1.4. The braid arrangement
From Section 3.4, we get that the regions of the braid arrangement in \(\mathbb{R}^{n}\) correspond to the signed permutations on \([n]\) where all terms are positive. Hence, the number of positive compartments is just the number of compartments in the underlying permutation. Since compartments give permutations an exponential structure, the distribution of this statistic is
\[\left(\frac{1}{1-x}\right)^{t}.\]
This agrees with the generating function for the characteristic polynomial we get from Proposition 6.2 since \(F(x)=\frac{1}{1+x}\) and \(G(x)=1+x\).
We summarize the results of this section as follows. For any reflection arrangement \(\mathcal{A}\), we use the term \(\mathcal{A}\)-signed permutations for the signed permutations described above that represent the regions of \(\mathcal{A}\).
**Theorem 6.4**.: _For any reflection arrangement \(\mathcal{A}\), the absolute value of the coefficient of \(t^{j}\) in \(\chi_{\mathcal{A}}(t)\) is the number of \(\mathcal{A}\)-signed permutations that have \(j\) positive compartments._
### Catalan deformations
We start with defining a statistic for the extended type \(C\) Catalan arrangements. Using Proposition 6.2, we then show that the generating function for the statistic and the characteristic polynomials match.
Fix \(m\geq 1\). We define a statistic on labeled symmetric non-nesting partitions and show that its distribution is given by the characteristic polynomial. To do this, we first recall some definitions and results about the type \(A\) extended Catalan arrangement.
**Definition 6.5**.: _An \(m\)-non-nesting partition of size \(n\) is a partition of \([(m+1)n]\) such that the following hold:_
1. _Each block is of size_ \((m+1)\)_._
2. _If_ \(a,b\) _are in the same block_ \(B\) _and_ \([a,b]\cap B=\{a,b\}\)_, then for any_ \(c,d\) _such that_ \(a<c<d<b,c\) _and_ \(d\) _are not in the same block._
Just as before, such partitions can be represented using arc diagrams.
**Example 6.6**.: _The arc diagram corresponding to the \(2\)-non-nesting partition of size \(3\)_
\[\{1,2,4\},\{3,5,6\},\{7,8,9\}\]
_is given in Figure 11._
It is known (for example, see [2, Theorem 2.2]) that the number of \(m\)-non-nesting partitions of size \(n\) is
\[\frac{1}{mn+1}\binom{(m+1)n}{n}.\]
These numbers are called the Fuss-Catalan numbers or generalized Catalan numbers. Setting \(m=1\) gives us the usual Catalan numbers. Labeling the \(n\) blocks distinctly using \([n]\) gives us labeled \(m\)-non-nesting partitions. These objects correspond to the regions of the type \(A\)\(m\)-Catalan arrangement in \(\mathbb{R}^{n}\) whose hyperplanes are
\[x_{i}-x_{j}=0,\pm 1,\pm 2,\ldots,\pm m\]
for all \(1\leq i<j\leq n\) (for example, see [6, Section 8.1]).
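For small parameters, the Fuss-Catalan count can be confirmed directly from Definition 6.5 by enumerating all partitions into blocks of size \(m+1\) and filtering out the nesting ones. The following Python sketch is one way to do this (the helpers are ours and purely illustrative).

```python
from itertools import combinations
from math import comb

def partitions_into_blocks(elements, block_size):
    # all set partitions of `elements` (a list) into blocks of the given size
    if not elements:
        yield []
        return
    first = elements[0]
    for rest in combinations(elements[1:], block_size - 1):
        block = (first,) + rest
        remaining = [e for e in elements if e not in block]
        for tail in partitions_into_blocks(remaining, block_size):
            yield [block] + tail

def non_nesting(blocks):
    # condition (2) of Definition 6.5: between two consecutive elements of a
    # block there are never two elements belonging to a common block
    membership = {e: i for i, blk in enumerate(blocks) for e in blk}
    for blk in blocks:
        srt = sorted(blk)
        for a, b in zip(srt, srt[1:]):
            seen = set()
            for e in range(a + 1, b):
                if membership[e] in seen:
                    return False
                seen.add(membership[e])
    return True

for m, n in [(1, 2), (1, 3), (1, 4), (2, 2), (2, 3), (3, 2)]:
    elements = list(range(1, (m + 1) * n + 1))
    total = sum(non_nesting(p) for p in partitions_into_blocks(elements, m + 1))
    assert total == comb((m + 1) * n, n) // (m * n + 1)
print("Fuss-Catalan counts confirmed for the listed (m, n)")
```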
We now define a statistic on labeled non-nesting partitions similar to the one defined in [7, Section 4]. The statistic defined in [7] is for labeled \(m\)-Dyck paths but these objects are in bijection with labeled \(m\)-non-nesting partitions.
A labeled non-nesting partition can be broken up into interlinked pieces, say \(P_{1},P_{2},\ldots,P_{k}\). We group these pieces into _compartments_ as follows. If the label \(1\) is in the \(r^{th}\) interlinked piece, then the interlinked pieces \(P_{1},P_{2},\ldots,P_{r}\) form the first compartment. Let \(j\) be the smallest number in \([n]\setminus A\) where \(A\) is the set of labels in the first compartment. If \(j\) is in the \(s^{th}\) interlinked piece, then the interlinked pieces \(P_{r+1},P_{r+2},\ldots,P_{s}\) form the second compartment. Continuing this way, we break up a labeled non-nesting partition into compartments.
**Example 6.7**.: _The labeled non-nesting partition in Figure 12 has \(3\) interlinked pieces. The first compartment consists of just the first interlinked piece since it contains the label \(1\). The smallest label in the rest of the diagram is \(3\) which is in the last interlinked piece. Hence, this labeled non-nesting partition has \(2\) compartments._
A non-nesting partition labeled with distinct integers (not necessarily of the form \([n]\)) can be broken up into compartments in the same way. Here the first compartment consists of the interlinked pieces up to the one containing the smallest label.
Figure 11. Arc diagram corresponding to the \(2\)-non-nesting partition in Example 6.6.
Figure 12. A labeled non-nesting partition with \(3\) interlinked pieces and \(2\) compartments.
It can be checked that compartments give labeled non-nesting partitions an exponential structure. This is because the order in which they appear can be determined by their labels. A labeled non-nesting partition is said to be _connected_ if it has only one compartment.
We now define a similar statistic for labeled symmetric non-nesting partitions. To a symmetric non-nesting partition we can associate a pair consisting of
1. an interlinked symmetric non-nesting partition, which we call the _bounded part_ and
2. a non-nesting partition, which we call the _unbounded part_.
This is easy to do using arc diagrams, as illustrated in the following example. The terminology becomes clear when one considers the boundedness of the coordinates in the region corresponding to a labeled symmetric non-nesting partition.
**Example 6.8**.: _To the symmetric \(2\)-non-nesting partition in Figure 13 we associate_
1. _the interlinked symmetric_ \(2\)_-non-nesting partition marked_ \(A\) _and_
2. _the_ \(2\)_-non-nesting partition marked_ \(B\)_._
_Here \(A\) is the bounded part and \(B\) is the unbounded part. We can obtain the original arc diagram back from \(A\) and \(B\) by placing a copy of \(B\) on either side of \(A\)._
This is a bijection between symmetric non-nesting partitions and such pairs. Given a labeled symmetric non-nesting partition, we define the statistic using just the unbounded part. Ignoring the signs, we break the unbounded part into compartments just as we did for non-nesting partitions. A _positive compartment_ is one whose last element has a positive label.
**Example 6.9**.: _Suppose the arc diagram in Figure 14 is the unbounded part of some symmetric non-nesting partition. Notice that ignoring the signs, this arc diagrams breaks up into compartments just as Figure 12. But only the first compartment is positive since its last element has label \(6\) which is positive._
Figure 13. Break up of a symmetric \(2\)-non-nesting partition.

Figure 14. The unbounded part of a symmetric non-nesting partition that has \(1\) positive compartment.

We claim that the statistic 'number of positive compartments' meets our requirements. To prove that the distribution of this statistic is given by the characteristic polynomial, we apply Proposition 6.2 to the sequence of arrangements \(\{\mathcal{C}_{n}^{(m)}\}\). Using the bijection between labeled symmetric \(m\)-non-nesting partitions and regions of \(\mathcal{C}_{n}^{(m)}\), we note that those arc diagrams that are interlinked are the ones that correspond to bounded regions. Hence, using the notation from Proposition 6.2 and [17, Proposition 5.1.1], we have
\[F(-x)=G(-x)\cdot\left(\sum_{n\geq 0}\frac{2^{n}n!}{mn+1}\binom{(m+1)n}{n} \frac{x^{n}}{n!}\right). \tag{4}\]
Note that \(\operatorname{rank}(\mathcal{C}_{n}^{(m)})=n\). This gives us
\[\sum_{n\geq 0}\chi_{\mathcal{C}_{n}^{(m)}}(-t)\frac{(-x)^{n}}{n!}=G(-x)\cdot \left(\sum_{n\geq 0}\frac{2^{n}n!}{mn+1}\binom{(m+1)n}{n}\frac{x^{n}}{n!} \right)^{\frac{t+1}{2}}.\]
Using the combinatorial interpretation of (3), we see that the right hand side of the above equation is the generating function for the distribution of the statistic.
We also obtain corresponding statistics on symmetric sketches using the bijection in Section 4.1. This gives us the following result.
**Theorem 6.10**.: _The absolute value of the coefficient of \(t^{j}\) in \(\chi_{\mathcal{C}_{n}^{(m)}}(t)\) is the number of symmetric \(m\)-sketches of size \(n\) that have \(j\) positive compartments._
For the arrangements \(\mathcal{D}_{n}\), \(\mathcal{P}_{n}\), \(\mathcal{B}_{n}\), and \(\mathcal{BC}_{n}\) as well, the analogue of (4) holds. That is, for each of these arrangements, using the notation of Proposition 6.2, we have
\[F(-x)=G(-x)\cdot\left(\sum_{n\geq 0}\frac{2^{n}n!}{n+1}\binom{2n}{n}\frac{x^{ n}}{n!}\right).\]
This can be proved using the definitions of type \(D\), pointed, type \(B\), and type \(BC\) sketches and the description of which sketches correspond to bounded regions.
There is a slight difference in the proof for the sequence of arrangements \(\{\mathcal{D}_{n}\}\). The arrangement \(\mathcal{D}_{1}\) is empty and hence
\[G(-x)=1-x+\sum_{n\geq 2}b(\mathcal{D}_{n})\frac{x^{n}}{n!}.\]
However, from the definition of type \(D\) sketches, we see that we must not allow those symmetric non-nesting partitions where the bounded part is empty and the first interlinked piece of the unbounded part is of size \(1\) with negative label. Hence, we still get the required expression for \(F(-x)\).
Just as we did for the extended type \(C\) Catalan arrangements, we define positive compartments for the arc diagrams corresponding to the regions of these arrangements, which gives corresponding statistics on the sketches.
**Example 6.11**.: _The arc diagram in Figure 15 corresponds to a pointed sketch with \(2\) positive compartments._
The following result can be proved just as before.
**Theorem 6.12**.: _The absolute value of the coefficient of \(t^{j}\) in \(\chi_{\mathcal{A}}(t)\) for \(\mathcal{A}=\mathcal{D}_{n}\) (respectively \(\mathcal{P}_{n}\), \(\mathcal{B}_{n},\mathcal{BC}_{n}\)) is the number of type \(D\) (respectively pointed, type \(B\), type \(BC\)) sketches of size \(n\) that have \(j\) positive compartments._
## 7. Deformations of the threshold arrangement
The threshold arrangement in \(\mathbb{R}^{n}\) consists of the hyperplanes \(x_{i}+x_{j}=0\) for \(1\leq i<j\leq n\). These arrangements are of interest because their regions correspond to certain labeled graphs called _threshold graphs_ which have been extensively studied (see [12]). In this section, we study this arrangement and some of its deformations using similar techniques as in previous sections.
### Sketches and moves
We use the sketches and moves idea to study the regions of the threshold arrangement by considering it as a sub-arrangement of the type \(C\) arrangement (Section 3.1). Before doing that, we first study the arrangement obtained by adding the coordinate hyperplanes to the threshold arrangement.
#### 7.1.1. Fubini arrangement
We define the Fubini arrangement in \(\mathbb{R}^{n}\) to be the one with hyperplanes
\[2x_{i} =0\] \[x_{i}+x_{j} =0\]
for all \(1\leq i<j\leq n\). The hyperplanes of the type \(C\) arrangement that are missing from the Fubini arrangement are
\[x_{i}-x_{j} =0\]
for all \(1\leq i<j\leq n\). Hence a Fubini move, which we call an \(F\) move, is swapping adjacent \(\overset{+}{i}\) and \(\overset{+}{j}\) as well as \(\overset{-}{j}\) and \(\overset{-}{i}\) for distinct \(i,j\in[n]\).
**Example 7.1**.: _We can use a series of \(F\) moves on a sketch as follows:_
\[\overset{-}{3}\overset{-}{6}\overset{-}{2}\overset{+}{1}\overset{+}{4}\overset{-}{5}\overset{+}{5}\overset{-}{4}\overset{-}{1}\overset{+}{2}\overset{+}{6}\overset{+}{3}\longrightarrow\overset{-}{6}\overset{-}{3}\overset{-}{2}\overset{+}{1}\overset{+}{4}\overset{-}{5}\overset{+}{5}\overset{-}{4}\overset{-}{1}\overset{+}{2}\overset{+}{3}\overset{+}{6}\longrightarrow\overset{-}{6}\overset{-}{3}\overset{-}{2}\overset{+}{4}\overset{+}{1}\overset{-}{5}\overset{+}{5}\overset{-}{1}\overset{-}{4}\overset{+}{2}\overset{+}{3}\overset{+}{6}\]
We define a _block_ to be the set of absolute values in a maximal string of contiguous terms in the second half of a sketch that have the same sign. The blocks of the initial sketch in Example 7.1 are \(\{5\},\{1,4\},\{2,3,6\}\) (these blocks appear in this order with the first one being positive). It can be checked that \(F\) moves do not change the sequence of signs (above the numbers) and that they can only be used to reorder the elements in a block. Hence, each equivalence class has a unique sketch where the numbers in each block appear in ascending order. The last sketch in Example 7.1 is the unique such sketch in its equivalence class.
Figure 15. Arc diagram corresponding to a pointed sketch with \(2\) positive compartments.
The number of such sketches is equal to the number of ways of choosing an ordered partition of \([n]\) (which correspond to the blocks of the sketch in order) and then choosing a sign for the first block. Hence the number of regions of the Fubini arrangement is \(2\cdot a(n)\) where \(a(n)\) is the \(n^{th}\) Fubini number, which is the number of ordered partitions of \([n]\) listed as A000670 in the OEIS [16].
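The count \(2\cdot a(n)\) can be confirmed by brute force for small \(n\): enumerate all signed permutations and keep those in which every block is increasing. The Python sketch below is illustrative only (the helper names are ours); the Fubini numbers are computed from the recurrence \(a(n)=\sum_{k\geq 1}\binom{n}{k}a(n-k)\).

```python
from itertools import permutations, product
from math import comb

def fubini(n):
    # a(n) = number of ordered set partitions of [n]
    a = [1] + [0] * n
    for m in range(1, n + 1):
        a[m] = sum(comb(m, k) * a[m - k] for k in range(1, m + 1))
    return a[n]

def blocks_increasing(perm, signs):
    # blocks are maximal runs of equal sign; each must be increasing
    return all(not (signs[i] == signs[i - 1] and perm[i] < perm[i - 1])
               for i in range(1, len(perm)))

for n in range(1, 6):
    count = sum(blocks_increasing(p, s)
                for p in permutations(range(1, n + 1))
                for s in product([1, -1], repeat=n))
    assert count == 2 * fubini(n)
print("Fubini region count 2*a(n) confirmed for n = 1,...,5")
```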
#### 7.1.2. Threshold arrangement
The threshold arrangement in \(\mathbb{R}^{n}\) has the hyperplanes
\[x_{i}+x_{j}=0\]
for all \(1\leq i<j\leq n\). The hyperplanes of the type \(C\) arrangement that are missing from the threshold arrangement are
\[2x_{i} =0\] \[x_{i}-x_{j} =0\]
for all \(1\leq i<j\leq n\). Hence the threshold moves, which we call \(T\) moves, are as follows:
(1) (\(D\) move) Swapping adjacent \(\overset{+}{i}\) and \(\overset{-}{i}\) for any \(i\in[n]\).
(2) (\(F\) move) Swapping adjacent \(\overset{+}{i}\) and \(\overset{+}{j}\) as well as \(\overset{-}{j}\) and \(\overset{-}{i}\) for distinct \(i,j\in[n]\).
For any sketch, there is a \(T\) equivalent sketch for which the first block has more than \(1\) element. This is because, if the first block of the sketch has size \(1\), applying a \(D\) move (swapping the \(n^{th}\) and \((n+1)^{th}\) terms) results in a sketch where the first block has size greater than \(1\) (first step in Example 7.2).
**Example 7.2**.: _We can use a series of \(T\) moves on a sketch as follows:_ \[\overset{+}{5}\overset{-}{4}\overset{-}{1}\overset{+}{2}\overset{+}{6}\overset{-}{3}\overset{+}{3}\overset{-}{6}\overset{-}{2}\overset{+}{1}\overset{+}{4}\overset{-}{5}\xrightarrow{D\ move}\overset{+}{5}\overset{-}{4}\overset{-}{1}\overset{+}{2}\overset{+}{6}\overset{+}{3}\overset{-}{3}\overset{-}{6}\overset{-}{2}\overset{+}{1}\overset{+}{4}\overset{-}{5}\xrightarrow{F\ moves}\overset{+}{5}\overset{-}{4}\overset{-}{1}\overset{+}{6}\overset{+}{3}\overset{+}{2}\overset{-}{2}\overset{-}{3}\overset{-}{6}\overset{+}{1}\overset{+}{4}\overset{-}{5}\]
To obtain a canonical sketch for each threshold region, we will need a small lemma.
**Lemma 7.3**.: _Two \(T\) equivalent sketches that have their first block of size greater than 1 have the same blocks which appear in the same order with the same signs._
Proof.: Looking at what the \(T\) moves do to the sequence of signs (above the numbers), we can see that they at most swap the \(n^{th}\) and \((n+1)^{th}\) sign (\(D\) move). Hence, if we require the first blocks to have size greater than \(1\), both the sketches have the same number of blocks and the number of elements in the corresponding blocks are the same. An \(F\) move can only reorder elements in the same block of a sketch. A \(D\) move changes the sign of the first element of the second half. So if there are \(k>1\) elements in the first block of a \(T\) equivalent sketch, then the set of absolute values of the first \(k\) elements of the second half remains the same in all \(T\) equivalent sketches. This gives us the required result.
Using the above lemma, we can see that for any sketch there is a unique \(T\) equivalent sketch where the size of the first block is greater than \(1\) and the elements of each block are in ascending order. The last sketch in Example 7.2 is the unique such sketch in its equivalence class. Similar to the count for Fubini regions, we get that the number of regions of the threshold arrangement is
\[2\cdot(a(n)-n\cdot a(n-1))\]
where, as before, \(a(n)\) is the \(n^{th}\) Fubini number. The number of regions of the threshold arrangement is listed as A005840 in the OEIS [16].
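This count too can be confirmed by brute force for small \(n\), by keeping only the canonical signed permutations described above (increasing blocks, first block of size greater than \(1\)). The following Python sketch is a minimal illustration, with helper names of our own choosing.

```python
from itertools import permutations, product
from math import comb

def fubini(n):
    a = [1] + [0] * n
    for m in range(1, n + 1):
        a[m] = sum(comb(m, k) * a[m - k] for k in range(1, m + 1))
    return a[n]

def threshold_canonical(perm, signs):
    # increasing blocks (as for the Fubini arrangement) and a first block of size > 1
    if signs[0] != signs[1]:
        return False
    return all(not (signs[i] == signs[i - 1] and perm[i] < perm[i - 1])
               for i in range(1, len(perm)))

for n in range(2, 6):
    count = sum(threshold_canonical(p, s)
                for p in permutations(range(1, n + 1))
                for s in product([1, -1], repeat=n))
    assert count == 2 * (fubini(n) - n * fubini(n - 1))
print("threshold region count confirmed for n = 2,...,5")
```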
**Remark 7.4**.: _The regions of the threshold arrangement in \(\mathbb{R}^{n}\) are known to be in bijection with labeled threshold graphs on \(n\) vertices (see [18, Exercise 5.25]). Labeled threshold graphs on \(n\) vertices are inductively constructed starting from the empty graph. Vertices labeled \(1,\ldots,n\) are added in a specified order. At each step, the vertex added is either 'dominant' or'recessive'. A dominant vertex is one that is adjacent to all vertices added before it and a recessive vertex is one that is isolated from all vertices added before it. It is not difficult to see that the canonical sketches described above are in bijection with threshold graphs._
### Statistics
The characteristic polynomial of the threshold arrangement and a statistic on its regions whose distribution is given by the characteristic polynomial have been studied in [9]. This is done by directly looking at the coefficients of the characteristic polynomial. In fact, even the coefficients of the characteristic polynomial of the Fubini arrangement (Section 7.1.1) have already been combinatorially interpreted in [9, Section 4.1]. This can be used to define an appropriate statistic on the regions of the Fubini arrangement. Here, just as in Section 6, we use Proposition 6.2 to combinatorially interpret the generating functions of the characteristic polynomials for the Fubini and threshold arrangements. Just as before, we will show that the statistic 'number of positive compartments' works for our purposes.
#### 7.2.1. Fubini arrangement
We will use the second half of the canonical sketches described in Section 7.1.1 to represent the regions. We define blocks for signed permutations just as we did for sketches. Hence, the regions of the Fubini arrangement in \(\mathbb{R}^{n}\) correspond to signed permutations on \([n]\) where each block is increasing.
In this special class of signed permutations as well, compartments give them an exponential structure. This is because there is no condition relating the signs of the last element of a compartment and the first element of the compartment following it: the last element of a compartment is necessarily smaller in absolute value than the element following it, so the requirement that each block be increasing imposes no constraint between them. Also, suppose we are given a signed permutation such that each block is increasing. It can be checked that the signed permutation obtained by changing all the signs also satisfies this property.
Using the above observations and the combinatorial interpretation of (3), we get that
\[\left(\frac{e^{x}}{2-e^{x}}\right)^{\frac{t+1}{2}}\]
is the exponential generating function for signed permutations where each block is increasing, in which \(t\) keeps track of the number of positive compartments. This agrees with the generating function for the characteristic polynomial we get from Proposition 6.2 since we have
\[F(x) =\left(\frac{1}{2e^{x}-1}\right),\] \[G(x) =1.\]
#### 7.2.2. Threshold arrangement
From Section 7.1.2, we can see that the regions of the threshold arrangement in \(\mathbb{R}^{n}\) correspond to signed permutations on \([n]\) where each block is increasing and the first block has size greater than \(1\). If such a permutation starts with \(\bar{1}\), we instead use the signed permutation obtained by changing \(\bar{1}\) to \(1\) to represent the region. Similar to how we obtained the generating function for the statistic for type \(D\) from the
one for type \(C\), we obtain our generating function from the one we have for the Fubini arrangement.
Suppose that we are given \(i\in[n]\) and a signed permutation \(\sigma\) on \([n]\setminus\{i\}\) whose blocks are increasing. If \(i=1\), we construct the signed permutation on \([n]\) obtained by appending \(\bar{1}\) to the front of \(\sigma\). If \(i>1\) and the first element of \(\sigma\) is \(\overset{\pm}{j}\), we construct the signed permutation on \([n]\) obtained by appending \(\overset{\mp}{i}\) to the start of \(\sigma\). In both cases, it can be checked that the number of positive compartments of the new signed permutation constructed is the same as that for \(\sigma\).
This shows that the distribution of the statistic 'number of positive compartments' on the signed permutations that correspond to regions of the threshold arrangement is
\[(1-x)\left(\frac{e^{x}}{2-e^{x}}\right)^{\frac{t+1}{2}}.\]
This agrees with the generating function for the characteristic polynomial we get from Proposition 6.2 since we have
\[F(x) =\left(\frac{1+x}{2e^{x}-1}\right),\] \[G(x) =1+x.\]
### Some deformations
Deformations of the threshold arrangement have not been as well-studied as those of the braid arrangement. However, the finite field method has been used to compute the characteristic polynomial for some deformations. In [14, 15], Seo computed the characteristic polynomials of the so-called Shi and Catalan threshold arrangements. Expressions for the characteristic polynomials of more general deformations have been computed in [5].
In this section, we use the sketches and moves technique to obtain certain non-nesting partitions that are in bijection with the regions of the Catalan and Shi threshold arrangements. We do this by considering these arrangements as sub-arrangements of the type \(C\) Catalan arrangement (Section 4). Unfortunately, we were not able to directly count the non-nesting partitions we obtained since their description is not as simple as the ones we have seen before.
Fix \(n\geq 2\) throughout this section. Recall that we studied the type \(C\) Catalan arrangement by considering a translation of it called \(\mathcal{C}_{n}\) whose hyperplanes are given by (2) and whose regions correspond to symmetric sketches of size \(n\) (see Definition 4.2). Symmetric sketches can also be viewed as labeled symmetric non-nesting partitions (see Example 4.13).
#### 7.3.1. Catalan threshold
The Catalan threshold arrangement in \(\mathbb{R}^{n}\) consists of the hyperplanes
\[X_{i}+X_{j}=-1,0,1\]
for all \(1\leq i<j\leq n\). The translated arrangement by setting \(X_{i}=x_{i}+\frac{1}{2}\), which we call \(\mathcal{CT}_{n}\), has hyperplanes
\[x_{i}+x_{j}=-2,-1,0\]
for all \(1\leq i<j\leq n\). We consider this arrangement as a sub-arrangement of \(\mathcal{C}_{n}\). Using Bernardi's idea of moves, we can define an equivalence on the symmetric sketches such that two sketches are equivalent if they lie in the same region of \(\mathcal{CT}_{n}\).
An \(\alpha_{+}\) letter is an \(\alpha\)-letter whose subscript is positive. We similarly define \(\alpha_{-},\beta_{+}\) and \(\beta_{-}\) letters. The 'mod-value' of a letter \(\alpha_{i}^{(s)}\) is \(|i|\).
The hyperplanes in \(\mathcal{C}_{n}\) that are not in \(\mathcal{CT}_{n}\) are
\[2x_{i} =-2,-1,0\] \[x_{i}-x_{j} =-1,0,1\]
where \(1\leq i<j\leq n\). Changing the inequality corresponding to exactly one of these hyperplanes is given by the following moves on a sketch, which we call \(\mathcal{CT}\) moves.
1. Swapping the \(2n^{th}\) and \((2n+1)^{th}\) letter. This corresponds to changing the inequality corresponding to a hyperplane of the form \(2x_{i}=-2\) or \(2x_{i}=0\).
2. Swapping the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letter if they are consecutive (along with the \(n^{th}\) and \((n+1)^{th}\)\(\beta\)). This corresponds to changing the inequality corresponding to a hyperplane of the form \(2x_{i}=-1\).
3. Swapping consecutive \(\alpha_{+}\) and \(\beta_{+}\) letters (along with their negatives). This corresponds to changing the inequality corresponding to a hyperplane of the form \(x_{i}-x_{j}=1\).
4. Swapping \(\{\alpha_{i}^{(0)},\alpha_{j}^{(0)}\}\) as well as \(\{\alpha_{i}^{(1)},\alpha_{j}^{(1)}\}\) if both pairs are consecutive (as well as their negatives) where \(i,j\in[n]\) are distinct. This corresponds to changing the inequality corresponding to the hyperplane \(x_{i}-x_{j}=0\).
Two sketches are in the same region of \(\mathcal{CT}_{n}\) if and only if they are related by a series of \(\mathcal{CT}\) moves. We call such sketches \(\mathcal{CT}\) equivalent.
Consider the sketches to be ordered in the lexicographic order induced by the following order on the letters.
\[\alpha_{n}^{(0)}\succ\cdots\succ\alpha_{1}^{(0)}\succ\alpha_{-1}^{(-1)}\succ \cdots\succ\alpha_{-n}^{(-1)}\succ\alpha_{n}^{(1)}\succ\cdots\succ\alpha_{1}^ {(1)}\succ\alpha_{-1}^{(0)}\succ\cdots\succ\alpha_{-n}^{(0)}\]
In other words, the \(\alpha\)-letters are greater than the \(\beta\)-letters and for letters of the same type, the order is given by comparing the subscripts.
A sketch is called \(\mathcal{CT}\) maximal if it is greater (in the lexicographic order) than all sketches to which it is \(\mathcal{CT}\) equivalent. Hence the regions of \(\mathcal{CT}_{n}\) are in bijection with the \(\mathcal{CT}\) maximal sketches.
**Theorem 7.5**.: _A symmetric sketch is \(\mathcal{CT}\) maximal if and only if the following hold._
1. _If a_ \(\beta\)_-letter is followed by an_ \(\alpha\)_-letter, they should be of opposite signs and different mod-values._
2. _If two_ \(\alpha\)_-letters and their corresponding_ \(\beta\)_-letters are both consecutive and of the same sign then the subscript of the first one should be greater._
3. _If the_ \(n^{th}\) _and_ \((n+1)^{th}\)__\(\alpha\)_-letters are consecutive, then so are the_ \((n-1)^{th}\) _and_ \(n^{th}\) _with the_ \(n^{th}\)__\(\alpha\)_-letter being positive. In such a situation, if the_ \((n-1)^{th}\)__\(\alpha\)_-letter is negative and the_ \((n-1)^{th}\) _and_ \(n^{th}\)__\(\beta\)_-letters are consecutive, the_ \((n-1)^{th}\)__\(\alpha\)_-letter should have a subscript greater than that of the_ \((n+1)^{th}\)__\(\alpha\)_._
4. _If the_ \((2n-1)^{th}\) _and_ \((2n+1)^{th}\) _letters are both_ \(\beta\)_-letters of the same sign and their corresponding_ \(\alpha\)_-letters are consecutive, the subscript of the_ \((2n-1)^{th}\) _letter should be greater than that of the_ \((2n+1)^{th}\)_._
_Hence the regions of \(\mathcal{CT}_{n}\) are in bijection with sketches of the form described above._
**Remark 7.6**.: _The idea of ordering sketches and choosing the maximal sketch in each region of \(\mathcal{CT}_{n}\) to represent it is the same one used by Bernardi [6] to study certain deformations of the braid arrangement. In fact, [6, Lemma 8.13] shows that in this case, any sketch that is locally maximal (greater than any sketch that can be obtained by applying a single move) is maximal. Note that the sketches described in the above theorem are precisely the \(2\)-locally maximal sketches. That is, these are the sketches that can neither be converted into a greater sketch by applying a single \(\mathcal{CT}\) move nor by applying two \(\mathcal{CT}\) moves. It is clear that any \(\mathcal{CT}\) maximal sketch is \(2\)-locally maximal. The theorem states the converse is true as well._
Proof of Theorem 7.5.: We first show that these conditions are required for a sketch to be \(\mathcal{CT}\) maximal.
1. The first condition is necessary since the \(\mathcal{CT}\) moves of type (a) or (c) would result in a greater sketch if it were false.
2. The second condition corresponds to \(\mathcal{CT}\) moves of type (d).
3. The part about the \(n^{th}\)\(\alpha\)-letter being positive if the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters are consecutive is due to \(\mathcal{CT}\) moves of type (c). Suppose the letter before the \(n^{th}\)\(\alpha\)-letter is a \(\beta\)-letter. Then it can't be positive since we have already seen that condition (1) of the theorem statement must be satisfied. But if it is negative, we can do the following to obtain a larger \(\mathcal{CT}\) equivalent sketch: Hence the letter before the \(n^{th}\)\(\alpha\)-letter has to be an \(\alpha\)-letter. Now, suppose that the \((n-1)^{th}\)\(\alpha\)-letter is negative and the \((n-1)^{th}\) and \(n^{th}\)\(\beta\)-letters are consecutive. Let the subscript of the \((n-1)^{th}\)\(\alpha\)-letter be \(-k\) and that of the \((n+1)^{th}\)\(\alpha\)-letter be \(-i\) for some \(k,i\in[n]\). If \(-k<-i\), we can do the following to obtain a larger \(\mathcal{CT}\) equivalent sketch: Hence we must have \(-k>-i\) in this case.
4. Suppose the \((2n-1)^{th}\) and \((2n+1)^{th}\) letters are both \(\beta\)-letters of the same sign and their corresponding \(\alpha\)-letters are consecutive but the subscript \(X\) of the \((2n-1)^{th}\) letter is less than the subscript \(Y\) of the \((2n+1)^{th}\) letter. We can do the following to obtain a larger \(\mathcal{CT}\) equivalent sketch:

We now have to prove that these conditions are sufficient for a sketch to be \(\mathcal{CT}\) maximal. Suppose \(w\) is a symmetric sketch that satisfies the four properties mentioned in the statement of the theorem. Suppose there is a sketch \(w^{\prime}\) which is \(\mathcal{CT}\) equivalent to \(w\) but larger in the lexicographic order. This means that if \(w=w_{1}\cdots w_{4n}\) and \(w^{\prime}=w^{\prime}_{1}\cdots w^{\prime}_{4n}\), there is some \(p\in[4n]\) such that

\[w_{i}=w^{\prime}_{i}\text{ for }i\in[p-1]\text{ and }w_{p}\prec w^{\prime}_{p}.\]

The possible ways in which this can happen are listed below.

1. \(w_{p}\) is a \(\beta_{+}\) letter and \(w^{\prime}_{p}\) is an \(\alpha_{+}\) letter.
2. \(w_{p}\) is a \(\beta_{-}\) letter and \(w^{\prime}_{p}\) is an \(\alpha_{-}\) letter.
3. \(w_{p}\) is a \(\beta_{+}\) letter and \(w^{\prime}_{p}\) is an \(\alpha_{-}\) letter.
4. \(w_{p}\) is a \(\beta_{-}\) letter and \(w^{\prime}_{p}\) is an \(\alpha_{+}\) letter.
5. \(w_{p}\) and \(w^{\prime}_{p}\) are both \(\alpha_{+}\) letters.
6. \(w_{p}\) and \(w^{\prime}_{p}\) are both \(\alpha_{-}\) letters.
7. \(w_{p}\) is an \(\alpha_{-}\) letter and \(w^{\prime}_{p}\) is an \(\alpha_{+}\) letter.
The case of both \(w_{p}\) and \(w^{\prime}_{p}\) being \(\beta\)-letters is not possible since, by the properties of a sketch, this would mean \(w_{p}=w^{\prime}_{p}\). Since \(\alpha_{-}\prec\alpha_{+}\) we cannot have \(w_{p}\) being an \(\alpha_{+}\) letter
and \(w^{\prime}_{p}\) being an \(\alpha_{-}\) letter. We will now show that each case leads to a contradiction, which will complete the proof of the theorem.
Before going forward, we formulate the meaning of \(w\) and \(w^{\prime}\) being \(\mathcal{CT}\) equivalent in terms of sketches. Since they have to be in the same region of \(\mathcal{CT}_{n}\), the inequalities corresponding to the hyperplanes
\[x_{i}+x_{j}=-2,-1,0\]
for all \(1\leq i<j\leq n\) are the same in both sketches. This means that the relationship between the pairs of the form
\[\{\alpha_{i}^{(1)},\alpha_{-j}^{(-1)}\},\ \{\alpha_{i}^{(1)},\alpha_{-j}^{(0)}\},\ \{\alpha_{i}^{(0)},\alpha_{-j}^{(-1)}\},\text{ and }\{\alpha_{i}^{(0)},\alpha_{-j}^{(0)}\}\]
for any distinct \(i,j\in[n]\) are the same in both \(w\) and \(w^{\prime}\). This can be written as follows:
\[\begin{array}{l}\text{The relationship between letters of opposite sign and}\\ \text{different mod value has to be the same in both $w$ and $w^{\prime}$.}\end{array} \tag{5}\]
**Case 1:**\(w_{p}\) is a \(\beta_{+}\) letter and \(w^{\prime}_{p}\) is an \(\alpha_{+}\) letter.
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{k}^{(1)}\cdots\] \[w^{\prime} =w^{\prime}_{1}\cdots w^{\prime}_{p-1}\alpha_{l}^{(0)}\cdots\]
for some \(k,l\in[n]\). Hence, \(\alpha_{l}^{(0)}\) appears after \(\alpha_{k}^{(1)}\) in \(w\). By (5), every letter between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(0)}\) in \(w\) should be positive or one of \(\alpha_{-l}^{(-1)}\) and \(\alpha_{-l}^{(0)}\). If all the letters are positive, since \(\alpha_{k}^{(1)}\) is a \(\beta_{+}\) letter and \(\alpha_{l}^{(0)}\) is an \(\alpha_{+}\) letter, there would be a consecutive pair of the form \(\beta_{+}\alpha_{+}\) in \(w\), which is a contradiction to property (1).
Now suppose \(\alpha_{-l}^{(0)}\) is between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(0)}\) in \(w\). It cannot be immediately before \(\alpha_{l}^{(0)}\) since this would contradict property (1). But if it is not immediately before \(\alpha_{l}^{(0)}\), since \(\alpha_{-l}^{(0)}\) and \(\alpha_{l}^{(0)}\) are negatives of each other, there should be some negative letter between them. But this letter cannot be \(\alpha_{-l}^{(-1)}\) (since this should be before \(\alpha_{-l}^{(0)}\)). This is a contradiction to (5). Hence \(\alpha_{-l}^{(0)}\) cannot be between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(0)}\).
So we must have \(\alpha_{-l}^{(-1)}\) between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(0)}\) in \(w\). Again, \(\alpha_{-l}^{(-1)}\) cannot be immediately before \(\alpha_{l}^{(0)}\) since this would contradict property (3). This means that there is at least one letter between \(\alpha_{-l}^{(-1)}\) and \(\alpha_{l}^{(0)}\) and all such letters are positive. If one of them is a \(\beta_{+}\) letter, since \(\alpha_{l}^{(0)}\) is an \(\alpha_{+}\) letter, there would be a consecutive pair of the form \(\beta_{+}\alpha_{+}\), which is a contradiction to property (1). Hence all the letters between \(\alpha_{-l}^{(-1)}\) and \(\alpha_{l}^{(0)}\) are \(\alpha_{+}\) letters. But this is impossible by Lemma 4.6.
**Case 2:**\(w_{p}\) is a \(\beta_{-}\) letter and \(w^{\prime}_{p}\) is an \(\alpha_{-}\) letter.
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{-k}^{(0)}\cdots\] \[w^{\prime} =w^{\prime}_{1}\cdots w^{\prime}_{p-1}\alpha_{-l}^{(-1)}\cdots\]
for some \(k,l\in[n]\). Hence, \(\alpha_{-l}^{(-1)}\) appears after \(\alpha_{-k}^{(0)}\) in \(w\). By (5), each letter between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) in \(w\) has to be negative or one of \(\alpha_{l}^{(0)}\) and \(\alpha_{l}^{(1)}\). Just as before, all letters between
\(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) cannot be negative. The fact that \(\alpha_{l}^{(1)}\) cannot be between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) also has a similar proof as in the last case.
So we must have \(\alpha_{l}^{(0)}\) between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(-1)}\). All the letters between \(\alpha_{l}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) have to be negative. There are no \(\beta_{-}\) letters between them, otherwise there would be consecutive letters of the form \(\beta_{-}\alpha_{-}\), which contradicts property (1). So if there are letters between \(\alpha_{l}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) they should all be \(\alpha_{-}\) letters, but this cannot happen by Lemma 4.6. So \(\alpha_{l}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) are consecutive. By property (3), the letter before \(\alpha_{l}^{(0)}\) should be an \(\alpha\)-letter. And by (5), it is an \(\alpha_{-}\) letter. But since \(\alpha_{-k}^{(0)}\) is a \(\beta_{-}\) letter and all letters between \(\alpha_{-k}^{(0)}\) and \(\alpha_{l}^{(0)}\) are negative, there will be a consecutive pair of the form \(\beta_{-}\alpha_{-}\), which is a contradiction to property (1).
**Case 3:**\(w_{p}\) is a \(\beta_{+}\) letter and \(w_{p}^{\prime}\) is an \(\alpha_{-}\) letter.
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{k}^{(1)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{-l}^{(-1)}\cdots\]
for some \(k,l\in[n]\). If \(k\neq l\), this will contradict (5) since \(\alpha_{k}^{(1)}\) will be before \(\alpha_{-l}^{(-1)}\) in \(w\) but not in \(w^{\prime}\). So \(\alpha_{-k}^{(-1)}\) appears after \(\alpha_{k}^{(1)}\) in \(w\) and all letters between them are negative by (5) (note that \(\alpha_{k}^{(0)}\) is before \(\alpha_{k}^{(1)}\)). Again, \(\alpha_{-k}^{(-1)}\) cannot be immediately after \(\alpha_{k}^{(1)}\) since this would contradict property (1) and if there were some letters between \(\alpha_{k}^{(1)}\) and \(\alpha_{-k}^{(-1)}\), at least one of them would be negative, which contradicts (5).
**Case 4:**\(w_{p}\) is a \(\beta_{-}\) letter and \(w_{p}^{\prime}\) is an \(\alpha_{+}\) letter.
Arriving at a contradiction in this case follows using the same method as in the last case.
**Case 5:**\(w_{p}\) and \(w_{p}^{\prime}\) are both \(\alpha_{+}\) letters.
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{k}^{(0)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{l}^{(0)}\cdots\]
for some \(1\leq k<l\leq n\). We split this case into two possibilities depending on whether or not \(\alpha_{l}^{(0)}\) is before \(\alpha_{k}^{(1)}\).
**Case 5(a):**\(\alpha_{l}^{(0)}\) is before \(\alpha_{k}^{(1)}\) in \(w\).
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{k}^{(0)}\cdots\alpha_{l}^{(0)}\cdots \alpha_{k}^{(1)}\cdots\alpha_{l}^{(1)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{l}^{(0)}\cdots\,.\]
By (5), each letter between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\) in \(w\) is positive or one of \(\alpha_{-l}^{(-1)}\) or \(\alpha_{-l}^{(0)}\). Just as in **Case 1**, we can prove that \(\alpha_{-l}^{(-1)}\) and \(\alpha_{-l}^{(0)}\) cannot be between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\). Hence all the letters between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\) are positive. In fact, they all have to be \(\alpha\)-letters. Otherwise there would be a consecutive pair of the form \(\beta_{+}\alpha_{+}\), which contradicts property (1).
Each letter between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\) is positive or one of \(\alpha_{-k}^{(-1)}\), \(\alpha_{-l}^{(-1)}\), \(\alpha_{-k}^{(0)}\) or \(\alpha_{-l}^{(0)}\). Neither \(\alpha_{-k}^{(0)}\) nor \(\alpha_{-l}^{(0)}\) can be between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\), since this would mean that \(\alpha_{-k}^{(-1)}\) or \(\alpha_{-l}^{(-1)}\) is between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\), which cannot happen since we have already seen that there are only positive \(\alpha\)-letters between them.
If \(\alpha_{-k}^{(-1)}\) were between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\), it could not be immediately after \(\alpha_{k}^{(1)}\) since this would contradict property (1). If there were some letters between \(\alpha_{k}^{(1)}\) and \(\alpha_{-k}^{(-1)}\), at least one of them would be a negative letter other than \(\alpha_{-l}^{(-1)}\), which contradicts (5) (since \(\alpha_{l}^{(1)}\) is after \(\alpha_{-k}^{(-1)}\)).
So the only negative letter that can be between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\) is \(\alpha_{-l}^{(-1)}\). First, suppose that all letters between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\) are positive. Then all of them would have to be \(\beta_{+}\) letters (otherwise there would be consecutive \(\beta_{+}\alpha_{+}\) which contradicts property (1)). Then we would have that all letters between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\) are \(\alpha_{+}\) letters and all letters between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\) are \(\beta_{+}\) letters and repeated application of property (2) would give \(k>l\), which is a contradiction.
Next, suppose \(\alpha_{-l}^{(-1)}\) is between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\). If \(\alpha_{-l}^{(-1)}\) is not immediately before \(\alpha_{l}^{(1)}\), there will be some negative letter other than \(\alpha_{-l}^{(-1)}\) between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\), which we have already shown is not possible. So \(\alpha_{-l}^{(-1)}\) is immediately before \(\alpha_{l}^{(1)}\) and all the letters between \(\alpha_{k}^{(1)}\) and \(\alpha_{-l}^{(-1)}\) are positive and they have to all be \(\beta_{+}\) letters (otherwise there would be a consecutive pair of the form \(\beta_{+}\alpha_{+}\)). If \(\alpha_{k^{\prime}}^{(1)}\) is the \(\beta_{+}\) letter before \(\alpha_{-l}^{(-1)}\) (\(k^{\prime}\) could be \(k\)), then \(\alpha_{k^{\prime}}^{(0)}\) is the letter before \(\alpha_{l}^{(0)}\) and hence we get that the letters between \(\alpha_{k}^{(0)}\) and \(\alpha_{k^{\prime}}^{(0)}\) are all \(\alpha_{+}\) letters and their corresponding \(\beta\)-letters are consecutive and so by property (2), \(k\geq k^{\prime}\). But property (4) tells us that \(k^{\prime}>l\). So we get \(k>l\), which is a contradiction.
**Case 5(b):**\(\alpha_{l}^{(0)}\) is after \(\alpha_{k}^{(1)}\) in \(w\).
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{k}^{(0)}\cdots\alpha_{k}^{(1)}\cdots \alpha_{l}^{(0)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{l}^{(0)}\cdots.\]
By (5), each letter between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\) in \(w\) is positive or one of \(\alpha_{-l}^{(-1)}\) or \(\alpha_{-l}^{(0)}\). Just as in **Case 1**, we can prove that \(\alpha_{-l}^{(-1)}\) and \(\alpha_{-l}^{(0)}\) cannot be between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\). Hence all the letters between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\) are positive. Since \(\alpha_{k}^{(1)}\) is a \(\beta_{+}\) letter and \(\alpha_{l}^{(0)}\) is an \(\alpha_{+}\) letter and all letters in between are positive, there is a consecutive pair of the form \(\beta_{+}\alpha_{+}\), which is a contradiction to property (1).
**Case 6:**\(w_{p}\) and \(w_{p}^{\prime}\) are both \(\alpha_{-}\) letters.
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{-k}^{(-1)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{-l}^{(-1)}\cdots\]
for some \(1\leq l<k\leq n\). We split this case into two possibilities depending on whether or not \(\alpha_{-l}^{(-1)}\) is before \(\alpha_{-k}^{(0)}\).
**Case 6(a):**\(\alpha_{-l}^{(-1)}\) is before \(\alpha_{-k}^{(0)}\) in \(w\).
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{-k}^{(-1)}\cdots\alpha_{-l}^{(-1)} \cdots\alpha_{-k}^{(0)}\cdots\alpha_{-l}^{(0)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{-l}^{(-1)}\cdots\,.\]
By (5), each letter between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\) is negative or one of \(\alpha_{l}^{(0)}\) or \(\alpha_{l}^{(1)}\). If \(\alpha_{l}^{(1)}\) is between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\), it should not be immediately before \(\alpha_{-l}^{(-1)}\) since this would contradict property (1). But then there would be some positive letter other than \(\alpha_{l}^{(0)}\) between \(\alpha_{-l}^{(-1)}\) and \(\alpha_{l}^{(1)}\) which would contradict (5).
First, suppose \(\alpha_{l}^{(0)}\) is between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\). Just as before, using property (1) and Lemma 4.6, we can show that \(\alpha_{l}^{(0)}\) has to be immediately before \(\alpha_{-l}^{(-1)}\). Also, all the letters between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{l}^{(0)}\) have to be negative by (5). By property (3), the letter before \(\alpha_{l}^{(0)}\) has to be an \(\alpha\)-letter and hence here it is an \(\alpha_{-}\) letter. Hence, the letters between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{l}^{(0)}\) have to be \(\alpha_{-}\) letters since otherwise there be a consecutive pair of the form \(\beta_{-}\alpha_{-}\).
By (5), each letter between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\) is negative or one of \(\alpha_{k}^{(0)}\), \(\alpha_{l}^{(0)}\), \(\alpha_{k}^{(1)}\) or \(\alpha_{l}^{(1)}\). Now, \(\alpha_{k}^{(1)}\) cannot be between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\) since this would mean \(\alpha_{k}^{(0)}\) is between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\), which we have already shown is not possible. We have already assumed \(\alpha_{l}^{(0)}\) is between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\) and hence it cannot also be between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\). If \(\alpha_{k}^{(0)}\) were between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\), it could not have been immediately after \(\alpha_{-k}^{(0)}\) since this would contradict property (1). But then there would be some positive letter other than \(\alpha_{l}^{(1)}\) between \(\alpha_{-k}^{(0)}\) and \(\alpha_{k}^{(0)}\) (since \(\alpha_{-l}^{(-1)}\) is before \(\alpha_{-k}^{(0)}\) and hence \(\alpha_{l}^{(1)}\) is after \(\alpha_{k}^{(0)}\)), which is a contradiction to (5). This means that the only positive letter between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\) is \(\alpha_{l}^{(1)}\) which is between them since \(\alpha_{l}^{(0)}\) is between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\). Since \(\alpha_{l}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) are consecutive, so are \(\alpha_{l}^{(1)}\) and \(\alpha_{-l}^{(0)}\). The letters between \(\alpha_{-k}^{(0)}\) and \(\alpha_{l}^{(1)}\) are all negative and should be \(\beta_{-}\) letters or else it would cause a contradiction to property (1).
Hence, the situation in the case that \(\alpha_{l}^{(0)}\) is between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\) is the following: There is a string of consecutive \(\alpha_{-}\) letters starting with \(\alpha_{-k}^{(-1)}\) ending before \(\alpha_{l}^{(0)}\) which is immediately before \(\alpha_{-l}^{(-1)}\) and the corresponding \(\beta\)-letters for all these \(\alpha\)-letters are consecutive. If \(\alpha_{-k^{\prime}}^{(-1)}\) is the \(\alpha_{-}\) letter immediately before \(\alpha_{l}^{(0)}\) (\(k^{\prime}\) could be \(k\)), then property (3) gives that \(-k^{\prime}>-l\) and property (2) gives that \(-k\geq-k^{\prime}\) and hence we get \(-k>-l\), which is a contradiction.
Next, suppose that all the letters between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\) are negative. All of them should be \(\alpha_{-}\) letters by property (1). It can be shown, just as before, that the only possible positive letter between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\) is \(\alpha_{l}^{(0)}\). If \(\alpha_{l}^{(0)}\) is not between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\), property (2) leads to a contradiction just as in **Case 5(a)**. If \(\alpha_{l}^{(0)}\) is between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\), it should be immediately before \(\alpha_{-l}^{(0)}\) and again, following a method similar to **Case 5(a)**, this leads to a contradiction using property (4).
**Case 6(b):**\(\alpha_{-l}^{(-1)}\) is after \(\alpha_{-k}^{(0)}\) in \(w\).
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{-k}^{(-1)}\cdots\alpha_{-k}^{(0)}\cdots \alpha_{-l}^{(-1)}\cdots\alpha_{-l}^{(0)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{-l}^{(-1)}\cdots.\]
By (5), each letter between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\) is negative or one of \(\alpha_{l}^{(0)}\) or \(\alpha_{l}^{(1)}\). Just as before, \(\alpha_{l}^{(1)}\) cannot be between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\). If \(\alpha_{l}^{(0)}\) is not between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\), then all the letters between them are negative and there is a \(\beta_{-}\) letter, namely \(\alpha_{-k}^{(0)}\), between them and this would result in a consecutive pair of the form \(\beta_{-}\alpha_{-}\), which contradicts property (1).
So \(\alpha_{l}^{(0)}\) is the only positive letter between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\). If \(\alpha_{l}^{(0)}\) is before \(\alpha_{-k}^{(0)}\), we would get a consecutive pair of the form \(\beta_{-}\alpha_{-}\) between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) which contradicts property (1). So \(\alpha_{l}^{(0)}\) is between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(-1)}\). If \(\alpha_{l}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) were not consecutive, we would get a contradiction to property (1) if there were some \(\beta_{-}\) letter between them and if all were \(\alpha_{-}\) letters, this would contradict Lemma 4.6. So \(\alpha_{l}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) are consecutive, and by property (3), the letter before \(\alpha_{l}^{(0)}\) should be an \(\alpha\)-letter and in this case an \(\alpha_{-}\) letter, say \(\alpha_{-k^{\prime}}^{(-1)}\). But then we would get a consecutive pair of the form \(\beta_{-}\alpha_{-}\) between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-k^{\prime}}^{(-1)}\) which contradicts property (1).
**Case 7:**\(w_{p}\) is an \(\alpha_{-}\) letter and \(w_{p}^{\prime}\) is an \(\alpha_{+}\) letter.
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{-k}^{(-1)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{l}^{(0)}\cdots\]
for some \(k,l\in[n]\). If \(k\neq l\), we would get a contradiction to (5) since \(\alpha_{-k}^{(-1)}\) is before \(\alpha_{l}^{(0)}\) in \(w\) but not in \(w^{\prime}\). So \(\alpha_{k}^{(0)}\) appears after \(\alpha_{-k}^{(-1)}\) in \(w\) and each letter between them is positive or \(\alpha_{-k}^{(0)}\). Just as before, \(\alpha_{-k}^{(0)}\) being between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{k}^{(0)}\) would contradict either property (1) or (5). So all letters between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{k}^{(0)}\) are positive. If there is some \(\beta_{+}\) letter between them, there will be a consecutive pair of the form \(\beta_{+}\alpha_{+}\), which would contradict property (1). Hence, all letters between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{k}^{(0)}\) are \(\alpha_{+}\) letters. But this contradicts Lemma 4.6.
#### 7.3.2. Shi threshold
The Shi threshold arrangement in \(\mathbb{R}^{n}\) consists of the hyperplanes
\[X_{i}+X_{j}=0,1\]
for all \(1\leq i<j\leq n\). The arrangement obtained by setting \(X_{i}=x_{i}+\frac{1}{2}\), which we call \(\mathcal{ST}_{n}\), has hyperplanes
\[x_{i}+x_{j}=-1,0\]
for all \(1\leq i<j\leq n\). We use the same method as before to study the regions of this arrangement by considering \(\mathcal{ST}_{n}\) as a sub-arrangement of \(\mathcal{C}_{n}\).
The hyperplanes in \(\mathcal{C}_{n}\) that are not in \(\mathcal{ST}_{n}\) are
\[2x_{i} =-2,-1,0\] \[x_{i}+x_{j} =-2\] \[x_{i}-x_{j} =-1,0,1\]
where \(1\leq i<j\leq n\). The moves that change the inequality corresponding to exactly one of these hyperplanes are the \(\mathcal{CT}\) moves as well as the move corresponding to \(x_{i}+x_{j}=-2\), where \(i\neq j\) are in \([n]\): swapping consecutive \(\beta_{+}\) and \(\alpha_{-}\) letters (along with their negatives).
Two sketches are in the same region of \(\mathcal{ST}_{n}\) if and only if they are related by a series of such moves and we call such sketches \(\mathcal{ST}\) equivalent. A sketch is called \(\mathcal{ST}\) maximal if it is greater (in the lexicographic order) than all sketches to which it is \(\mathcal{ST}\) equivalent. Hence the regions of \(\mathcal{ST}_{n}\) are in bijection with the \(\mathcal{ST}\) maximal sketches. The following result can be proved just as Theorem 7.5.
**Theorem 7.7**.: _A symmetric sketch is \(\mathcal{ST}\) maximal if and only if the following hold._
1. _If a_ \(\beta\)_-letter is followed by an_ \(\alpha\)_-letter, the_ \(\beta\)_-letter should be negative and the_ \(\alpha\)_-letter should be positive with different mod-values._
2. _If two_ \(\alpha\)_-letters and their corresponding_ \(\beta\)_-letters are both consecutive and of the same sign then the subscript of the first one should be greater._
3. _If the_ \(n^{th}\) _and_ \((n+1)^{th}\)__\(\alpha\)_-letters are consecutive, then so are the_ \((n-1)^{th}\) _and_ \(n^{th}\) _with the_ \(n^{th}\)__\(\alpha\)_-letter being positive. In such a situation, if the_ \((n-1)^{th}\)__\(\alpha\)_-letter is negative and the_ \((n-1)^{th}\) _and_ \(n^{th}\)__\(\beta\)_-letters are consecutive, the_ \((n-1)^{th}\)__\(\alpha\)_-letter should have a subscript greater than that of the_ \((n+1)^{th}\)__\(\alpha\)_-letter._
4. _If the_ \((2n-1)^{th}\) _and_ \((2n+1)^{th}\) _letters are both negative_ \(\beta\)_-letters and their corresponding_ \(\alpha\)_-letters are consecutive, the subscript of the_ \((2n-1)^{th}\) _letter should be greater than that of the_ \((2n+1)^{th}\)_._
_Hence the regions of \(\mathcal{ST}_{n}\) are in bijection with sketches of the form described above._
## 8. Concluding remarks
We end the paper with some open questions. Bernardi [6] has dealt with arbitrary deformations of the braid arrangement. The first (ambitious) problem is to generalize all the results in his paper to arbitrary deformations of all reflection arrangements. This is easier said than done! Bernardi proves that the number of regions is equal to the signed sum of
certain "boxed trees". So the first step is to generalize the notion of boxed trees to certain decorated forests and then prove the counting formula, this is a work in progress. For certain well-behaved arrangements called "transitive deformations" Bernardi establishes an explicit bijection between the regions and the corresponding trees, via sketches. We don't have trees for all deformations of reflection arrangements but, we do have sketches that are in bijection with regions of (extended) Catalan deformations.
The main motivation behind Bernardi's work is an interesting pattern concerning a certain statistic on labeled binary trees. Ira Gessel observed that the multivariate generating function for this statistic specializes to region counts of certain deformations of the braid arrangement. So a new research direction could be to define a statistic on non-nesting partitions (of all types) such that the associated generating function specializes to region counts.
Another aspect of Bernardi's work that has not been discussed in the present paper is the coboundary and Tutte polynomials. Using either the finite field method or the method inspired by statistical mechanics, one should get a closed-form expression for these polynomials for the deformations we have considered. Moreover, the expression should be in terms of either sketches or non-nesting partitions.
Having a combinatorial model for the coefficients of the characteristic polynomial could be quite useful, especially to derive various inequalities that they satisfy. For example, denote by \(C(m,n,j)\) the number of symmetric \(m\)-non-nesting partitions of size \(n\) with \(j\) positive compartments. Then the following inequalities are not difficult to prove:
1. \(C(m,n,j)\leq C(m,n+1,j)\)
2. \(C(m,n,j)\leq C(m,n+1,j+1)\)
3. \(C(m,n,j)\geq\sum_{k\geq j+1}{k\choose j}C(m,n,k)\).
A research direction here is to develop a case-free strategy to obtain more such information. For example, we know that the coefficients are unimodal, so one could try to identify the peak in each case.
Recall the Raney numbers that are defined by
\[A_{n}(m,r):=\frac{r}{n(m+1)+r}{n(m+1)+r\choose n}\]
for all positive integers \(n,m,r\). The Catalan numbers are a special case of Raney numbers, obtained by setting \(m=r=1\). It was shown in [8] that the number of regions of the hyperplane arrangement
\[\{x_{i}=0\mid i\in[n]\}\cup\{x_{i}=2^{k}x_{j}\mid k\in[-m,m],1\leq i<j\leq n\}\]
is equal to \(n!A_{n}(m,2)\). Note that these arrangements define a GESA. A natural problem is to find a family of arrangements which are GESAs and whose number of regions is \(n!A_{n}(m,r)\). One can use tuples of labeled Dyck paths to enumerate these regions, so one can try to apply techniques from this paper to find a statistic for these objects.
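As a quick numerical sanity check, the Raney numbers and the region counts \(n!A_{n}(m,2)\) can be computed directly from the displayed formula; the following is a minimal sketch using only that formula (no other facts about the arrangements are assumed):

```python
from math import comb, factorial

def raney(n, m, r):
    # A_n(m, r) = r / (n(m+1) + r) * binom(n(m+1) + r, n); always an integer
    top = n * (m + 1) + r
    return r * comb(top, n) // top

# Catalan special case: A_n(1, 1) equals the n-th Catalan number binom(2n, n) / (n + 1)
assert all(raney(n, 1, 1) == comb(2 * n, n) // (n + 1) for n in range(1, 10))

# region counts n! * A_n(m, 2) of the arrangement above, for small n and m
for m in (1, 2, 3):
    print(m, [factorial(n) * raney(n, m, 2) for n in range(1, 6)])
```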
## 9. Acknowledgements
The authors are partially supported by a grant from the Infosys Foundation. The computer algebra system SageMath [19] provided valuable assistance in studying examples.
|
2309.08744 | Personalized Food Image Classification: Benchmark Datasets and New
Baseline | Food image classification is a fundamental step of image-based dietary
assessment, enabling automated nutrient analysis from food images. Many current
methods employ deep neural networks to train on generic food image datasets
that do not reflect the dynamism of real-life food consumption patterns, in
which food images appear sequentially over time, reflecting the progression of
what an individual consumes. Personalized food classification aims to address
this problem by training a deep neural network using food images that reflect
the consumption pattern of each individual. However, this problem is
under-explored and there is a lack of benchmark datasets with individualized
food consumption patterns due to the difficulty in data collection. In this
work, we first introduce two benchmark personalized datasets including the
Food101-Personal, which is created based on surveys of daily dietary patterns
from participants in the real world, and the VFNPersonal, which is developed
based on a dietary study. In addition, we propose a new framework for
personalized food image classification by leveraging self-supervised learning
and temporal image feature information. Our method is evaluated on both
benchmark datasets and shows improved performance compared to existing works.
The dataset has been made available at:
https://skynet.ecn.purdue.edu/~pan161/dataset_personal.html | Xinyue Pan, Jiangpeng He, Fengqing Zhu | 2023-09-15T20:11:07Z | http://arxiv.org/abs/2309.08744v1 | # Personalized Food Image Classification: Benchmark Datasets and New Baseline
###### Abstract
Food image classification is a fundamental step of image-based dietary assessment, enabling automated nutrient analysis from food images. Many current methods employ deep neural networks to train on generic food image datasets that do not reflect the dynamism of real-life food consumption patterns, in which food images appear sequentially over time, reflecting the progression of what an individual consumes. Personalized food classification aims to address this problem by training a deep neural network using food images that reflect the consumption pattern of each individual. However, this problem is under-explored and there is a lack of benchmark datasets with individualized food consumption patterns due to the difficulty in data collection. In this work, we first introduce two benchmark personalized datasets including the Food101-Personal, which is created based on surveys of daily dietary patterns from participants in the real world, and the VFN-Personal, which is developed based on a dietary study. In addition, we propose a new framework for personalized food image classification by leveraging self-supervised learning and temporal image feature information. Our method is evaluated on both benchmark datasets and shows improved performance compared to existing works. The dataset has been made available at: [https://skynet.ecn.purdue.edu/~pan161/dataset_personal.html](https://skynet.ecn.purdue.edu/~pan161/dataset_personal.html)
Food image classification, personalized classifier, image-based dietary assessment, self-supervised learning
## I Introduction
Food image classification is crucial for image-based dietary assessment, which aims to provide an accurate profile of foods consumed and their portion sizes based on an individual's habitual dietary intake [1]. Given the widespread use of mobile devices, many individuals now utilize food logging apps to track their food intake daily, aiding in maintaining a healthy diet over time [2, 3].
Although existing works [4, 5, 6, 7, 8, 9, 10] have demonstrated promising results using static food datasets, food image classification is much more challenging in real-world settings where data comes sequentially over time [11, 12, 13, 14, 15, 16]. The most recent work focuses on addressing this issue for each individual by designing a personalized food classifier [17, 18]. In such contexts, individuals capture food images in sequence, thereby documenting their dietary habits chronologically. We refer to this sequential data as a "food consumption pattern". A food consumption pattern typically exhibits unbalanced food distribution, diverse cooking styles, and previously unseen food classes over time [19]. The main objective of personalized food classification is to classify each food image as it appears sequentially over time in a food consumption pattern. This ensures enhanced classification accuracy tailored to a person's unique dietary progression. Fig. 1 shows an illustration of personalized food image classification, which learns the food classes that appear in a food consumption pattern over time. However, there exist two major challenges. The first is a lack of publicly available benchmark personalized food image datasets. This is mainly due to the difficulty in collecting food consumption patterns from different individuals over time. The second is a lack of exploration into learning sequential image data streams containing previously unseen food classes and associated contextual information from the food consumption pattern.
Our work aims to address both aforementioned challenges by creating benchmark datasets encapsulating personalized food consumption patterns and developing a novel personalized classifier to improve the performance of existing methods [17, 18, 20]. To address the first challenge of lacking available datasets, we introduce two benchmark personalized food consumption datasets by leveraging two public food image datasets [5, 21] with food categories. For both datasets, we collect short-term food consumption patterns from volunteers' input and then extend them into simulated long-term patterns using a method based on [22].
Existing personalized food classification methods [17, 18] store food image features extracted from pre-trained models and employ the nearest-class-mean classifier and nearest neighbor classifier to adapt to each individual's eating habits. The most recent work [23] further improves the performance by sharing food records across multiple food consumption patterns. Nonetheless, these approaches exhibit several limitations. Firstly, while processing each image in a consumption pattern, the pre-trained feature extractor remains static, unable to learn and update dynamically using new food images. Secondly, existing work only considers the short-term frequency of food occurrence, lacking exploration into temporal contextual information, which captures dietary changes over time.
Fig. 1: An illustration of personalized food image classification. The objective is to train a personalized classifier based on food consumption patterns to improve food classification performance.
In this work, we introduce a personalized classifier that addresses all the aforementioned limitations. By enhancing the image feature extraction with self-supervised learning, our model updates dynamically with each new food image. Moreover, we enrich the temporal context by concatenating image features within a sliding window, facilitating a deeper consideration of image feature-based temporal nuances. The main contributions of our work can be summarized as follows:
* We introduce two new benchmark datasets for personalized food image classification, including the **Food101-Personal** and the **VFN-Personal**, and we have made them publicly available.
* We propose a novel personalized classifier through feature extraction update using self-supervised learning and a sliding window technique to capture temporal contextual information based on image features.
## II Benchmark Datasets
In this section, we introduce two benchmark personalized datasets including Food101-Personal and VFN-Personal. Unlike existing food image classification methods that are trained on public food datasets [21, 5, 24], there is no publicly available personalized food image dataset due to the challenges in obtaining food consumption patterns for each individual, which reflects their dietary habits over time. Our work addresses this gap by first collecting short-term food consumption patterns through surveys or dietary studies and then simulating the long-term personalized food consumption patterns following the method in [22] where a modified Markov chain is used to capture temporal contextual information based on the provided initial short-term food consumption pattern.
**Food101-Personal:** We conducted an online survey using the Food-101 dataset [21], where participants were asked to simulate one week of food consumption patterns by selecting foods from the 101 classes in Food-101. We collected 20 participants' patterns, each with over 20 food records, and simulated long-term patterns using the method described in [22]. To develop a more representative benchmark, we cluster food images within each food class from the Food-101 dataset and employ a Gaussian distribution model as described in [22] to create intra-class dissimilarities within each class in a pattern. Overall, the benchmark includes 20 patterns with 300 images each and an average of 44 food classes per pattern.
**VFN-Personal:** For the VFN dataset [5], we conducted a dietary study from healthy participants aged 18 to 65 using the image-based dietary assessment system [25]. Participants captured images of foods they consumed for three days. We collected data from over 70 participants, retaining 26 short-term patterns which have at least 15 records each. Similar to the Food101-Personal dataset, we employed the method in [22] to simulate long-term food consumption patterns. Overall, the VFN-Personal dataset comprises 26 patterns, each containing 300 images and an average of 29 food classes per pattern.
## III Method
In this section, we introduce a novel method to improve the accuracy of personalized food image classification. Our approach consists of two key components: (1) employing self-supervised learning to update the feature extractor, as described in Section III-A, and (2) using a sliding window to capture multiple-image temporal information within a food consumption pattern, as explained in Section III-B.
### _Feature Extraction Using Self-supervised Learning_
One limitation of existing personalized food classification approaches is the fixed feature extractor, which is unable to update using new images in a food consumption pattern. In this paper, we address this issue by leveraging self-supervised learning [26, 27, 28, 29] to learn image features without ground truth labels. Our method is designed to be compatible with any self-supervised learning backbone.
To accommodate self-supervised learning in our scenario where a large training batch is not feasible as new images typically arrive sequentially one by one, we apply the following techniques to create representative input batches.
**Group normalization:** In existing self-supervised learning with batch normalization, the error tends to increase rapidly as the batch size decreases. However, utilizing large batch sizes in the early time steps is not feasible in our scenario due to the limited number of food images. To tackle this issue, we replace batch normalization layers with group normalization layers [30], which provides constant error across different batch sizes, making it a more reliable alternative.
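A minimal PyTorch sketch of this substitution is given below; the backbone choice, the number of groups, and the initialization are illustrative assumptions rather than the exact training configuration (in practice the backbone weights pre-trained on a food dataset would be loaded):

```python
import torch.nn as nn
import torchvision

def replace_bn_with_gn(module, num_groups=32):
    """Recursively swap BatchNorm2d layers for GroupNorm, whose error stays stable for small batches."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            # num_channels must be divisible by num_groups (true for ResNet-50 widths with 32 groups)
            setattr(module, name, nn.GroupNorm(num_groups, child.num_features))
        else:
            replace_bn_with_gn(child, num_groups)
    return module

backbone = replace_bn_with_gn(torchvision.models.resnet50(weights=None))
```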
**Random image sampling:** We employ a random image sampling technique as described in [31] to select input images for the self-supervised learning algorithm. Let \(t\) denote the current time step. Our objective is to randomly sample images from time steps before \(t\), rather than sampling them in a consecutive temporal order. The input set of images can be denoted as \(I_{t}=[f_{a},f_{b},\dots],\ 1\leq a,b,\dots\leq t\), where \(a,b,\dots\) represent the sampling time steps.
**Dual Instance Learning:** To tackle the issue of class imbalance and intra-class variability in food classification, we propose to use a pair of images (\(f_{i,1}\), \(f_{i,2}\)) from each class \(i\) rather than employing two augmentations of the same image as inputs. The motivation is that different images from the same class should exhibit similar feature representations.
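The two sampling components can be combined into a single batch-construction routine; the sketch below is a simplified illustration that assumes the labels confirmed at earlier time steps are stored alongside the images, and the function and variable names are illustrative placeholders rather than our implementation:

```python
import random
from collections import defaultdict

def build_ssl_batch(history, batch_size=32):
    """history: list of (image, class_label) for all time steps seen so far.
    Random sampling draws from the whole history instead of the most recent steps;
    dual instance learning pairs two different images of the same class as the two views."""
    by_class = defaultdict(list)
    for image, label in history:
        by_class[label].append(image)
    eligible = [c for c, imgs in by_class.items() if len(imgs) >= 2]
    pairs = []
    while eligible and len(pairs) < batch_size:
        c = random.choice(eligible)
        view1, view2 = random.sample(by_class[c], 2)   # two distinct instances of class c
        pairs.append((view1, view2))
    return pairs  # fed to the SSL loss in place of two augmentations of a single image
```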
### _Sliding Window_
Existing personalized food classification methods [17, 18] rely on a single image feature to classify images within individual consumption patterns. However, incorporating temporal contextual information based on multiple images within food consumption patterns is also important to help capture the unique diet characteristics of each individual. In this work, we propose to combine the single-image feature and multiple-image temporal information for classification where the latter is achieved by constructing the sliding window to capture past multiple-image temporal information based on concatenated image features.
Specifically, we first compute the single-image similarity score \(s^{b}_{t,N}\) to find the image that is most similar to the image to be classified. Given a new image at \(t=N\), with \(M\) food classes having appeared so far, we calculate \(s^{b}_{t,N}\) as the cosine similarity between the input image feature \(f_{N}\) and the previous image features \(f_{t}\), \(t\in\{1,2,\dots,N-1\}\), using the formula
\[s^{b}_{t,N}=\frac{f_{t}^{T}f_{N}}{||f_{t}||_{2}||f_{N}||_{2}},\;1\leq t<N \tag{1}\]
where the superscript \(T\) denotes the matrix transpose. Since the same food class may appear multiple times before \(t=N\), we first take the maximum similarity among image features in the same class before \(t=N\), denoted as \(s^{b}_{m},1\leq m\leq M\), and then apply softmax to get \(s^{b^{\prime}}_{m}\), where \(m\) denotes the food class index and \(c_{t}\) denotes the food class at time step \(t\).
Each sliding window \(W_{i}\) can be built by concatenating image features as follows:
\[W_{i}=([f_{i},f_{i+1},...,f_{i+k-1}],c_{i+k-1}),\;1\leq i\leq N-k+1 \tag{2}\]
where \(i\) denotes the sliding window index, \(k\) is the length of the window, and \(c_{i+k-1}\) represents the class label associated with the window \(W_{i}\). To find the \(W_{i}\) with the highest similarity to \(W_{N-k+1}\), which contains the current image to be classified, we apply the nearest neighbor classifier to calculate the cosine similarity among sliding windows as
\[s^{w}_{i,N-k+1}=\frac{W_{i}^{T}W_{N-k+1}}{||W_{i}||_{2}||W_{N-k+1}||_{2}} \tag{3}\]
For the food class-based similarity score, we take the maximum value among all windows belonging to the same food label, denoted as \(s^{w}_{m},1\leq m\leq M\). We then take the softmax of \(s^{w}_{m}\) and denote it as \(s^{w^{\prime}}_{m}\).
Finally, we combine the similarity scores \(s^{w^{\prime}}\) and \(s^{b^{\prime}}\) to obtain \(R_{m}\), which is computed as follows: \(R_{m}=s^{b^{\prime}}_{m}(s^{w^{\prime}}_{m})^{\alpha},\;0\leq\alpha\leq 1\), where \(\alpha\) denotes the weight associated with \(s^{w^{\prime}}\), which controls the level of significance of the sliding window method in computing the final similarity score. The higher the \(\alpha\) value, the greater the level of significance. The final prediction is calculated by \(p_{t}=\arg\max_{m}\{R_{m}\}\), where \(t\) denotes the time step at which the image is to be classified.
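Putting the two scores together, a simplified NumPy sketch of the prediction step is shown below. It assumes enough past images are available for at least one full window (as in our setting, where the method is applied for \(t\geq 50\)); classes that never end a past window simply receive a zero window score in this simplification, and all names are illustrative:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - np.max(v))
    return e / e.sum()

def predict(features, labels, f_new, k=5, alpha=0.0025):
    """features: past feature vectors f_1..f_{N-1}; labels: their food classes; f_new: new image feature."""
    F = np.stack(features)
    cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    classes = sorted(set(labels))

    # single-image score s^b: max cosine similarity per class, then softmax over classes
    s_b = softmax(np.array([max(cos(F[t], f_new) for t in range(len(F)) if labels[t] == c)
                            for c in classes]))

    # sliding-window score s^w: concatenate k consecutive features, window label = class of its last image
    cur_win = np.concatenate([F[-(k - 1):].reshape(-1), f_new])
    s_w = np.full(len(classes), -np.inf)
    for i in range(len(F) - k + 1):
        idx = classes.index(labels[i + k - 1])
        s_w[idx] = max(s_w[idx], cos(F[i:i + k].reshape(-1), cur_win))
    s_w = softmax(s_w)

    R = s_b * s_w ** alpha          # combined score R_m = s^b_m * (s^w_m)^alpha
    return classes[int(np.argmax(R))]
```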
## IV Experiments
In this section, we evaluate our proposed methods by comparing with existing works on Food101-Personal and VFN-Personal datasets introduced in Section II. We also conduct an ablation study to demonstrate the effectiveness of each component in our proposed framework.
### _Benchmark Protocol_
Different from the general image classification task, which trains a model on training data and evaluates on test data, there is no train/test split in a personalized dataset. Therefore, we propose the following evaluation protocol. Given a personalized dataset containing multiple food consumption patterns from different individuals, the personalized classifier is evaluated on each pattern one by one, assuming (1) the data becomes available sequentially, and (2) the model is updated in an online scenario, _i.e._, the number of training epochs is 1. At each time step in a pattern, the model first makes a prediction on the new image as one of the food classes seen so far and then uses it for the update. The performance on each pattern is evaluated by calculating the cumulative mean accuracy at each time step as:
\[C\_accuracy(t)=\frac{1}{t}\Sigma_{\tau=1}^{\tau=t}\mathbb{1}(p_{\tau}=c_{\tau}) \tag{4}\]
\(\mathbb{1}(\cdot)\) is a function indicating whether the current prediction of food is correct or not. \(p_{\tau}\) denotes the prediction at time step \(\tau\) for a pattern, and \(c_{\tau}\) represents the class label at time step \(\tau\) for a pattern. The overall performance on the entire dataset is calculated as the mean accuracy for all the personalized food consumption patterns.
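The protocol can be summarized by the following sketch, where `model.predict` and `model.update` stand in for whichever personalized classifier is being evaluated (the names are placeholders, not a specific API):

```python
def evaluate_pattern(model, pattern):
    """pattern: list of (image, label) in temporal order for one individual."""
    correct, curve = 0, []
    for t, (image, label) in enumerate(pattern, start=1):
        pred = model.predict(image)        # predicted as one of the food classes seen so far
        correct += int(pred == label)
        curve.append(correct / t)          # cumulative mean accuracy C_accuracy(t)
        model.update(image, label)         # online update, single epoch
    return curve

# overall score: average the per-pattern accuracies over all personalized patterns
```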
### _Experiment Setup_
**Methods for comparison:** We employ Simsiam [28] and Barlow Twins [27] as self-supervised learning backbones, replacing batch normalization layers with group normalization layers [30] for small input batch sizes. We compare our method with existing methods including **CNN**[32], which uses a general fixed-class convolutional neural network on the ISIA-500 dataset [24]; **1-NN**[20], which is a one-nearest-neighbor method; **SVMIL**[33], a common incremental learning method that utilizes the SVM model and updates the model based on a new image feature at every single time step; and **SPC**[18] and **SPC++**[17], which employ nearest neighbor and nearest-class-mean classifiers with a fixed pre-trained model, incorporating a time-dependent model and weight optimization for classification.
Furthermore, we conduct an ablation study to demonstrate the effectiveness of each proposed component including **Random Sampling (RS)**, which is a random sampling method illustrated in section III-A; **Dual Instance Learning (DIL)**,
Fig. 2: Overview of the sliding window method. We obtain the similarity score for each food class by using both the current image feature and sliding windows, denoted as \(s^{b}\) and \(s^{w}\), respectively. The final prediction is computed based on the combined similarity vector \(R\).
which is the dual instance learning approach described in section III-A; and **Sliding Window (SW)**, which employs the sliding window method to capture multiple-image temporal information, as explained in Section III-B.
**Implementation detail:** We utilize ResNet-50 [34] pre-trained on the ISIA-500 dataset [24] as the backbone to extract image features in food consumption patterns. The batch size is set to 32 with 1 training epoch in the online scenario. For SimSiam, we use the SGD optimizer with a learning rate of 0.001 and a weight decay of 0.0001. For Barlow Twins, we utilize the LARS optimizer [35] with a learning rate of \(1\times 10^{-6}\) on normalization and bias layers and \(1\times 10^{-5}\) on other layers, along with a weight decay of \(1\times 10^{-6}\). For our sliding window **SW**, we empirically set \(\alpha=0.0025\) with a window size of \(k=5\). The method is applied when \(t\geq 50\).
### _Results and Discussion_
Table I shows the results of personalized food classification at selected time steps for the Food101-Personal and VFN-Personal datasets. The first part of Table I shows the comparison of classification performance of our proposed method with existing works. Among the existing works, **CNN**[32] consistently exhibits low accuracy over time, as it does not learn to classify new classes from the consumption patterns. **SVMIL** underperforms compared to **1-NN**, due to only having one new image at each time step to learn from and not addressing the mini-batch learning issue. **1-NN**[20] shows inferior performance compared to **SPC**[18] because it does not consider the cold start problem. **SPC++**[17] outperforms **SPC**[18] by taking into account the short-term frequency of food consumption. Our proposed method outperforms the existing works at most time steps by considering the image feature updates during training in food consumption patterns over time and multiple-image temporal information. Our method improves the classification accuracy by \(2.6\%\) and \(1\%\) on the **Food101-Personal** and **VFN-Personal** datasets, respectively.
The second and third parts of Table I show ablation studies of our proposed method with SimSiam and Barlow Twins as backbones, respectively. From the **RS+SPC++** and **DIL+SPC++** methods, it can be observed that both **RS** and **DIL** contribute nearly equally to the improvement of classification accuracy, indicating their equal effectiveness in sampling input images for self-supervised learning. Integrating both methods (i.e., **RS+DIL+SPC++**) leads to further improvement of classification accuracy since it facilitates learning from a balanced class distribution, considers intra-class dissimilarity within
Fig. 3: Classification accuracy at each time step on the Food101-Personal dataset
a class, and learns general image features without memorizing the specific appearance order of images within a pattern. Moreover, integrating one of the sampling techniques with the **SW** method (i.e., **RS+SW** or **DIL+SW**) can further improve the classification performance, emphasizing the significance of identifying multiple-image temporal information in personalized food classification. Finally, integrating all the modules enables the model to achieve the best performance across all methods on both benchmark datasets for most time steps.
Fig. 3 shows the trends of classification accuracy at each time step for different methods on the **Food101-Personal** dataset. In general, all methods in comparison improve over time except for **CNN**. Our proposed method shows a faster rate of improvement, especially after \(t=100\), as it leverages more multiple-image temporal information from the past.
## V Conclusion
In this paper, we focus on personalized food image classification. We first introduce two new benchmark datasets, Food101-Personal and VFN-Personal. Next, we propose a personalized food classifier that leverages self-supervised learning to enhance image feature extraction capabilities. We present two sampling methods, random sampling and dual instance learning, to minimize learning biases associated with sequential data, and suggest a sliding window method to capture multiple-image temporal information for the final classification. Our method is evaluated on both benchmarks and shows promising improvements compared to existing works.
|
2302.00081 | SonoUno web: an innovative user centred web interface | Sonification as a complement of visualization is been under research for
decades as a new ways of data deployment. ICAD conferences, gather together
specialists from different disciplines to discuss about sonification. Different
tools as sonoUno, starSound and Web Sandbox are attempt to reach a tool to open
astronomical data sets and sonify it in conjunction to visualization. In this
contribution, the sonoUno web version is presented, this version allows user to
explore data sets without any installation. The data can be uploaded or a
pre-loaded file can be opened, the sonification and the visual characteristics
of the plot can be customized on the same window. The plot, sound and marks can
be saved. The web interface were tested with the main used screen readers in
order to confirm their good performance. | Gonzalo De La Vega, Leonardo Martin Exequiel Dominguez, Johanna Casado, Beatriz García | 2023-01-31T20:23:00Z | http://arxiv.org/abs/2302.00081v1 | # SonoUno web: an innovative user centred web interface+
###### Abstract
Sonification as a complement to visualization has been under research for decades as a new way of data deployment. The ICAD conferences gather specialists from different disciplines to discuss sonification. Different tools such as sonoUno, StarSound and Web Sandbox are attempts to provide a tool to open astronomical data sets and sonify them in conjunction with visualization. In this contribution, the sonoUno web version is presented; this version allows users to explore data sets without any installation. The data can be uploaded or a pre-loaded file can be opened, and the sonification and the visual characteristics of the plot can be customized in the same window. The plot, sound and marks can be saved. The web interface was tested with the most commonly used screen readers in order to confirm its good performance.
Keywords: Sonification, Graphic User Interface, Human centred design.
## 1 Introduction
The need to explore data sets beyond the visual field has led the community to study new ways to represent them; this is the case of sonification. In this sense, the ICAD conferences[1] have existed since 1992, bringing together scientists from different fields to discuss sonification, how people perceive it and how it can be used. Related to sonification, Phillips and Cabrera[2] present a sonification workstation; and related to astronomy, Shafer et al.[3] and Garcia Riber[4] develop specific projects to sonify solar harmonics and light curves.
During the past years, some sonification programs were created as tools to make possible the multimodal exploration of visual and audio graphs; this is the case of xSonify[5], Sonification Sandbox[6], Sonipy[7, 8], StarSound[9] and SonoUno[10]. All are standalone software packages that require the user to download and install them. Related to the possibility of analyzing data with sonification, Diaz-Merced [11] in her thesis, using the standalone sonification software xSonify, concluded that sonification as a complement to the visual display augments the detection of features in the data sets under analysis.
Given the complexity of using the available standalone software and to avoid errors and problems during installation, the idea of sonification software working through the web began to make sense. TwoTone[12], TimeWorkers[13], Sonification Blocks[14] and Web Sandbox[15] are different attempts to make it real, but none of them allow the end user to explore and make choices about the configuration and how they want to display the data and functionalities. In this sense, we present in this contribution a graphic user interface available on the web that offers the same user-centred framework and almost the same functionalities as the sonoUno [16] desktop software.
The sonoUno software, in its web and desktop versions, is a public tool to display, sonify and apply mathematical functions to any data set. The original application of the software is to astronomical data, but it can be used with any type of data presented in files of two or more columns (csv or txt). SonoUno has followed a user-centred approach from the beginning, first with a theoretical framework, second with focus group sessions and then with a community of people that kindly test the software and send feedback to the developers[17].
The sonoUno web interface was tested on different operating systems and with different screen readers. This work was partially financed by the project REINFORCE (GA 872859), with the support of the EC Research Innovation Action under the H2020 Programme SwafS-2019-1 (www.reinforceeu.eu).
## 2 Methodology
Bearing in mind that the end user must have the ability to choose, configure and decide how they want to explore their data sets, this project requires the use of HTML, JavaScript, CSS and ARIA (Accessible Rich Internet Applications) tools and protocols to make it possible. It is a novel approach, because it is not common for web interfaces to allow users to make decisions and configure the display during the interaction. Concerning that, collapsible panels were used, maintaining the principal framework with few functionalities and giving the user the power to decide what they want to display and use.
Considering how people with visual impairments handle the digital interface, and how screen readers read the graphic user interface, the sonoUno web design uses the ARIA standard. Not only were the ARIA labels indicated, but also a specific order was defined to generate a good workflow through the functionalities, ensuring that the screen reader describes things just as the visual display indicates. Moreover, the unnecessary elements of the visual display are not read by the screen reader; for example, the plot is not read as a plot; instead, the play button allows the user to sonify the plotted data.
Another big challenge for this development was to ensure the synchronization between the audio and the visual graph, bearing in mind the asynchronous nature of JavaScript. Timer-based events were used to guarantee the correct relationship during the reproduction of the data set. Furthermore, during the last tests using large data sets a new problem arose: in the web version, with all these functionalities, plotting and sonifying large data sets is very difficult, takes a long time and produces errors in some cases. To solve this issue a decimating filter is being tested.
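Although the web tool itself is implemented in JavaScript, the idea behind such a decimating filter can be sketched in a language-agnostic way; the bucket-based min/max reduction below (in Python, purely illustrative of the concept and not the filter actually implemented in sonoUno web) keeps extreme values so that narrow features remain visible and audible after downsampling:

```python
import numpy as np

def decimate_minmax(x, y, max_points=2000):
    """Downsample (x, y) for plotting/sonification, preserving the min and max of each bucket."""
    n = len(x)
    if n <= max_points:
        return x, y
    keep = []
    for idx in np.array_split(np.arange(n), max_points // 2):
        keep.append(idx[np.argmin(y[idx])])
        keep.append(idx[np.argmax(y[idx])])
    keep = np.unique(keep)
    return x[keep], y[keep]
```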
### Graphic User Interface design
In order to keep the web display as similar as possible to the desktop deployment, a menu was constructed at the top containing: input (allows opening csv/txt data sets, sounds and marks that could be made in a data set pointing to parts of interest in the data); output (allows saving the sound, the png plot and the marks); sample data (this menu item contains pre-loaded data sets that can be displayed in the tool); help (opens the complete manual of the tool); and quickstart (opens a summary of what to expect and how to use the principal functions).
The reproduction buttons are always displayed under the plot; these buttons are: play/pause, stop, mark point, delete mark, reset plot, the command text entry and the two sliders to indicate the x position and the tempo. On the other hand, the math functionalities and the configurations are located on collapsible panels; this allows maintaining an organized display with few elements that have to be read by the screen reader (which helps to reduce memory overload).
The sound and graphic display can be customized by the end user as they desire. For the sound, the maximum and minimum frequency, the volume, the sound type (sine, flute, piano and celesta), the choice between continuous and logarithmic scale, and the envelope of the sound can be set. Secondly, the plot configuration allows setting the titles, the grid, the line and the markers, and flipping the x and y axes.
## 3 Results
A screenshot of the interface is shown in Figure 1. This web tool allows users to see and hear data sets opened from csv or txt files; end users can also load data sets from the 'Data Sample' menu item, for example the gravitational wave glitch shown in Figure 1 was selected from that menu item. At the bottom, the text entry box allows writing the functionalities available on the interface (this feature allows using the web interface from there, avoiding the use of the mouse).
The plot section allows zooming directly on the plot with the mouse. The abscissa position slider (see Figure 2 at the top) allows moving the cursor through the data set and beginning the reproduction from there. The tempo slider allows speeding up and slowing down the reproduction. Figure 2 also shows the math function panel opened, where at the moment there are four functions ready to use: peak finder (in this case a new window allows selecting the percentage of sensitivity and whether to clean or mark the peaks); logarithmic; quadratic; and smooth. At the bottom of Figure 2 the collapsed configuration panels are shown. Figure 3 shows the configuration panels opened with all their functions.
The sonoUno web interface was tested on different platforms with different screen readers (NVDA on Windows, VoiceOver on macOS and Orca on Ubuntu). All the elements are enunciated by the screen reader; the elements on the panels are only recognizable when the panel is opened (this is very important to maintain the relation between the visual and auditory displays).
## 4 Conclusion
A web interface of the sonoUno software was developed, maintaining the original distribution of functionalities as closely as possible and continuing the user-centred design of the tool from the beginning. This web interface allows the user to explore the data set, making decisions about what they want to display and how.
Concerning the use of screen readers, the elements of the interface present descriptions, and the order of the audible display was carefully designed to ensure an adequate correlation between the visual and audible deployment. The principal
Figure 1: A sonoUno web interface screenshot; it includes the menu, plot, reproduction buttons and the command line text box. The plot shows a gravitational wave glitch, detected by EGO[18] and part of the open data provided by the REINFORCE project
Figure 3: A screenshot of sound and plot configurations panels opened.
Figure 2: A screenshot with the x position and tempo sliders at the top, the math function panel opened and the sound and plot configurations collapsed.
free screen reader of each operating system was tested, and the results show good performance.
This innovative approach seeks to continue growing, removing barriers and offering more accessible tools to analyse data sets. Since the beginning of the year, this web interface has been used by a professor in Spain with visually impaired students. This experience will let us know which features to enhance and whether the sonoUno web interface can be used by students to better understand math and science.
As future work, the web interface is being adapted for use from any mobile device, and the axis limits of the plot will be settable by indicating the specific values at which to cut. New user tests and focus groups will be performed to maintain and assure the user-centred design philosophy of sonoUno and all the associated tools.
|
2309.15098 | Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of
Language Models | We investigate the internal behavior of Transformer-based Large Language
Models (LLMs) when they generate factually incorrect text. We propose modeling
factual queries as constraint satisfaction problems and use this framework to
investigate how the LLM interacts internally with factual constraints. We find
a strong positive relationship between the LLM's attention to constraint tokens
and the factual accuracy of generations. We curate a suite of 10 datasets
containing over 40,000 prompts to study the task of predicting factual errors
with the Llama-2 family across all scales (7B, 13B, 70B). We propose SAT Probe,
a method probing attention patterns, that can predict factual errors and
fine-grained constraint satisfaction, and allow early error identification. The
approach and findings take another step towards using the mechanistic
understanding of LLMs to enhance their reliability. | Mert Yuksekgonul, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, Besmira Nushi | 2023-09-26T17:48:55Z | http://arxiv.org/abs/2309.15098v2 | # Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models
###### Abstract
We investigate the internal behavior of Transformer-based Large Language Models (LLMs) when they generate factually incorrect text. We propose modeling factual queries as Constraint Satisfaction Problems and use this framework to investigate how the model interacts internally with factual constraints. Specifically, we discover a strong positive relation between the model's attention to constraint tokens and the factual accuracy of its responses. In our curated suite of 11 datasets with over 40,000 prompts, we study the task of predicting factual errors with the Llama-2 family across all scales (7B, 13B, 70B). We propose SAT Probe, a method probing self-attention patterns, that can predict constraint satisfaction and factual errors, and allows early error identification. The approach and findings demonstrate how using the mechanistic understanding of factuality in LLMs can enhance reliability.1
Footnote 1: Our datasets, evaluation protocol, and methods will be released at [https://github.com/microsoft/mechanistic-error-probe](https://github.com/microsoft/mechanistic-error-probe).
## 1 Introduction
Large language models (LLMs) encode substantial knowledge (Petroni et al., 2019; Srivastava et al., 2022), yet they are prone to generating factually incorrect text. For instance, LLMs can generate confident-appearing completions with _hallucinations_(Zhang et al., 2023; Ji et al., 2023), fabricating entities or factual claims. As LLMs reach wider audiences and are used for safety-critical applications, understanding factuality becomes of paramount importance.
However, our understanding of how LLMs process factual queries and produce errors remains nascent. Existing approaches to interpret how models produce outputs fall into two categories; they either i) treat the LLM as a black box and ask it questions about generated factual claims, or ii) use white-box internals to study how LLMs process factual queries mechanistically. Though promising and exploratory, each approach has limitations.
Black-box approaches investigate the consistency of the claims of an LLM using follow-up questions with other LLMs (Cohen et al., 2023) or have the LLM judge its own response (Zhang et al., 2023; Manakul et al., 2023). However, explanations from LLMs have shown to be unreliable (Turpin et al., 2023) or convey contradictory signals, e.g. LLMs can produce an answer and then acknowledge that it is wrong (Zhang et al., 2023; Mundler et al., 2023). Further, these approaches use multiple generations from LLMs, which may be prohibitively expensive to use in practice.
Mechanistic white-box approaches investigate the internal mechanisms of LLMs to dissect factual recall. For instance, Meng et al. (2022); Geva et al. (2023) focus on facts with the (subject, relation, object) structure (e.g. Paris, capital of, France) and propose insightful mechanisms of how an LLM recalls a fact. They suggest that the Multi-Layer Perceptron (MLP) layers store facts, and attention layers transfer factual information from the subject tokens. However, these works focus on when the model can produce factually correct responses. Mechanics of factual errors are yet to be explored.
**Our Contributions:** Here, we investigate the internal mechanisms of LLMs when they produce factual errors. We propose to view factual queries as Constraint Satisfaction Problems (CSPs), where queries comprise constraints that completions should satisfy to be factually correct (SS3); e.g. in Figure 1 the _director name_ or the _award name_ are constraints on the model's response to a search query for a movie. We explore how properties of constraints, such as popularity, relate to the LLM's correctness and explore mechanisms of constraint satisfaction (SS4). We find that attention to constraint tokens correlates with LLM's factual correctness, where less attention indicates inaccurate responses.
Building on our insights, we propose SAT Probe, a method that predicts constraint satisfaction and factual errors using a simple probe on the LLM's attention to constraints (SS5). To test SAT Probe, we curate a suite of \(11\) datasets of single- and multi-constraint queries that in total comprise >\(40,000\) prompts. We find that SAT Probe performs comparably to the LLM's confidence. Further, SAT Probe can predict factual errors halfway through the forward pass to stop the computation partway and save costs. Our findings contribute to the mechanistic understanding of LLMs and demonstrate the potential of model internals to understand and mitigate factual errors.
## 2 Background: Language Models and Factual Recall
We first describe the transformer architecture (Vaswani et al., 2017). Our presentation largely follows that of Meng et al. (2022); Geva et al. (2023); Elhage et al. (2021), and similar to these works we omit the details around layer normalization for brevity. Let us have an input sequence of \(T\) tokens \(t_{1},...,t_{T}\) and \(t_{i}\in\mathcal{V}\) for a fixed vocabulary \(\mathcal{V}\). A token \(t_{i}\) is initially represented with a \(d\)-dimensional vector \(\mathbf{x}_{i}^{0}\in\mathbb{R}^{d}\) using an embedding matrix \(E\in\mathbb{R}^{|\mathcal{V}|\times d}\). We use \(\mathcal{V}^{+}\) to denote a sequence of tokens.
The architecture consists of \(L\) layers that transform the input token embeddings to a sequence of hidden states \(\mathbf{x}_{1}^{\ell},\dots,\mathbf{x}_{T}^{\ell}\) at each layer \(\ell\) where \(\mathbf{x}_{i}^{\ell}\) denotes the state of token \(i\). Often, each hidden state vector has the same number of dimensions, i.e., \(\forall\,i,\ell\ \mathbf{x}_{i}^{\ell}\in\mathbb{R}^{d}\). The states are obtained by:
\[\mathbf{x}_{i}^{\ell}=\mathbf{x}_{i}^{\ell-1}+\mathbf{a}_{i}^{\ell}+\mathbf{ m}_{i}^{\ell}, \tag{1}\]
where we call \(\mathbf{m}_{i}^{\ell}\) the _MLP contribution_ and \(\mathbf{a}_{i}^{\ell}\) the _attention contribution_ to a token \(i\) at layer \(\ell\). The LLM produces a predicted probability distribution for the next token by \(\hat{\mathbb{P}}(t_{T+1}|t_{1:T})=\text{Softmax}\big{(}W_{U}\mathbf{x}_{T}^{L} +\mathbf{b}_{U}\big{)}\), where \(W_{U}\in\mathbb{R}^{|\mathcal{V}|\times d}\) is the unembedding matrix, \(\mathbf{b}_{U}\in\mathbb{R}^{|\mathcal{V}|}\). In this work, we study the interactions between tokens. Unlike attention which is a function of the states of all
Figure 1: **Tracking attention to predict constraint satisfaction and factual errors. We view factual queries as Constraint Satisfaction Problems. That is, factual queries impose a set of constraints that the LLM’s responses must satisfy. To predict constraint satisfaction (i.e., factual correctness), we track the attention to the constraint tokens in an LLM (here, Llama-2 13B). We find that attention to the constraint tokens highly correlates with constraint satisfaction and factual correctness. The red text indicates factually incorrect completions, whereas the blue indicates factually correct completions.**
tokens, MLP contribution is a function of the state of the _same_ token. Thus, we do not focus on the MLP contribution; see Appendix A for a description.
The **attention** operation updates each token's state using the previous states at all positions, i.e., the representation for a token is updated by 'attending' to all the tokens that come before it. Formally, the operation involves four projection matrices \(W_{Q}^{\ell},W_{K}^{\ell},W_{V}^{\ell},W_{O}^{\ell}\in\mathbb{R}^{d\times d}\) that correspond to the 'query', 'key', 'value', and 'output' projections. Each of these matrices is split into multiple heads, where \(W_{Q}^{\ell,h},W_{K}^{\ell,h},W_{V}^{\ell,h}\in\mathbb{R}^{d\times d_{h}}\) and \(W_{O}^{\ell,h}\in\mathbb{R}^{d_{h}\times d}\) denote the matrices for head \(h\), \(d_{h}\) is the dimensionality for each head, and \(h\in[H]\). In practice, the embeddings are split into equal parts such that \(d_{h}=\frac{d}{H}\) (Elhage et al., 2021; Dar et al., 2022; Touvron et al., 2023). The _attention contribution_ from token \(j\) to token \(i\), \(\mathbf{a}_{i,j}^{\ell}\), is defined as
\[\mathbf{a}_{i,j}^{\ell}=\sum_{h=1}^{H}A_{i,j}^{\ell,h}\big{(}x_{j}^{\ell-1}W_{V}^{\ell,h}\big{)}W_{O}^{\ell,h} \tag{2}\] \[A^{\ell,h}=\text{Softmax}\Bigg{(}\frac{\Big{(}X^{\ell-1}W_{Q}^{\ell,h}\Big{)}\Big{(}X^{\ell-1}W_{K}^{\ell,h}\Big{)}^{T}}{\sqrt{d_{h}/H}}\Bigg{)}, \tag{3}\]
where \(\mathbf{a}_{i}^{\ell}=\sum_{j\in[T]}\mathbf{a}_{i,j}^{\ell}\) and Softmax is taken row-wise. \(A^{\ell,h}\in\mathbb{R}^{T\times T}\) are the _attention weights_ computed by the \(h\)-th attention head at layer \(\ell\), and \(A_{i,j}^{\ell,h}\) is the entry in the \(i\)-th row and \(j\)-th column of the matrix. For autoregressive LLMs, \(A^{\ell,h}\) is lower triangular since each token can only attend to the representation of the previous tokens. For brevity, we use \([H]\) to denote the sequence of integers from \(1\) to \(H\), and superscript \([H]\) indicates stacking items for all \(h\in[H]\), i.e., \(A_{i,j}^{\ell,[H]}=\{A_{i,j}^{\ell,h}\}_{h=1}^{H}\in\mathbb{R}^{H}\).
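For readers who want to reproduce these quantities, the per-head terms of Eq. (2) can be assembled from cached attention weights and the value/output projections of a layer; the sketch below assumes the tensors have already been extracted from the model, with illustrative shapes and names that are not tied to any particular implementation:

```python
import torch

def attention_contributions(A, X_prev, W_V, W_O):
    """A: (H, T, T) attention weights A[h, i, j] of one layer.
    X_prev: (T, d) hidden states x^{l-1}. W_V: (H, d, d_h), W_O: (H, d_h, d).
    Returns contrib[h, i, j, :] = A[h, i, j] * (x_j W_V^h) W_O^h, i.e. one term of Eq. (2)."""
    V = torch.einsum('td,hdk->htk', X_prev, W_V)      # (H, T, d_h): x_j W_V per head
    VO = torch.einsum('htk,hkd->htd', V, W_O)         # (H, T, d): mapped back to model space
    return A.unsqueeze(-1) * VO.unsqueeze(1)          # (H, T, T, d)

# a^{l}_{i,j} is recovered by summing over heads:
# a_ij = attention_contributions(A, X_prev, W_V, W_O).sum(dim=0)
```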
**Mechanics of Factual Recall in Language Models:** Recent work investigates the internal activations of language models to understand the mechanics of factual recall. Meng et al. (2022); Geva et al. (2021) suggest that MLP layers store factual associations, by studying factual queries of the form (subject, relation, object). Further, Geva et al. (2023); Meng et al. (2022); Elhage et al. (2021) suggest that attention layers transfer factual knowledge to where it will be used. Specifically, when the LLM is given the prompt _LeBron James professionally plays_, the information _LeBron James professionally plays basketball_ is extracted by the MLP contribution to the tokens for _LeBron James_ (subject). Next, the attention layers transfer the information from the tokens of the subject to the last token for the model to generate _basketball_ (object). However, these works study the internal mechanisms when the LLM's completions are factually correct, _not when the LLM produces factually incorrect text_.
## 3 Factual Queries as Constraint Satisfaction Problems
Choosing the right framework to study factual errors is challenging. One can naively categorize completions as factually correct or incorrect, yet this binary view can fall short, e.g., queries that are easy for the LLM and ones that it barely gets right are indistinguishable since both are labeled as 'correct'. Further, it prevents us from building a model around why some queries are more difficult or which parts of the queries drive the LLM to failure.
To systematically study factual queries and LLMs' internal behavior, we propose the CSP view:
**Definition 3.1** (Factual Query as a CSP).: A factual query is specified by a set of constraints \(\mathcal{C}=\{(C_{1},V_{1}),\ldots(C_{K},V_{K})\}\) where \(C_{k}\in\mathcal{V}^{+}\) indicates the sequence of tokens for the constraining entity \(k^{2}\), and \(V_{k}:\mathcal{V}^{+}\rightarrow\{0,1\}\) is a _verifier_ that takes a set of generation tokens as the input and returns whether the constraint indexed by \(k\) is satisfied. Under this view, we call a completion \(Y\) as a _factual error_ if \(\exists\,k\in[K]:V_{k}(Y)=0\), that is, if there is a constraint in the factual query that the response does not satisfy3. Otherwise, we call the response _factually correct_.
Footnote 3: While it may be nontrivial to generally isolate tokens for the constraining entity for arbitrary queries, in our evaluations we investigate settings in which we assume we have access to this set.
A large set of factual queries can be seen as a set of constraints that responses must satisfy to be correct, e.g., see Figure 1. This structure is comprehensive; for example, an important subset of
queries made by users to search engines has historically been conjunctions of constraints (Spink et al., 2001). Structured and multi-constraint queries are also inherent to faceted search and information retrieval (Tunkelang, 2009; Hahn et al., 2010). Further, under this definition, prior (subject, relation, object) queries (Meng et al., 2022) can be seen to have a single-constraint structure. Similarly, instructions to LLMs are also constraints for controlling the output (Ouyang et al., 2022).
Focusing on the constraints of a CSP can help us reason about the difficulty of a query. We start with two factors that can describe difficulty for factual queries: i) the popularity of the constraining entity, and ii) the constrainedness of the query.
**Popularity of the Entity vs LLM Performance:** Recent work documented the correlation between training data frequency and memorization in LLMs (Carlini et al. (2022); Biderman et al. (2023); _inter alia_). However, even for many open-source LLMs, we cannot compute the frequency of facts since we do not have the training data or a trivial way to search for complex facts. As an accessible proxy for entities from WikiData, we use the number of site links on the page as the _popularity_ metric and we hypothesize that it strongly correlates with the training data frequency or popularity. See Tables 3,4 for examples of popularity statistics across basketball players and football teams.
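One way to compute this proxy is via the public Wikidata API; the sketch below uses the standard `wbgetentities` action to count site links for a given item and is only illustrative of the lookup, not the exact pipeline used to build the datasets (the item id in the usage comment is a placeholder):

```python
import requests

def sitelink_count(qid):
    """Number of site links on a WikiData item, used as a popularity proxy for the entity."""
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={"action": "wbgetentities", "ids": qid, "props": "sitelinks", "format": "json"},
        timeout=10,
    )
    entity = resp.json()["entities"][qid]
    return len(entity.get("sitelinks", {}))

# e.g. sitelink_count("Q...") for the WikiData item of a basketball player or football team
```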
For Figure 2 left, we produce queries of the form _Tell me the year the basketball player [name] was born in_, and evaluate the LLM's performance via accuracy in this task. Then, we compare the correctness of the LLM for entities (players) of varying popularity. We observe that i) LLM performance is better for entities with higher popularity, and ii) larger LLMs have better performance for entities that are less popular. Similar relationships with popular/typical input are documented in concurrent work (Mallen et al., 2022; Kandpal et al., 2023; Yuksekgonul et al., 2023).
**Constrainedness of the CSP vs LLM Performance:** A well-explored complexity metric for CSPs is constrainedness (Gent et al., 1996). Here, we define constrainedness as the number of potential solutions to the given problem in the domain of the output. For instance, for a query of the form _Tell me a word that starts with the letter e and ends with the letter t_, we quantify constrainedness by the number of such words4 in the English language that satisfy these constraints.
Footnote 4: We use nltk.corpus.words to compute the number of such words.
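As an illustration of this constrainedness measure, the sketch below counts the candidate solutions for the example query using nltk.corpus.words as in the footnote; treating this word list as the output domain is a simplifying assumption.

```python
# Sketch: constrainedness of "a word that starts with 'e' and ends with 't'",
# measured as the number of candidate solutions in the output domain.
import nltk
nltk.download("words", quiet=True)          # one-time corpus download
from nltk.corpus import words

vocab = {w.lower() for w in words.words()}  # domain of possible answers (assumption)
candidates = [w for w in vocab if w.startswith("e") and w.endswith("t")]
print(len(candidates))  # fewer candidates => more constrained query
```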
In Figure 2 right, we show how constrainedness relates to correctness. We observe that i) as the problem becomes more constrained, the LLM performance drops, and ii) larger models generally perform better across all constrainedness levels.
**Summary:** We argue that the CSP lens can provide a useful vocabulary to capture the difficulty of factual queries, and can let us understand what parts of the query are more difficult. Our goal is to build a framework to discuss how LLMs process factual queries and produce factual errors. Next, we describe how we leverage LLMs' internal mechanisms to characterize and predict factual errors.
## 4 Understanding Factual Errors Using Attention to Constraints
Here, we explore how an LLM processes constraints when the model produces factually incorrect text. Geva et al. (2023); Meng et al. (2022) suggest that attention layers transfer the factual information
Figure 2: **Difficulty of the factual query vs LLM performance. Left: Popularity vs Correctness** We observe that the more popular the entity in the factual query is, the more correct the LLMs are. **Right: Constrainedness vs Correctness** We observe that the more constrained the problem is (i.e. has a smaller set of potential solutions), the less correct the LLMs are.
from the source entity (e.g., _Bad Romance_) to the last token for generation (to generate _Lady Gaga_, Figure 3) when the LLM correctly addresses a query. However, these works do not explore the mechanisms when the model produces factually incorrect responses. Intuitively, we want to quantify how the LLM interacts with constraints to understand constraint satisfaction and thus factual errors.
To study how the LLM processes a constraint, we focus on the attention _to the constraint tokens_, i.e.,
\[\mathbf{a}_{c,T}^{\ell,h}=A_{c,T}^{\ell,h}\big{(}x_{c}^{\ell-1}W_{V}^{\ell,h} \big{)}W_{O}^{\ell,h}, \tag{4}\]
where \(\mathbf{a}_{c,T}^{\ell,h}\in\mathbb{R}^{d}\) indicates the attention contribution from a constraint token \(c\) through a head \(h\) to the final token \(T\) (where the \(T+1\)-th token will be generated). The total attention contribution to \(T\) is then \(\mathbf{a}_{c,T}^{\ell}=\sum_{h}\mathbf{a}_{c,T}^{\ell,h}\). When the constraint comprises multiple tokens denoted by the set \(C\), we take the maximum value across all constraint tokens, i.e., \(A_{C,T}^{\ell,h}=\max_{c\in C}A_{c,T}^{\ell,h}\) or \(\mathbf{a}_{C,T}^{\ell,h}=\max_{c\in C}||\mathbf{a}_{c,T}^{\ell,h}||\)5. An example is shown in Figure 1; we track the regions that are marked by \(C_{1}\) and \(C_{2}\), which in this case represent the constraint that the movies were directed by the specified directors, and also won the specified awards.
Footnote 5: While there is earlier work that suggests the last constraint token could be the most important, we observed that in practice there are subtleties. See Appendix C.2 for a short discussion.
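The following sketch spells out Eq. (4) for generic tensors; it is illustrative only, and in practice the per-head attention weights, value and output projections would have to be read out of the model (e.g. via forward hooks), which we do not show.

```python
# Sketch of Eq. (4): attention contribution from a constraint token c to the
# final token T, for one layer l and head h. Tensors are stand-ins; A, x_prev,
# W_V and W_O would normally be extracted from the model during a forward pass.
import torch

def attn_contribution(A_lh, x_prev, W_V, W_O, c, T):
    """A_lh: (seq, seq) attention weights of head h in layer l
       x_prev: (seq, d_model) residual stream entering layer l
       W_V: (d_model, d_head), W_O: (d_head, d_model)"""
    v_c = x_prev[c] @ W_V                 # value vector of the constraint token
    return A_lh[T, c] * (v_c @ W_O)       # a^{l,h}_{c,T} in R^{d_model}

def constraint_contribution(A_lh, x_prev, W_V, W_O, constraint_tokens, T):
    # For multi-token constraints, take the max-norm contribution over tokens.
    norms = [attn_contribution(A_lh, x_prev, W_V, W_O, c, T).norm()
             for c in constraint_tokens]
    return max(norms)
```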
To understand whether attention to constraints can help explain factual errors, we study three factors. First, we explore the relationship between attention and popularity of the constraining entity, as we find that LLM's correctness correlates with popularity (Fig 2). Next, we explore the relation of attention to the LLM's confidence \(\hat{\mathbb{P}}(Y|X)\), which estimates the probability of a completion \(Y\) given the prompt \(X\). Finally, we explore how attention patterns behave when we scale the LLMs.
**Attention predicts popularity:** In Figure 9, we show the results for predicting the popularity of the constraining entity in the prompt (the basketball player) only from the attention weights \((A_{C,T}^{[L],[H]})\) using linear regression. In all LLMs (Llama-2 7B, 13B, 70B), the predicted popularities using attention values significantly correlate with the ground truth popularity values (over a held-out set, with Spearman's Correlation \(\rho\geq 0.65\) and p-value \(p\approx 0\) for all LLMs). We give further details of the protocol in Appendix C.1.
This is a curious case: _Why should we have expected that LLMs have more attention to popular entities_? This finding aligns with the recent theoretical work that identifies a _frequency bias_ of self-attention layers, gradually putting more attention on tokens that co-occur a lot with the query token during training (Tian et al., 2023). However, our main goal is to characterize and predict factual errors. While popularity seems predictive, we may not always have access to a clean popularity measure or training data frequency of constraints.
**Attention correlates with confidence and LLM's correctness:** In Figure 4 (left four panels), each row represents the attention flow across layers for a single sample and we sort the points by the
Figure 3: **Tracking attention to predict factual errors in single-constraint settings. We track the attention contribution from the constraint tokens during generation. We observe a small-norm contribution (\(||\mathbf{a}_{c,T}^{\ell}||\)) when the LLM makes a factual error, in contrast, we observe a larger-norm attention contribution when the LLM is factually correct. The red text indicates factually incorrect completions, whereas the blue text indicates factually correct completions.**
confidence of the LLM. The leftmost panels show the attention for the \(25\) most confident predictions and the middle panels show the \(25\) least confident predictions; where the x-axis shows the layers, and colors indicate the norm of the attention contribution from the constraints \(\left(||\mathbf{a}_{C,T}^{\ell,[H]}||\right)\). The core observation is that _when the LLM is accurate, there is more attention to constraint tokens_ (first column) in sharp contrast to cases where the LLM fails and the attention is weak (second column).
In Figure 4's rightmost plots, queries are sorted and grouped by the LLM's total attention contribution from the constraints across all layers (\(\sum_{\ell}||\mathbf{a}_{C,T}^{\ell}||\)), and LLM's accuracy is computed for each group. Similar to the left panels, we observe that _the magnitude of attention to constraints correlates with accuracy_. This observation is not only interesting in hindsight; aforethought could have suggested either outcome (e.g., more attention correlating with hallucination). While the phenomenon deserves further explanatory investigation, this is a positive observation indicating that attention to constraints can be used to predict the LLM's success.
**Language models grow larger, pay more attention, and succeed more:** In Figure 5, each panel compares the attention to constraints for the basketball player queries between two different LLMs, where the x-axis indicates the smaller LLM, the y-axis indicates the larger LLM and the coloring indicates the success of the pair of LLMs. We group prompts by the attention contribution, and color the cells by the most frequent category. We find that relatively more attention in both LLMs generally indicates success for both, and less attention in both LLMs indicates failure for both. For cases on the top left, the larger LLM does pay more attention, and only the larger LLM succeeds. Overall, we note a consistent pattern between attention and correctness across model scales; and performance improvements in larger LLMs relate to increased attention to constraint tokens.
Figure 4: **Attention contribution correlates with correctness. The first two columns of panels** give the \(25\) samples for which the LLM makes the most and the least confidence predictions, respectively. The color indicates the norm of the attention contribution from the constraint, where each column in the panel captures a layer in the LLM and each row is a specific sample. **The last column of panels** relates the total attention to constraints and accuracy, where the x-axis is the attention contribution percentile in the dataset and the y-axis is the accuracy in the bin. The results are for the year of birth queries for basketball players (see Figure 13).
Figure 5: **Attention contribution and model scaling.** Here, the x-axis and y-axis show the attention to the constraints \(\left(||\mathbf{a}_{C,T}^{\ell,[H]}||\right)\) for the smaller LLM and the larger LLM, respectively, and normalized via dividing by the maximum value. Coloring is determined by which of the two LLMs succeeds in factual queries. We group the factual queries by their x-axis value and y-axis values and color the cell with the most frequent category in the cell. Appendix Figure 11 presents the complete scatter plot.
**Summary:** In this section, we explored the interaction between attention, constraints, and factual correctness. Our findings indicate that attention can help us reason about and predict factual errors. In the next section, we pull this thread and conduct extensive experiments to start tapping into the potential of the LLMs' attention patterns for factual error prediction.
## 5 Predicting Factual Errors Using Attention to Constraints
Here, we show how our mechanistic understanding can be used to predict the failures of LLMs. Let \(X\) denote a prompt, a sequence of tokens that specifies a factual query with a set of constraints \(\mathcal{C}=\{(C_{1},V_{1}),\ldots(C_{K},V_{K})\}\). Let \(\hat{Y}\) be the response tokens obtained from the LLM after feeding \(X\). Broadly, we want to design a function \(f\) to estimate the probability that a constraint \(k\) is satisfied:
\[\hat{\mathbb{P}}(V_{k}(\hat{Y})=1)=f(X,\hat{Y},C_{k},\mathcal{M}),\]
using the LLM \(\mathcal{M}\), the prompt, the completion, and the constraints. For single-constraint factual queries where there is a single factually correct completion \(Y\), this can be reduced to the correctness, i.e. \(\hat{\mathbb{P}}(Y=\hat{Y})=f(X,\hat{Y},\mathcal{M})\). Note how this formalism closely matches that of selective classification (Geifman and El-Yaniv, 2017), where the goal is to abstain when the model would otherwise fail.
**Datasets:** For our evaluations, we curate a benchmark with \(11\) datasets that are listed in Table 1 containing \(>\)\(40,000\) queries. For single-constraint queries, we curate 4 datasets using WikiData and 3 datasets using the existing CounterFact dataset (Meng et al., 2022). We further designed four \(2\)-constraint datasets, using WikiData (Books and Movies), Opendatasoft (2023) (Nobel Winners), or hand-curation (Words). Further details about all data curation can be found in Appendix D.
**Constraint Verification \((V_{k})\):** We use Exact Match6 for single-constraint queries with a single solution. We probe WikiData to verify constraints when queries have multiple potential solutions (e.g., we check WikiData for whether the movie name generated by the model is directed by the director in the constraint). Appendix D.3 contains a complete description of the methodology.
Footnote 6: We acknowledge that exact match is a strict criterion that could introduce noise to our evaluations, and it constitutes a limitation where we use this verification. Evaluating factual correctness is still an evolving research topic (Min et al., 2023) and we do our best to find queries and prompt structures that suffer the least from this.
**Models:** We use the 7B, 13B, and 70B parameter variants of Llama-2 (Touvron et al., 2023) released through the HuggingFace's Transformers (Wolf et al., 2019). We perform our experiments on a single NVIDIA A100-PCIE-80GB GPU. 80GB memory can only fit the Llama-2 70B in 8-bit precision (Dettmers et al. (2023) report marginal-to-no performance drop). See Appendix A for further details on models.
**Evaluation Metrics**: We give the AUROC for the binary task of predicting failure or success as it does not require setting a threshold for the classifier. We also report \(\text{Risk}_{\text{Top 20\%}}\) (the fraction of mistakes for the samples with top 20% of the scores by the predictor \(f\)), \(\text{Risk}_{\text{Bottom 20\%}}\) (the fraction of mistakes for the samples with the bottom 20% of the scores by the predictor \(f\)). These metrics measure how well the model performs on the most and least reliable completions according to the predictor \(f\). For a good failure predictor, we want the actual error to be low among high-confidence examples and have a large fraction of failures among low-confidence examples.
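For concreteness, a small sketch of these metrics is given below; the bin construction (sorting by predictor score and taking the 20% tails) is our reading of the definitions, not code from the paper.

```python
# Sketch of the evaluation metrics: AUROC plus the error rate among the
# 20% highest-scored and 20% lowest-scored samples according to predictor f.
import numpy as np
from sklearn.metrics import roc_auc_score

def risk_metrics(scores, correct):
    """scores: predictor outputs (higher = predicted success), correct: 0/1 labels."""
    scores, correct = np.asarray(scores), np.asarray(correct)
    auroc = roc_auc_score(correct, scores)
    order = np.argsort(scores)
    n = max(1, int(0.2 * len(scores)))
    risk_bottom = 1.0 - correct[order[:n]].mean()   # Risk_Bottom 20%
    risk_top = 1.0 - correct[order[-n:]].mean()     # Risk_Top 20%
    return auroc, risk_top, risk_bottom
```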
| Dataset Name | Constraint Type(s) | \(N\) | Constraint Source | Verifier | Example Prompt |
| --- | --- | --- | --- | --- | --- |
| Basketball Players | _born in the year_ | 13631 | WikiData | Exact Match | Figure 13 |
| Football Teams | _founded in the year_ | 8825 | WikiData | Exact Match | Figure 14 |
| Movies | _directed by_ | 12197 | WikiData | Exact Match | Figure 15 |
| Songs | _performed by_ | 2813 | WikiData | Exact Match | Figure 16 |
| CounterFact | _mother tongue_ | 919 | CounterFact | Exact Match | Figure 17 |
| CounterFact | _citizenship_ | 958 | CounterFact | Exact Match | Figure 18 |
| CounterFact | _headquarter location_ | 756 | CounterFact | Exact Match | Figure 19 |
| Books | _author, published year_ | 1492 | WikiData | WikiData Search | Figure 20 |
| Movies | _directed by, won award_ | 1066 | WikiData | WikiData Search | Figure 21 |
| Nobel Winner | _won Nobel, born in city_ | 1290 | Opendatasoft (2023) | WikiData Search | Figure 22 |
| Words | _starts with, ends with_ | 1352 | WikiData | Character Match | Figure 23 |

Table 1: **Overview of Datasets.** The columns denote the dataset name, constraint type, number of prompts, and the data sources used to collect entities and verify LLM responses, respectively.
### Predicting Factual Correctness
**Predictors (\(f\)):** We propose the constraint satisfaction probe, SAT Probe, that predicts whether an individual constraint is satisfied by only looking at self-attention layers. To demonstrate the simplicity, we define \(f\) to be a linear function of the attention weights to or contributions from constraints:
\[\hat{\mathbb{P}}(V_{k}(\hat{Y})=1;A_{C_{k},T})=\sigma(w^{T}A_{C_{k},T}+b),\]
where \(A_{C_{k},T},w^{T}\in\mathbb{R}^{L\times H},b\in\mathbb{R}\) and \(A_{C_{k}}=\{\forall\ell\in[L],h\in[H]:A_{C_{k},T}^{\ell,h}\}\). That is, we linearly probe the attention weights across all layers and attention heads, and we estimate the parameters \(w\) and \(b\) using Logistic Regression. In the multi-constraint setting, using SAT Probe, we simply combine the predictions for multiple constraints:
\[\hat{\mathbb{P}}(\prod_{k\in[K]}\mathbf{1}_{\{V_{k}(\hat{Y})=1\}};A_{C_{k},T} )=\prod_{k\in[K]}\hat{\mathbb{P}}(V_{k}(\hat{Y})=1;A_{C_{k},T}).\]
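A minimal sketch of such a probe is shown below; the attention features are synthetic stand-ins here, and the probe is ordinary logistic regression over the flattened per-layer, per-head attention weights, as described in the text.

```python
# Sketch of SAT Probe: a logistic-regression probe on attention weights to the
# constraint, flattened across all layers and heads (features are synthetic here).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_attn = rng.random((500, 32 * 32))   # stand-in for (queries, L*H) attention features
y_sat = (X_attn.mean(axis=1) + 0.1 * rng.standard_normal(500) > 0.5).astype(int)

probe = LogisticRegression(max_iter=1000).fit(X_attn, y_sat)
p_sat = probe.predict_proba(X_attn)[:, 1]     # estimated P(V_k(Y)=1) per query

# Multi-constraint queries: multiply per-constraint probabilities, as in the text:
# p_correct = np.prod(np.stack([p_sat_k for k in constraints]), axis=0)
```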
**Baselines:** We compare SAT Probe to the Confidence of the model, \(\hat{\mathbb{P}}(\hat{Y}|X)\), which concurrent work reports as a good hallucination detector (Varshney et al., 2023); and a Constant predictor that predicts the majority class (either 0 or 1) as baselines. Note that while Confidence is a strong baseline, it only provides an overall estimate for the whole generation, and cannot predict the failure for individual constraints. We also use the Popularity baseline only in the single-constraint WikiData datasets that we curated - as in other datasets, it is not accessible (CounterFact) or unclear (multi-constraint) how to compute. We do not need to map these scalar scores to a probability measure, as all of our evaluation metrics quantify whether classes are well-separated (e.g., AUROC). In the Appendix, we also give results with featurization using the attention contribution, e.g., \(\mathbf{a}_{C_{k},T}=\{\forall\ell\in[L],h\in[H]:||\mathbf{a}_{C_{k},T}^{\ell,h }||\}\), denoted by SAT Probe(a).
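The sketch below illustrates one way such a Confidence score can be computed, as the log-probability of the completion under the model using HuggingFace Transformers; the model identifier is a placeholder, and the prompt/completion token boundary is handled only approximately.

```python
# Sketch of the Confidence baseline: the model's probability of its own
# completion, P(Y_hat | X), accumulated over the completion tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"           # placeholder model id (assumption)
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

def completion_confidence(prompt, completion):
    ids = tok(prompt + completion, return_tensors="pt").input_ids
    n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logits = model(ids).logits
    logp = torch.log_softmax(logits[0, :-1], dim=-1)      # next-token log-probs
    targets = ids[0, 1:]
    span = range(n_prompt - 1, ids.shape[1] - 1)          # completion positions
    return float(sum(logp[i, targets[i]] for i in span))  # log P(Y_hat | X)
```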
**Results:** In Figure 6a, we present the overall AUROC of predicting factual correctness for multi-constraint queries, and Table 6 contains all metrics. In this task, we find that SAT Probe mostly performs comparably to and sometimes better than the model's Confidence in the correctness prediction task, in addition to being able to provide fine-grained feedback (i.e. which constraint is not satisfied, see §5.2). In Figure 6b, we present the AUROC results for the single-constraint setting, and in Table 5 we give the results in tabular format. In the single-constraint setting, SAT Probe is comparable to Confidence. Further, we find that the approaches are comparably good in isolating highly reliable vs unreliable points (Tables 5, 6 show \(\text{Risk}_{\text{Top 20\%}}\) and \(\text{Risk}_{\text{Bottom 20\%}}\) metrics).
Overall, these results demonstrate how the attention weights alone can predict failures well. It is significantly better than the Constant baseline which suggests that it contains a nontrivial amount of information, and sometimes better than the Confidence. Surprisingly, even though LLMs are optimized by maximizing the next token probability, simply probing attention patterns exclusively on the constraints can match or sometimes exceed this performance (without using other states or non-constraint tokens). However, attention alone does not explain all failures (we observe some attention on constraints where the model still fails), and there is an opportunity for further investigation. Our findings demonstrate the value in studying the procedure by which a model produces an output, rather than only the output itself.
### Extensions
We study 3 extensions to explore the potential of SAT Probe and propose avenues for future work.
**Predicting partial constraint satisfaction:** SAT Probe gives access to failure predictions for individual constraints. We report the partial constraint satisfaction results in Table 7, giving the failure prediction metrics for individual constraints, and find results comparable to the single-constraint prediction task. While SAT Probe lets us test whether each constraint is satisfied, using the raw Confidence does not since it only outputs a single value for all constraints. We believe producing fine-grained reliability statements, such as reporting partial constraint satisfaction, can prove useful for debugging (e.g., failing to follow specific instructions).
**Early stopping:** Using SAT Probe, we can predict failures partway through the computation and save costs. In Appendix Figure 7, we show that we can predict failures earlier in the inference with an experiment across all single-constraint datasets. Specifically, we use only attention weights up to an intermediate layer and try to predict failures ahead of time. For Llama-2 7B and 13B, we observe
that we can stop the inference early without degradation in the average performance and save \(50\%\) of wall-clock time on failures for most datasets. For the 70B model, early stopping of the inference results in a slight drop in performance. Especially for use cases where we have a high Risk\({}_{\text{Bottom 20\%}}\), we can isolate these most unreliable predictions and abstain from making a prediction. See Appendix B.4 for details on the ablation.
**Generalized predictors:** We explore using a single failure predictor across all constraint types. For this purpose, we train a failure predictor on a mixture of single constraint datasets and report the performance over individual datasets in Appendix B.5 and Figure 8. We observe the performance is competitive with training individual predictors for each constraint and still better than Popularity. This suggests the potential of general factual error detectors, as a future work avenue.
## 6 Related Works
Carlini et al. (2021, 2022); Biderman et al. (2023) related the training data frequency of a string to memorization in LLMs. In recent concurrent work, Mallen et al. (2022); Kandpal et al. (2023); Sun et al. (2023) document the relation between the success/difficulty of factual queries and a measure/proxy for training data frequency. Several recent works investigated the mechanics of factual recall. There are numerous works Elhage et al. (2021); Devlin et al. (2018); Olsson et al. (2022); Clark et al. (2019); Tian et al. (2023); Hutt et al. (2019); Voita et al. (2019); Burns et al. (2022); Gurnee et al. (2022) that discuss how specific attention heads exhibit certain functionalities, such as heads that encode syntax or induction heads that copy tokens. Further, Meng et al. (2022); Geva et al. (2023) discuss the role of attention in specifically transferring factual information, and Hernandez et al. (2023) studies how specific relations can be decoded with a linear transformation from the subject tokens. However, _none of these works_ investigate the mechanisms at play when factually incorrect information is generated. Halawi et al. (2022); Belrose et al. (2023) study how LLMs internally deal with safety-critical input, such as false demonstrations for in-context learning or prompt injections. These share a similar insight as ours: analyzing latent information _across layers_ and not in isolation could be more useful for failure prediction. Varshney et al. (2023) detects and mitigates hallucinations using the model's logits, which is closest to our Confidence baseline. Mundler et al. (2023);
Figure 6: **Factual Error Prediction.** (a) Predicting the failure probability for individual constraints using SAT Probe and combining them performs comparably, sometimes better than Confidence. (b) Predicting failure for single-constraint queries. SAT Probe is comparable to Confidence and better than Popularity. We average the performance across all relations for CounterFact datasets. For both figures, error bars show the standard error across 10 random seeds where the randomness is over rerunning the experiments with different train/test splits. Tables 5,6 contains the results in tabular form with multiple metrics.
Manakul et al. (2023); Zhang et al. (2023) interact with the LLMs in a black box fashion and aim to determine factual errors through inconsistencies, but doing so requires several forward passes and conveys conflicting signals such as refuting an initial claim, which can diminish user trust (Liao and Vaughan, 2023; Huang et al., 2020).
## 7 Conclusion and Future Work
While this work provides initial insights and a lens into leveraging LLM's internals to understand factual errors, it raises several exciting questions for future work. First, we studied only conjunctive factual queries, but the class of potential constraints is much broader (e.g. instructions (Ouyang et al., 2022), disjunctive queries). Studying those would improve the utility of the framework and our understanding of how models perform and represent compositions of constraints. Second, the content of the information in attention patterns remains opaque and warrants further investigation. Similarly, the reasons behind the correlation between attention and constraint popularity/correctness found here are still unknown. Here we offered a fairly simple framework to probe the information in attention, and we believe there are further opportunities for improvement. Overall, this work takes another step towards improving our understanding of safety-critical mechanisms in LLMs and operationalizing these insights.
## Acknowledgment
We would like to thank Duygu Yilmaz, Marah Abdin, Rahee Ghosh Peshawaria, Federico Bianchi, Kyle Swanson, Shirley Wu, James Zou, Eric Horvitz, Zhi Huang, Marco Tulio Ribeiro, Scott Lundberg for their support and comments throughout the project.
|
2310.00290 | Universality of periodic points in bounded discrete time series | We consider arbitrary bounded discrete time series originating from dynamical
system. Without any use of the Fourier transform, we find periodic points which
suitably characterizes (i.e. independent of Lyapunov exponent) the
corresponding time series. In particular, bounded discrete time series
generated by the autoregressive model (without the white noise) is equivalent
to a quasi periodic function. | Chikara Nakayama, Tsuyoshi Yoneda | 2023-09-30T07:46:47Z | http://arxiv.org/abs/2310.00290v6 | Mathematical structure of perfect predictive reservoir computing for autoregressive type of time series data
###### Abstract.
Reservoir Computing (RC) is a type of recursive neural network (RNN), and there can be no doubt that the RC will be more and more widely used for building future prediction models for time-series data, with low training cost, high speed and high computational power. However, research into the mathematical structure of RC neural networks has only recently begun. Bollt (2021) clarified the necessity of the autoregressive (AR) model for gaining the insight into the mathematical structure of RC neural networks, and indicated that the Wold decomposition theorem is the milestone for understanding of these. Keeping this celebrated result in mind, in this paper, we clarify hidden structures of input and recurrent weight matrices in RC neural networks, and show that such structures attain perfect prediction for the AR type of time series data.
Key words and phrases:Reservoir computing, Autoregressive model, universal approximation theorem, almost periodic functions, transcendental numbers. 2020 Mathematics Subject Classification: Primary 68T27; Secondary 11B50, Tertiary 42A16
## 1. Introduction
Reservoir Computing (RC) is a type of recursive neural network (RNN). Gilpin [4] evaluated 24 statistical forecasting models across 135 dynamical systems, including RC, autoregressive moving averages (ARIMA), deep neural networks such as the transformer model, long-short-term-memory networks (LSTM), vanilla recurrent neural networks (RNN), temporal convolutional neural networks and neural basis expansion/neural hierarchical interpolation (NBEATS/NHiTS). The best-performing machine learning models require very long training times, in contrast, the RC exhibits competitive performance with two orders of magnitude less training time. Thus there can be no doubt that the RC will be more and more widely used for building future prediction models for time-series data, with low training cost, high speed and high computational power.
On the other hand, research into the mathematical structure of RC neural networks has only recently begun. Bollt [1] clarified the necessity of the autoregressive (AR) model for gaining insight into the mathematical structure of RC neural networks, and indicated that the Wold decomposition theorem [10] is the milestone for understanding them. More precisely, in the stochastic framework, a zero mean covariance stationary vector process admits a vector AR representation (see [1, Section V]). Furthermore, Gauthier et al. [3] proposed a next generation RC with quadratic reservoir vectors, which focuses not only on the mathematical understanding of the RC, but also on fundamentally improving it. In contrast to these celebrated results, we stick to the deterministic framework, and in this paper,
we clarify hidden structures of input and recurrent weight matrices, and show that these structures attain perfect prediction for the AR type of time series data.
## 2. AR model and almost periodic functions
Before going into the structures of the input and recurrent weight matrices, first we construct both training and reference data. Let us start from the following condition on smooth functions \(\phi\in C^{\infty}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\), which naturally expresses a "_recurring pattern_":
1. \(\begin{cases}\text{From any sequence of the form }\{\phi(t+h_{n})\}_{n}\text{ where }h_{n}\text{ are real numbers,}\\ \text{one can extract a subsequence converging uniformly on the real line.}\end{cases}\)
Due to Corduneanu [2, Theorems 1.10 and 1.11], this "_recurring pattern_" condition is nothing more than almost periodicity (the conditions are necessary and sufficient), expressed as follows:
\[\phi(t)=\sum_{\lambda\in\Lambda}a_{\lambda}\sin\left(\lambda(t-b_{\lambda}) \right),\quad\{a_{\lambda}\}_{\lambda},\{b_{\lambda}\}_{\lambda}\subset\mathbb{ R},\quad\Lambda(\subset\mathbb{R})\text{ is countable.} \tag{2}\]
**Remark 1**.: We see that almost periodic functions possess quasi-periodic orbits, so, these are integrable systems (see the well-known Arnold-Liouville theorem). We now explain it briefly. Let \(L\in\mathbb{Z}_{\geq 1}\), \(\{\lambda_{j}\}_{j=1}^{L}\subset\mathbb{R}\) and let \(\mathcal{M}\) be a torus such that
\[\mathcal{M}=\prod_{j=1}^{L}(\mathbb{R}/(2\pi\mathbb{Z})).\]
Also let \(x_{t}\) be a shift operator (diffeomorphism) such that
\[x_{t}=x_{0}+\tau t:\mathcal{M}\to\mathcal{M}\quad(x_{0}\mapsto x_{t}),\quad t \in\mathbb{R},\quad\tau=\{\lambda_{j}\}_{j=1}^{L}\in T_{x_{t}}\mathcal{M}\cong \mathbb{R}^{L}.\]
Then there exists a \(g:\mathcal{M}\to\mathbb{R}\) such that
\[\phi(t)=\sum_{j=1}^{L}a_{\lambda_{j}}\sin\left(\lambda_{j}(t-b_{\lambda_{j}}) \right)=g\circ x_{t}.\]
More specifically, we set \(g\) as follows:
\[g(t_{1},t_{2},\cdots,t_{L})=\sum_{j=1}^{L}a_{\lambda_{j}}\sin t_{j}.\]
This expression exhibits nothing more than a quasi-periodic orbit. Kobayashi et al. [8] investigated the RC from a dynamical system perspective, such as unstable fixed points, periodic orbits, chaotic saddles, Lyapunov exponents and manifold structures (see also [6, 9]). We see that their _unstable periodic orbit_ must be related to our _quasi-periodic orbit_, since the definition of _"chaos"_ requires the following sort of property (see [7]):
* Let \(f\) be a map, which takes an interval \(I\) to itself. Then periodic points of \(f\) are dense in \(I\).
We emphasize that almost periodic functions are indispensable for mathematically analyzing the AR model (in the deterministic framework). For \(\{p_{\ell}\}_{\ell=1}^{L}\subset\mathbb{R}\), the AR model is described as follows:
1. \(y(t)=p_{1}y(t-1)+p_{2}y(t-2)+\cdots+p_{L-1}y(t-L+1)+p_{L}y(t-L)\quad(t\geq 0)\)
with prescribed initial data \(\{y(-\ell)\}_{\ell=1}^{L}\). We now explain that this AR model crucially includes the structure of almost periodic functions (2). We plug the following initial data (looking into the characteristic equation)
\[y(-\ell)=\mu^{L-\ell},\quad(\mu\in\mathbb{R},\ \ell=0,1,\cdots,L)\]
into (3). Throughout this paper, we choose \(L>0\) to be even, and we look into eigenfunctions of the characteristic equation whose eigenvalues have modulus exactly \(1\). In this context, the following equality is crucial:
\[(\mu-e^{i\lambda})(\mu-e^{-i\lambda})=\mu^{2}-2\mu\cos\lambda+1\quad\text{for} \quad\mu,\lambda\in\mathbb{R}.\]
We multiply this type of second order polynomials \(L/2\) times, then we obtain the following equality which clarifies the relation between almost periodicity and the AR model (factorization of \(L\)-th degree polynomial):
\[0=-\sum_{\ell=1}^{L}p_{\ell}\mu^{L-\ell}+\mu^{L}=\prod_{j=1}^{L/2}(\mu^{2}-2 \cos\lambda_{j}\mu+1),\]
namely, if \(\{p_{\ell}\}_{\ell}\) satisfies the above equality (at least we can easily figure out that \(p_{L}=-1\)), \(\{y(t)\}_{t\geq 0}\) for the AR model (3) can be expressed as follows:
\[y(t)=\sum_{j=1}^{L/2}a_{j}\sin\left(\lambda_{j}(t-b_{j})\right),\quad t=0,1, \cdots,\]
where \(a_{j},b_{j}\in\mathbb{R}\) are uniquely determined by the initial data. Since almost periodic functions naturally possess a _recurring pattern_, in the next section we employ this AR model data (almost periodic functions) as both the training and reference data; more precisely,
* \(\{y(t)\}_{t\in\mathbb{Z}_{<-L}}\) as the training data,
* \(\{y(t)\}_{t\in\mathbb{Z}_{\geq-L}}\) as the reference data.
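As a quick numerical sanity check of this correspondence, the sketch below builds the AR coefficients from the factorization above and verifies that the recursion reproduces a sum of sinusoids; the frequencies, amplitudes and phases are arbitrary examples, not values used in the paper.

```python
# Sketch: the AR(L) recursion (3) with coefficients from the factorization
# prod_j (mu^2 - 2 cos(lambda_j) mu + 1) reproduces a quasi-periodic signal.
import numpy as np

lambdas = [0.7, 1.9]                      # example frequencies (assumption), L = 4
poly = np.array([1.0])
for lam in lambdas:                       # multiply the quadratic factors
    poly = np.polymul(poly, [1.0, -2.0 * np.cos(lam), 1.0])
p = -poly[1:]                             # mu^L = sum_l p_l mu^(L-l)  =>  p_l = -coeff_l
L = len(p)

# Seed with L samples of a sum of sinusoids and iterate the recursion.
a, b = [0.6, 0.4], [0.3, 1.1]             # example amplitudes / phases (assumption)
y_exact = lambda t: sum(ai * np.sin(lam * (t - bi)) for ai, lam, bi in zip(a, lambdas, b))
y = [y_exact(t) for t in range(-L, 0)]
for t in range(200):
    y.append(sum(p[l] * y[-1 - l] for l in range(L)))

# Maximum deviation should be at round-off level, confirming the correspondence.
print(max(abs(y[L + t] - y_exact(t)) for t in range(200)))
```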
## 3. Mathematical structure of reservoir computing (main result)
Throughout this paper we set \(N\) as the number of reservoir nodes (we will determine this \(N\) later). First we formulate the RC and then we state the main theorem. For sufficiently small \(\varepsilon>0\), let \(h:\mathbb{R}\to[-1,1]\) be an activation function (which is allowed to be odd symmetric) as follows:
\[|h(t)-\tanh t|<\varepsilon\quad\text{for}\quad t\in\mathbb{R}. \tag{4}\]
However, we expect that this condition (4) is not needed, that is, the setting \(h(t)=\tanh t\) may work just as well. To give a simpler proof of the mathematical main theorem, we decided to employ (4). Now let us discretize the range \([-1,1]\) as follows: For \(K\in\mathbb{Z}_{\geq 1}\), we choose \(\{a_{k}^{K}\}_{k=0}^{2K}\) such that (we employ transcendental numbers)
* \(\{a_{1}^{K},a_{2}^{K},\cdots,a_{K-1}^{K},a_{K+1}^{K},\cdots,a_{2K-1}^{K}\}\subset \{\pm e^{-\frac{n}{m}}\ ;m,n\in\mathbb{Z}_{\geq 1}\}\subset\mathbb{R}\setminus \mathbb{Q}\),
* \(-1=a_{0}^{K}<a_{1}^{K}<a_{2}^{K}<\cdots<a_{K}^{K}=0<a_{K+1}^{K}<\cdots<a_{2K-1 }^{K}<a_{2K}^{K}=1\),
* \(\lim\limits_{K\to\infty}\sup\limits_{1\leq k\leq 2K}|a_{k-1}^{K}-a_{k}^{K}|=0\).
By the Lindemann-Weierstrass theorem, we see that
\[\frac{a_{k^{\prime}}^{K}}{a_{k}^{K}}\in(\mathbb{R}\setminus\mathbb{Q})\cup\{ 0\}\cup\{-1\}\quad(k\neq k^{\prime},\ k\neq K,\ k,k^{\prime}\geq 1), \tag{5}\]
\[\sum_{\ell=1}^{L}\frac{a_{k_{\ell}^{\prime}}^{K}}{a_{k_{\ell}}^{K}}\in(\mathbb{R} \setminus\mathbb{Q})\cup\{-L,-L+1,\cdots,-1,0,1,\cdots,L-1\}\quad(k_{\ell}\neq K,\;k_{\ell},k_{\ell}^{\prime}\geq 1), \tag{6}\]
except for the \(k_{1}=k_{1}^{\prime},\;k_{2}=k_{2}^{\prime},\cdots,k_{L}=k_{L}^{\prime}\) case.
**Remark 2**.: \[\sum_{\ell=1}^{L}\frac{a_{k_{\ell}^{\prime}}^{K}}{a_{k_{\ell}}^{K}}=L\]
if and only if \(k_{1}=k_{1}^{\prime},\;k_{2}=k_{2}^{\prime},\cdots,k_{L}=k_{L}^{\prime}\).
In what follows, we employ the AR model data (almost periodic functions) as the both training and reference data:
\[y(t)=\sum_{\ell=1}^{L}a_{\ell}\sin(\lambda_{\ell}(t-b_{\ell}))=\sum_{\ell=1}^{ L}p_{\ell}y(t-\ell),\quad t\in\mathbb{Z}, \tag{7}\]
for some suitably prescribed \(\{p_{\ell}\}_{\ell=1}^{L}\), \(\{\lambda_{\ell}\}_{\ell=1}^{L}\), \(\{a_{\ell}\}_{\ell=1}^{L}\) and \(\{b_{\ell}\}_{\ell=1}^{L}\), with the normalization \(y(t)\in[-1,1]\) (\(t\in\mathbb{Z}\)). We now discretize this \(y(t)\). There exists a unique \(k_{t}\in\{1,\cdots,2K\}\) such that
\[\frac{a_{k_{t}-1}^{K}+a_{k_{t}}^{K}}{2}< y(t)\leq\frac{a_{k_{t}}^{K}+a_{k_{t}+1}^{K}}{2}\quad(k_{t}=2,3, \cdots,2K-2)\quad\text{or}\] \[a_{0}^{K}< y(t)\leq\frac{a_{1}^{K}+a_{2}^{K}}{2}\quad(k_{t}=1)\quad\text{or}\] \[\frac{a_{2K-2}^{K}+a_{2K-1}^{K}}{2}< y(t)\leq a_{2K}^{K}\quad(k_{t}=2K-1), \tag{8}\]
thus we can appropriately define the discretized \(\bar{y}\) as follows:
\[\bar{y}(t):=a_{k_{t}}^{K}. \tag{9}\]
Note that, we can simplify this discretization as follows:
\[\bar{y}(t)=\operatorname*{arg\,min}_{a\in\{a_{k}^{K}\}_{k=1}^{2K-1}}|y(t)-(a-0 )|,\]
where \(a-0:=a-\varepsilon\) for any sufficiently small \(\varepsilon>0\). Now we determine the training time steps \(T>0\). To determine it, we just apply (1), namely, there exists a sufficiently large \(T>0\) such that
\[\sup_{t}|y(t-T)-y(t)|\ll 1/K. \tag{10}\]
This means that the sequence pattern
\[\bar{y}(-L),\bar{y}(-L+1)\cdots,\bar{y}(-1)\]
is almost the same as
\[\bar{y}(-L-T),\bar{y}(-L+1-T),\cdots,\bar{y}(-1-T).\]
Rigorously, it may still have an error \(\ll 1/K\), but for simplicity, here, we identify these two sequences. Now we set up the RC as follows:
* Training phase
From time-series (training) data \(\{\bar{y}(t)\}_{t\in\mathbb{Z}_{<-L}\cap\mathbb{Z}_{\geq-T}}\), we create reservoir state vectors (column vectors) \(\{\bar{r}(t)\}_{t\in\mathbb{Z}_{<-L}\cap\mathbb{Z}_{\geq-T}}\subset\mathbb{R}^{N}\) by using the following RC: For each fixed \(t\), first, we determine the following tentative reservoir state vectors \(\widetilde{r}_{t}(t-\ell+1)\) (\(\ell=L,L-1,\cdots,2,1\) in this order) inductively:
\[\begin{split}\widetilde{r}_{t}(t-L+1)&=h(W^{in} \bar{y}(t-L)),\\ \widetilde{r}_{t}(t-\ell+1)&=h(W\widetilde{r}_{t}(t -\ell)+W^{in}\bar{y}(t-\ell))\quad(\ell=L-1,L-2,\cdots,2,1),\end{split} \tag{11}\]
where \(W^{in}\in\mathbb{R}^{N\times 1}\) is a column vector (degenerated input weight matrix), \(W\in\mathbb{R}^{N\times N}\) is a square matrix (recurrent weight matrix). These \(W^{in}\) and \(W\) are prescribed vector and matrix, and we will explain concrete \(W^{in}\) and \(W\) in the next section. Then we set
\[\bar{r}(t):=\widetilde{r}_{t}(t).\]
Note that this \(L\) should correspond to the _transient time interval_ in the usual RC.
**Remark 3**.: In this paper, we neglect the term \(\widetilde{r}_{t}(t-L)\) (in the first equation in (11)) which exists in the usual RC. Even if we take \(\widetilde{r}_{t}(t-L)\) into account, the contribution of this term is relatively small if the recurrent weight matrix \(W\) satisfies the _echo state property_ (see Jaeger [5]); however, it remains an open question whether or not \(W\) in Theorem 1 really satisfies it when \(K\) and \(L\) are relatively large.
From reservoir state vectors, we determine a row vector \(W^{out}\in\mathbb{R}^{1\times N}\) (degenerated output weight matrix) by using the mean-square error. More precisely, we find \(W^{out}\) such that
\[W^{out}:=\operatorname*{arg\,min}_{\widetilde{W}^{out}}\sum_{t=-T}^{-L-1} \left|y(t)-\widetilde{W}^{out}\bar{r}(t)\right|^{2}. \tag{12}\]
* Inference phase
We plug \(W^{in}\), \(W\) and \(W^{out}\) into the following RC, and create a series of future prediction \(\{\bar{u}(t)\}_{t\geq 0}\) from initial reference data \(\{\bar{y}(-\ell)\}_{\ell=1}^{L}\):
\[\begin{cases}\bar{r}(-L+1)&=h(W^{in}\bar{y}(-L)),\\ \bar{r}(-\ell+1)&=h(W\bar{r}(-\ell)+W^{in}\bar{y}(-\ell)),\quad(\ell=L-1,L-2, \cdots 2,1)\\ \bar{u}(0)&=W^{out}\bar{r}(0)-\bar{\delta}_{n_{0}},\\ \end{cases} \tag{13}\]
\[\begin{cases}\bar{r}(t)&=h(W\bar{r}(t-1)+W^{in}\bar{u}(t-1)-W^{in}\bar{u}(t-L -1)),\\ \bar{u}(t)&=W^{out}\bar{r}(t)-\bar{\delta}_{n_{t}}.\end{cases}\quad(t=1,2, \cdots).\]
where \(\bar{\delta}_{n}\) is defined in (16) as averages of the errors \(\{\delta(t)\}_{t\in\mathbb{Z}_{<-L}\cap\mathbb{Z}_{\geq-T}}\) in Remark 6, and the index \(n_{t}\) (\(t=0,1,\cdots\)) is uniquely determined. See Remark 7.
**Remark 4**.: Since we do not yet know whether or not this \(W\) possesses the _echo state property_, we need to subtract \(W^{in}\bar{u}(t-L-1)\) to eliminate the past contribution.
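For readers who want to see the training (11)-(12) and closed-loop inference (13) pipeline in action, the sketch below uses a generic echo-state network with random \(W^{in}\) and \(W\) (scaled to have spectral radius below one) and a least-squares readout; it is only meant to illustrate the mechanics, whereas the explicit \(W^{in}\) and \(W\) of Theorem 1 are constructed in the next section.

```python
# Generic echo-state-network sketch of the training / readout / closed-loop
# inference pipeline. W_in and W here are random (not the paper's construction).
import numpy as np

rng = np.random.default_rng(1)
N, L_transient = 200, 50
W_in = rng.uniform(-0.5, 0.5, size=(N, 1))
W = rng.standard_normal((N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # spectral radius < 1

def step(r, u):
    return np.tanh(W @ r + W_in.flatten() * u)

# Training data: a quasi-periodic (almost periodic) signal.
T = 3000
t = np.arange(-T, 0)
y = 0.6 * np.sin(0.7 * t) + 0.4 * np.sin(1.9 * (t - 1.1))

r = np.zeros(N)
states, targets = [], []
for i, u in enumerate(y[:-1]):
    r = step(r, u)
    if i > L_transient:                          # discard the transient
        states.append(r.copy())
        targets.append(y[i + 1])
R, Y = np.array(states), np.array(targets)
W_out = np.linalg.lstsq(R, Y, rcond=None)[0]     # least-squares readout, cf. Eq. (12)

# Closed-loop inference: feed predictions back as inputs.
u, preds = y[-1], []
for _ in range(100):
    r = step(r, u)
    u = W_out @ r
    preds.append(u)
```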
Then we can state the main theorem as follows:
**Theorem 1**.: _(Perfect prediction for \(K\to\infty\).) For each \(K\in\mathbb{Z}_{\geq 1}\), there exist \(h\) with (4), \(W\) and \(W^{in}\) such that_
\[|\bar{u}(t)-y(t)|\lesssim_{L}\frac{2^{t}}{K}\quad(t\geq 0). \tag{14}\]
**Remark 5**.: In the Fourier analysis, _existence_ of the perfect prediction is rather obvious due to (1). The point is that we found an _explicit representation (i.e. pattern memory)_ of it.
## 4. Proof of main theorem
The crucial point of the proof is to construct suitable \(W\) and \(W^{in}\). In order to do so, we need to define row vectors which represent \(L\)-consecutive time series data:
\[V_{\ell}:=(V_{\ell,1},V_{\ell,2},\cdots,V_{\ell,N})\quad\text{for}\quad\ell=1, 2,\cdots,L.\]
Let \(a_{k}:=a_{k}^{K}\). First let \(\sigma_{j}\) (\(j=1,2,\cdots,N\)) be a permutation operator, namely,
\[\sigma_{j}:\{1,2,\cdots,L\}\to\{a_{1},a_{2},\cdots,a_{K-1},0,a_{K+1},a_{K+2}, \cdots,a_{2K-1}\}\]
(\(\ell\mapsto\sigma_{j}(\ell)\)) and \(\sigma_{j}\neq\sigma_{j^{\prime}}\) (\(j\neq j^{\prime}\)). We exclude the case when
\[\sigma_{j}(\ell)=0\quad\text{for}\quad\ell=1,2,\cdots,L\quad\text{(identically zero case)},\]
and we impose the following two conditions for uniquely determining \(N\):
* For any \(t\in\mathbb{Z}_{<-L}\cap\mathbb{Z}_{\geq-T}\), there is \(j\in\{1,\cdots,N\}\) such that \(\sigma_{j}(\ell)=\bar{y}(t-\ell)\) for \(\ell=1,2,\cdots,L\),
* For any \(j\in\{1,\cdots,N\}\) there is \(t\in\mathbb{Z}_{<-L}\cap\mathbb{Z}_{\geq-T}\) such that \(\sigma_{j}(\ell)=\bar{y}(t-\ell)\) for \(\ell=1,2,\cdots,L\).
Note that \(N\leq(2K-1)^{L}-1\), by counting sequences with repetition and excluding the identically zero one. Then we can define the representation of \(L\)-consecutive time series data as follows:
\[V_{\ell,j}:=\sigma_{j}(\ell).\]
This definition covers all patterns of \(L\)-consecutive time series data in \(\{\bar{y}(t)\}_{t\in\mathbb{Z}_{<-L}\cap\mathbb{Z}_{\geq-T}}\), in other words, in the training phase, there exists a column vector
\[e:=\underbrace{(0,0,\cdots,0,1,0\cdots,0)}_{N}^{T}\]
such that
\[V_{\ell}e=\bar{y}(t-\ell)\quad(\ell=1,\cdots,L).\]
In particular, due to (10), even for the initial reference data \(\{\bar{y}(-\ell)\}_{\ell=1}^{L}\), there exists a column vector
\[e:=\underbrace{(0,0,\cdots,0,1,0\cdots,0)}_{N}^{T}\]
such that
\[V_{\ell}e=\bar{y}(-\ell)\quad(\ell=1,\cdots,L).\]
In order to obtain the future prediction data \(\{\bar{u}(t)\}_{t=0}^{\infty}\) sequentially, we need to classify each patterns as follows:
\[\mathcal{T}_{n}:=\left\{t\in\mathbb{Z}_{<-L}\cap\mathbb{Z}_{\geq-T}:\sigma_{n }(\ell)=\bar{y}(t-\ell)\quad\text{for}\quad\ell=1,2,\cdots,L\right\}. \tag{15}\]
The crucial point is that, for \(t\in\mathcal{T}_{n}\),
\[\operatorname*{arg\,min}_{a\in\{a_{k}^{K}\}_{k=1}^{2K-1}}\left|\sum_{\ell=1}^ {L}p_{\ell}y(t-\ell)-(a-0)\right|\]
may NOT be uniquely determined. In this case we just choose an arbitrary \(t^{*}\in\mathcal{T}_{n}\) such that
\[v_{n}^{*}:=\operatorname*{arg\,min}_{a\in\{a_{k}^{K}\}_{k=1}^{2K-1}}\left|\sum_{ \ell=1}^{L}p_{\ell}y(t^{*}-\ell)-(a-0)\right|\]
and we define the row vector \(V^{*}\in\mathbb{R}^{N}\) as follows:
\[V^{*}:=(v_{1}^{*},v_{2}^{*},\cdots,v_{N}^{*}).\]
We need this \(V^{*}\) in the inference phase, to obtain the future prediction data \(\{\bar{u}(t)\}_{t=0}^{\infty}\) sequentially.
**Remark 6**.: We observe the controllable errors:
\[v_{n}^{*}-\sum_{\ell=1}^{L}p_{\ell}\bar{y}(t-\ell)=:\delta^{*}(t )\quad\text{for}\quad t\in\mathcal{T}_{n},\] \[|\delta^{*}(t)|\lesssim\left|\sum_{\ell=1}^{L}p_{\ell}y(t^{*}- \ell)-\sum_{\ell=1}^{L}p_{\ell}\bar{y}(t-\ell)\right|+\frac{1}{K}\lesssim_{L} \frac{1}{K},\] \[y(t)-\sum_{\ell=1}^{L}p_{\ell}\bar{y}(t-\ell)=\sum_{\ell=1}^{L}p _{\ell}y(t-\ell)-\sum_{\ell=1}^{L}p_{\ell}\bar{y}(t-\ell)=:\delta(t),\] \[|\delta(t)|\lesssim_{L}\frac{1}{K}.\]
Let \(\bar{\delta}=(\bar{\delta}_{1},\bar{\delta}_{2},\cdots,\bar{\delta}_{N})\) be the corresponding mean averages:
\[\bar{\delta}_{n}:= \frac{1}{|\mathcal{T}_{n}|}\sum_{t\in\mathcal{T}_{n}}(\delta(t)- \delta^{*}(t)),\quad\text{in other words},\] \[\bar{\delta}_{n}:= \operatorname*{arg\,min}_{\delta}\frac{1}{|\mathcal{T}_{n}|}\sum_ {t\in\mathcal{T}_{n}}|\delta(t)-\delta^{*}(t)-\delta|^{2}. \tag{16}\]
Clearly, \(|\bar{\delta}_{n}|\lesssim_{L}1/K\).
**Remark 7**.: In the inference phase, for each \(t\geq 0\), there exists a unique column vector
\[e_{n_{t}}:=(\underbrace{\overbrace{0,0,\cdots,0,1}^{n_{t}},0\cdots,0}_{N})^{T}\]
such that
\[\bar{u}(t-\ell)=V_{\ell}e_{n_{t}}\quad\text{for}\quad\ell=1,2,\cdots,L. \tag{17}\]
We have used this index \(n_{t}\) in (13).
Next we define column vectors (conjugate type of vectors)
\[W_{\ell}^{in}:=(W_{\ell,1}^{in},W_{\ell,2}^{in},\cdots,W_{\ell,N}^{in})^{T} \quad\text{for}\quad\ell=1,2,\cdots,L,\]
as follows: First let \(\sigma_{i}^{*}\) (\(i=1,2,\cdots,N\)) be an adjoint type of permutation operator, namely, let
\[\begin{cases}\sigma_{j}^{*}(\ell):=\frac{1}{\sigma_{j}(\ell)}&\text{if}\quad \sigma_{j}(\ell)\neq 0,\\ \sigma_{j}^{*}(\ell):=0&\text{if}\quad\sigma_{j}(\ell)=0\end{cases}\]
for \(\ell\in\{1,\cdots,L\}\) and \(j\in\{1,\cdots N\}\). Then we can define the conjugate type of representation of \(L\)-consecutive time series as follows:
\[W^{in}_{\ell,i}:=\sigma^{*}_{i}(\ell)\times\frac{1}{\#\{\sigma^{*}_{i}(\ell)\neq 0 ;\ell=1,2,\cdots,L\}}. \tag{18}\]
For notational convenience, we set \(W^{in}_{0}:=W^{in}_{L}\), also, let \(W^{in}:=W^{in}_{L}\). By the definition of this \(\{W^{in}_{\ell}\}_{\ell}\), then we can construct a suitable matrix \(W\). More precisely, our main task now is to assure existence of the inverse of matrix \(X\):
\[X:=h(W^{in}_{0}V_{1}+W^{in}_{1}V_{2}+\cdots+W^{in}_{L-1}V_{L}).\]
Note that, by using this expression, the RC (11) can be rewritten as follows:
\[WX=W^{in}_{1}V_{1}+W^{in}_{2}V_{2}+\cdots+W^{in}_{L}V_{L}=:Y. \tag{19}\]
**Lemma 2**.: \(X\) _is a regular matrix. In other words, we have \(W=YX^{-1}\)._
**Remark 8**.: By using this \(W\), we have
\[Wh(W^{in}\bar{y}(t-L)) =W^{in}_{1}\bar{y}(t-L),\] \[W(h(W^{in}\bar{y}(t-L+1)+W^{in}_{1}\bar{y}(t-L))) =W^{in}_{1}\bar{y}(t-L+1)+W^{in}_{2}\bar{y}(t-L),\] \[\vdots\] \[W\left(h\left(\sum_{\ell=1}^{L}W^{in}_{\ell-1}\bar{y}(t-\ell) \right)\right) =\sum_{\ell=1}^{L}W^{in}_{\ell}\bar{y}(t-\ell).\]
Note that, if \(\bar{y}(t-L)=0\), then we just skip the first step and start from the second step. If \(\bar{y}(t-L)=\bar{y}(t-L+1)=0\), then we skip the first and second steps and start from the third step, and so on.
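The construction of Lemma 2 can be mimicked numerically as follows; here the patterns \(V_{\ell,j}\) are small hand-picked examples containing no zeros and \(h=\tanh\), whereas the actual proof relies on the transcendental grid points and the perturbed \(h\) of (4) to guarantee invertibility of \(X\).

```python
# Sketch of the construction behind Lemma 2: build X = h(sum_l Win_{l-1} V_l)
# and Y = sum_l Win_l V_l (with Win_0 := Win_L), then solve W = Y X^{-1}.
import numpy as np

V = np.array([[-0.9,  0.8,  0.3, -0.4,  0.8, -0.9],
              [ 0.3, -0.9,  0.8, -0.4,  0.3, -0.9],
              [ 0.8,  0.3, -0.4,  0.8, -0.9,  0.3],
              [-0.4,  0.3, -0.9,  0.8,  0.3,  0.8]])   # V[l-1, j] = sigma_j(l); L=4, N=6
L, N = V.shape

counts = (V != 0).sum(axis=0)                          # #{l : sigma_i(l) != 0} per pattern i
Win = (1.0 / V) / counts                               # Win[l-1, i] = W^in_{l,i}, cf. Eq. (18)

X = np.tanh(np.outer(Win[L - 1], V[0])                 # W^in_0 V_1 with W^in_0 := W^in_L
            + sum(np.outer(Win[l - 1], V[l]) for l in range(1, L)))
Y = sum(np.outer(Win[l], V[l]) for l in range(L))
W = Y @ np.linalg.inv(X)                               # Lemma 2 / Eq. (19): W X = Y
print(np.allclose(W @ X, Y))                           # expected True up to round-off
```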
Proof.: The key ingredient of the proof is the following:
* Let \(f\) be a non-zero polynomial in \(N\) variables. Then the complement of the zero point set, that is, \(\{x\in\mathbb{R}^{N};f(x)\neq 0\}\) is dense in \(\mathbb{R}^{N}\).
In the following proof, we give a concrete representation of such density. By (5) and (18) (see also Remark 2), we see
\[W^{in}_{\ell-1,i}V_{\ell,j} \in\left\{1,2,\cdots,\#\{\sigma_{i}(\ell);\ell=1,2,\cdots,L\} \right\}\times\frac{1}{\#\{\sigma_{i}(\ell);\ell=1,2,\cdots,L\}}\] \[\text{or}\quad\in(\mathbb{R}\setminus\mathbb{Q})\cup\{0\}.\]
By (6), Remark 2 and Lindemann-Weierstrass theorem, we see
\[\sum_{\ell=1}^{L}W^{in}_{\ell-1,j}V_{\ell,j}=1\quad\text{and}\quad\sum_{\ell= 1}^{L}W^{in}_{\ell-1,i}V_{\ell,j}\neq 1\]
for \(i\neq j\). In order to construct an appropriate \(h\), we use a finite set \(G\) as follows:
\[G:=\left\{\sum_{\ell=1}^{L}W^{in}_{\ell-1,i}V_{\ell,j};i,j\in\{1,2,\cdots,N\} \right\}\subset\mathbb{R}.\]
Note that \(1\in G\). Now we take a smooth function \(h:\mathbb{R}\to[-1,1]\) satisfying the following:
\[h(1) \in\mathbb{Q}\setminus\{0\},\] \[h(\gamma) \in\left(\{\pm e^{-\frac{n}{m}}\ ;m,n\in\mathbb{Z}_{\geq 1}\}\cup\{0 \}\right)\subset(\mathbb{R}\setminus\mathbb{Q})\cup\{0\},\quad\text{for} \quad\gamma\subset G\setminus\{1\}.\]
Then we can easily check that (applying the Lindemann-Weierstrass theorem)
* \(h(\gamma_{1})h(\gamma_{2})\cdots h(\gamma_{N})\in(\mathbb{R}\setminus\mathbb{Q}) \cup\{0\}\quad\) for \(\quad\{\gamma_{n}\}_{n=1}^{N}\in G^{N}\setminus\{\underbrace{1,1,\cdots,1}_{N}\}\),
* for any \(\{\tau_{n^{\prime}}\}_{n^{\prime}=1}^{N!-1}\in\{-1,1\}^{N!-1}\,\)and \(\{\{\gamma_{n,n^{\prime}}\}_{n=1}^{N}\}_{n^{\prime}=1}^{N!-1}\subset G^{N} \setminus\{\underbrace{1,1,\cdots,1}_{N}\}\),
\[\sum_{n^{\prime}=1}^{N!-1}\tau_{n^{\prime}}h(\gamma_{1,n^{\prime}})h(\gamma_{ 2,n^{\prime}})\cdots h(\gamma_{N,n^{\prime}})\in(\mathbb{R}\setminus\mathbb{Q} )\cup\{0\}.\]
By applying the above properties, then we see that the determinant of the matrix \(X\) is nonzero, since it is expressed as
\[|X|=\eta_{1}+\eta_{2},\quad\eta_{1}\in\mathbb{Q}\setminus\{0\},\quad\eta_{2} \in(\mathbb{R}\setminus\mathbb{Q})\cup\{0\}.\]
Now we resume the proof of the main theorem. First we solve the following:
\[\bar{\delta}+V^{*}=W^{out}X\]
Since the inverse matrix \(X^{-1}\) exists, we obtain
\[\left(\bar{\delta}+V^{*}\right)X^{-1}=W^{out}. \tag{20}\]
We now check that this is the desired \(W^{out}\). By Remark 6 and (16), we have that
\[\sum_{t}\left|y(t)-W^{out}\bar{r}(t)\right|^{2}\] \[= \sum_{t}\left|\delta(t)-\delta^{*}(t)+V^{*}-W^{out}\bar{r}(t) \right|^{2}\] \[= \sum_{n=1}^{N}\sum_{t\in\mathcal{T}_{n}}|\delta(t)-\delta^{*}(t) -\bar{\delta}_{n}|^{2}\] \[\quad+2\sum_{n=1}^{N}\sum_{t\in\mathcal{T}_{n}}(\delta(t)-\delta^ {*}(t)-\bar{\delta}_{n})\left(\bar{\delta}_{n}+V^{*}e_{n}-W^{out}h(Y)e_{n}\right)\] \[\quad+\sum_{n=1}^{N}\sum_{t\in\mathcal{T}_{n}}\left|\bar{\delta} _{n}+V^{*}e_{n}-W^{out}h(Y)e_{n}\right|^{2},\]
where \(e_{n}\) is a suitable column vector such that
\[e_{n}:=(\underbrace{\overbrace{0,0,\cdots,0,1}^{n},0\cdots,0}_{N})^{T}.\]
Therefore the minimum value of (12) is attained by (16) and (20), and this minimum is zero. In the inference phase (13), we show (14). First we estimate the case \(t=0\). Since
\[\bar{u}(0)=v_{n_{0}}^{*}=\sum_{\ell=1}^{L}p_{\ell}\bar{y}(-\ell)+\delta^{*}(0), \quad\text{and}\quad y(0)=\sum_{\ell=1}^{L}p_{\ell}\bar{y}(-\ell)+\delta(0),\]
we have
\[|\bar{u}(0)-y(0)|\lesssim|\delta(0)|+|\delta^{*}(0)|\lesssim_{L}\frac{2}{K}.\]
Next we estimate the case \(t=1\). Since
\[\bar{u}(1)=v_{n_{1}}^{*} =p_{1}\bar{u}(0)+\sum_{\ell=2}^{L}p_{\ell}\bar{y}(-\ell+1)+\delta^{ *}(1)\] \[=p_{1}(y(0)+\delta^{*}(0)-\delta(0))+\sum_{\ell=2}^{L}p_{\ell}\bar{ y}(-\ell+1)+\delta^{*}(1)\]
and
\[y(1)=\sum_{\ell=1}^{L}p_{\ell}y(1-\ell)=\sum_{\ell=1}^{L}p_{\ell}\bar{y}(1-\ell)+\delta(1),\]
we have
\[|\bar{u}(1)-y(1)|\lesssim_{L}\frac{4}{K}.\]
Also we estimate the case \(t=2\). Since
\[\bar{u}(2)=v_{n_{2}}^{*} =p_{1}\bar{u}(1)+p_{2}\bar{u}(0)+\sum_{\ell=3}^{L}p_{\ell}\bar{y}( -\ell+2)+\delta^{*}(2)\] \[=p_{1}(y(1)+\delta^{*}(1)-\delta(1)+\delta^{*}(0)-\delta(0))+p_{2 }(y(0)+\delta^{*}(0)-\delta(0))\] \[\quad+\sum_{\ell=3}^{L}p_{\ell}\bar{y}(-\ell+2)+\delta^{*}(2)\]
and
\[y(2)=\sum_{\ell=1}^{L}p_{\ell}y(2-\ell)=\sum_{\ell=1}^{L}p_{\ell}\bar{y}(2-\ell)+\delta(2),\]
we have
\[|\bar{u}(2)-y(2)|\lesssim_{L}\frac{8}{K}.\]
Repeating this argument, we have
\[|\bar{u}(t)-y(t)|\lesssim_{L}\frac{2^{t}}{K}.\]
This is the desired estimate.
### Acknowledgments
The author is grateful to Professors Chikara Nakayama and Yoshitaka Saiki for valuable comments. Research of TY was partly supported by the JSPS Grants-in-Aid for Scientific Research 20H01819.
### Conflict of Interest
The authors have no conflicts to disclose.
|
2309.15926 | Magnetic flux plays an important role during a BHXRB outburst in
radiative 2T GRMHD simulations | Black hole (BH) X-ray binaries cycle through different spectral states of
accretion over the course of months to years. Although fluctuations in the BH
mass accretion rate are generally recognized as the most important component of
state transitions, it is becoming increasingly evident that magnetic fields
play a similarly important role. In this article, we present the first
radiative two-temperature (2T) general relativistic magnetohydrodynamics
(GRMHD) simulations in which an accretion disk transitions from a quiescent
state at an accretion rate of $\dot{M} \sim 10^{-10} \dot{M}_{\rm Edd}$ to a
hard-intermediate state at an accretion rate of $\dot{M} \sim 10^{-2}
\dot{M}_{\rm Edd}$. This huge parameter space in mass accretion rate is bridged
by artificially rescaling the gas density scale of the simulations. We present
two jetted BH models with varying degrees of magnetic flux saturation. We
demonstrate that in `Standard and Normal Evolution' models, which are
unsaturated with magnetic flux, the hot torus collapses into a thin and cold
accretion disk when $\dot{M} \gtrsim 5\times 10^{-3} \dot{M}_{\rm Edd}$. On the
other hand, in `Magnetically Arrested Disk' models, which are fully saturated
with vertical magnetic flux, the plasma remains mostly hot with substructures
that condense into cold clumps of gas when $\dot{M} \gtrsim 1 \times 10^{-2}
\dot{M}_{\rm Edd}$. This suggests that the spectral signatures observed during
state transitions are closely tied to the level of magnetic flux saturation. | M. T. P. Liska, N. Kaaz, K. Chatterjee, Razieh Emami, Gibwa Musoke | 2023-09-27T18:05:02Z | http://arxiv.org/abs/2309.15926v2 | # Magnetic flux plays an important role during a BHXRB outburst in radiative 2T GRMHD simulations
###### Abstract
Black hole (BH) X-ray binaries cycle through different spectral states of accretion over the course of months to years. Although fluctuations in the BH mass accretion rate are generally recognized as the most important component of state transitions, it is becoming increasingly evident that magnetic fields play a similarly important role. In this article, we present the first radiative two-temperature (2T) general relativistic magnetohydrodynamics (GRMHD) simulations in which an accretion disk transitions from a quiescent state at an accretion rate of \(\dot{M}\sim 10^{-10}\dot{M}_{\rm Edd}\) to a hard-intermediate state at an accretion rate of \(\dot{M}\sim 10^{-2}\dot{M}_{\rm Edd}\). This huge parameter space in mass accretion rate is bridged by artificially rescaling the gas density scale of the simulations. We present two jetted BH models with varying degrees of magnetic flux saturation.
Most general relativistic magnetohydrodynamic (GRMHD) simulations to date address accretion in the quiescent state. While BHXRBs indeed spend most of their time in the quiescent state, they accrete most of their gas (and hence grow most rapidly) in the hard-intermediate and high-soft states (e.g. Fabian, 2012). However, simulating accretion disks in these luminous states is numerically challenging due to the presence of dynamically important radiation fields and thermal decoupling between ions and electrons. Presently, only a handful of GRMHD codes are able to model radiation (e.g. Sadowski et al., 2013; McKinney et al., 2013; Fragile et al., 2014; Ryan et al., 2017; White et al., 2023). In addition, since radiative cooling makes such accretion disks thinner, one needs a much higher resolution to resolve them. For example, to resolve on a spherical grid a disk that is two times thinner without static or adaptive mesh refinement requires a factor 32 more computational time. These factors make such simulations extremely expensive and complex. Due to recent algorithmic and computational advances, radiative GRMHD simulations of accretion disks accreting above a few percent of the Eddington limit (i.e., very thin disks) came within the realm of possibility (e.g. Ohsuga and Mineshige, 2011; Mishra et al., 2016; Morales Teixeira et al., 2018; Fragile et al., 2018; Lancova et al., 2019; Mishra et al., 2020, 2022; Liska et al., 2022, 2023). These recent advances supplement earlier work that attempted to tackle the physics driving accretion in the luminous states using an ad hoc cooling function in place of first-principles radiation (e.g. Noble et al., 2009; Avara et al., 2016; Hogg and Reynolds, 2017, 2018; Scepi et al., 2023; Nemmen et al., 2023; Bollimpalli et al., 2023).
Recently, first-of-their-kind radiative GRMHD simulations of accretion disks accreting at \(L\sim 0.35L_{\rm Edd}\) demonstrated that in systems where no vertical magnetic flux is present, a thin and cold accretion disk forms, possibly explaining the high-soft state (Liska et al., 2022). However, in the presence of dynamically important large scale vertical magnetic flux (e.g. 'MADs', Narayan et al., 2003; Tchekhovskoy et al., 2011), the accretion disk is truncated and decouples within \(r\sim 20r_{g}\) into a two-phase plasma of cold and dense clumps surrounded by hot and dilute gas (Liska et al., 2022). The presence of cold plasma down to the innermost stable circular orbit (ISCO) provides an interesting explanation for the observed relativistically broadened iron-reflection lines in the hard state (e.g. Reis et al., 2010), which is thought to only feature hot gas unable to produce such lines. In between these two extreme regimes, where vertical magnetic flux is present but does not saturate the disk, Lancova et al. (2019) demonstrated that a hot plasma with both inflowing and outflowing components sandwiches a thin accretion disk. Such puffy disk models can potentially describe BHXRBs in the intermediate spectral state, which launch relativistic jets but show no clear evidence of significant disk truncation (e.g. Kara et al., 2019).
However, none of this work addresses how and at which accretion rates these high-luminosity accretion states form and what role magnetic fields play in that process. In this work we present the first radiative two-temperature GRMHD simulations spanning 8 orders of magnitude in mass accretion rate. These simulations demonstrate a transition from a hot torus in the quiescent state to either a magnetically truncated (e.g. Liska et al., 2022) or puffy accretion disk (e.g. Lancova et al., 2019) in the (hard-) intermediate state depending on the amount of magnetic flux saturation. In Sections 2 and 3 we describe our radiative GRMHD code, numerical setup, and physical setup, before presenting our results in Section 4 and concluding in Section 5.
## 2 Numerical Setup
To model the rise from quiescence to the hard-intermediate state we use the GPU-accelerated GRMHD code H-AMR (Liska et al., 2018, 2022). H-AMR evolves the radiative two-temperature GRMHD equations (e.g. Sadowski et al., 2013; Sadowski et al., 2017) on a spherical grid. Similar to Liska et al. (2022), we model the radiation field as a second fluid using the M1 approximation and, in addition, also evolve the photon number density to get a better estimate for the radiation temperature (Sadowski and Narayan, 2015). Radiative processes such as bremsstrahlung, synchrotron, bound-free, and iron-line emission, and scattering (including Comptonization) are included assuming an \(M_{\rm BH}=10M_{\odot}\) black hole with solar abundances (\(X=0.70\), \(Y=0.28\), \(Z=0.02\)). The associated energy-averaged grey opacities are provided in McKinney et al. (2017) (equations C16, D7 and E1).
At each timestep, the dissipation rate is calculated by subtracting the internal energy provided by the entropy equation from the internal energy provided by the energy equation (e.g. Ressler et al., 2015). Subsequently, the total energy dissipation is divided as a source term between the electrons and ions based on a reconnection heating model (Rowan et al., 2017). This deposits a fraction \(\delta_{e}\lesssim 0.5\) of the dissipation into the electrons, which varies from \(\delta_{e}\sim 0.2\) in less magnetized regions to \(\delta_{e}\sim 0.5\) in highly magnetized regions. Coulomb collisions (Stepney, 1983) are taken into account through an implicit source term (e.g. Sadowski et al., 2017). To avoid the jet funnel becoming devoid of gas and to keep the GRMHD scheme stable, we floor the density in the drift frame of the jet (Ressler et al., 2015) such that the ratio of magnetic pressure to density \(\frac{p_{\rm B}}{\rho}\lesssim 12.5\).
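To make the operator splitting explicit, the schematic Python sketch below illustrates how a given amount of numerically dissipated energy is shared between electrons and ions. The function and variable names are illustrative only, and the constant heating fraction stands in for the magnetisation-dependent Rowan et al. (2017) fit and the implicit Coulomb-coupling step; it is not the H-AMR implementation.

```python
import numpy as np

def apply_dissipative_heating(u_energy, u_entropy, u_e, u_i, delta_e=0.3):
    """Schematic electron/ion heating split (illustrative, not H-AMR code).

    u_energy  : internal energy from the total-energy equation
    u_entropy : internal energy implied by the adiabatic entropy equation
    u_e, u_i  : electron and ion internal energies before heating
    delta_e   : electron heating fraction; in the simulations it follows a
                reconnection-based fit and varies from ~0.2 to ~0.5 with
                magnetisation
    """
    q_diss = np.maximum(u_energy - u_entropy, 0.0)   # dissipated energy per cell
    return u_e + delta_e * q_diss, u_i + (1.0 - delta_e) * q_diss
```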
We use a spherical grid in the Kerr-Schild foliation with coordinates \(x^{1}=\log(r)\), \(x^{2}=\theta\), and \(x^{3}=\varphi\) with a resolution of \(N_{r}\times N_{\theta}\times N_{\varphi}=420\times 192\times 192\) for our SANE model and \(N_{r}\times N_{\theta}\times N_{\varphi}=560\times 192\times 192\) for our MAD model. This adequately resolves the fastest growing MRI-wavelength by
\(\gtrsim 16\) cells in all 3 dimensions. We place the outer boundary at \(R_{\rm out}=10^{3}r_{g}\) for our SANE model, and at \(R_{\rm out}=10^{4}r_{g}\) for our MAD model. We also maintain at least 5 cells within the event horizon such that the inner boundary is causally disconnected from the rest of the computational domain. We speed up the simulations approximately 3-fold by introducing 4 levels of local adaptive timestepping (Liska et al., 2022). To prevent cell squeezing around the polar axis from slowing down our simulations (e.g. Courant et al., 1928) we use 4 levels of static mesh derefinement (Liska et al., 2018, 2022) to reduce the \(\varphi\)-resolution to \(N_{\varphi}=[96,48,24,12]\) within \(\theta\lesssim[30^{\circ},15^{\circ},7.5^{\circ},3.75^{\circ}]\) from each pole. This maintains a cell aspect ratio of roughly \(|\Delta r|:|\Delta\theta|:|\Delta\varphi|\sim 1:1:2\) throughout the grid, which is sufficient to capture the 3-dimensional nature of the turbulence.
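The static mesh derefinement described above amounts to a simple lookup of the \(\varphi\)-resolution as a function of polar angle; the helper below is a sketch for orientation only (the derefinement angles and cell counts are taken from the text, the function itself is not from the H-AMR source).

```python
def n_phi(theta_deg, n_full=192):
    """phi-resolution after static mesh derefinement: N_phi is halved in
    successive polar caps of 30, 15, 7.5 and 3.75 degrees from either pole."""
    dist = min(theta_deg, 180.0 - theta_deg)   # angular distance to the nearest pole
    for cap, n in zip([3.75, 7.5, 15.0, 30.0], [12, 24, 48, 96]):
        if dist <= cap:
            return n
    return n_full

print([n_phi(t) for t in (2.0, 5.0, 10.0, 20.0, 45.0, 90.0)])  # [12, 24, 48, 96, 192, 192]
```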
## 3 Physical Setup
To understand the effects of magnetic flux saturation on the transition from the quiescent to the hard-intermediate state, we include two models in the SANE ('Standard and Normal Evolution', Narayan and Yi, 1994) and MAD ('Magnetically Arrested Disk', Narayan et al., 2003) regimes. We assume a rapidly spinning black hole with spin parameter \(a=0.9375\). Our SANE model (XRB SANE) features a standard Fishbone and Moncrief torus (Fishbone and Moncrief, 1976) with inner radius \(r_{\rm in}=6\,r_{g}\), radius of maximum pressure at \(r_{\rm max}=12\,r_{g}\), and outer radius \(r_{\rm out}\sim 50r_{g}\). Our MAD model (XRB MAD), on the other hand, features a much larger torus with \(r_{\rm in}=20\,r_{g}\) and \(r_{\rm max}=41r_{g}\) whose outer edge lies at \(r_{\rm out}\sim 800\,r_{g}\). These torii are pretty standard choices in the GRMHD community (e.g. Porth et al., 2019; Chatterjee et al., 2023). We thread the SANE model with magnetic vector potential \(A_{\Phi}\propto\rho-0.2\) and the MAD model with magnetic vector potential \(A_{\Phi}\propto\rho r^{3}\sin^{3}(\theta)\exp\left(-\frac{r}{400r_{g}}\right)- 0.2\). Here \(\rho\) is the gas density. In both cases this produces a field loop that is approximately the size of the torus. Because the field loop in our MAD torus is much larger than in our SANE torus, only the MAD torus contains enough magnetic flux to get saturated and become MAD. In both cases, we normalize the resulting magnetic field such that \(\beta^{\rm max}=p_{\rm gas}^{\rm max}/p_{b}^{\rm max}=100\) where \(p_{\rm gas}^{\rm max}\) and \(p_{b}^{\rm max}\) are the maximum gas and magnetic pressure in the torus. For the purpose of calculating the initial torus solution we set the adiabatic index \(\gamma=5/3\) for our SANE model and \(\gamma=13/9\) for our MAD model. We subsequently distribute, according to our heating prescription involving a magnetic reconnection model (Rowan et al., 2017), the total pressure between the ions and electrons before we self-consistently evolve their entropy and adiabatic indices (e.g. Sadowski et al., 2017).
To make GRMHD simulations of BHXRB outbursts feasible, we artificially shorten the relevant timescales by introducing a rescaling method that, as a function of time, sets the accretion rate to a predetermined value. However, before we apply this method, we first run the simulation for \(t=10^{4}r_{g}/c\) in two-temperature non-radiative GRMHD to get the accretion disk into a quasi-steady state. We subsequently restart the simulation in radiative two-temperature GRMHD and re-normalize the density (\(\rho\)) every full timestep with a factor \(\zeta\) such that the running average of the black hole mass accretion rate, \(\langle\dot{M}\rangle\),
\[\langle\dot{M}\rangle=\left\langle\int-\sqrt{-g}\,\rho u^{r}\,d\theta\,d\varphi\right\rangle\Big|^{t}_{t-10^{4}r_{g}/c}, \tag{1}\]
is scaled to a time-dependent 'target' mass accretion rate,
\[\dot{M}_{\rm Target}=10^{-10}\times 2^{\,t/(10^{4}r_{g}/c)}\,\dot{M}_{\rm Edd}, \tag{2}\]
via the rescaling factor,
\[\zeta=\dot{M}_{\rm Target}/\langle\dot{M}\rangle|_{r=5r_{g}}. \tag{3}\]
Here \(\dot{M}_{\rm Edd}=\frac{L_{\rm Edd}}{\eta_{\rm NT}c^{2}}\) is the Eddington accretion rate and \(\eta_{\rm NT}=0.178\) the Novikov and Thorne (1973) radiative efficiency. We also rescale the internal energy density (\(u_{g}\)), radiation energy density (\(E_{\rm rad}\)) and magnetic energy density (\(b^{2}\)) with the same prefactor \(\zeta\) as the density. This leads to a doubling of the black hole mass accretion rate every \(10^{4}\,r_{g}/c\). Note that this approach automatically increases the total amount of event horizon magnetic flux (\(\Phi=\frac{\sqrt{4\pi}}{2}\int|B^{r}|\,d\theta\,d\varphi\)) by a factor \(\sqrt{\zeta}\), such that the normalized magnetic flux (\(\phi=\Phi/\sqrt{\langle\dot{M}\rangle}\)) remains constant. When \(\phi\sim 40-50\) we expect that the
Figure 1: **Panel A:** The event horizon mass accretion rate \(\dot{M}\) closely follows the target mass accretion rate \(\dot{M}_{\rm target}\) (red) for both the SANE (black) and MAD (blue) models. **Panel B:** The normalized magnetic flux \(\phi\) maintains saturation value in the MAD model and stays a factor \(\gtrsim 2.0\) below saturation in the SANE model.
disk turns MAD and flux bundles get ejected from the black hole (e.g. Tchekhovskoy et al., 2011; McKinney et al., 2012). We achieve inflow-outflow equilibrium over the mass accretion rate doubling time (\(\Delta t=10^{4}r_{g}/c\)) up to approximately \(r\sim 15r_{g}\) for our SANE model and \(r\sim 30r_{g}\) for our MAD model.
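In pseudocode, the rescaling procedure of Eqs. (1)-(3) amounts to multiplying all fluid energy densities by the same factor \(\zeta\) as the rest-mass density once per timestep. The sketch below (with illustrative names and code units) is meant only to convey the logic, not the actual H-AMR implementation.

```python
MDOT_EDD = 1.0   # Eddington accretion rate in code units (placeholder)

def mdot_target(t):
    """Target accretion rate, Eq. (2): starts at 1e-10 Mdot_Edd and doubles
    every 10^4 r_g/c (t given in units of r_g/c)."""
    return 1e-10 * 2.0 ** (t / 1e4) * MDOT_EDD

def rescale_state(rho, u_g, e_rad, bsq, mdot_avg, t):
    """Multiply rho, u_g, E_rad and b^2 by zeta = Mdot_target/<Mdot>, Eq. (3).
    Because b^2 is rescaled by zeta, the magnetic field (and hence the horizon
    flux Phi) grows by sqrt(zeta), keeping the normalized flux phi constant."""
    zeta = mdot_target(t) / mdot_avg
    return rho * zeta, u_g * zeta, e_rad * zeta, bsq * zeta
```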
## 4 Results
We evolve both models for \(t\sim 260,000-280,000\,r_{g}/c\), during which the targeted mass accretion rate increases by 8 orders of magnitude. As illustrated in Figure 1a, the black hole mass accretion rate closely follows the targeted mass accretion rate for both models. However, as illustrated in Figure 1b, while the normalized magnetic flux threading the BH event horizon stays constant in our MAD model, it increases by a factor of \(\sim 3\) in our SANE model. This is because a significant fraction of the initial gas reservoir accretes or gets ejected in the form of winds, leading to a relatively larger increase in the dimensionless magnetic flux compared to the mass accretion rate. The rapid variability of the magnetic flux observed in our MAD model is a well-known characteristic of MADs (e.g. Tchekhovskoy et al., 2011; McKinney et al., 2012) caused by flux bundles being ejected from the black hole event horizon through magnetic reconnection (e.g. Ripperda et al., 2021).
The contour plots of density and electron temperature in Figure 2 illustrate 3 different stages of the artificially-induced state transition. An accompanying animation also illustrating the ion temperature is included in the supplementary materials and on our Youtube playlist. In the first stage (\(\dot{M}\lesssim 10^{-6}\dot{M}_{\rm Edd}\)), the radiative efficiency is low and radiative cooling plays a negligible role. The ions in the plasma are significantly hotter than the electrons because our heating prescription typically injects only a fraction \(\delta_{e}\sim 0.2-0.4\) of the dissipative heating into the electrons (and a fraction \(\delta_{i}\sim 0.6-0.8\) into the ions). In the second stage (\(\dot{M}\gtrsim 10^{-6}\dot{M}_{\rm Edd}\)), radiative cooling of the electrons becomes efficient leading to a drop in electron temperature but no other structural change (see also Chatterjee et al., 2023). In the third stage (\(\dot{M}\gtrsim 10^{-2}\dot{M}_{\rm Edd}\)), Coulomb collisions become efficient. This allows the ions
Figure 2: The SANE (upper panels) and MAD (lower panels) models at 3 different accretion rates. The left hemisphere illustrates the electron temperature (\(T_{e}\)) while the right hemisphere illustrates the density (\(\rho\)). The disk-jet boundary (\(b^{2}/\rho=1\)) is demarcated by white line and the last scattering surface (\(\tau_{eq}=1\)) is demarcated by a magenta line. The inset in the left hemisphere gives the mass accretion rate and luminosity in Eddington units. See our Youtube playlist for a full animation of both figures that also includes the ion temperature. At very low accretion rates (left panels), the electron temperature is determined by the heating rate and adiabatic evolution of the electrons. At intermediate accretion rates (middle panels), the electrons cool efficiently but there is no noticeable change in the disk structure. At accretion rates of \(\dot{M}\gtrsim 10^{-2}\dot{M}_{\rm Edd}\) the torus collapses into a thin accretion disk (SANE) sandwiched by a hot plasma or forms a magnetically truncated accretion disk (MAD).
to cool by transferring their energy to the radiation-emitting electrons and, eventually, leads to a rapid collapse of the hot torus.
In our SANE model this collapse results in a geometrically thin accretion disk outside of \(r\gtrsim 3\,r_{\rm g}\), surrounded by hot, magnetic-pressure-supported gas. Thus, the disk is only truncated very close to the black hole. The production of hot nonthermal electrons within the ISCO was predicted by Hankla et al. (2022). Interestingly, this hot coronal gas rather than the thin accretion disk seems to be responsible for the majority of \(\dot{M}\) (Fig. 3). This is similar to the puffy accretion disks first presented in other radiative GRMHD simulations (Lancova et al., 2019) and in pure MHD simulations of weakly to moderately magnetized disks (Jacquemin-Ide et al., 2021). On the other hand, in our MAD model the hot torus transitions into a two-phase medium with cold optically thick patches of gas surrounded by hot, optically thin, plasma. These cold patches of gas are visible for \(20\,r_{g}\lesssim r\lesssim 100\,r_{g}\) and do not reach the event horizon. Since this work was performed at a rather low resolution, we were forced to stop these simulations after the cold plasma became under-resolved. Follow-up simulations featuring a much higher resolution will be necessary to resolve how this cold and slender plasma evolves as we keep increasing the accretion rate.
Nevertheless, these findings diverge from the magnetically truncated accretion disk models detailed in Liska et al. (2022), where a slender disk was truncated at \(r\sim 20r_{g}\) and cold patches of gas reached the event horizon. The absence of this thin disk in our simulations may be attributed to the considerably higher saturation of magnetic flux within our torus, distinguishing it from the disk in Liska et al. 2022. This discrepancy could feasibly result in a significantly larger truncation radius. Consequently, if the truncation radius in our MAD model lies much further out, it is plausible that our simulation's duration is insufficient to capture the formation of a thin accretion disk. A larger truncation radius (assuming the cold clumps of gas were sufficiently resolved, which might not be true) might consequently also explain why no cold plasma reaches the event horizon. Namely, as proposed in Liska et al. (2022), magnetic reconnection can potentially evaporate the cold clumps of gas before they reach the event horizon. This is less likely to happen if the magnetic truncation radius moves further in and, hence, the cold clumps have less time to evaporate.
In Figure 4 we plot the time evolution of the bolometric luminosity (panels a and b), density scale height (panels c and d), and outflow efficiencies (panels e and f) as a function of the mass accretion rate. While the luminosity increases from \(L=10^{-15}L_{\rm Edd}\) to \(L=10^{-2}L_{\rm Edd}\), the radiative efficiency increases by \(3-5\) orders of magnitude. Similar to results presented in the radiative GRMHD simulations of Ryan et al. (2017) and Dexter et al. (2021), the MAD model is significantly more radiatively efficient, especially at low accretion rates. This is caused by more efficient Synchrotron cooling in the highly magnetized gas of a MAD. Around \(\dot{M}=5\times 10^{-3}\dot{M}_{\rm Edd}\) the SANE model collapses into a thin accretion disk, and we observe a rapid order-of-magnitude rise in the radiative efficiency to the NT73 (Novikov and Thorne, 1973) limit of \(\eta_{rad}\sim\eta_{NT}\sim 0.18\). Here \(\eta_{rad}=\frac{\int\sqrt{-g}\,R^{r}_{t}\,d\theta d\varphi|_{r=5r_{g}}}{\langle\dot{M}\rangle|_{r=5r_{g}}}\) with \(R_{\nu}^{\mu}\) being the radiation stress-energy tensor. This collapse manifests itself as a rapid decrease in the density scale height of the disk (\(\frac{h}{r}=\langle|\theta-\pi/2|\rangle_{\rho}\) with \(\langle\rangle_{\rho}\) denoting a density-weighted average). On the other hand, in our MAD model the radiative efficiency asymptotes to \(\eta_{rad}\sim 1.2\eta_{NT}\). This has been observed in other radiative MADs (e.g. Curd and Narayan, 2023) and could, pending future analysis, potentially be explained by the presence of a dynamically important magnetic field that injects energy into the accreting gas, which is not accounted for in Novikov and Thorne (1973). In addition, there is only a marginal factor \(\sim 2\) decrease in the disk scale height after the formation of cold plasma, because magnetic pressure is able to stabilize the accretion disk against runaway thermal collapse (e.g. Sadowski, 2016; Jiang et al., 2019). The total wind (\(\eta_{wind}=\frac{\int\sqrt{-g}\,T^{r}_{t}\,d\theta d\varphi|^{(b^{2}/\rho<1)}_{r=5r_{g}}-\langle\dot{M}\rangle|_{r=5r_{g}}}{\langle\dot{M}\rangle|_{r=5r_{g}}}\) with \(T_{\nu}^{\mu}\) the stress-energy tensor and \(\frac{b^{2}}{\rho}=1\) the wind-jet boundary) and jet (\(\eta_{jet}=\frac{\int\sqrt{-g}\,T^{r}_{t}\,d\theta d\varphi|^{(b^{2}/\rho>1)}_{r=5r_{g}}}{\langle\dot{M}\rangle|_{r=5r_{g}}}\)) driven outflow efficiencies remain relatively constant throughout the evolution in our MAD model. However, in our SANE model, the increase
Figure 3: A contourplot of density with velocity streamlines in black and the last scattering surface in magenta for model XRB SANE. Similar to the puffy accretion disk models presented in Lančová et al. (2019) the majority of gas accretion seems to be driven by inflows outside of the disk’s midplane.
in the normalized magnetic flux causes the jet to become significantly more efficient over time (e.g. \(\eta_{jet}\propto\phi^{2}\)).
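For reference, the efficiencies quoted above reduce to ratios of shell-integrated fluxes. The short sketch below assumes 2D \((\theta,\varphi)\) arrays extracted at a single radius (e.g. \(r=5\,r_{g}\)) and glosses over sign and metric conventions, so it should be read as illustrative pseudocode rather than as the analysis pipeline actually used.

```python
import numpy as np

def shell_flux(q, sqrtg, dtheta, dphi, mask=None):
    """Integrate a flux density q over a spherical shell: sum of sqrt(-g) q dtheta dphi."""
    w = sqrtg * dtheta * dphi
    return np.sum(q * (w if mask is None else w * mask))

def efficiencies(R_r_t, T_r_t, rho, u_r, bsq, sqrtg, dtheta, dphi):
    """Radiative, wind and jet efficiencies at one extraction radius."""
    mdot = shell_flux(-rho * u_r, sqrtg, dtheta, dphi)             # mass accretion rate
    eta_rad = shell_flux(R_r_t, sqrtg, dtheta, dphi) / mdot
    wind = bsq / rho < 1.0                                          # wind-jet boundary b^2/rho = 1
    eta_wind = (shell_flux(T_r_t, sqrtg, dtheta, dphi, wind) - mdot) / mdot
    eta_jet = shell_flux(T_r_t, sqrtg, dtheta, dphi, ~wind) / mdot
    return eta_rad, eta_wind, eta_jet
```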
To better understand when, during an outburst, certain physical processes become important, we plot in Figure 5(a,b) the radiative cooling timescale (\(t_{rad}=\frac{\int\sqrt{-g}\,u_{e}\,d\theta d\varphi}{\int\sqrt{-g}\,\Lambda_{\rm Em}\,d\theta d\varphi}\), with \(\Lambda_{\rm Em}\) the radiative emission rate and \(u_{i,e}\) the ion/electron internal energy), Compton timescale (\(t_{Compt}=\frac{\int\sqrt{-g}\,u_{e}\,d\theta d\varphi}{\int\sqrt{-g}\,\Lambda_{\rm Compt}\,d\theta d\varphi}\), with \(\Lambda_{\rm Compt}\) the Compton scattering emission rate), Coulomb coupling timescale (\(t_{Coul}=\frac{\int\sqrt{-g}\,(u_{i}+u_{e})\,d\theta d\varphi}{\int\sqrt{-g}\,\Lambda_{\rm Coul}\,d\theta d\varphi}\), with \(\Lambda_{\rm Coul}\) the Coulomb coupling rate), and accretion timescale (\(t_{Acc}\), the time over which gas at a given radius is accreted). \(\Lambda_{\rm Em}\), \(\Lambda_{\rm Compt}\), and \(\Lambda_{\rm Coul}\) are derived from the opacities given in McKinney et al. (2017) and the Coulomb coupling rate given in Sadowski et al. (2017). Evidently, the radiative and Compton cooling timescales become similar to the accretion timescale around \(\dot{M}\gtrsim 10^{-6.5}\dot{M}_{\rm Edd}\) and \(\dot{M}\gtrsim 10^{-4}\dot{M}_{\rm Edd}\), respectively. This manifests itself in figure 5(c,d) as a decrease in the shell-averaged electron temperature \(T_{e}\). The shell-averaged ion temperature \(T_{i}\) only drops when the Coulomb coupling timescale becomes comparable to the accretion timescale around \(\dot{M}\sim 5\times 10^{-3}\dot{M}_{\rm Edd}\). Meanwhile, the plasma transitions in figure 5(e,f) from a quasi-relativistic adiabatic index \(\gamma\sim 1.5\) to a non-relativistic \(\gamma\sim 5/3\). Even at accretion rates that are typically associated with radiatively inefficient accretion (\(\dot{M}\lesssim 10^{-7}\dot{M}_{\rm Edd}\)), future work will need to test if electron cooling (see also Dibi et al., 2012; Yoon et al., 2020) and/or a self-consistent adiabatic index can change the spectral signatures compared to equivalent non-radiative single-temperature GRMHD models (e.g. Moscibrodzka et al., 2014).
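The comparison of timescales can likewise be scripted. The following sketch uses shell-integrated internal energies and cooling/coupling rates, with an inflow-time proxy for the accretion timescale; the names and the exact weighting are ours and only illustrate how such diagnostics are typically assembled.

```python
import numpy as np

def cooling_timescales(u_e, u_i, lam_em, lam_compt, lam_coul, rho, v_r, r):
    """Shell-averaged timescale estimates (illustrative weighting).

    u_e, u_i     : electron / ion internal energy densities on the shell
    lam_*        : emission, Compton and Coulomb energy-exchange rates
    rho, v_r, r  : density, radial velocity and shell radius (for t_acc)
    """
    t_rad   = np.sum(u_e) / np.sum(lam_em)
    t_compt = np.sum(u_e) / np.sum(lam_compt)
    t_coul  = np.sum(u_i + u_e) / np.sum(lam_coul)
    v_in    = np.abs(np.sum(rho * v_r) / np.sum(rho))   # density-weighted inflow speed
    t_acc   = r / v_in                                   # crude inflow-time proxy
    return t_rad, t_compt, t_coul, t_acc
```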
## 5 Discussion and Conclusion
In this article we addressed for the first time the transition from the quiescent to the hard intermediate state using radiative two-temperature GRMHD simulations. By rescaling the black hole mass accretion rate across 8 orders of magnitude, these simulations demonstrated that radiative cooling and Coulomb coupling become increasingly important and eventually lead to a transition to a two-phase medium. While the hot torus in SANE models transitions to a thin accretion disk surrounded by a sandwich-like corona,
Figure 4: **Panels (a,b):** As the mass accretion rate rises the luminosity increases from \(L\sim 10^{-15}-10^{-13}L_{EDD}\) to \(L\sim 10^{-2}L_{EDD}\). This is driven by both an increase in the mass accretion rate and radiative efficiency. **Panels (c,d):** The density scale height of the disk stays relatively constant until the accretion rate exceeds \(\dot{M}\sim 10^{-3}\dot{M}_{EDD}\), at which point the disk collapses. At this point Coulomb collisions are efficient and the ions can cool through the radiation-emitting electrons. **Panels (e,f):** The jet (blue), wind (black), radiative (purple) and NT73 (dashed green) efficiencies as a function of \(\dot{M}\). Interestingly, the MAD model maintains a significantly higher radiative efficiency, presumably due to more efficient Synchrotron emission.
Figure 5: **Panels (a,b):** In both models the electron temperature \(T_{e}\) starts cooling for \(\dot{M}\gtrsim 10^{-6}\dot{M}_{EDD}\) due to more efficient radiative cooling. The ion temperature \(T_{i}\) is governed by the ability of the ions to transfer their energy to the radiation-emitting electrons and doesn’t start cooling until \(\dot{M}\sim 10^{-3}\dot{M}_{EDD}\). **Panels (c,d)**: We compare the radiative emission (\(t_{Em}\)), Comptonization (\(t_{Compt}\)), and Coulomb collision (\(t_{Coulom}\)) timescale against the accretion (\(t_{acc}\)) timescale to illustrate the importance of these processes at different accretion rates. **Panels (e,f)**: The adiabatic index for the electrons \(\gamma_{e}\) is relativistic while the adiabatic index for the ions \(\gamma_{i}\) is non-relativistic. This leads to a semi-relativistic gas with adiabatic index \(\gamma_{g}\sim 1.55\) at \(\dot{M}\lesssim 10^{-6}\dot{M}_{\rm Edd}\).
reminiscent of a puffy (Lancova et al., 2019) or magnetically elevated (Begelman and Silk, 2017) disk, the MAD torus transitions to a magnetically truncated disk and forms cold clumps of gas embedded in a hot corona (see also Bambic et al., 2023). This is similar to previous work (Liska et al., 2019, 2022), but in this case an outer thin accretion disk is absent and the cold plasma does not reach the event horizon. While a thin accretion disk forms around \(\dot{M}\sim 5\times 10^{-3}\dot{M}_{\rm Edd}\) for our SANE model, which agrees well with Dexter et al. (2021), no cold plasma is observed in our MAD model before \(\dot{M}\gtrsim 1\times 10^{-2}\dot{M}_{\rm Edd}\). As described in the appendix, we find remarkably similar conclusions for two analogous models applicable to an \(M=6.5\times 10^{9}M_{\odot}\) AGN. Pending future ray-tracing analysis, we expect the MAD and SANE models to have vastly different spectral and time-variability signatures. We also plan to address the structure and dynamics of the thin and truncated disks as we keep increasing the density scale with a dedicated simulation campaign performed at a much higher resolution.
The goal of this article was to study the transition from the quiescent state to the hard intermediate state, both of which feature radio jets. Thus, we have not considered models that do not produce any jets such as models with a purely toroidal field (e.g. Liska et al., 2022) and instead only considered a jetted SANE model with \(\phi\sim 5-15\) and a jetted MAD model with \(\phi\sim 40-55\). By rescaling both the gas density and magnetic energy density proportionally to the target mass accretion rate, this work implicitly assumes that the accretion-rate normalized magnetic flux (\(\phi\)) remains constant within a factor \(\sim 2\). Conventional thinking might suggest that since the hard-intermediate state is associated with the most powerful jets, and recent polarimetric Event Horizon Telescope observations (Event Horizon Telescope Collaboration et al., 2021) strongly imply that AGN accretion disks in the quiescent state are MAD, the hard-intermediate state would be MAD as well. However, the jet power is set by the total amount of magnetic flux threading the black hole (\(P_{\rm jet}\propto\Phi^{2}\)), and thus a SANE jet at a much higher accretion rate can easily outperform a MAD jet at a lower accretion rate. Thus, an interesting possibility to be explored in future work would include a model where the magnetic flux does not increase proportional to \(\Phi\propto\sqrt{\dot{M}}\) but is truncated at a maximum value \(\Phi\propto min(\sqrt{\zeta}\Phi_{0},\Phi_{max})\). This would cause the disk to transition from a MAD disk in the quiescent state to a SANE disk in the hard-intermediate state where, at least initially, the magnetic pressure is still dynamically important (e.g. Begelman and Silk, 2017; Dexter and Begelman, 2019; Lancova et al., 2019). In upcoming work, we will employ ray-tracing calculations to compare both our existing models and future models featuring a truncated magnetic flux against multi-wavelength observations, which offer constraints on the truncation radius and the size/geometry of coronal structures in actual astrophysical systems (e.g. Ingram and Done, 2011; Plant et al., 2014; Fabian et al., 2014; Garcia et al., 2015; Kara et al., 2019).
There are several theoretical and observational arguments that support this 'truncated flux' scenario. First, for systems to remain MAD during a 2-4 orders of magnitude increase in \(\dot{M}\), they would need to advect \(1-2\) orders of magnitude additional magnetic flux onto the BH (e.g. \(\Phi\propto\sqrt{\dot{M}}\) in a MAD). Especially when the outer disk becomes geometrically thin it is unclear if this is physically possible, since theoretical arguments suggest thin disks might not be able to advect magnetic flux loops (e.g. Lubow et al., 1994). Second, observations suggest that the disk truncation radius in the hard intermediate state appears (e.g. Reis et al., 2010; Kara et al., 2019) to be rather small (\(r_{t}\lesssim 5r_{g}\)). This is inconsistent with recent GRMHD simulations which demonstrated that even when the disk only contained a factor \(\sim 1.5\) of excess magnetic flux (above the MAD limit), this led to a truncation radius \(r_{t}\sim 20r_{g}\) (e.g. Liska et al., 2019, 2022). Third, low-frequency quasi-periodic oscillations, which are ubiquitous in the hard-intermediate state (e.g. Ingram and Motta, 2019), are most likely seeded by a precessing disk which tears off from a larger non-precessing disk (e.g. Stella and Vietri, 1998; Ingram et al., 2009, 2016; Musoke et al., 2022). This has been observed in radiative and non-radiative GRMHD simulations where a tilted thin accretion disk is threaded by a purely toroidal magnetic field (e.g. Liska et al., 2022; Musoke et al., 2022; Liska et al., 2023) and in similar GRMHD simulations where the accretion disk is threaded by a below-saturation-level vertical magnetic field (e.g. Liska et al., 2021). However, there are no numerical simulations that have shown any disk tearing or precession where the disk is saturated by vertical magnetic flux (e.g. Fragile et al., 2023). The main problem is that for a disk to tear (and precess), the warping of space-time needs to substantially exceed the viscous torques holding the disk together (e.g. Nixon and King, 2012; Nealon et al., 2015; Dogan et al., 2018; Dogan and Nixon, 2020; Raj et al., 2021). However, the viscous torques stemming from equipartition-strength magnetic fields within the truncation radius might be too strong for a disk to tear.
While our simulations incorporate the effects of radiation and thermal decoupling between ions and electrons, they still rely on a rather simplistic heating prescription for electrons extracted from particle-in-cell models (Rowan et al., 2017). Since, absent any Coulomb collisions, the cooling rate in a given magnetic field will be determined by the temperature and density of the radiation-emitting electrons, the radiative efficiency at lower accretion rates can become sensitive to the adopted heating prescription (e.g. Chael et al., 2018). For example, in our models roughly a fraction \(\delta_{e}\sim 20-40\%\) of the dissipation ends up in the electrons. If this electron heating fraction were smaller/larger, we would expect the radiative
efficiency to drop/rise and the collapse to a two-phase medium to occur later/earlier. Similarly, other microphysical effects typically not captured by the ideal MHD approximation, such as thermal conduction between the corona and disk (e.g. Meyer and Meyer-Hofmeister, 1994; Liu et al., 1999; Meyer-Hofmeister and Meyer, 2011; Cho and Narayan, 2022) or a non-unity magnetic Prandtl number (e.g. Balbus and Henri, 2008), could alter the transition rate to a two-phase medium.
In addition, it was recently demonstrated that the physics driving accretion in luminous systems (\(L\gtrsim 0.01L_{\rm Edd}\)) whose disks are misaligned with the black hole spin axis is fundamentally different. Namely, dissipation of orbital energy is driven by nozzle shocks induced by strong warping (Kaaz et al., 2022; Liska et al., 2023) instead of magneto-rotational instability (MRI) driven turbulence (e.g. Balbus and Hawley, 1991, 1998). These nozzle shocks form perpendicular to the line of nodes, where the disk's midplane intersects the equatorial plane of the black hole, and increase the radial speed of the gas by \(2-3\) orders of magnitude in luminous systems that are substantially misaligned. This could, at a given accretion rate, lead to a decrease in the disk's density, potentially delaying the formation of a thin accretion disk. We expect to address outbursts of warped accretion disks in the coming years.
Numerically, this article has also introduced a method to study outbursts by artificially rescaling the density as a function of time. This solves the issue that the physical processes in the outer disk that drive such drastic fluctuations in the mass accretion rate occur over timescales that are too long to simulate (real outbursts typically take weeks to months, while our simulations last for \(t\sim 10-15s\)). Future applications of this method might include (i) ultra-luminous accretion disks, which decay from super-Eddington to sub-Eddington accretion rates; (ii) the transition from the hard-intermediate state to the high-soft state, where the magnetic flux threading the black hole drops while the accretion rate remains constant; and (iii) the transition from the high-soft state to the quiescent state, characterised by a gradual drop in the mass accretion rate.
## 6 Acknowledgements
We thank Sera Markoff, Sasha Tchekhovskoy, and Ramesh Narayan for insightful discussions. An award of computer time was provided by the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs under awards PHY129 and AST178. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. ML was supported by the John Harvard, ITC and NASA Hubble Fellowship Program fellowships. NK was supported by an NSF Graduate Research Fellowship. RE was supported by the NASA ATP grant numbers 21-ATP21-0077, NSF AST-1816420, and HST-GO-16173.001-A. KC was supported by the Black Hole Initiative (BHI) fellowship program. GM was supported by a Netherlands Research School for Astronomy (NOVA) Virtual Institute of Accretion (VIA) and the Canadian Institute for Theoretical Astrophysics (CITA) postdoctoral fellowships.
## Appendix
We present two radiative two-temperature GRMHD models (M87 SANE and M87 MAD) where we change the black hole mass from a typical BHXRB of \(M_{\rm BH}=10M_{\odot}\) to a large AGN such as M87 with \(M_{\rm BH}=6.5\times 10^{9}M_{\odot}\). Figures 6, 7, 8,9 and 10 in the Appendix correspond to figures 1, 2, 3, 4 and 5 in the main article. Interestingly, the evolution of our AGN models looks very similar to our BHXRB models. The most striking difference between BHXRBs and AGN is a slightly lower radiative efficiency at lower accretion rates, which can be explained by a weaker Synchrotron emission opacity coefficient (e.g. McKinney et al., 2017) and is reflected in a longer emission time \(t_{Em}\). In addition, after the plasma condenses into a two-phase medium, the temperature of the cold phase gas in AGN (\(T_{e}\sim 10^{5}K\)) is much lower than in BHXRBs (\(T_{e}\sim 10^{7}K\)). This is a well known fact in the analytic theory of radiatively efficient AGN accretion disks (e.g. Shakura & Sunyaev, 1973; Novikov & Thorne, 1973), which are less dense and hence more radiation pressure dominated than their BHXRB analogues.
Figure 6: Same as figure 1, but for a \(M=6.5\times 10^{9}M_{\odot}\) AGN.
Figure 7: Same as figure 2, but for a \(M=6.5\times 10^{9}M_{\odot}\) AGN.
Figure 8: Same as figure 3, but for a \(M=6.5\times 10^{9}M_{\odot}\) AGN.
Figure 10: Same as figure 5, but for a \(M=6.5\times 10^{9}M_{\odot}\) AGN. |
2309.12091 | Scotogenic model from an extended electroweak symmetry | We argue that the higher weak isospin $SU(3)_L$ manifestly unifies dark
matter and normal matter in its isomultiplets for which dark matter carries a
conserved dark charge while normal matter does not. The resultant gauge
symmetry is given by $SU(3)_C\otimes SU(3)_L \otimes U(1)_X\otimes U(1)_G$,
where the first factor is the color group, while the rest defines a theory of
scotoelectroweak in which $X$ and $G$ determine electric charge
$Q=T_3-1/\sqrt{3}T_8+X$ and dark charge $D=-2/\sqrt{3}T_8+G$. This setup
provides both appropriate scotogenic neutrino masses and dark matter stability
as preserved by a residual dark parity $P_D=(-1)^D$. Interpretation of the dark
charge is further discussed, given that $SU(3)_L$ is broken at very high energy
scale. | Phung Van Dong, Duong Van Loi | 2023-09-21T14:03:04Z | http://arxiv.org/abs/2309.12091v2 | # Scotoelectroweak theory
###### Abstract
We argue that the higher weak isospin \(SU(3)_{L}\) manifestly unifies dark matter and normal matter in its isomultiplets for which dark matter carries a conserved dark charge while normal matter does not. The resultant gauge symmetry is given by \(SU(3)_{C}\otimes SU(3)_{L}\otimes U(1)_{X}\otimes U(1)_{G}\), where the first factor is the color group, while the rest defines a theory of scotoelectroweak in which \(X\) and \(G\) determine electric charge \(Q=T_{3}-1/\sqrt{3}T_{8}+X\) and dark charge \(D=-2/\sqrt{3}T_{8}+G\). This setup provides both appropriate scotogenic neutrino masses and dark matter stability as preserved by a residual dark parity \(P_{D}=(-1)^{D}\). Interpretation of the dark charge is further discussed, given that \(SU(3)_{L}\) is broken at a very high energy scale.
## I Introduction
Neutrino mass [1; 2] and dark matter [3; 4] are the important questions in science which require the new physics beyond the standard model. Additionally, the standard model cannot address the quantization of electric charge and the existence of just three fermion families, as observed in the nature.
Among attempts to solve these issues, the model based on \(SU(3)_{C}\otimes SU(3)_{L}\otimes U(1)_{X}\) (called 3-3-1) gauge symmetry is well-motivated as it predicts the family number to be that of colors by anomaly cancellation [5; 6; 7; 8; 9]. Further, the charge quantization naturally arises in the 3-3-1 model for typical fermion contents [10; 11; 12; 13; 14]. The 3-3-1 model may supply small neutrino masses by implementing radiative and/or seesaw mechanisms [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27] and dark matter stability by interpreting global/discrete symmetries [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39]. Recently, the 3-3-1 model may give a suitable solution to the \(W\)-mass anomaly [40].
In the 3-3-1 model, the baryon minus lepton number \(B-L\) generically neither commutes nor closes algebraically with \(SU(3)_{L}\). This enlarges the 3-3-1 group to a complete gauge symmetry \(SU(3)_{C}\otimes SU(3)_{L}\otimes U(1)_{X}\otimes U(1)_{N}\) (called 3-3-1-1) in which the last factor \(N\) relates to \(B-L\) via a \(SU(3)_{L}\) charge and this setup reveals matter parity as a residual gauge symmetry [41; 42]. This matter parity stabilizes various dark matter candidates besides related phenomena as studied in [43; 44; 45; 46]. The 3-3-1-1 model typically supplies neutrino masses via canonical seesaw, as suppressed by heavy right-handed neutrinos that exist due to anomaly cancellation and gain large Majorana masses from \(N\)-charge breaking. However, it may alternatively generate neutrino masses via scotogenic mechanism due to the existence of matter parity [47; 48; 49; 50; 51]. The cosmological inflation, asymmetric matter production, new abelian \(N\)-charge breaking, and effect of kinetic mixing between two \(U(1)\) groups are extensively investigated in [52; 53; 54; 55; 56; 57] too.
The 3-3-1 symmetry has a property that unifies dark matter and normal matter in \(SU(3)_{L}\) multiplets and normally couples dark matter in pairs in interactions [41]. Above, \(B-L\) is realized in such a way that dark matter carries a wrong \(B-L\) number opposite to that defined in the standard model for normal matter. Hence, dark matter is odd, governed by the matter parity. Since both dark matter and normal matter have \(B-L\) charge, this setup implies a strict couple between the two kinds of matter through \(B-L\) gauge portal. This work does not further examine such interacting effects of dark matter, especially under experimental detection [43; 44; 45; 46]. Instead, we propose a dark charge for dark matter, while normal matter has no dark charge, which has a nature completely different from \(B-L\) and relaxes such interaction. This interpretation of dark charge supplies naturally scotogenic neutrino mass and dark matter [58], because the mentioned canonical seesaw including its right-handed neutrinos manifestly disappears.
A global version for dark charge under consideration was first discussed in [32] in attempt to find a mechanism for dark matter stability in 3-3-1 model and further promoted in [41]. As electric charge \(Q\) is unified with weak isospin \(T_{i}\) (\(i=1,2,3\)) in electroweak theory \(SU(2)_{L}\otimes U(1)_{Y}\) for which \(Q=T_{3}+Y\), the present proposal combines both electric charge \(Q\) and dark charge \(D\) in a higher weak isospin \(T_{n}\) (\(n=1,2,3,\cdots,8\)) yielding \(SU(3)_{L}\otimes U(1)_{X}\otimes U(1)_{G}\) for which \(Q=T_{3}+\beta T_{8}+X\) and \(D=\beta^{\prime}T_{8}+G\). Here the coefficients \(\beta,\beta^{\prime}\) determine the electric charge and dark charge of dark fields, respectively. This theory indeed unifies dark force and electroweak force in the same manner the electroweak theory does so for electromagnetic
force and weak force, thus it is called scotoelectroweak, where "scoto" means darkness.
The rest of this work is organized as follows. In Sec. II we propose the scotoelectroweak model. In Sec. III we examine scalar and gauge boson mass spectra. In Sec. IV we obtain the scheme of neutrino mass generation. In Sec. V we investigate dark matter observables. In Sec. VI we constrain the model and deliver a numerical investigation. In Sec. VII we give a realization of dark charge that the model refers to. Finally, we summarize our results and conclude this work in Sec. VIII.
## II Scotoelectroweak setup
In the standard model, the weak isospin \(SU(2)_{L}\) arranges left-handed fermions in isodoublets \((\nu_{aL},e_{aL})\sim 2\) and \((u_{aL},d_{aL})\sim 2\), while putting relevant right-handed fermions in isosinglets \(e_{aR}\sim 1\), \(u_{aR}\sim 1\), and \(d_{aR}\sim 1\), where \(a=1,2,3\) is a family index.
The standard model cannot explain nonzero neutrino masses and flavor mixing required by oscillation experiments. Additionally, it cannot explain the existence of dark matter which makes up most of the mass of galaxies and galaxy clusters.
We argue that both questions may be solved by the existence of dark fields, a new kind of particle assumed to possess a conserved dark charge (\(D\)), normalized to unity for brevity, i.e. \(D=\pm 1\). The content of dark fields and the relevant dark symmetry are determined by enlarging the weak isospin \(SU(2)_{L}\) to a higher symmetry, \(SU(3)_{L}\).
The fundamental representations of \(SU(3)_{L}\) are decomposed as \(3=2\oplus 1\) and \(3^{*}=2^{*}\oplus 1\) under \(SU(2)_{L}\). Hence, enlarging known fermion isodoublets (\(2/2^{*}\)) implies dark fermion isosinglets (1's) lying at the bottom of \(3/3^{*}\), such as
\[\psi_{aL}=\begin{pmatrix}\nu_{aL}\\ e_{aL}\\ N_{aL}\end{pmatrix}\sim 3,\ \ \ \ Q_{\alpha L}=\begin{pmatrix}d_{\alpha L}\\ -u_{\alpha L}\\ D_{\alpha L}\end{pmatrix}\sim 3^{*},\ \ \ \ Q_{3L}=\begin{pmatrix}u_{3L}\\ d_{3L}\\ U_{3L}\end{pmatrix}\sim 3, \tag{1}\]
where \(\alpha=1,2\) is a family index as \(a=1,2,3\) is. Furthermore, the relevant right-handed partners transform as \(SU(3)_{L}\) singlets,
\[e_{aR}\sim 1,\ \ \ \ N_{aR}\sim 1,\ \ \ \ u_{aR}\sim 1,\ \ \ \ d_{aR}\sim 1,\ \ \ \ D_{\alpha R}\sim 1,\ \ \ \ U_{3R}\sim 1. \tag{2}\]
Above, the \([SU(3)_{L}]^{3}\) anomaly cancelation requires the third quark family (as well as those of leptons) transforming differently from the first two quark families [59; 60; 61; 62]. This
condition demands that the number of fermion families matches that of color. As stated, \(N_{a}\) and \(U_{3}\) have a dark charge \(D=1\), while \(D_{\alpha}\) possesses a dark charge \(D=-1\), as all collected in Tab. 1. It is noted that all normal fields carry no dark charge, i.e. \(D=0\).1 We further assume \(N_{a}\), \(D_{\alpha}\), and \(U_{3}\) possessing an electric charge \(Q=0\), \(-1/3\), and \(2/3\) respectively like those of the 3-3-1 model with right-handed neutrinos.2
Footnote 1: As the standard model, the hypothetical right-handed neutrinos \(\nu_{aR}\) are a gauge singlet having neither electric charge nor dark charge and are thus not imposed; whereas, the other right-handed fermions must be present, as already included.
Footnote 2: Additionally, these dark leptons and quarks have the same \(B,L\) numbers as usual leptons and quarks, hence \(B\) and \(L\) are global charges commuting with \(SU(3)_{L}\) like those in the standard model, opposite to the original 3-3-1-1 model.
It is clear that \(Q=\text{diag}(0,-1,0)\) and \(D=\text{diag}(0,0,1)\) for lepton triplet \(\psi_{L}\) which both neither commute nor close algebraically with \(SU(3)_{L}\) charges. By symmetry principles, we obtain two new abelian charges \(X\) and \(G\) which complete the gauge symmetry,
\[SU(3)_{C}\otimes SU(3)_{L}\otimes U(1)_{X}\otimes U(1)_{G}, \tag{3}\]
called 3-3-1-1, where \(SU(3)_{C}\) is the color group, \(SU(3)_{L}\) is previously given, while \(X,G\) determine electric and dark charges, respectively,
\[Q=T_{3}-\frac{1}{\sqrt{3}}T_{8}+X,\hskip 14.226378ptD=-\frac{2}{\sqrt{3}}T_{8}+G, \tag{4}\]
where \(T_{n}\) (\(n=1,2,3,\cdots,8\)) is \(SU(3)_{L}\) charge.
The fermion representation content under the 3-3-1-1 symmetry is given by
\[\psi_{aL} \sim(1,3,-1/3,1/3),\hskip 14.226378ptQ_{\alpha L}\sim(3,3^{*},0,-1 /3),\hskip 14.226378ptQ_{3L}\sim(3,3,1/3,1/3), \tag{5}\] \[e_{aR} \sim(1,1,-1,0),\hskip 14.226378ptN_{aR}\sim(1,1,0,1),\hskip 14.226378ptu _{aR}\sim(3,1,2/3,0),\] (6) \[d_{aR} \sim(3,1,-1/3,0),\hskip 14.226378ptD_{\alpha R}\sim(3,1,-1/3,-1), \hskip 14.226378ptU_{3R}\sim(3,1,2/3,1). \tag{7}\]
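As a quick numerical cross-check of these assignments (and of the values quoted from Tab. 1), one may evaluate Eq. (4) component by component; the short script below is such a sketch and is not part of the model construction.

```python
import numpy as np

# Diagonal entries of the SU(3)_L generators T_3 and T_8 in the fundamental rep
t3 = np.array([0.5, -0.5, 0.0])
t8 = np.array([1.0,  1.0, -2.0]) / (2.0 * np.sqrt(3.0))

def charges(X, G, antitriplet=False):
    """Component-wise electric charge Q and dark charge D of a 3 or 3* multiplet
    with U(1)_X charge X and U(1)_G charge G, following Eq. (4)."""
    T3, T8 = (-t3, -t8) if antitriplet else (t3, t8)   # 3*: T_n -> -T_n^T
    Q = T3 - T8 / np.sqrt(3.0) + X
    D = -2.0 * T8 / np.sqrt(3.0) + G
    return Q, D

print(charges(-1/3, 1/3))        # psi_aL   -> Q = (0, -1, 0),       D = (0, 0, 1)
print(charges(0.0, -1/3, True))  # Q_alphaL -> Q = (-1/3, 2/3, -1/3), D = (0, 0, -1)
print(charges(1/3, 1/3))         # Q_3L     -> Q = (2/3, -1/3, 2/3),  D = (0, 0, 1)
```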
[Table 1: electric charge \(Q\), dark charge \(D\), and dark parity \(P_{D}\) assignments of the model's fermions, scalars, and gauge bosons; the table body was not recovered in this version.]
All the anomalies vanish. Indeed, since the 3-3-1 model is well established, it is sufficient to verify those associated with \(U(1)_{G}\).
\[[SU(3)_{C}]^{2}U(1)_{G} \sim \sum_{\rm quarks}(G_{q_{L}}-G_{q_{R}}) \tag{8}\] \[= 2.3.(-1/3)+3.(1/3)-2.(-1)-1=0,\]
\[[SU(3)_{L}]^{2}U(1)_{G} \sim \sum_{\rm(anti)triplets}G_{F_{L}} \tag{9}\] \[= 3.(1/3)+2.3.(-1/3)+3.(1/3)=0,\]
\[[{\rm Gravity}]^{2}U(1)_{G} \sim \sum_{\rm fermions}(G_{f_{L}}-G_{f_{R}}) \tag{10}\] \[= 3.3.(1/3)+2.3.3.(-1/3)+3.3.(1/3)\] \[-3.1-2.3.(-1)-3.1=0,\]
\[[U(1)_{X}]^{2}U(1)_{G} = \sum_{\rm fermions}(X_{f_{L}}^{2}G_{f_{L}}-X_{f_{R}}^{2}G_{f_{R}}) \tag{11}\] \[= 3.3.(-1/3)^{2}.(1/3)+3.3.(1/3)^{2}(1/3)\] \[-2.3.(-1/3)^{2}.(-1)-3.(2/3)^{2}.(1)=0,\]
\[U(1)_{X}[U(1)_{G}]^{2} = \sum_{\rm fermions}(X_{f_{L}}G_{f_{L}}^{2}-X_{f_{R}}G_{f_{R}}^{2}) \tag{12}\] \[= 3.3.(-1/3).(1/3)^{2}+3.3.(1/3)(1/3)^{2}\] \[-2.3.(-1/3).(-1)^{2}-3.(2/3).(1)^{2}=0,\]
\[[U(1)_{G}]^{3} = \sum_{\rm fermions}(G_{f_{L}}^{3}-G_{f_{R}}^{3}) \tag{13}\] \[= 3.3.(1/3)^{3}+2.3.3.(-1/3)^{3}+3.3.(1/3)^{3}\] \[-3.(1)^{3}-2.3.(-1)^{3}-3.(1)^{3}=0.\]
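The six cancellations in Eqs. (8)-(13) can also be verified mechanically by summing over the chiral fermion content of Eqs. (5)-(7); the following script (exact rational arithmetic, illustrative only) returns zero for each combination.

```python
from fractions import Fraction as F

# (multiplicity, colour dim, SU(3)_L dim, X, G, chirality sign: +1 left, -1 right)
fermions = [
    (3, 1, 3, F(-1, 3), F(1, 3), +1),   # psi_aL
    (2, 3, 3, F(0),     F(-1, 3), +1),  # Q_alphaL
    (1, 3, 3, F(1, 3),  F(1, 3), +1),   # Q_3L
    (3, 1, 1, F(-1),    F(0),  -1),     # e_aR
    (3, 1, 1, F(0),     F(1),  -1),     # N_aR
    (3, 3, 1, F(2, 3),  F(0),  -1),     # u_aR
    (3, 3, 1, F(-1, 3), F(0),  -1),     # d_aR
    (2, 3, 1, F(-1, 3), F(-1), -1),     # D_alphaR
    (1, 3, 1, F(2, 3),  F(1),  -1),     # U_3R
]

def anomaly(weight):
    return sum(s * m * weight(c, l, X, G) for m, c, l, X, G, s in fermions)

print(anomaly(lambda c, l, X, G: c * l * G))                  # [Gravity]^2 U(1)_G
print(anomaly(lambda c, l, X, G: c * l * X**2 * G))           # [U(1)_X]^2 U(1)_G
print(anomaly(lambda c, l, X, G: c * l * X * G**2))           # U(1)_X [U(1)_G]^2
print(anomaly(lambda c, l, X, G: c * l * G**3))               # [U(1)_G]^3
print(anomaly(lambda c, l, X, G: l * G if c == 3 else 0))     # [SU(3)_C]^2 U(1)_G
print(anomaly(lambda c, l, X, G: c * G if l == 3 else 0))     # [SU(3)_L]^2 U(1)_G
```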
The 3-3-1-1 symmetry breaking and mass generation are appropriately induced by
\[\eta = \begin{pmatrix}\eta_{1}^{0}\\ \eta_{2}^{-}\\ \eta_{3}^{0}\end{pmatrix}\sim(1,3,-1/3,1/3), \tag{14}\] \[\rho = \begin{pmatrix}\rho_{1}^{+}\\ \rho_{2}^{0}\\ \rho_{3}^{+}\end{pmatrix}\sim(1,3,2/3,1/3),\] (15) \[\chi = \begin{pmatrix}\chi_{1}^{0}\\ \chi_{2}^{-}\\ \chi_{3}^{0}\end{pmatrix}\sim(1,3,-1/3,-2/3),\] (16) \[\phi \sim (1,1,0,-2),\hskip 14.226378pt\xi\sim(1,1,0,1). \tag{17}\]
Here \(\phi\) couples to \(N_{R}N_{R}\), breaks \(U(1)_{G}\), and defines a dark parity. The fields \(\eta\), \(\rho\), and \(\chi\) couple a fermion (anti)triplet to right-handed partners of the first, second, and third components respectively and break the 3-3-1 symmetry. The scalar \(\xi\) analogous to a field in [50] couples to \(\eta^{\dagger}\chi\) and \(\phi\) inducing neutrino mass. Dark charge for scalars is included to Tab. 1 too. Note that dark scalars include \(\eta_{3}\), \(\rho_{3}\), \(\chi_{1,2}\), \(\xi\), and \(\phi\), which have \(D\neq 0\), whereas the rest fields, \(\eta_{1,2}\), \(\rho_{1,2}\), and \(\chi_{3}\), are normal scalars possessing \(D=0\).
Scalar fields develop vacuum expectation values (VEVs), such as
\[\langle\eta\rangle = \begin{pmatrix}\frac{u}{\sqrt{2}}\\ 0\\ 0\end{pmatrix},\hskip 14.226378pt\langle\rho\rangle=\begin{pmatrix}0\\ \frac{v}{\sqrt{2}}\\ 0\end{pmatrix},\hskip 14.226378pt\langle\chi\rangle=\begin{pmatrix}0\\ 0\\ \frac{w}{\sqrt{2}}\end{pmatrix},\hskip 14.226378pt\langle\phi\rangle=\frac{\Lambda}{\sqrt{2}},\hskip 14.226378pt\langle\xi\rangle=0. \tag{18}\]
The scheme of symmetry breaking is given by
\[SU(3)_{C}\otimes SU(3)_{L}\otimes U(1)_{X}\otimes U(1)_{G}\] \[\downarrow\Lambda,w\] \[SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}\otimes P_{D}\] \[\downarrow u,v\] \[SU(3)_{C}\otimes U(1)_{Q}\otimes P_{D}\]
Here we assume \(\Lambda,w\gg u,v\) for consistency with the standard model. Besides the residual electric and color charges, the model conserves a residual dark parity,
\[P_{D}=(-1)^{D}=(-1)^{-\frac{2}{\sqrt{3}}T_{8}+G}. \tag{19}\]
Indeed, a residual charge resulting from \(SU(3)_{L}\otimes U(1)_{X}\otimes U(1)_{G}\) breaking must take the form \(R=x_{n}T_{n}+yX+zG\). \(R\) must annihilate the vacua \(\langle\eta,\rho,\chi\rangle\), i.e. \(R\langle\eta,\rho,\chi\rangle=0\), leading to \(x_{1}=x_{2}=x_{4}=x_{5}=x_{6}=x_{7}=0\), \(x_{3}=y\), and \(x_{8}=-\frac{1}{\sqrt{3}}(y+2z)\). Substituting these \(x\)'s we get \(R=yQ+zD\), where \(Q,D\) are given as in (4). Obviously, \(Q\) and \(D\) commute, i.e. \([Q,D]=0\), implying that they are separated as two abelian subgroups. Additionally, \(Q\) annihilates the vacuum \(\langle\phi\rangle\), i.e. \(Q\langle\phi\rangle=0\), implying that \(Q\) is a final residual charge, conserved after breaking. For the remainder, \(D\) is broken by \(\langle\phi\rangle\), since \(D\langle\phi\rangle=-2\Lambda/\sqrt{2}\neq 0\). However, a residual symmetry of it, i.e. \(P_{D}=e^{i\omega D}\), may be survived, i.e. \(P_{D}\langle\phi\rangle=\langle\phi\rangle\), or \(e^{i\omega(-2)}=1\), where \(\omega\) is a transformation parameter. It leads to \(\omega=k\pi\), for \(k\) integer. Hence, \(P_{D}=e^{ik\pi D}=(-1)^{kD}=\{1,(-1)^{D}\}\cong Z_{2}\) for which we redefine \(P_{D}=(-1)^{D}\) to be dark parity as in (19). The dark parity (odd/even) of particles are collected in Tab. 1 too. It is stressed that \(\eta_{3}^{0}\), \(\chi_{1}^{0}\), and \(\xi\) do not have a nonzero VEV due to dark parity conservation.
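A compact numerical check of the statements above is to act with \(Q\) and \(D\) of Eq. (4) on the vacuum directions of \(\eta\), \(\rho\), \(\chi\), and \(\phi\); the sketch below (illustrative only) confirms that \(Q\) and \(D\) annihilate the triplet VEVs, while \(\langle\phi\rangle\) carries \(D=-2\) and therefore leaves only \(P_{D}=(-1)^{D}\) unbroken.

```python
import numpy as np

t3 = np.diag([0.5, -0.5, 0.0])
t8 = np.diag([1.0, 1.0, -2.0]) / (2.0 * np.sqrt(3.0))

def Q(X):  # electric charge operator on a triplet with U(1)_X charge X
    return t3 - t8 / np.sqrt(3.0) + X * np.eye(3)

def D(G):  # dark charge operator on a triplet with U(1)_G charge G
    return -2.0 * t8 / np.sqrt(3.0) + G * np.eye(3)

vevs = {  # VEV direction and (X, G) of each scalar triplet
    "eta": (np.array([1.0, 0.0, 0.0]), -1/3,  1/3),
    "rho": (np.array([0.0, 1.0, 0.0]),  2/3,  1/3),
    "chi": (np.array([0.0, 0.0, 1.0]), -1/3, -2/3),
}
for name, (v, X, G) in vevs.items():
    print(name, Q(X) @ v, D(G) @ v)   # all zero: Q and D survive the triplet VEVs

# The singlet phi has Q = 0 but D = -2, so <phi> breaks D down to the parity (-1)^D
print("P_D on <phi>:", (-1.0) ** (-2))
```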
We now write the total Lagrangian of the model,
\[{\cal L}={\cal L}_{\rm kin}+{\cal L}_{\rm Yuk}-V. \tag{20}\]
The kinetic part takes the form,
\[{\cal L}_{\rm kin} = \sum_{F}\bar{F}i\gamma^{\mu}D_{\mu}F+\sum_{S}(D^{\mu}S)^{\dagger}( D_{\mu}S)-\frac{1}{4}\sum_{A}A_{\mu\nu}A^{\mu\nu}, \tag{21}\]
where \(F\), \(S\), and \(A\) denote fermion, scalar, and gauge-boson multiplets respectively, the covariant derivative \(D_{\mu}\) and field strength tensors \(A_{\mu\nu}\) are explicitly given by
\[D_{\mu} = \partial_{\mu}+ig_{s}t_{n}G_{n\mu}+igT_{n}A_{n\mu}+ig_{X}XB_{\mu}+ig_{G}GC_{\mu}, \tag{22}\] \[G_{n\mu\nu} = \partial_{\mu}G_{n\nu}-\partial_{\nu}G_{n\mu}-g_{s}f_{nmp}G_{m\mu}G_{p\nu}, \tag{23}\] \[A_{n\mu\nu} = \partial_{\mu}A_{n\nu}-\partial_{\nu}A_{n\mu}-gf_{nmp}A_{m\mu}A_{p\nu}, \tag{24}\] \[B_{\mu\nu} = \partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu},\ \ \ \ C_{\mu\nu}=\partial_{\mu}C_{\nu}-\partial_{\nu}C_{\mu}, \tag{25}\]
where \((g_{s},\ g,\ g_{X},\ g_{G})\), \((G_{n\mu},A_{n\mu},B_{\mu},C_{\mu})\), and \((t_{n},\ T_{n},\ X,\ G)\) indicate coupling constants, gauge bosons, and charges according to 3-3-1-1 subgroups, respectively. Notice that all gauge bosons have \(D=0\) behaving as normal fields, except for \(X^{0},Y^{-}\) coupled to \(T_{4,5,6,7}\) having \(D=-1\) and acting as dark vectors, which are all listed to Tab. 1 too.
The Yukawa Lagrangian is easily obtained,
\[{\cal L}_{\rm Yuk} = h^{e}_{ab}\bar{\psi}_{aL}\rho e_{bR}+h^{N}_{ab}\bar{\psi}_{aL} \chi N_{bR}+\frac{1}{2}h^{\prime N}_{ab}\bar{N}^{c}_{aR}N_{bR}\phi \tag{26}\] \[+h^{d}_{\alpha a}\bar{Q}_{\alpha L}\eta^{*}d_{aR}+h^{u}_{\alpha a} \bar{Q}_{\alpha L}\rho^{*}u_{aR}+h^{D}_{\alpha\beta}\bar{Q}_{\alpha L}\chi^{*}D _{\beta R}\] \[+h^{u}_{3a}\bar{Q}_{3L}\eta u_{aR}+h^{d}_{3a}\bar{Q}_{3L}\rho d_{aR }+h^{U}_{33}\bar{Q}_{3L}\chi U_{3R}+H.c..\]
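Gauge invariance of Eq. (26) under the two abelian factors can be checked term by term; the snippet below sums the \(X\) and \(G\) charges of a few representative operators (a conjugated field contributes with opposite sign, and \(\bar{N^{c}}_{R}\) carries the same charge as \(N_{R}\)); every term returns \((0,0)\). The dictionary keys are shorthand labels of ours, not notation from the text.

```python
from fractions import Fraction as F

# (X, G) charges of the multiplets entering Eq. (26)
X = {"psi_L": F(-1, 3), "Q_aL": F(0), "Q_3L": F(1, 3), "e_R": F(-1), "N_R": F(0),
     "u_R": F(2, 3), "d_R": F(-1, 3), "D_R": F(-1, 3), "U_R": F(2, 3),
     "eta": F(-1, 3), "rho": F(2, 3), "chi": F(-1, 3), "phi": F(0)}
G = {"psi_L": F(1, 3), "Q_aL": F(-1, 3), "Q_3L": F(1, 3), "e_R": F(0), "N_R": F(1),
     "u_R": F(0), "d_R": F(0), "D_R": F(-1), "U_R": F(1),
     "eta": F(1, 3), "rho": F(1, 3), "chi": F(-2, 3), "phi": F(-2)}

def net(term):
    """Net (X, G) of a product of fields; sign = -1 for a Dirac/complex conjugate."""
    return (sum(s * X[f] for f, s in term), sum(s * G[f] for f, s in term))

print(net([("psi_L", -1), ("rho", +1), ("e_R", +1)]))   # bar(psi)_L rho e_R
print(net([("psi_L", -1), ("chi", +1), ("N_R", +1)]))   # bar(psi)_L chi N_R
print(net([("N_R", +1), ("N_R", +1), ("phi", +1)]))     # bar(N^c)_R N_R phi
print(net([("Q_aL", -1), ("eta", -1), ("d_R", +1)]))    # bar(Q)_aL eta* d_R
print(net([("Q_3L", -1), ("chi", +1), ("U_R", +1)]))    # bar(Q)_3L chi U_3R
```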
The scalar potential can be decomposed,
\[V=V(\rho,\chi,\eta,\phi)+V(\xi), \tag{27}\]
where the first part relates to a potential that induces breaking,
\[V(\rho,\chi,\eta,\phi) = \mu^{2}_{1}\rho^{\dagger}\rho+\mu^{2}_{2}\chi^{\dagger}\chi+\mu^{ 2}_{3}\eta^{\dagger}\eta+\lambda_{1}(\rho^{\dagger}\rho)^{2}+\lambda_{2}(\chi^ {\dagger}\chi)^{2}+\lambda_{3}(\eta^{\dagger}\eta)^{2} \tag{28}\] \[+\lambda_{4}(\rho^{\dagger}\rho)(\chi^{\dagger}\chi)+\lambda_{5}( \rho^{\dagger}\rho)(\eta^{\dagger}\eta)+\lambda_{6}(\chi^{\dagger}\chi)(\eta^{ \dagger}\eta)\] \[+\lambda_{7}(\rho^{\dagger}\chi)(\chi^{\dagger}\rho)+\lambda_{8}( \rho^{\dagger}\eta)(\eta^{\dagger}\rho)+\lambda_{9}(\chi^{\dagger}\eta)(\eta^{ \dagger}\chi)+(f\epsilon^{ijk}\eta_{i}\rho_{j}\chi_{k}+H.c.)\] \[+\mu^{2}\phi^{\dagger}\phi+\lambda(\phi^{\dagger}\phi)^{2}+ \lambda_{10}(\phi^{\dagger}\phi)(\rho^{\dagger}\rho)+\lambda_{11}(\phi^{ \dagger}\phi)(\chi^{\dagger}\chi)+\lambda_{12}(\phi^{\dagger}\phi)(\eta^{ \dagger}\eta),\]
while the last part relates to a dark sector that induces neutrino mass,
\[V(\xi) = \mu^{2}_{\xi}\xi^{\dagger}\xi+\lambda_{\xi}(\xi^{\dagger}\xi)^{2} +\lambda_{13}(\xi^{\dagger}\xi)(\rho^{\dagger}\rho)+\lambda_{14}(\xi^{\dagger }\xi)(\chi^{\dagger}\chi)+\lambda_{15}(\xi^{\dagger}\xi)(\eta^{\dagger}\eta) \tag{29}\] \[+\lambda_{16}(\xi^{\dagger}\xi)(\phi^{\dagger}\phi)+(f_{1}\phi\xi \xi+f_{2}\xi\eta^{\dagger}\chi+\lambda_{17}\phi^{*}\xi^{*}\eta^{\dagger}\chi+H. c.).\]
Above, \(h\)'s and \(\lambda\)'s are dimensionless, while \(\mu\)'s and \(f\)'s have a mass dimension. We can consider the parameters \(f\), \(f_{1,2}\), and \(\lambda_{17}\) to be real by absorbing their phases (if any) into appropriate scalar fields \(\eta\), \(\rho\), \(\chi\), \(\phi\), and \(\xi\). That said, the potential conserves CP. We also suppose that CP is not broken by vacua, i.e. the VEVs \(u\), \(v\), \(w\), and \(\Lambda\) are all real too. It is further noted that there are neither mixing between a scalar (CP-even) and a pseudo-scalar (CP-odd) due to CP conservation nor mixing between a \(P_{D}\)-even field and a \(P_{D}\)-odd field due to dark parity conservation.
## III Scalar and gauge boson masses
### Scalar mass spectrum
The potential \(V(\rho,\chi,\eta,\phi)\) has been explicitly examined in [43]. Let us summarize its result. First, expand the scalar fields around their VEVs,
\[\eta=\left(\begin{array}{c}\frac{u}{\sqrt{2}}\\ 0\\ 0\end{array}\right)+\left(\begin{array}{c}\frac{S_{1}+iA_{1}}{\sqrt{2}}\\ \eta_{2}^{-}\\ \frac{S_{1}^{\prime}+iA_{1}^{\prime}}{\sqrt{2}}\end{array}\right),\ \ \ \ \rho=\left(\begin{array}{c}0\\ \frac{v}{\sqrt{2}}\\ 0\end{array}\right)+\left(\begin{array}{c}\rho_{1}^{+}\\ \frac{S_{2}+iA_{2}}{\sqrt{2}}\\ \rho_{3}^{+}\end{array}\right), \tag{30}\]
\[\chi=\left(\begin{array}{c}0\\ 0\\ \frac{w}{\sqrt{2}}\end{array}\right)+\left(\begin{array}{c}\frac{S_{3}^{\prime}+iA_{3}^{\prime}}{\sqrt{2}}\\ \chi_{2}^{-}\\ \frac{S_{3}+iA_{3}}{\sqrt{2}}\end{array}\right),\ \ \ \ \phi=\frac{\Lambda}{\sqrt{2}}+\frac{S_{4}+iA_{4}}{\sqrt{2}}, \tag{31}\]
and notice that the following approximations "\(\simeq\)" are given up to \((u,v)/(-f,w,\Lambda)\) order. The usual Higgs field (\(H\)) and three new neutral scalars (\(H_{1,2,3}\)) are obtained by
\[H\simeq\frac{uS_{1}+vS_{2}}{\sqrt{u^{2}+v^{2}}},\ \ \ \ H_{1} \simeq\frac{vS_{1}-uS_{2}}{\sqrt{u^{2}+v^{2}}}, \tag{32}\] \[H_{2}\simeq c_{\varphi}S_{3}-s_{\varphi}S_{4},\ \ \ \ H_{3} \simeq s_{\varphi}S_{3}+c_{\varphi}S_{4}, \tag{33}\]
with mixing angle \(t_{2\varphi}=\frac{\lambda_{11}w\Lambda}{\lambda\Lambda^{2}-\lambda_{2}w^{2}}\). The usual Higgs mass is appropriately achieved at the weak scale \(m_{H}\sim(u,v)\), while the new scalar masses are
\[m_{H_{1}}^{2}\simeq-\frac{fw}{\sqrt{2}}\left(\frac{u}{v}+\frac{ v}{u}\right), \tag{34}\] \[m_{H_{2,3}}^{2}\simeq\lambda_{2}w^{2}+\lambda\Lambda^{2}\mp \sqrt{(\lambda_{2}w^{2}-\lambda\Lambda^{2})^{2}+\lambda_{11}^{2}w^{2}\Lambda^ {2}}. \tag{35}\]
A massive pseudo-scalar with corresponding mass is identified as
\[\mathcal{A}=\frac{vwA_{1}+uwA_{2}+uvA_{3}}{\sqrt{u^{2}v^{2}+v^{2}w^{2}+u^{2}w^ {2}}},\ \ \ m_{\mathcal{A}}^{2}=-\frac{f}{\sqrt{2}}\left(\frac{vw}{u}+\frac{uw}{v}+ \frac{uv}{w}\right). \tag{36}\]
Two charged scalars are given by
\[H_{4}^{\pm}=\frac{v\chi_{2}^{\pm}+w\rho_{3}^{\pm}}{\sqrt{v^{2}+w^{2}}},\ \ \ \ H_{5}^{\pm}=\frac{v\eta_{2}^{\pm}+u\rho_{1}^{\pm}}{\sqrt{u^{2}+v^{2}}}, \tag{37}\]
with respective masses,
\[m_{H_{4}}^{2}=\left(\frac{\lambda_{7}}{2}-\frac{fu}{\sqrt{2}vw}\right)(v^{2}+ w^{2}),\ \ \ \ m_{H_{5}}^{2}=\left(\frac{\lambda_{8}}{2}-\frac{fw}{\sqrt{2}vu}\right)(v^{2}+ u^{2}). \tag{38}\]
A neutral complex scalar with corresponding mass is
\[H^{\prime 0}\equiv\frac{S^{\prime}+iA^{\prime}}{\sqrt{2}}=\frac{u\chi_{1}^{0*}+w \eta_{3}^{0}}{\sqrt{u^{2}+w^{2}}},\ \ \ \ m_{H^{\prime}}^{2}=\left(\frac{\lambda_{9}}{2}-\frac{fv}{\sqrt{2}uw}\right)(u^ {2}+w^{2}), \tag{39}\]
where the real \(S^{\prime}=(wS^{\prime}_{3}+uS^{\prime}_{1})/\sqrt{u^{2}+w^{2}}\) and imaginary \(A^{\prime}=(wA^{\prime}_{3}-uA^{\prime}_{1})/\sqrt{u^{2}+w^{2}}\) parts of \(H^{\prime}\) are degenerate with the same \(H^{\prime}\) mass.
Except for the usual Higgs mass, all new scalar masses are given at \((w,\Lambda,-f)\) scale. For the remaining fields, the massless Goldstone bosons of neutral gauge fields \(Z\), \(Z^{\prime}\), and \(Z^{\prime\prime}\) are identified as
\[G_{Z}=\frac{uA_{1}-vA_{2}}{\sqrt{u^{2}+v^{2}}},\ \ \ \ G_{Z^{\prime}}=\frac{w(u^{2}+v^{2})A _{3}-uv(vA_{1}+uA_{2})}{\sqrt{(u^{2}+v^{2})(u^{2}v^{2}+v^{2}w^{2}+u^{2}w^{2})} },\ \ \ G_{Z^{\prime\prime}}=A_{4}, \tag{40}\]
while those of charged/complex gauge fields \(W^{\pm}\), \(Y^{\pm}\), and \(X^{0}\) take the form,
\[G_{W}^{\pm}=\frac{u\eta_{2}^{\pm}-v\rho_{1}^{\pm}}{\sqrt{u^{2}+v^{2}}},\ \ \ \ G_{Y}^{\pm}=\frac{w\chi_{2}^{\pm}-v\rho_{3}^{\pm}}{\sqrt{v^{2}+w^{2}}},\ \ \ \ G_{X}^{0}=\frac{w\chi_{1}^{0}-u\eta_{3}^{0*}}{\sqrt{u^{2}+w^{2}}}. \tag{41}\]
Because \(\langle\xi\rangle=0\), the potential \(V(\xi)\) does not affect the minimum conditions derived from \(V(\rho,\chi,\eta,\phi)\) as in [43]. In other words, \(u,v,w,\Lambda\) are uniquely given, assuming that \(\mu^{2}<0\), \(\mu_{1,2,3}^{2}<0\), \(\lambda>0\), \(\lambda_{1,2,3}>0\), and necessary conditions for \(\lambda_{4,5,\cdots,12}\). Additionally, conservations of dark parity and electric charge imply that the presence of \(\xi\), i.e. \(V(\xi)\), modifies only the mass spectrum of \(H^{\prime}\) and \(G_{X}\), or exactly \(S^{\prime}\) and \(A^{\prime}\), which includes
\[V \supset \frac{1}{2}\left(S^{\prime}\ \ S^{\prime}_{5}\right)\begin{pmatrix}m_{H ^{\prime}}^{2}&\left(\frac{f_{2}}{\sqrt{2}}+\frac{\lambda_{17}\Lambda}{2} \right)\sqrt{u^{2}+w^{2}}\\ \left(\frac{f_{2}}{\sqrt{2}}+\frac{\lambda_{17}\Lambda}{2}\right)\sqrt{u^{2}+ w^{2}}&m_{\xi}^{2}+\sqrt{2}f_{1}\Lambda\end{pmatrix}\begin{pmatrix}S^{\prime} \\ S^{\prime}_{5}\end{pmatrix} \tag{42}\] \[+\frac{1}{2}\left(A^{\prime}\ \ A^{\prime}_{5}\right)\begin{pmatrix}m_{H ^{\prime}}^{2}&\left(\frac{f_{2}}{\sqrt{2}}-\frac{\lambda_{17}\Lambda}{2} \right)\sqrt{u^{2}+w^{2}}\\ \left(\frac{f_{2}}{\sqrt{2}}-\frac{\lambda_{17}\Lambda}{2}\right)\sqrt{u^{2} +w^{2}}&m_{\xi}^{2}-\sqrt{2}f_{1}\Lambda\end{pmatrix}\begin{pmatrix}A^{\prime} \\ A^{\prime}_{5}\end{pmatrix},\]
where \(\xi\equiv(S^{\prime}_{5}+iA^{\prime}_{5})/\sqrt{2}\) and \(m_{\xi}^{2}\equiv\mu_{\xi}^{2}+\lambda_{13}v^{2}/2+\lambda_{14}w^{2}/2+ \lambda_{15}u^{2}/2+\lambda_{16}\Lambda^{2}/2\). Defining two mixing angles
\[t_{2\theta_{R}}=\frac{(\sqrt{2}f_{2}+\lambda_{17}\Lambda)\sqrt{u^{2}+w^{2}}}{m _{\xi}^{2}+\sqrt{2}f_{1}\Lambda-m_{H^{\prime}}^{2}},\ \ \ \ t_{2\theta_{I}}=\frac{(\sqrt{2}f_{2}-\lambda_{17}\Lambda)\sqrt{u^{2}+w^{2}}}{m _{\xi}^{2}-\sqrt{2}f_{1}\Lambda-m_{H^{\prime}}^{2}}, \tag{43}\]
we obtain physical fields
\[R_{1}=c_{\theta_{R}}S^{\prime}-s_{\theta_{R}}S^{\prime}_{5},\ \ \ \ R_{2}=s_{\theta_{R}}S^{\prime}+c_{\theta_{R}}S^{\prime}_{5}, \tag{44}\] \[I_{1}=c_{\theta_{I}}A^{\prime}-s_{\theta_{I}}A^{\prime}_{5},\ \ \ \ I_{2}=s_{\theta_{I}}A^{\prime}+c_{\theta_{I}}A^{\prime}_{5}, \tag{45}\]
with respective masses
\[m_{R_{1,2}}^{2} = \frac{1}{2}\left[m_{H^{\prime}}^{2}+m_{\xi}^{2}+\sqrt{2}f_{1}\Lambda\right. \tag{46}\] \[\left.\mp\sqrt{(m_{H^{\prime}}^{2}-m_{\xi}^{2}-\sqrt{2}f_{1} \Lambda)^{2}+(\sqrt{2}f_{2}+\lambda_{17}\Lambda)^{2}(u^{2}+w^{2})}\right],\] \[m_{I_{1,2}}^{2} = \frac{1}{2}\left[m_{H^{\prime}}^{2}+m_{\xi}^{2}-\sqrt{2}f_{1}\Lambda\right.\] (47) \[\left.\mp\sqrt{(m_{H^{\prime}}^{2}-m_{\xi}^{2}+\sqrt{2}f_{1} \Lambda)^{2}+(\sqrt{2}f_{2}-\lambda_{17}\Lambda)^{2}(u^{2}+w^{2})}\right].\]
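The following is a minimal numerical cross-check of this diagonalization: the \(2\times 2\) mass matrices of Eq. (42) are diagonalized numerically and compared with the closed-form eigenvalues of Eqs. (46)-(47). All input values in the snippet (the VEVs, the soft couplings, and the \(m_{H^{\prime}}^{2}\), \(m_{\xi}^{2}\) entries) are illustrative assumptions, not values fixed by the model.

```python
# A minimal numerical cross-check of Eqs. (42)-(47): the 2x2 mass matrices of
# (S', S'_5) and (A', A'_5) are diagonalized numerically and compared with
# the closed-form eigenvalues (46)-(47).  All inputs below are assumptions.
import numpy as np

u, w, Lam = 0.1, 5.0, 4.5            # VEVs in TeV (assumed)
f1, f2, lam17 = 0.001, 0.001, 0.1    # soft couplings and quartic (assumed)
m_Hp2, m_xi2 = 2.0, 3.0              # m_{H'}^2 and m_xi^2 in TeV^2 (assumed)

for sign, label in [(+1, "R (CP-even)"), (-1, "I (CP-odd)")]:
    off = (f2/np.sqrt(2) + sign*lam17*Lam/2)*np.sqrt(u**2 + w**2)
    d22 = m_xi2 + sign*np.sqrt(2)*f1*Lam
    M = np.array([[m_Hp2, off], [off, d22]])
    evals = np.linalg.eigvalsh(M)                       # numerical masses^2
    radical = np.sqrt((m_Hp2 - d22)**2 + 4*off**2)      # Eqs. (46)-(47)
    closed = 0.5*np.array([m_Hp2 + d22 - radical, m_Hp2 + d22 + radical])
    print(label, evals, closed)
```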
### Gauge boson mass spectrum
The gauge bosons obtain mass from \({\cal L}\supset\sum_{S}(D^{\mu}\langle S\rangle)^{\dagger}(D_{\mu}\langle S\rangle)\). Substituting the VEVs, we get physical non-Hermitian gauge bosons
\[W_{\mu}^{\pm}=\frac{A_{1\mu}\mp iA_{2\mu}}{\sqrt{2}},\ \ \ \ X^{0,0*}=\frac{A_{4\mu}\mp iA_{5 \mu}}{\sqrt{2}},\ \ \ \ Y^{\mp}=\frac{A_{6\mu}\mp iA_{7\mu}}{\sqrt{2}}, \tag{48}\]
with respective masses,
\[m_{W}^{2}=\frac{g^{2}}{4}(u^{2}+v^{2}),\ \ \ \ m_{X}^{2}=\frac{g^{2}}{4}(u^{2}+w^{2}),\ \ \ \ m_{Y}^{2}=\frac{g^{2}}{4}(v^{2}+w^{2}). \tag{49}\]
\(W\) is identical to that of the standard model and \(u^{2}+v^{2}=(246\ {\rm GeV})^{2}\).
Neutral gauge bosons are identified as
\[A_{\mu}=s_{W}A_{3\mu}+c_{W}\left(-\frac{t_{W}}{\sqrt{3}}A_{8\mu} +\sqrt{1-\frac{t_{W}^{2}}{3}}B_{\mu}\right), \tag{50}\] \[Z_{\mu}=c_{W}A_{3\mu}-s_{W}\left(-\frac{t_{W}}{\sqrt{3}}A_{8\mu} +\sqrt{1-\frac{t_{W}^{2}}{3}}B_{\mu}\right),\] (51) \[{\cal Z}_{\mu}^{\prime}=\sqrt{1-\frac{t_{W}^{2}}{3}}A_{8\mu}+ \frac{t_{W}}{\sqrt{3}}B_{\mu}, \tag{52}\]
where \(s_{W}=e/g=\sqrt{3}t_{X}/\sqrt{3+4t_{X}^{2}}\), with \(t_{X}=g_{X}/g\), is the sine of the Weinberg angle. The photon \(A_{\mu}\) is massless and decoupled. The \(Z\) boson, which is identical to that of the standard model, is much lighter than the \({\cal Z}^{\prime}\) boson of the 3-3-1 model and the \(C\) boson of \(U(1)_{G}\). Although \(Z\) mixes with \({\cal Z}^{\prime}\) and \(C\), at \((u,v)/(w,\Lambda)\) order the field \(Z\) decouples as a physical field possessing a mass,
\[m_{Z}^{2}\simeq\frac{g^{2}}{4c_{W}^{2}}(u^{2}+v^{2}). \tag{53}\]
There remains a mixing between \({\cal Z}^{\prime}\) and \(C\), yielding physical fields by diagonalization,
\[Z^{\prime}=c_{\theta}{\cal Z}^{\prime}-s_{\theta}C,\ \ \ \ Z^{\prime\prime}=s_{ \theta}{\cal Z}^{\prime}+c_{\theta}C, \tag{54}\]
with mixing angle and respective masses,
\[t_{2\theta} = \frac{4\sqrt{3+t_{X}^{2}}t_{G}w^{2}}{4t_{G}^{2}(w^{2}+9\Lambda^{2 })-(3+t_{X}^{2})w^{2}}, \tag{55}\] \[m_{Z^{\prime},Z^{\prime\prime}}^{2} = \frac{g^{2}}{18}\left\{4t_{G}^{2}(w^{2}+9\Lambda^{2})+(3+t_{X}^{2 })w^{2}\right.\] (56) \[\left.\mp\sqrt{[4t_{G}^{2}(w^{2}+9\Lambda^{2})-(3+t_{X}^{2})w^{2 }]^{2}+16(3+t_{X}^{2})t_{G}^{2}w^{4}}\right\},\]
where \(t_{G}=g_{G}/g\).
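As a quick sanity check of Eqs. (55)-(56), the short sketch below evaluates the \(\mathcal{Z}^{\prime}\)-\(C\) mixing angle and the \(Z^{\prime},Z^{\prime\prime}\) masses at the benchmark points used later in Sec. VI; the input parameters follow that section.

```python
# A small numerical sketch of Eqs. (55)-(56): the Z'-C mixing angle and the
# Z', Z'' masses as functions of the new-physics scales (w, Lambda).
import numpy as np

g, sW2, tG = 0.652, 0.231, 1.0
tX = np.sqrt(3*sW2/(3 - 4*sW2))

def zprime_sector(w, Lam):
    a = 4*tG**2*(w**2 + 9*Lam**2)
    b = (3 + tX**2)*w**2
    theta = 0.5*np.arctan(4*np.sqrt(3 + tX**2)*tG*w**2/(a - b))   # Eq. (55)
    rad = np.sqrt((a - b)**2 + 16*(3 + tX**2)*tG**2*w**4)
    m2 = (g**2/18)*np.array([a + b - rad, a + b + rad])            # Eq. (56)
    return theta, np.sqrt(m2)

for w, Lam in [(5.0, 4.5), (9.0, 3.0)]:    # TeV
    th, (mZp, mZpp) = zprime_sector(w, Lam)
    print(f"w={w}, Lam={Lam}: theta={np.degrees(th):.1f} deg, "
          f"mZ'={mZp:.2f} TeV, mZ''={mZpp:.2f} TeV")
```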
The above result is similar to that in [43], since the scalar multiplets have dark charge values equal to their \(B-L\) values. The difference appears explicitly in the couplings of \(Z^{\prime},Z^{\prime\prime}\) with matter fields, because the normal fermions carry \(B-L\) but do not carry dark charge. For comparison and further use, we compute in Tab. 2 the couplings of \(Z^{\prime}\) with fermions; those of \(Z^{\prime\prime}\) can be obtained from the \(Z^{\prime}\) ones by replacing \(c_{\theta}\to s_{\theta}\) and \(s_{\theta}\to-c_{\theta}\).
\begin{table}
\begin{tabular}{c c c} \hline \hline \(f\) & \(g_{V}^{Z^{\prime}}(f)\) & \(g_{A}^{Z^{\prime}}(f)\) \\ \hline \(\nu_{a}\) & \(\frac{c_{\theta}c_{2W}}{2\sqrt{3-4s_{W}^{2}}}-\frac{1}{3}s_{\theta}c_{W}t_{G}\) & \(\frac{c_{\theta}c_{2W}}{2\sqrt{3-4s_{W}^{2}}}-\frac{1}{3}s_{\theta}c_{W}t_{G}\) \\ \(e_{a}\) & \(\frac{c_{\theta}(1-4s_{W}^{2})}{2\sqrt{3-4s_{W}^{2}}}-\frac{1}{3}s_{\theta}c_{ W}t_{G}\) & \(\frac{c_{\theta}}{2\sqrt{3-4s_{W}^{2}}}-\frac{1}{3}s_{\theta}c_{W}t_{G}\) \\ \(N_{a}\) & \(-\frac{c_{\theta}c_{W}^{2}}{\sqrt{3-4s_{W}^{2}}}-\frac{4}{3}s_{\theta}c_{W}t_{G}\) & \(-\frac{c_{\theta}c_{W}^{2}}{\sqrt{3-4s_{W}^{2}}}+\frac{2}{3}s_{\theta}c_{W}t_{G}\) \\ \(u_{\alpha}\) & \(-\frac{c_{\theta}(3-8s_{W}^{2})}{6\sqrt{3-4s_{W}^{2}}}+\frac{1}{3}s_{\theta}c_ {W}t_{G}\) & \(-\frac{c_{\theta}}{2\sqrt{3-4s_{W}^{2}}}+\frac{1}{3}s_{\theta}c_{W}t_{G}\) \\ \(u_{3}\) & \(\frac{c_{\theta}(3+2s_{W}^{2})}{6\sqrt{3-4s_{W}^{2}}}-\frac{1}{3}s_{\theta}c_ {W}t_{G}\) & \(\frac{c_{\theta}c_{2W}}{2\sqrt{3-4s_{W}^{2}}}-\frac{1}{3}s_{\theta}c_{W}t_{G}\) \\ \(d_{\alpha}\) & \(-\frac{c_{\theta}(3-2s_{W}^{2})}{6\sqrt{3-4s_{W}^{2}}}+\frac{1}{3}s_{\theta}c_ {W}t_{G}\) & \(-\frac{c_{\theta}c_{2W}}{2\sqrt{3-4s_{W}^{2}}}+\frac{1}{3}s_{\theta}c_{W}t_{G}\) \\ \(d_{3}\) & \(\frac{c_{\theta}\sqrt{3-4s_{W}^{2}}}{6}-\frac{1}{3}s_{\theta}c_{W}t_{G}\) & \(\frac{c_{\theta}}{2\sqrt{3-4s_{W}^{2}}}-\frac{1}{3}s_{\theta}c_{W}t_{G}\) \\ \(U\) & \(-\frac{c_{\theta}(3-7s_{W}^{2})}{3\sqrt{3-4s_{W}^{2}}}-\frac{4}{3}s_{\theta}c_ {W}t_{G}\) & \(-\frac{c_{\theta}c_{W}^{2}}{\sqrt{3-4s_{W}^{2}}}+\frac{2}{3}s_{\theta}c_{W}t_{G}\) \\ \(D_{\alpha}\) & \(\frac{c_{\theta}(3-5s_{W}^{2})}{3\sqrt{3-4s_{W}^{2}}}+\frac{4}{3}s_{\theta}c_ {W}t_{G}\) & \(\frac{c_{\theta}c_{W}^{2}}{\sqrt{3-4s_{W}^{2}}}-\frac{2}{3}s_{\theta}c_{W}t_{G}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Couplings of \(Z^{\prime}\) with fermions; the \(Z^{\prime\prime}\)-fermion couplings are derived from this table by the replacements \(c_{\theta}\to s_{\theta}\) and \(s_{\theta}\to-c_{\theta}\).
## IV Neutrino Mass
In the 3-3-1-1 model obtained by gauging \(B-L\), right-handed neutrinos are required for anomaly cancellation. Consequently, neutrinos obtain a small mass via the canonical seesaw mechanism, suppressed by the large right-handed neutrino mass scale associated with \(B-L\) breaking. In this kind of model, the ordinary lepton doublets may also couple to a scalar and to fermions that are both odd under the matter parity, opening an interesting possibility for scotogenic neutrino mass generation as an alternative to the canonical seesaw [47; 48; 49; 50; 51]. The issue is then how to suppress the canonical seesaw, since the \(B-L\) breaking scale is not necessarily large in the scotogenic setup. Most studies have chosen the \(B-L\) charges of the right-handed neutrinos to be \(-4,-4,+5\), which forbids their coupling to the usual leptons and the Higgs boson. But one must then introduce two scalar singlets coupled to these right-handed neutrinos in order to make them appropriately heavy, leading to a complicated \(U(1)_{N}\) Higgs sector with two undesirable pseudo Nambu-Goldstone bosons. Additionally, the matter-parity-odd fermions responsible for the scotogenic setup are not required on theoretical grounds, unlike the unwanted \(\nu_{aR}\). The present 3-3-1-1 model, obtained by gauging the dark charge, properly overcomes these issues. Indeed, \(\nu_{aR}\) are not required by dark charge anomaly cancellation, so the canonical seesaw disappears. Additionally, \(N_{aR}\) must be present for dark charge anomaly cancellation; they are odd under the dark parity and couple to the usual leptons via a scalar triplet. We introduce only one extra scalar singlet \(\xi\), which splits the relevant \(H^{\prime}\) (i.e. \(S^{\prime},A^{\prime}\)) masses, yielding a neutrino mass generation scheme that is more economical than in previous studies.
First note that the charged leptons and all (usual and exotic) quarks gain appropriate masses from the Yukawa Lagrangian, as in the 3-3-1 model. The neutral fermions obtain a mass matrix of the form,
\[\mathcal{L}_{\rm Yuk}\supset-\frac{1}{2}\left(\bar{N}_{aL}\ \ \bar{N}_{aR}^{c} \right)\begin{pmatrix}0&m_{ab}^{D}\\ m_{ba}^{D}&m_{ab}^{R}\end{pmatrix}\begin{pmatrix}N_{bL}^{c}\\ N_{bR}\end{pmatrix}+H.c., \tag{57}\]
where \(m^{D}=-h^{N}w/\sqrt{2}\) and \(m^{R}=-h^{\prime N}\Lambda/\sqrt{2}\) are Dirac and (right-handed) Majorana masses for \(N\), respectively. We can diagonalize the generic mass matrix, yielding
\[\mathcal{L}_{\rm Yuk}\supset-\frac{1}{2}\bar{N}_{k}^{c}M_{k}N_{k}, \tag{58}\]
for \(k=1,2,\cdots,6\), where \((N_{aL}^{c},N_{aR})=(U_{ak},V_{ak})N_{k}\) relates the gauge states to the mass eigenstates \(N_{k}\) with mass eigenvalues \(M_{k}\).
What concerns us here is the neutrino mass generation Lagrangian, collected from the Yukawa interactions and the scalar potential,
\[\mathcal{L} \supset \frac{uh_{ab}^{N}V_{bk}}{\sqrt{2}\sqrt{u^{2}+w^{2}}}\bar{\nu}_{aL}( c_{\theta_{R}}R_{1}+s_{\theta_{R}}R_{2}-ic_{\theta_{I}}I_{1}-is_{\theta_{I}}I_{2})N_ {k} \tag{59}\] \[+\frac{wh_{ab}^{N}V_{bk}}{\sqrt{u^{2}+w^{2}}}\bar{\nu}_{aL}G_{X}^{ 0}N_{k}-\frac{1}{2}M_{k}N_{k}^{2}+H.c.\] \[-\frac{1}{2}m_{R_{1}}^{2}R_{1}^{2}-\frac{1}{2}m_{R_{2}}^{2}R_{2}^ {2}-\frac{1}{2}m_{I_{1}}^{2}I_{1}^{2}-\frac{1}{2}m_{I_{2}}^{2}I_{2}^{2},\]
where we have used \(\chi_{1}^{0}=(uH^{\prime 0*}+wG_{X}^{0})/\sqrt{u^{2}+w^{2}}=[u(c_{\theta_{R}}R_{1}+s_{\theta_{R}}R_{2}-ic_{\theta_{I}}I_{1}-is_{\theta_{I}}I_{2})/\sqrt{2}+wG_{X}^{0}]/\sqrt{u^{2}+w^{2}}\) and \(N_{bR}=V_{bk}N_{k}\). The neutrino mass generation Feynman diagram is depicted in Fig. 1, in both the flavor basis (left panel) and the mass eigenbasis (right panel).
The neutrino mass is induced in the form \(\mathcal{L}\supset-\frac{1}{2}\bar{\nu}_{aL}(m_{\nu})_{ab}\nu_{bL}^{c}+H.c.\), in which
\[(m_{\nu})_{ab} = \frac{u^{2}}{u^{2}+w^{2}}\frac{(h^{N}V)_{ak}(h^{N}V)_{bk}M_{k}}{3 2\pi^{2}} \tag{60}\] \[\times\left(\frac{c_{\theta_{R}}^{2}m_{R_{1}}^{2}\ln\frac{M_{k}^ {2}}{m_{R_{1}}^{2}}}{M_{k}^{2}-m_{R_{1}}^{2}}-\frac{c_{\theta_{I}}^{2}m_{I_{1} }^{2}\ln\frac{M_{k}^{2}}{m_{I_{1}}^{2}}}{M_{k}^{2}-m_{I_{1}}^{2}}+\frac{s_{ \theta_{R}}^{2}m_{R_{2}}^{2}\ln\frac{M_{k}^{2}}{m_{R_{2}}^{2}}}{M_{k}^{2}-m_{ R_{2}}^{2}}-\frac{s_{\theta_{I}}^{2}m_{I_{2}}^{2}\ln\frac{M_{k}^{2}}{m_{I_{2}}^{2}}}{M _{k}^{2}-m_{I_{2}}^{2}}\right).\]
Remarks are in order
1. The divergent one-loop contributions corresponding to \(R_{1,2}\) and \(I_{1,2}\) are cancelled out due to \(c_{\theta_{R}}^{2}-c_{\theta_{I}}^{2}+s_{\theta_{R}}^{2}-s_{\theta_{I}}^{2}=0\).
Figure 1: Neutrino mass generation in the scotoelectroweak theory, where left and right diagrams are given in flavor and mass eigenbases, respectively.
2. For a gauge realization of the matter parity, the inert scalar doublet \((\chi_{1},\chi_{2})\) may be approximated as the Goldstone mode of a gauge vector doublet \((X,Y)\), i.e. \((\chi_{1},\chi_{2})\sim(G_{X},G_{Y})\). Neither \(G_{X}\) nor \(X\) contributes to the neutrino mass, since their particle and antiparticle components possess degenerate masses, in contrast to their global versions [58; 63].
3. Contributing to the neutrino mass are: a scalar singlet \(\eta_{3}\) that mixes with \(\chi_{1}\) and is therefore suppressed by \((u/w)^{2}\sim 10^{-3}\), besides the usual loop factor \((1/32\pi^{2})\sim 10^{-3}\); another intermediate scalar singlet \(\xi\) that connects to \(\eta_{3}\); the singlet mass splittings \(\Delta m^{2}/m^{2}\sim f_{1}/\Lambda\sim f_{2}\lambda_{17}/\Lambda\); and the Majorana masses \(M_{k}\sim\Lambda\) of \(N_{k}\), all governed by the dark charge breaking field \(\langle\phi\rangle\sim\Lambda\). This translates to \[m_{\nu}\sim\left(\frac{h^{N}}{10^{-2}}\right)^{2}\times\left(\frac{f_{1},f_{2} \lambda_{17}}{\text{GeV}}\right)\times 0.1\ \text{eV},\] (61) which is appropriate to experiment, given that \(h^{N}\sim 10^{-2}\); the soft coupling \(f_{1,2}\sim 1\) GeV is not necessarily small, in contrast to [50]. This is due to the double suppression by the ratio between the weak and new physics scales, \((u/w)^{2}\). A rough numerical illustration of Eq. (60) is sketched after this list.
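The following is a rough numerical sketch of the one-loop formula (60) for a single heavy state \(N_{k}\) and a single flavor entry. All inputs (the VEVs, the coupling \((h^{N}V)_{ak}\), the dark-scalar masses and mixing angles, and the Majorana mass \(M_{k}\)) are illustrative assumptions, and the printed number is only an order-of-magnitude illustration of Eq. (61).

```python
# A rough numerical sketch of the one-loop neutrino mass, Eq. (60), for a
# single heavy state N_k and a single flavor entry.  All inputs are assumed.
import numpy as np

u, w = 0.1, 5.0                        # TeV (electroweak vs new-physics VEVs, assumed)
hNV = 1e-2                             # (h^N V)_{ak} (assumed)
Mk = 1.0                               # Majorana mass of N_k, TeV (assumed)
mR1, mR2 = 1.20, 3.00                  # R_1, R_2 masses, TeV (assumed)
mI1, mI2 = 1.21, 3.01                  # I_1, I_2 masses, TeV (assumed)
thR, thI = 0.30, 0.31                  # theta_R, theta_I (assumed)

loop = lambda m: m**2*np.log(Mk**2/m**2)/(Mk**2 - m**2)

bracket = (np.cos(thR)**2*loop(mR1) - np.cos(thI)**2*loop(mI1)
           + np.sin(thR)**2*loop(mR2) - np.sin(thI)**2*loop(mI2))

m_nu = u**2/(u**2 + w**2)*hNV**2*Mk/(32*np.pi**2)*bracket   # in TeV
print(f"m_nu ~ {m_nu*1e12:.3f} eV")                         # 1 TeV = 1e12 eV
```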
## V Dark matter
Two kinds of dark field contribute to the scotogenic neutrino masses: the dark scalars \(R_{1,2},I_{1,2}\) and the dark fermions \(N_{1,2,\cdots,6}\). In contrast to the 3-3-1-1 model obtained by gauging \(B-L\), the dark scalars in the present model are split in mass, \(m_{R_{1}}\neq m_{I_{1}}\) and \(m_{R_{2}}\neq m_{I_{2}}\). This leads to interesting coannihilation phenomena between \(R_{1}\) and \(I_{1}\), as well as between \(R_{2}\) and \(I_{2}\), that set the relic density if either of them is interpreted as dark matter. Additionally, the dark scalar mass splitting avoids dangerous scattering of \(R_{1}/I_{1}\) or \(R_{2}/I_{2}\) off nuclei in direct detection experiments mediated by \(Z,Z^{\prime},Z^{\prime\prime}\). The phenomenology of the dark scalar candidates is quite analogous to that studied in the 3-3-1 model with inert multiplets [34; 37; 38] and will be skipped. In what follows we assume that the dark fermions contain the dark matter; namely, the dark matter candidate is assigned to be \(N_{1}\), whose mass is smaller than those of the other \(N\)'s, the dark scalars, and the dark vectors. Therefore, this \(N_{1}\) is absolutely stabilized by dark parity conservation.
A distinct feature between the 3-3-1-1 model by gauging \(B-L\) and the 3-3-1-1 model by gauging dark charge is that \(N_{1}\) in the former has \(B-L=0\), while \(N_{1}\) in the latter has
\(D=1\neq 0\). Therefore, in the present model \(N_{1}=U_{a1}^{*}N_{aL}^{c}+V_{a1}^{*}N_{aR}\) has both (left and right) chiral couplings to \(Z^{\prime},Z^{\prime\prime}\), such as
\[{\cal L} \supset -\left[\left(\frac{gc_{W}c_{\theta}}{\sqrt{3-4s_{W}^{2}}}+\frac{g _{G}s_{\theta}}{3}\right)U_{a1}^{*}U_{a1}-g_{G}s_{\theta}V_{a1}^{*}V_{a1} \right]\bar{N}_{1}\gamma^{\mu}N_{1}Z^{\prime}_{\mu} \tag{62}\] \[-\left[\left(\frac{gc_{W}s_{\theta}}{\sqrt{3-4s_{W}^{2}}}-\frac{g _{G}c_{\theta}}{3}\right)U_{a1}^{*}U_{a1}+g_{G}c_{\theta}V_{a1}^{*}V_{a1} \right]\bar{N}_{1}\gamma^{\mu}N_{1}Z^{\prime\prime}_{\mu},\]
where the \(V_{a1}\) terms (i.e. those of \(N_{aR}\)) exist only in the present model, which sets the neutrino mass above. Specifically, we will examine the effect of \(N_{aR}\) by assuming \(||V_{a1}||\gg||U_{a1}||\), i.e. the dark matter \(N_{1}\simeq V_{a1}^{*}N_{aR}\) is mostly right-handed. Combined with the unitarity condition, we have \(V_{a1}^{*}V_{a1}=1-U_{a1}^{*}U_{a1}\simeq 1\), while \(U_{a1}^{*}U_{a1}\simeq 0\). Eq. (62) becomes
\[{\cal L}\supset g_{G}s_{\theta}\bar{N}_{1}\gamma^{\mu}N_{1}Z^{\prime}_{\mu}-g_ {G}c_{\theta}\bar{N}_{1}\gamma^{\mu}N_{1}Z^{\prime\prime}_{\mu}. \tag{63}\]
In the early universe, \(N_{1}\) annihilates to the usual fields via the \(Z^{\prime},Z^{\prime\prime}\) portals as in Fig. 2, which sets the relic density. Here the \(Z^{\prime},Z^{\prime\prime}\) couplings with the usual fermions (\(f=\nu,e,u,d\)) can be found in Tab. 2. It is stressed that there are no \(t\)-channel annihilations exchanged by the \(X,Y\) dark vectors, in contrast to [41]. Additionally, the Higgs portal interactions of \(N_{1}\) with normal matter are small and suppressed.
The dark matter annihilation cross-section is computed as
\[\langle\sigma v\rangle_{N_{1}}=\frac{g^{4}m_{N_{1}}^{2}}{16\pi c_{W}^{4}}\sum _{f,x,y}\frac{g_{V}^{x}(N_{1})g_{V}^{y}(N_{1})N_{C}(f)[g_{V}^{x}(f)g_{V}^{y}(f )+g_{A}^{x}(f)g_{A}^{y}(f)]}{(4m_{N_{1}}^{2}-m_{x}^{2})(4m_{N_{1}}^{2}-m_{y}^{ 2})}, \tag{64}\]
where \(x,y=Z^{\prime},Z^{\prime\prime}\), \(N_{C}(f)\) refers to the color number of \(f\), and \(g_{V}^{Z^{\prime}}(N_{1})=-s_{\theta}c_{W}t_{G}\) and \(g_{V}^{Z^{\prime\prime}}(N_{1})=c_{\theta}c_{W}t_{G}\) are given in the mass basis of \(N\), as mentioned. Further, the dark matter relic density can be approximated as \(\Omega_{N_{1}}h^{2}\simeq 0.1\) pb\(/\langle\sigma v\rangle_{N_{1}}\), which should reproduce the experimental value \(\simeq 0.12\) [64]. A rough numerical sketch of this estimate is given below.
Figure 2: Fermion dark matter annihilation to normal matter.
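The sketch below is a rough numerical implementation of Eq. (64) and of the estimate \(\Omega_{N_{1}}h^{2}\simeq 0.1\) pb\(/\langle\sigma v\rangle_{N_{1}}\) at the \((w,\Lambda)=(5,4.5)\) TeV benchmark. Only annihilation into the ordinary fermions of Tab. 2 is included, fermion masses are neglected, and the explicit family counting is an assumption of the sketch.

```python
# A rough numerical sketch of Eq. (64) and of Omega h^2 ~ 0.1 pb / <sigma v>,
# at the (w, Lambda) = (5, 4.5) TeV benchmark, using the Tab. 2 couplings.
import numpy as np

g, sW2, tG = 0.652, 0.231, 1.0
cW2 = 1 - sW2; cW = np.sqrt(cW2); c2W = 1 - 2*sW2
r = np.sqrt(3 - 4*sW2); tX = np.sqrt(3*sW2/(3 - 4*sW2))

w, Lam = 5.0e3, 4.5e3                                          # GeV
a = 4*tG**2*(w**2 + 9*Lam**2); b = (3 + tX**2)*w**2
theta = 0.5*np.arctan(4*np.sqrt(3 + tX**2)*tG*w**2/(a - b))    # Eq. (55)
rad = np.sqrt((a - b)**2 + 16*(3 + tX**2)*tG**2*w**4)
mZp, mZpp = np.sqrt(g**2/18*(a+b-rad)), np.sqrt(g**2/18*(a+b+rad))  # Eq. (56)
cth, sth = np.cos(theta), np.sin(theta)

def coup(c, s):
    """(g_V, g_A, N_C) per Tab. 2; (c, s) = (c_th, s_th) for Z', (s_th, -c_th) for Z''."""
    k = s*cW*tG/3
    return {'nu':  ( c*c2W/(2*r) - k,        c*c2W/(2*r) - k,  1),
            'e':   ( c*(1-4*sW2)/(2*r) - k,  c/(2*r) - k,      1),
            'u12': (-c*(3-8*sW2)/(6*r) + k, -c/(2*r) + k,      3),
            'u3':  ( c*(3+2*sW2)/(6*r) - k,  c*c2W/(2*r) - k,  3),
            'd12': (-c*(3-2*sW2)/(6*r) + k, -c*c2W/(2*r) + k,  3),
            'd3':  ( c*r/6 - k,              c/(2*r) - k,      3)}

families = {'nu': 3, 'e': 3, 'u12': 2, 'u3': 1, 'd12': 2, 'd3': 1}  # assumed counting
portals = [(-sth*cW*tG, mZp, coup(cth, sth)),        # Z' :  g_V(N_1) = -s_th c_W t_G
           ( cth*cW*tG, mZpp, coup(sth, -cth))]      # Z'':  g_V(N_1) =  c_th c_W t_G

def sigmav(mN):                                      # Eq. (64), in GeV^-2
    tot = 0.0
    for gx, mx, cx in portals:
        for gy, my, cy in portals:
            for f in families:
                gVx, gAx, NC = cx[f]; gVy, gAy, _ = cy[f]
                tot += families[f]*gx*gy*NC*(gVx*gVy + gAx*gAy) \
                       / ((4*mN**2 - mx**2)*(4*mN**2 - my**2))
    return g**4*mN**2/(16*np.pi*cW2**2)*tot

for mN in (800., 900., 950., 1000.):                 # GeV
    sv_pb = sigmav(mN)*0.3894e9                      # 1 GeV^-2 = 0.3894e9 pb
    print(f"m_N1 = {mN:5.0f} GeV: <sigma v> = {sv_pb:8.3f} pb, Omega h^2 ~ {0.1/sv_pb:6.3f}")
```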
Because \(N_{1}\) is a Majorana particle, it scatters off quarks in direct detection experiments only through the spin-dependent (SD) effective interaction exchanged by \(Z^{\prime},Z^{\prime\prime}\), analogous to the diagram in Fig. 2 for \(f=q\), namely
\[\mathcal{L}_{\rm eff}\supset\frac{g^{2}}{4c_{W}^{2}}\sum_{q,x}\frac{g_{A}^{x}(N _{1})g_{A}^{x}(q)}{m_{x}^{2}}(\bar{N}_{1}\gamma^{\mu}\gamma_{5}N_{1})(\bar{q} \gamma_{\mu}\gamma_{5}q), \tag{65}\]
where \(g_{A}^{x}(N_{1})=-g_{V}^{x}(N_{1})\) for \(x=Z^{\prime},Z^{\prime\prime}\). The SD cross-section determining scattering of \(N_{1}\) with a target neutron (\(n\)) is given by
\[\sigma_{N_{1}}^{\rm SD}=\frac{3g^{4}m_{n}^{2}}{4\pi c_{W}^{4}}\sum_{x,y}\frac{ g_{A}^{x}(N_{1})g_{A}^{y}(N_{1})[g_{A}^{x}(u)\lambda_{u}^{n}+g_{A}^{x}(d)( \lambda_{d}^{n}+\lambda_{s}^{n})][g_{A}^{y}(u)\lambda_{u}^{n}+g_{A}^{y}(d)( \lambda_{d}^{n}+\lambda_{s}^{n})]}{m_{x}^{2}m_{y}^{2}} \tag{66}\]
where \(x,y=Z^{\prime},Z^{\prime\prime}\), and the fractional quark-spin coefficients are \(\lambda_{u}^{n}=-0.42\), \(\lambda_{d}^{n}=0.85\), and \(\lambda_{s}^{n}=-0.88\) for the neutron [65]. Notice that dark matter scattering off protons leads to a similar bound, which we do not consider further.
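A rough numerical sketch of Eq. (66) at the \((w,\Lambda)=(5,4.5)\) TeV benchmark is given below, using the first-family quark couplings of Tab. 2, \(g_{A}^{x}(N_{1})=-g_{V}^{x}(N_{1})\), and the neutron spin fractions quoted above; the conversion from GeV\(^{-2}\) to cm\(^{2}\) uses \((\hbar c)^{2}\).

```python
# A rough numerical sketch of the spin-dependent cross-section, Eq. (66).
import numpy as np

g, sW2, tG = 0.652, 0.231, 1.0
cW2 = 1 - sW2; cW = np.sqrt(cW2); c2W = 1 - 2*sW2
r = np.sqrt(3 - 4*sW2); tX = np.sqrt(3*sW2/(3 - 4*sW2))
m_n = 0.939                                        # neutron mass, GeV
lam_u, lam_d, lam_s = -0.42, 0.85, -0.88

w, Lam = 5.0e3, 4.5e3                              # GeV benchmark
a = 4*tG**2*(w**2 + 9*Lam**2); b = (3 + tX**2)*w**2
theta = 0.5*np.arctan(4*np.sqrt(3 + tX**2)*tG*w**2/(a - b))
rad = np.sqrt((a - b)**2 + 16*(3 + tX**2)*tG**2*w**4)
masses = [np.sqrt(g**2/18*(a+b-rad)), np.sqrt(g**2/18*(a+b+rad))]   # m_Z', m_Z''
cth, sth = np.cos(theta), np.sin(theta)

def gA_ud(c, s):                                   # first-family axial couplings, Tab. 2
    k = s*cW*tG/3
    return {'u': -c/(2*r) + k, 'd': -c*c2W/(2*r) + k}

portals = [(sth*cW*tG,  masses[0], gA_ud(cth, sth)),    # Z' : g_A(N_1) = -g_V(N_1)
           (-cth*cW*tG, masses[1], gA_ud(sth, -cth))]   # Z''

total = 0.0
for gx, mx, qx in portals:
    for gy, my, qy in portals:
        fx = qx['u']*lam_u + qx['d']*(lam_d + lam_s)
        fy = qy['u']*lam_u + qy['d']*(lam_d + lam_s)
        total += gx*gy*fx*fy/(mx**2*my**2)

sigma = 3*g**4*m_n**2/(4*np.pi*cW2**2)*total       # GeV^-2
print(f"sigma_SD ~ {sigma*0.3894e-27:.2e} cm^2")   # 1 GeV^-2 = 0.3894e-27 cm^2
```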
## VI Constraining
As the neutrino mass is governed by \(h^{N}\) and \(f_{1,2},\lambda_{17}\), all independent of the gauge portal, the dark matter observables can be constrained independently of the neutrino observables.3 The only supplemental conditions relevant to dark matter are the mass regime for WIMP stability, the collider limits on the \(Z^{\prime},Z^{\prime\prime}\) masses, and FCNCs, studied in that order.
Footnote 3: Note that the \(N_{1}\) mass that enters the dark matter observables can be induced by a \(h^{\prime N}\) coupling. The other \(h^{\prime N}\) and \(h^{N}\) couplings are sufficient to recover the neutrino data.
### WIMP stability
It is easy to adjust relevant Yukawa couplings and scalar potential parameters so that \(N_{1}\) is lighter than other dark fermions and dark scalars. But for dark vectors, we must impose
\[m_{N_{1}}<m_{X,Y}\simeq\frac{g}{2}w, \tag{67}\]
where \(m_{N_{1}}=M_{1}\) is the mass of \(N_{1}\) as mentioned and the last approximation is given at the leading order \(u,v\ll w\).
### Collider bound
In our model, \(Z^{\prime}\) and \(Z^{\prime\prime}\) couple to leptons and quarks with comparable strength. Hence, the LEPII [66] and LHC [67] experiments would place similar bounds on these gauge bosons, analogous to a sequential \(Z^{\prime}\) boson that has the same couplings as the standard model \(Z\). That said, we consider only the LEPII bound on the process \(e^{+}e^{-}\to f\bar{f}\) exchanged by \(Z^{\prime},Z^{\prime\prime}\), given by the effective interaction,
\[{\cal L}_{\rm eff}\supset\sum_{x}\frac{g^{2}}{c_{W}^{2}m_{x}^{2}}[\bar{e} \gamma^{\mu}(a_{L}^{x}(e)P_{L}+a_{R}^{x}(e)P_{R})e][\bar{f}\gamma_{\mu}(a_{L}^ {x}(f)P_{L}+a_{R}^{x}(f)P_{R})f], \tag{68}\]
for \(x=Z^{\prime},Z^{\prime\prime}\) and \(f=\mu,\tau\), where the chiral couplings \(a_{L,R}^{x}(f)=\frac{1}{2}[g_{V}^{x}(f)\pm g_{A}^{x}(f)]\) can be extracted from Tab. 2, particularly
\[a_{L}^{Z^{\prime}}(e)=\frac{c_{\theta}c_{2W}}{2\sqrt{3-4s_{W}^{2}}}-\frac{1}{ 3}s_{\theta}c_{W}t_{G},\hskip 14.226378pta_{L}^{Z^{\prime\prime}}(e)=a_{L}^{Z^ {\prime}}(e)|_{c_{\theta}\to s_{\theta},s_{\theta}\to-c_{\theta}}. \tag{69}\]
Since leptons possess universal couplings, we further write
\[{\cal L}_{\rm eff}\supset\sum_{x}\frac{g^{2}[a_{L}^{x}(e)]^{2}}{c_{W}^{2}m_{x }^{2}}(\bar{e}\gamma^{\mu}P_{L}e)(\bar{f}\gamma_{\mu}P_{L}f)+(LR)+(RL)+(RR), \tag{70}\]
where the last three terms (\(\cdots\)) differ from the first only in their chiral structure. LEPII has studied such chiral interactions, typically implying
\[\sum_{x}\frac{g^{2}[a_{L}^{x}(e)]^{2}}{c_{W}^{2}m_{x}^{2}}=\frac{g^{2}}{c_{W} ^{2}}\left\{\frac{[a_{L}^{Z^{\prime}}(e)]^{2}}{m_{Z^{\prime}}^{2}}+\frac{[a_{ L}^{Z^{\prime\prime}}(e)]^{2}}{m_{Z^{\prime\prime}}^{2}}\right\}<\frac{1}{(6~{}{\rm TeV })^{2}}. \tag{71}\]
### FCNC
Since quark families transform differently under the gauge symmetry, there must be FCNCs coupled to \(Z^{\prime},Z^{\prime\prime}\). They arise from the gauge interaction,
\[{\cal L}\supset-g\bar{F}\gamma^{\mu}[T_{3}A_{3\mu}+T_{8}A_{8\mu}+t_{X}(Q-T_{3 }+T_{8}/\sqrt{3})B_{\mu}+t_{G}(D+2T_{8}/\sqrt{3})C_{\mu}]F, \tag{72}\]
where we have substituted \(X,G\) from (4). Note that leptons and exotic quarks do not change flavor, while the couplings of \(Q\), \(T_{3}\), and \(D\) always conserve flavor, due to dark parity conservation. What remains are only the usual quarks coupled to \(T_{8}\),
\[{\cal L} \supset -g\bar{q}_{L}\gamma^{\mu}T_{q8}q_{L}(A_{8\mu}+t_{X}/\sqrt{3} B_{\mu}+2t_{G}/\sqrt{3}C_{\mu}) \tag{73}\] \[\supset \bar{q}_{iL}^{\prime}\gamma^{\mu}q_{jL}^{\prime}(V_{qL}^{*})_{3 i}(V_{qL})_{3j}(g^{\prime}Z^{\prime}+g^{\prime\prime}Z^{\prime\prime}),\]
which changes flavor for \(i\neq j\) (\(i,j=1,2,3\)). Above, \(q\) denotes either \(u=(u_{1},u_{2},u_{3})\) or \(d=(d_{1},d_{2},d_{3})\), whose \(T_{8}\) value is \(T_{q8}=\frac{1}{2\sqrt{3}}\text{diag}(-1,-1,1)\). Additionally, \(q^{\prime}\) denotes the mass eigenstates, either \(u^{\prime}=(u,c,t)\) or \(d^{\prime}=(d,s,b)\), related to the gauge states by \(q_{L,R}=V_{qL,R}q^{\prime}_{L,R}\), which diagonalizes the relevant quark mass matrices. The \(g^{\prime},g^{\prime\prime}\) couplings are
\[g^{\prime}=2g_{G}s_{\theta}-\frac{gc_{\theta}c_{W}}{\sqrt{3-4s_{W}^{2}}},\ \ \ \ g^{ \prime\prime}=g^{\prime}(c_{\theta}\to s_{\theta},s_{\theta}\to-c_{\theta}). \tag{74}\]
Integrating \(Z^{\prime},Z^{\prime\prime}\) out, we obtain an effective Lagrangian describing meson mixing,
\[\mathcal{L}_{\text{eff}}\supset(\bar{q}^{\prime}_{iL}\gamma^{\mu}q^{\prime}_{ jL})^{2}[(V^{*}_{qL})_{3i}(V_{qL})_{3j}]^{2}\left(\frac{g^{\prime 2}}{m^{2}_{Z^{ \prime}}}+\frac{g^{\prime\prime 2}}{m^{2}_{Z^{\prime\prime}}}\right). \tag{75}\]
Aligning the quark mixing with the down-quark sector, i.e. \(V_{uL}=1\), implies \(V_{dL}=V_{\text{CKM}}\). Since the neutral meson mixings \(K^{0}\)-\(\bar{K}^{0}\) and \(B^{0}_{d,s}\)-\(\bar{B}^{0}_{d,s}\) give quite similar bounds, we consider only the last one, \(B^{0}_{s}\)-\(\bar{B}^{0}_{s}\) mixing, implying [64]
\[[(V^{*}_{dL})_{32}(V_{dL})_{33}]^{2}\left(\frac{g^{\prime 2}}{m^{2}_{Z^{ \prime}}}+\frac{g^{\prime\prime 2}}{m^{2}_{Z^{\prime\prime}}}\right)<\frac{1}{(100 \text{ TeV})^{2}}. \tag{76}\]
The CKM factor is \((V^{*}_{dL})_{32}(V_{dL})_{33}=4\times 10^{-2}\), leading to
\[\frac{g^{\prime 2}}{m^{2}_{Z^{\prime}}}+\frac{g^{\prime\prime 2}}{m^{2}_{Z^{ \prime\prime}}}<\frac{1}{(4\text{ TeV})^{2}}. \tag{77}\]
### Numerical estimation
We take \(s_{W}^{2}=0.231\), \(\alpha=1/128\), and \(t_{G}=1\), hence \(t_{X}=\sqrt{3}s_{W}/\sqrt{3-4s_{W}^{2}}\simeq 0.577\) and \(g_{G}=g=0.652\). It is clear from (55) and (56) that the \(\mathcal{Z}^{\prime}\)-\(C\) mixing angle \(\theta\) and the \(Z^{\prime},Z^{\prime\prime}\) masses \(m_{Z^{\prime},Z^{\prime\prime}}\) depend only on the two new physics scales, \(w,\Lambda\). Hence, the constraints (71) and (77) each directly yield a bound on \((w,\Lambda)\), as depicted in Fig. 3. Such a bound depends only weakly on \(t_{G}\), i.e. the strength of the dark coupling \(g_{G}\), if it is varied. This is because the ordinary leptons and quarks have zero dark charge, so the effects come only from small mixings. The FCNCs yield a \((w,\Lambda)\) bound stronger than the collider one, which we take into account for the neutrino mass and dark matter analyses. A small numerical check of these constraints is sketched below.
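The following minimal sketch evaluates the constraints (71) and (77) at the limiting and benchmark points discussed below, using the spectrum of Eqs. (55)-(56) and the couplings (69) and (74); all masses are in TeV, so the bounds read \(1/6^{2}\) and \(1/4^{2}\) TeV\(^{-2}\). The point \((w,\Lambda)=(4,50)\) TeV illustrates the \(\Lambda\gg w\) limit and is an assumption of the sketch.

```python
# A small numerical check of the LEPII bound (71) and the FCNC bound (77).
import numpy as np

g, sW2, tG, gG = 0.652, 0.231, 1.0, 0.652
cW2 = 1 - sW2; cW = np.sqrt(cW2); c2W = 1 - 2*sW2
r = np.sqrt(3 - 4*sW2); tX = np.sqrt(3*sW2/(3 - 4*sW2))

def spectrum(w, Lam):                        # Eqs. (55)-(56)
    a = 4*tG**2*(w**2 + 9*Lam**2); b = (3 + tX**2)*w**2
    th = 0.5*np.arctan(4*np.sqrt(3 + tX**2)*tG*w**2/(a - b))
    rad = np.sqrt((a - b)**2 + 16*(3 + tX**2)*tG**2*w**4)
    return th, np.sqrt(g**2/18*(a + b - rad)), np.sqrt(g**2/18*(a + b + rad))

aL_e = lambda c, s: c*c2W/(2*r) - s*cW*tG/3  # Eq. (69)
gq   = lambda c, s: 2*gG*s - g*c*cW/r        # Eq. (74)

for w, Lam in [(4.0, 50.0), (5.0, 4.5), (9.0, 3.0)]:       # TeV
    th, mZp, mZpp = spectrum(w, Lam)
    c, s = np.cos(th), np.sin(th)
    lep  = g**2/cW2*(aL_e(c, s)**2/mZp**2 + aL_e(s, -c)**2/mZpp**2)
    fcnc = gq(c, s)**2/mZp**2 + gq(s, -c)**2/mZpp**2
    print(f"(w,Lam)=({w},{Lam}): mZ'={mZp:.2f} TeV, mZ''={mZpp:.2f} TeV; "
          f"LEPII {lep:.4f} (<{1/36:.4f}), FCNC {fcnc:.4f} (<{1/16:.4f})")
```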
The FCNC bound under consideration yields that
1. In the limit \(\Lambda\to\infty\) (or \(\Lambda\gg w\)), we obtain the bound \(w\geq 4\) TeV. In this case, \(Z^{\prime\prime}\) is superheavy and decoupled from the 3-3-1 particle spectrum, while the \(Z^{\prime}\) mass at the bound is \(m_{Z^{\prime}}=1.59\) TeV.
2. In the limit \(w\to\infty\) (or \(w\gg\Lambda\)), we obtain the bound \(\Lambda\geq 2.68\) TeV. In this case, \(Z^{\prime\prime}\) is superheavy and decoupled from the standard model particle spectrum with \(U(1)_{D}\) symmetry (see below), while the \(Z^{\prime}\) mass at the bound is \(m_{Z^{\prime}}=2.35\) TeV.
3. In the case of \(w\sim\Lambda\), both \(Z^{\prime},Z^{\prime\prime}\) effectively govern the new physics. We fix benchmark values to be \((w,\Lambda)=(5,4.5)\) or \((9,3)\), which translate to \((m_{Z^{\prime}},m_{Z^{\prime\prime}})=(1.85,6.3)\) or \((2.26,6.18)\) respectively, where all values are in TeV.
Using the parameter values and the last case given above, we plot the dark matter relic density (cf. Sec. V) as a function of the dark matter mass in Fig. 4. It is stressed that the \(Z^{\prime},Z^{\prime\prime}\) mass resonances (left and right funnels, respectively) are necessary to obtain the correct relic density, \(\Omega_{N_{1}}h^{2}\leq 0.12\). For the case \((w,\Lambda)=(5,4.5)\) TeV, the \(Z^{\prime}\) resonance \(m_{N_{1}}=m_{Z^{\prime}}/2\) is at work, yielding \(m_{N_{1}}=0.89\)-\(0.96\) TeV for the correct abundance, whereas the \(Z^{\prime\prime}\) resonance is excluded by the WIMP stability requirement, namely \(m_{N_{1}}<1.63\) TeV. However, for the case \((w,\Lambda)=(9,3)\) TeV, both the resonance \(m_{N_{1}}=m_{Z^{\prime}}/2\) of \(Z^{\prime}\) and the resonance \(m_{N_{1}}=m_{Z^{\prime\prime}}/2\) of \(Z^{\prime\prime}\) take place. They yield \(m_{N_{1}}=1.06\)-\(1.21\) TeV and \(m_{N_{1}}=2.68\)-\(2.93\) TeV, respectively, for the correct abundance. Note that the relic density is satisfied only on part of the second (\(Z^{\prime\prime}\)) resonance, since \(m_{N_{1}}<2.93\) TeV is required for WIMP stability.
Figure 3: New physics scales \((w,\Lambda)\) bounded by LEPII and FCNC.
Using the above limits on the new physics scales \(w,\Lambda\) and the input values of the \(s_{W},\alpha,g_{X},g_{G}\) parameters, we plot contours of the SD cross-section of dark matter scattering off nuclei in direct detection (cf. Sec. V) as a function of \((w,\Lambda)\) in Fig. 5. It is clear that the SD cross-section is more sensitive to \(\Lambda\) than to \(w\). Additionally, for the viable regime \(w\geq 4\) TeV and \(\Lambda\geq 2.68\) TeV, this model predicts the dark matter signal strength in direct detection to be \(\sigma_{N_{1}}^{\rm SD}<10^{-46}\) cm\({}^{2}\), much below the current bound of order \(10^{-42}\) cm\({}^{2}\) for a typical WIMP with mass above 1 GeV [68].
Figure 4: Dark matter relic density plotted as function of its mass according to two cases: \(w=5\) TeV and \(\Lambda=4.5\) TeV (upper panel); \(w=9\) TeV and \(\Lambda=3\) TeV (lower panel).
## VII Realization of the dark charge
In this section, we consider an alternative scenario that reveals the main role of the dark charge by assuming the scalar triplet \(\chi\) to be superheavy, possessing a VEV \(w\gg\Lambda\), and of course \(\Lambda\gg u,v\).4 Hence, the scheme of symmetry breaking is now
Footnote 4: This case presents two new phases of the new physics similar to a matter discussed in [69].
\[SU(3)_{C}\otimes SU(3)_{L}\otimes U(1)_{X}\otimes U(1)_{G}\] \[\downarrow w\] \[SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}\otimes U(1)_{D}\] \[\downarrow\Lambda\] \[SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}\otimes P_{D}\] \[\downarrow u,v\] \[SU(3)_{C}\otimes U(1)_{Q}\otimes P_{D}\]
Indeed, when \(\chi\) develops a VEV, \(\langle\chi\rangle=(0,0,w/\sqrt{2})\), it breaks all new charges \(T_{4,5,6,7,8}\), \(X\), and \(G\) but conserves \(T_{1,2,3}\), \(Y=-1/\sqrt{3}T_{8}+X\), and \(D=-2/\sqrt{3}T_{8}+G\), besides the color, which match the standard model symmetry and \(U(1)_{D}\), as expected. This breaking by \(\chi\) decomposes every \(SU(3)_{L}\) multiplet into a normal isomultiplet with \(D=0\) and a dark
isomultiplet with \(D\neq 0\)--known as the dark isopartner of the normal isomultiplet--all of which can be seen in Tab. 1. Given that the scale \(w\) is very high, i.e. \(w\gg\Lambda\sim\) TeV, the new physics related to it, such as the dark vectors \(X,Y\) coupled to the broken \(T_{4,5,6,7}\), the \(Z^{\prime\prime}\) coupled to the broken combination of \(T_{8},X,G\), the relevant Goldstone bosons \(G_{X}\), \(G_{Y}\), and \(G_{Z^{\prime\prime}}\) eaten by \(X\), \(Y\), and \(Z^{\prime\prime}\), respectively, and the associated Higgs fields, is all decoupled/integrated out. What is imprinted at the scale \(\Lambda\sim\) TeV is a novel theory \(SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}\otimes U(1)_{D}\), explicitly recognizing the dark charge \(D\) and directly affecting the standard model.
Notice that for \(w\gg\Lambda\), the \(Z^{\prime},Z^{\prime\prime}\) masses are
\[m_{Z^{\prime}}^{2}\simeq\frac{4g_{G}^{2}(3+t_{X}^{2})}{4t_{G}^{2}+3+t_{X}^{2} }\Lambda^{2},\ \ \ \ m_{Z^{\prime\prime}}^{2}\simeq\frac{g^{2}}{9}(4t_{G}^{2}+3+t_{X}^{2})w^{2}, \tag{78}\]
and the \({\cal Z}^{\prime}\)-\(C\) mixing angle is
\[t_{\theta}\simeq\frac{\sqrt{3+t_{X}^{2}}}{2t_{G}}. \tag{79}\]
As mentioned, \(Z^{\prime\prime}\) is decoupled, while \(Z^{\prime}\) now governs the collider and FCNC constraints, which require \(m_{Z^{\prime}}>2.35\) TeV for our choice of \(t_{G}=1\). In this case, \(t_{\theta}\simeq 0.91\), i.e. \(\theta\simeq 42.4^{\rm o}\), which determines the \(Z^{\prime}\) coupling to fermions, namely
\[{\cal L}\supset g_{G}s_{\theta}\sum_{f}\bar{f}\gamma^{\mu}\left(-\frac{2}{3}t _{W}^{2}Y+D\right)fZ^{\prime}_{\mu}, \tag{80}\]
where \(f\) runs over the usual lepton and quark isomultiplets. The presence of the \(Y\) term, resembling a kinetic mixing effect, results from the 3-3-1-1 breaking. That said, even if the standard model fields carry no dark charge \(D\), they may still interact with the dark boson \(Z^{\prime}\) through the scotoelectroweak unification governed by the hypercharge \(Y\). This effect is one order smaller than the dark force, namely \(2/3t_{W}^{2}\sim 0.1\).
Although \(\chi\) is superheavy, it can induce appropriate neutrino masses by the same mechanism and result discussed above. However, the new physics contributions in (60) must be re-estimated: the suppression \((u/w)^{2}=(u/\Lambda)^{2}\times(\Lambda/w)^{2}\sim 10^{-3}\times 10^{-3}=10^{-6}\), the loop factor \((1/32\pi^{2})\sim 10^{-3}\) as before, the \(N\) mass matrix being pseudo-Dirac such that \((h^{N}V)^{2}M\sim(h^{N}\Lambda/w)^{2}\times w=10^{-3}(h^{N})^{2}w\), and the scalar mass splitting \(\Delta m^{2}/m^{2}\sim(f_{1},f_{2}\lambda_{17})\Lambda/w^{2}\). Hence, the neutrino masses are of the order of eV,
\[m_{\nu}\sim(h^{N})^{2}\times\left(\frac{f_{1},f_{2}\lambda_{17}}{w}\right) \times\left(\frac{\Lambda}{\rm TeV}\right)\times{\rm eV}, \tag{81}\]
given that \(h^{N}\sim 1\), \(\Lambda\sim\) TeV, and \(f_{1,2}\sim w\), where the soft term (\(f_{1,2}\)) would amount to the scale of the 3-3-1-1 breaking.
After decoupling at the large scale \(w\), the intermediate TeV phase with \(U(1)_{D}\) symmetry can contain some surviving dark fields, such as \(N_{1}\), \(\xi\), and \(\phi\), by choosing appropriate Yukawa couplings and scalar potential parameters. The dark matter phenomenology is similar to that of the model above, but it is now governed only by the \(Z^{\prime}\) boson, coupled to normal matter via (80). For the dark fermion, the \(Z^{\prime}\) mass resonance sets its relic density. Alternatively, for the dark scalar, annihilation proceeds through the new Higgs \(\phi\) portal into the standard model Higgs fields, since the dark scalar mass splitting is large in this case.
## VIII Conclusion
The idea of a dark photon associated with a conserved dark (abelian) charge is interesting, as it provides potential solutions to a number of current issues [70]. Just as the electric charge is a result of electroweak breaking, this work has shown that a dark charge may result from a more fundamental theory, called scotoelectroweak. Moreover, the content of dark fields and the way they interact with normal matter are completely determined by the 3-3-1-1 symmetry of the theory.
We have examined the pattern of the 3-3-1-1 symmetry breaking, obtaining a residual dark parity that both stabilizes dark matter candidates and governs scotogenic neutrino mass generation. The small neutrino masses are suppressed by the loop factor and by the ratio of the electroweak to the new physics scales, without requiring the soft terms to be too small. The fermion dark matter abundance is generically set by the \(Z^{\prime},Z^{\prime\prime}\) mass resonances. Even in a scenario where the 3-3-1-1 breaking scale is very high, the light boson \(Z^{\prime}\) associated with the dark charge still plays a role due to its coupling to normal matter via the hypercharge.
We have investigated the model under the constraints from LEPII and FCNCs. Even a stronger bound would be easily evaded by raising \(w,\Lambda\) within the parameter space shown in the figures. In all cases, the signal for fermion dark matter in direct detection is very small. Embedding the 3-3-1-1 symmetry in a GUT may be worth exploring, as the dark charge and its field content may contribute to successful gauge coupling unification.
|
2302.14792 | Continuous Stability Conditions of Type A and Measured Laminations of
the Hyperbolic Plane | We introduce stability conditions (in the sense of King) for representable
modules of continuous quivers of type A along with a special criterion called
the four point condition. The stability conditions are defined using a
generalization of delta functions, called half-delta functions. We show that
for a continuous quiver of type A with finitely many sinks and sources, the
stability conditions satisfying the four point condition are in bijection with
measured laminations of the hyperbolic plane. Along the way, we extend an
earlier result by the first author and Todorov regarding continuous cluster
categories for linear continuous quivers of type A and laminations of the
hyperbolic plane to all continuous quivers of type A with finitely many sinks
and sources. We also give a formula for the continuous cluster character. | Kiyoshi Igusa, Job Daisie Rock | 2023-02-28T17:44:32Z | http://arxiv.org/abs/2302.14792v1 | Continuous stability conditions of type \(\mathbb{A}\) and measured laminations of the hyperbolic plane
###### Abstract.
We introduce stability conditions (in the sense of King) for representable modules of continuous quivers of type \(\mathbb{A}\) along with a special criterion called the four point condition. The stability conditions are defined using a generalization of \(\delta\) functions, called half-\(\delta\) functions. We show that for a continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources, the stability conditions satisfying the four point condition are in bijection with measured laminations of the hyperbolic plane. Along the way, we extend an earlier result by the first author and Todorov regarding continuous cluster categories for linear continuous quivers of type \(\mathbb{A}\) and laminations of the hyperbolic plane to all continuous quivers of type \(\mathbb{A}\) with finitely many sinks and sources. We also give a formula for the continuous cluster character.
_Dedicated to Idun Reiten for her kind support and encouragement_
###### Contents
* 1 The Finite Case
* 2 Continuous stability conditions
* 3 Continuous tilting
* 4 Measured Laminations and Stability Conditions
## Introduction
### History
The type of stability conditions used in the present paper was introduced by King in order to study the moduli space of finitely generated representations of a finite-dimensional algebra [13].
There is recent work connecting stability conditions to wall and chamber structures for finite-dimensional algebras [6] and real Grothendieck groups [2]. There is also work studying the linearity of stability conditions for finite-dimensional algebras [9].
In 2015, the first author and Todorov introduced the continuous cluster category for type \(\mathbb{A}\)[12]. More recently, both authors and Todorov introduced continuous quivers of type \(\mathbb{A}\) and a corresponding weak cluster category [10, 11]. The second author also generalized the Auslander-Reiten quiver of type \(\mathbb{A}_{n}\) to the Auslander-Reiten space for continuous type \(\mathbb{A}\) and a geometric model to study these weak cluster categories [16, 17].
#### Contributions and Organization
In the present paper, we generalize stability conditions, in the sense of King, to continuous quivers of type \(\mathbb{A}\). In Section 1 we recall facts about stability conditions and reformulate them for our purposes. In Section 2 we recall continuous quivers of type \(\mathbb{A}\), representable modules, and then introduce our continuous stability conditions.
At the beginning of Section 2.2 we define a half-\(\delta\) function, which can be thought of as a Dirac \(\delta\) function that only exists on the "minus side" or "plus side" of a point. We use the half-\(\delta\) functions to define useful functions (Definition 2.8), which are equivalent to functions with bounded variation but better suited to our purposes. Then we define a stability condition as an equivalence class of pairs of useful functions with particular properties, modulo shifting the pair of functions up and down by a constant (Definitions 2.14 and 2.16).
We use some auxiliary constructions to define a semistable module (Definition 2.19). Then we recall \(\mathbf{N}_{\pi}\)-compatibility (Definition 2.23), which can be thought of as the continuous version of rigidity in the present paper. We link stability conditions to maximally \(\mathbf{N}_{\pi}\)-compatible sets using a criterion called the four point condition (Definition 2.21). By \(\mathcal{S}_{\mathrm{fpc}}(Q)\) we denote the set of stability conditions of a continuous quiver \(Q\) of type \(\mathbb{A}\) that satisfy the four point condition.
**Theorem A** (Theorem 2.25).: _Let \(Q\) be a continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources and let \(\sigma\in\mathcal{S}(Q)\). Then the following are equivalent._
* \(\sigma\in\mathcal{S}_{\mathrm{fpc}}(Q)\)_._
* _The set of_ \(\sigma\)_-semistable indecomposables is maximally_ \(\mathbf{N}_{\pi}\)_-compatible._
In Section 3 we define a continuous version of tilting. That is, for a continuous quiver \(Q\) of type \(\mathbb{A}\) we define a new continuous quiver \(Q^{\prime}\) of type \(\mathbb{A}\) together with an induced map on the set of indecomposable representable modules. This is not to be confused with reflection functors for continuous quivers of type \(\mathbb{A}\), introduced by Liu and Zhao [14]. For each stability condition \(\sigma\) of \(Q\) that satisfies the four point condition, we define a new stability condition \(\sigma^{\prime}\) of \(Q^{\prime}\) (Definition 3.12). We show that continuous tilting induces a bijection on indecomposable representable modules, preserves \(\mathbf{N}_{\pi}\)-compatibility, and induces a bijection on stability conditions for \(Q\) and \(Q^{\prime}\) that satisfy the four point condition. Denote by \(\mathrm{mod}^{\mathrm{r}}(Q)\) the category of representable modules over \(Q\).
**Theorem B** (Theorems 3.2 and 3.17).: _Let \(Q\) and \(Q^{\prime}\) be continuous quivers of type \(\mathbb{A}\) with finitely many sinks and sources. Continuous tilting yields a triple of bijections: \(\phi\), \(\Phi\), and \(\Psi\)._
* _A bijection_ \(\phi:\mathrm{Ind}(\mathrm{mod}^{\mathrm{r}}(Q))\to\mathrm{Ind}(\mathrm{mod}^{ \mathrm{r}}(Q^{\prime}))\)_._
* _A bijection_ \(\Phi\) _from maximal_ \(\mathbf{N}_{\pi}\)_-compatible sets of_ \(\mathrm{mod}^{\mathrm{r}}(Q)\) _to maximal_ \(\mathbf{N}_{\pi}\)_-compatible sets of_ \(\mathrm{mod}^{\mathrm{r}}(Q^{\prime})\)_. Furthermore if_ \(\mu:T\to T^{\prime}\) _is a mutation then so is_ \(\Phi(\mu):\Phi T\to\Phi T^{\prime}\) _given by_ \(\phi(M_{I})\mapsto\phi(\mu(M_{I}))\)_._
* _A bijection_ \(\Psi:\mathcal{S}_{\mathrm{fpc}}(Q)\to\mathcal{S}_{\mathrm{fpc}}(Q^{\prime})\) _such that if_ \(T\) _is the set of_ \(\sigma\)_-semistable modules then_ \(\Phi(T)\) _is the set of_ \(\Psi(\sigma)\)_-semistable modules._
In Section 4 we define a measured lamination to be a lamination of the (Poincaré disk model of the) hyperbolic plane together with a particular type of measure on the set of geodesics (Definition 4.1). We denote the Poincaré disk model of the hyperbolic plane by \(\mathfrak{h}^{2}\). Then we recall the correspondence between laminations of \(\mathfrak{h}^{2}\) and maximally \(\mathbf{N}_{\pi}\)-compatible sets of indecomposable representable modules
over the straight descending continuous quiver of type \(\mathbb{A}\), from the first author and Todorov (Theorem 4.4 in the present paper) [12]. We extend this correspondence to stability conditions that satisfy the four point condition and measured laminations (Theorem 4.12). Combining this with Theorems A and B, we have the last theorem.
**Theorem C** (Corollary 4.13).: _Let \(Q\) be a continuous quiver of type \(\mathbb{A}\) and \(\mathcal{L}\) the set of measured laminations of \(\mathfrak{h}^{2}\). There are three bijections: \(\phi\), \(\Phi\), and \(\Psi\)._
* _The bijection_ \(\phi\) _from_ \(\operatorname{Ind}(\operatorname{mod}^{\tau}(Q))\) _to geodesics in_ \(\mathfrak{h}^{2}\)_._
* _The bijection_ \(\Phi\) _from maximally_ \(\mathbf{N}_{\pi}\)_-compatible sets to (unmeasured) laminations of_ \(\mathfrak{h}^{2}\) _such that, for each maximally_ \(\mathbf{N}_{\pi}\)_-compatible set_ \(T\)_,_ \(\phi|_{T}\) _is a bijection from the indecomposable modules in_ \(T\) _to the geodesics in_ \(\Phi(T)\)_._
* _The bijection_ \(\Psi:\mathcal{S}_{\text{fpc}}(Q)\to\mathcal{L}\) _such that if_ \(T\) _is the set of_ \(\sigma\)_-semistable modules then_ \(\Phi(T)\) _is the set of geodesics in_ \(\Psi(\sigma)\)_._
In Section 4.3, we give a formula for a continuous cluster character \(\chi(M_{ab})\). This is a formal expression in formal variables \(x_{t}\), one for every real number \(t\). We verify some cluster mutation formulas, but leave further work for a future paper.
In Section 4.5, we relate continuous tilting to cluster categories of type \(\mathbb{A}_{n}\). In particular, we discuss how a particular equivalence between type \(\mathbb{A}_{n}\) cluster categories is compatible with continuous tilting. We conclude our contributions with an example for type \(\mathbb{A}_{4}\) (Section 4.5.1). Then we briefly describe some directions for further related research.
### Acknowledgements
The authors thank Gordana Todorov for helpful discussions. KI was supported by Simons Foundation Grant #686616. Part of this work was completed while JDR was at the Hausdorff Research Institute for Mathematics (HIM); JDR thanks HIM for their support and hospitality. JDR is supported at Ghent University by BOF grant 01P12621. JDR thanks Aran Tattar and Shijie Zhu for helpful conversations.
## 1. The Finite Case
There is a relation between stability conditions and generic decompositions which will become more apparent in the continuous case. Here we examine the finite case and impose continuous structures onto the discrete functions in order to give a preview of what will happen in the continuous quiver case.
For a finite quiver of type \(\mathbb{A}_{n}\) with vertices \(1,\cdots,n\), we need a piecewise continuous function on the interval \([0,n+1]\) which has discontinuities at the vertices which are sources or sinks. The stability function will be the derivative of this function. It will have Dirac delta functions at the sources and sinks. Since this is a reformulation of well-known results, we will not give proofs. We also review the Caldero-Chapoton cluster character for representations of a quiver of type \(A_{n}\) [7] in order to motivate the continuous case in Section 4.3.
### Semistability condition
Recall that a stability function is a linear map
\[\theta:K_{0}\Lambda=\mathbb{Z}^{n}\to\mathbb{R}.\]
A module \(M\) is \(\theta\)-semistable if \(\theta(\underline{\dim}M)=0\) and \(\theta(\underline{\dim}M^{\prime})\leq 0\) for all submodules \(M^{\prime}\subset M\). We say \(M\) is \(\theta\)-stable if, in addition, \(\theta(\underline{\dim}M^{\prime})<0\) for all \(0\neq M^{\prime}\subsetneq M\). For \(\Lambda\) of type \(\mathbb{A}_{n}\), we denote by \(M_{(a,b]}\) the indecomposable module with support
on the vertices \(a+1,\cdots,b\). For example \(M_{(i-1,i]}\) is the simple module \(S_{i}\). Let \(F:\{0,1,\cdots,n\}\to\mathbb{R}\) be the function
\[F(k)=\sum_{0<i\leq k}\theta(\underline{\dim}S_{i})=\theta(\underline{\dim}M_{(0,k]})\]
Then we have \(\theta(M_{(a,b]})=F(b)-F(a)\).
Thus, for \(M_{(a,b]}\) to be \(\theta\)-semistable we need \(F(a)=F(b)\) and another condition to make \(\theta(\underline{\dim}M^{\prime})\leq 0\). For example, take the quiver of type \(\mathbb{A}_{n}\) having a source at vertex \(c\) and sinks at \(1,n\). Then the indecomposable submodules of \(M_{(a,b]}\) are \(M_{(a,x]}\) for \(a<x<c\), \(x\leq b\) and \(M_{(y,b]}\) for \(c\leq y<b\), \(a\leq y\). Therefore, we also need \(F(x)\leq F(a)=F(b)\leq F(y)\) for such \(x,y\). (And strict inequalities \(F(x)<F(a)=F(b)<F(y)\) to make \(M_{(a,b]}\) stable.)
A simple characterization of \(x,y\) is given by numbering the arrows. Let \(\alpha_{i}\) be the arrow between vertices \(i,i+1\). Then the arrows connecting vertices in \((a,b]\) are \(\alpha_{i}\) for \(a<i<b\). \(M_{(a,x]}\subset M_{(a,b]}\) if \(\alpha_{x}\) points left (and \(a<x<b\)). \(M_{(y,b]}\subset M_{(a,b]}\) if \(\alpha_{y}\) points to the right (and \(a<y<b\)). More generally, we have the following.
**Proposition 1.1**.: \(M_{(a,b]}\) _is \(\theta\)-semistable if and only if the following hold._
1. \(F(a)=F(b)\)__
2. \(F(x)\leq F(a)\) _if_ \(\alpha_{x}\) _points left and_ \(a<x<b\)_._
3. \(F(y)\geq F(b)\) _if_ \(\alpha_{y}\) _points right and_ \(a<y<b\)_._
_Furthermore, if the inequalities in (2),(3) are strict, \(M_{(a,b]}\) is \(\theta\)-stable. _
For example, take the quiver
\[1\stackrel{{\alpha_{1}}}{{\longleftarrow}}2\stackrel{{ \alpha_{2}}}{{\longrightarrow}}3\stackrel{{\alpha_{3}}}{{ \longrightarrow}}4 \tag{1}\]
with \(\theta=(-1,2,-1,-1)\), \(F=(0,-1,1,0,-1)\). Then \(F(1)<F(0)=F(3)=0<F(2)\), with \(\alpha_{1}\) pointing left and \(\alpha_{2}\) pointing right. So, \(M_{(0,3]}\) is \(\theta\)-stable. Similarly, \(F(1)=F(4)=-1<F(2),F(3)\) implies \(M_{(1,4]}\) is also \(\theta\)-stable.
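The example can be checked mechanically. The short sketch below implements the test of Proposition 1.1; the values of \(F\) and the arrow directions are those of the quiver (1) and the example above.

```python
# A small sketch of the semistability test of Proposition 1.1 for quiver (1).
def semistable(F, arrows, a, b, strict=False):
    """Test whether M_(a,b] is theta-(semi)stable, per Proposition 1.1."""
    if F[a] != F[b]:
        return False
    for x in range(a + 1, b):                 # arrows alpha_x with a < x < b
        if arrows[x] == 'L' and (F[x] > F[a] or (strict and F[x] == F[a])):
            return False
        if arrows[x] == 'R' and (F[x] < F[b] or (strict and F[x] == F[b])):
            return False
    return True

F = [0, -1, 1, 0, -1]                         # F(0), ..., F(4) from the example
arrows = {1: 'L', 2: 'R', 3: 'R'}             # quiver (1): 1 <- 2 -> 3 -> 4
print(semistable(F, arrows, 0, 3, strict=True))   # M_(0,3] is theta-stable: True
print(semistable(F, arrows, 1, 4, strict=True))   # M_(1,4] is theta-stable: True
```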
One way to visualize the stability condition is indicated in Figure 1.
Figure 1. The graph of \(F:\{0,1,2,3,4\}\to\mathbb{R}\) shows the \(\theta\)-semistable modules. When \(M_{(a,b]}\) is \(\theta\)-stable, \(F(a)=F(b)\) making the line segment connecting \((a,F(a))\) to \((b,F(b))\) horizontal. Also, the intermediate red points are below and the blue points are above the line segment if we draw as red/blue, the spot \((x,F(x))\) for \(\alpha_{x}\) pointing left/right, respectively.
### Generic decomposition
Stability conditions for quivers of type \(\mathbb{A}_{n}\) also give the generic decomposition for dimension vectors \(\mathbf{d}\in\mathbb{N}^{n}\). This becomes more apparent for large \(n\) and gives a preview of what happens in the continuous case.
Given a dimension vector \(\mathbf{d}\in\mathbb{N}^{n}\), there is, up to isomorphism, a unique \(\Lambda\)-module \(M\) of dimension vector \(\mathbf{d}\) which is rigid, i.e., \(\operatorname{Ext}^{1}(M,M)=0\). The dimension vectors \(\beta_{i}\) of the indecomposable summands of \(M\) add up to \(\mathbf{d}\) and the expression \(\mathbf{d}=\sum\beta_{i}\) is called the "generic decomposition" of \(\mathbf{d}\). We use the notation \(\beta_{ab}=\underline{\dim}M_{(a,b]}\) and \(\mathbf{d}=(d_{1},\cdots,d_{n})\).
There is a well-known formula for the generic decomposition of a dimension vector [1] which we explain with an example. Take the quiver of type \(\mathbb{A}_{9}\):
\[1\xleftarrow{\alpha_{1}}2\xleftarrow{\alpha_{2}}3\xleftarrow{\alpha_{3}}4 \xleftarrow{\alpha_{4}}5\xleftarrow{\alpha_{5}}6\xrightarrow{\alpha_{6}}7 \xrightarrow{\alpha_{7}}8\xrightarrow{\alpha_{8}}9 \tag{2}\]
with dimension vector \(\mathbf{d}=(3,4,1,3,2,4,3,1,3)\). To obtain the generic decomposition for \(\mathbf{d}\), we draw \(d_{i}\) spots in vertical columns as shown in (3) below.
(3) [Diagram: the quiver (2) with \(d_{i}\) spots stacked in a vertical column over each vertex \(i\); the top spots are aligned across left-pointing arrows, the bottom spots are aligned across right-pointing arrows, and consecutive spots in each row are joined by horizontal lines.]
For arrows going left, such as \(3\gets 4\), \(5\gets 6\) the top spots should line up horizontally. For arrows going right, such as \(6\to 7,7\to 8\) the bottom spots should line up horizontally as shown. Consecutive spots in any row are connected by horizontal lines. For example, the spots in the first row are connected giving \(M_{(0,6]}\) but the second row of spots is connected in three strings to give \(M_{(0,2]},M_{(3,7]}\) and \(S_{9}=M_{(8,9]}\). The generic decomposition is given by these horizontal lines. Thus
\[\mathbf{d}=(3,4,1,3,2,4,3,1,3)=\beta_{06}+2\beta_{02}+\beta_{37}+2\beta_{89}+ \beta_{12}+\beta_{34}+\beta_{57}+\beta_{59}\]
is the generic decomposition of \(\mathbf{d}=(3,4,1,3,2,4,3,1,3)\).
We construct this decomposition using a stability function based on (3). We explain this with two examples without proof. The purpose is to motivate continuous stability conditions.
Take real numbers \(d_{0},d_{1},\cdots,d_{n},d_{n+1}\) where \(d_{0}=d_{n+1}=0\). Draw arrows \(\alpha_{i}\) connecting \(i,i+1\) for \(i=0,\cdots,n\), where the new arrow \(\alpha_{0}\) points in the same direction as \(\alpha_{1}\) and \(\alpha_{n}\) points in the same direction as \(\alpha_{n-1}\). To each arrow \(\alpha_{i}\) we associate the real number given by the \(d\)-value of its target minus the \(d\)-value of its source. We write this difference below the arrow if the arrow points left and above the arrow when the arrow points right. Then we compute the partial sums of the top numbers and of the bottom numbers. Let \(B,R\) denote these functions. Thus \(B(6)=0,B(7)=-1,B(8)=-3,B(9)=-1,B(10)=-4\) and \(R(0)=0,R(1)=-3\), etc. as shown below.
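The construction just described can be reproduced in a few lines. The sketch below takes the extended dimension vector \((d_{0},\dots,d_{10})\) of the \(\mathbb{A}_{9}\) example and the arrow directions, attaches to each arrow the difference of \(d\)-values, and accumulates the partial sums \(R\) and \(B\), recovering the values quoted above.

```python
# A short sketch of the R and B partial sums for the A_9 example.
d = [0, 3, 4, 1, 3, 2, 4, 3, 1, 3, 0]          # d_0, ..., d_10
left = lambda i: i <= 5                         # alpha_0, ..., alpha_5 point left

R, B = {0: 0}, {6: 0}                           # each partial sum starts at 0
for i in range(10):                             # arrow alpha_i between i and i+1
    if left(i):
        R[i + 1] = R[i] + d[i] - d[i + 1]       # target i, source i+1
    else:
        B[i + 1] = B[i] + d[i + 1] - d[i]       # target i+1, source i

print("R:", R)   # {0: 0, 1: -3, 2: -4, 3: -1, 4: -3, 5: -2, 6: -4}
print("B:", B)   # {6: 0, 7: -1, 8: -3, 9: -1, 10: -4}
```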
The generic decomposition of \(\mathbf{d}\) is given by \(\mathbf{d}=\sum a_{i}\beta_{i}\) where the coefficient \(a_{i}\) of \(\beta_{i}=\beta_{ab}\) is the linear measure of the set of all \(c\in\mathbb{R}\) so that \(M_{(x,y]}\) is semistable with \(F(x)=c=F(y)\) and so that \(\mathbb{Z}\cap(x,y]=\{a+1,\cdots,b\}\). For example, in Figure 2, the coefficient of \(\beta_{02}\) is the measure of the vertical interval \([-3,-1]\) which is \(2\). For \(c\) in this vertical interval the horizontal line at level \(c\) goes from the red line between \(\frac{1}{3}\) and \(1\) to the blue line between \(\frac{7}{3}\) and \(3\) with blue lines above and red lines below. (We extend the red and blue functions to the interval \((0,10]\) as indicated.) We require \(R(x)\leq B(x)\) for all \(x\in\mathbb{R}\).
We interpret the stability function \(\theta\) to be the derivative of \(F\) where we consider \(R,B\) separately. So, \(\theta\) is a step function equal to \(-3,-1,3,-2,1,-2\) on the six red unit intervals between \(0\) and \(6\) and \(\theta\) is \(-1,-2,2,-3\) on the four blue intervals from \(6\) to \(10\). In addition, \(\theta\) has \(4\) times the Dirac delta function at \(6\). For example,
\[\theta(M_{(a,b]})=\int_{a}^{b}\theta(x)\,\mathrm{d}x=F(b)-F(a)=0\]
for \(a=3+\varepsilon,b=7+\varepsilon\) with \(0\leq\varepsilon\leq 1\) since \(F(a)=F(b)=-1-2\varepsilon\) in this range. However, \(F(5)=-2\) which is greater than \(-1-2\varepsilon\) for \(\varepsilon>1/2\). So, \(M_{(3+\varepsilon,7+\varepsilon]}\) is semistable only when \(0\leq\varepsilon\leq 1/2\). Taking only the integers in the interval \((3+\varepsilon,7+\varepsilon]\), we get \(M_{(3,7]}\) to be semistable.
Figure 2. The function \(F:(0,n+1]\to\mathbb{R}\) is given by the red function \(R\) on \((0,6]\) since the first \(5\) arrows point left and by the blue function \(B\) on \((6,10]\). A module \(M_{(a,b]}\) is semistable if there is a horizontal line from \((a,y)\) to \((b,y)\) so that \(R(x)\leq y\leq B(x)\) for all \(a\leq x\leq b\). “Islands” \(M_{(a,b]}\) for \(b<5\) are shaded.
### APR-tilting
For quivers of type \(\mathbb{A}_{n}\), we would like all arrows to be pointing in the same direction. We accomplish this with APR-tilting [3].
We recall that APR-tilting of a quiver \(Q\) is given by choosing a sink and reversing all the arrows pointing to that sink, making it a source in a new quiver \(Q^{\prime}\). Modules \(M\) of \(Q\) correspond to modules \(M^{\prime}\) of \(Q^{\prime}\) with the property that
\[\operatorname{Hom}(M^{\prime},N^{\prime})\oplus\operatorname{Ext}(M^{\prime},N^{\prime})\cong\operatorname{Hom}(M,N)\oplus\operatorname{Ext}(M,N)\]
for all pairs of \(\Bbbk Q\)-modules \(M,N\). This gives a bijection between exceptional sequences for \(\Bbbk Q\) and for \(\Bbbk Q^{\prime}\). However, generic modules are given by sets of ext-orthogonal modules. So, we need to modify this procedure.
In our example, we have a quiver \(Q\) with \(5\) arrows pointing left. By a sequence of APR-tilts we can make all of these point to the right. The new quiver \(Q^{\prime}\) will have all arrows pointing to the right. Any \(\Bbbk Q\)-module \(M_{(a,b]}\) with \(a\leq 5<b\) gives the \(\Bbbk Q^{\prime}\)-module \(M_{(5-a,b]}\). For example \(M_{(0,6]},M_{(3,7]},M_{(5,7]},M_{(5,9]}\) become \(M_{(5,6]},M_{(2,7]},M_{(0,7]},M_{(0,9]}\). See Figure 3. For \(a>5\), such as \(M_{(8,9]}=S_{9}\), the module is "unchanged". For \(b\leq 5\), the APR-tilt of \(M_{(a,b]}\) is \(M_{(5-b,5-a]}\). However, these are not in general ext-orthogonal to the other modules in our collection. For example, the APR-tilt of \(S_{4}=M_{(3,4]}\) is \(M_{(1,2]}=S_{2}\) which extends \(M_{(2,7]}\). So we need to shift it by \(\tau^{-1}\) to get \(\tau^{-1}M_{(5-b,5-a]}=M_{(4-b,4-a]}\). There is a problem when \(b=5\) since, in that case, \(4-b=-1\). This problem will disappear in the continuous case. We call modules \(M_{(a,b]}\) with \(b<5\) _islands_. We ignore the problem case \(b=5\). Islands are shaded in Figure 2. Shifts of their APR-tilts are shaded in Figure 3.
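The correspondence just described is easy to automate. The sketch below applies it to the eight distinct summands of the generic decomposition of Figure 2, writing \(M_{(a,b]}\) as the pair \((a,b)\); the problem case \(b=5\) is ignored, as in the text.

```python
# A small sketch of the APR-tilting map (with the tau^{-1} shift for islands).
def tilt(a, b, cut=5):
    """Image of M_(a,b] under the sequence of APR-tilts."""
    if a <= cut < b:
        return (cut - a, b)                # M_(a,b]  ->  M_(cut-a, b]
    if b < cut:                            # island: APR-tilt, then shift by tau^{-1}
        return (cut - 1 - b, cut - 1 - a)
    return (a, b)                          # a > cut unchanged; b = cut is ignored here

summands = [(0, 6), (0, 2), (3, 7), (8, 9), (1, 2), (3, 4), (5, 7), (5, 9)]
print(sorted(tilt(a, b) for a, b in summands))
# [(0, 1), (0, 7), (0, 9), (2, 3), (2, 4), (2, 7), (5, 6), (8, 9)]
```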
### Clusters and cluster algebras
The components of a generic decomposition of any module form a partial tilting object since they do not extend each other. In the example shown in Figure 3, we have \(8\) objects:
\[M_{01},M_{07},M_{09},M_{23},M_{24},M_{27},M_{56},M_{89}.\]
Figure 3. This is given by APR-tilting of Figure 2. The modules \(M_{(a,b]}\) from Figure 2 with \(a\leq 5<b\) become \(M_{(5-a,b]}\) by APR-tilting. The “islands” \(M_{(a,b]}\) in Figure 2 gave \(\tau^{-1}M_{(5-b,5-a]}=M_{(4-b,4-a]}\) above (shaded).
Since the quiver \(A_{9}\) has rank 9, we need one more to complete the tilting object. There are two other modules that we could add to complete this tilting object. They are \(X=M_{26}\) and \(X^{*}=M_{57}\). There are always at most two objects that will complete a tilting object with \(n-1\) components. Tilting objects are examples of clusters and, in the cluster category [5], there are always exactly two objects which complete a cluster with \(n-1\) components.
These two objects \(M_{26}\), \(M_{57}\) extend each other in the cluster category with extensions:
\[M_{57}\to M_{27}\oplus M_{56}\to M_{26}\]
and
\[M_{26}\to M_{24}\to M_{46}[1]=\tau^{-1}M_{57}[1].\]
In the cluster category, a module \(M\) over any hereditary algebra is identified with \(\tau^{-1}M[1]\). Thus, an exact sequence \(\tau^{-1}M\hookrightarrow A\twoheadrightarrow B\) gives an exact triangle \(M[-1]\to A\to B\to M\) in the cluster category since \(\tau^{-1}M=M[-1]\).
In the cluster algebra [8], which is the subalgebra of \(\mathbb{Q}(x_{1},\cdots,x_{n})\) generated by "cluster variables", we have a formula due to Caldero and Chapoton [7] which associates a cluster variable \(\chi(M)\) to every rigid indecomposable module \(M\) and, in this case, satisfies the equation:
\[\chi(X)\chi(X^{*})=\chi(M_{27})\chi(M_{56})+\chi(M_{24}) \tag{4}\]
The Caldero-Chapoton formula for the cluster character of \(M_{ab}\) for \(1<a<b<n\) with arrows going right is the sum of the inverses of exponential \(g\)-vectors of all submodules \(x^{g(M_{ib})}=x_{i}/x_{b}\) times that of the duals of their quotients \(M_{ab}/M_{ib}=M_{ai}\) (see [15]):
\[\chi(M_{ab})=\sum_{i=a}^{b}x^{-g(M_{ib})}x^{-g(DM_{ai})}=\sum_{i=a}^{b}\frac{x _{b}x_{a-1}}{x_{i}x_{i-1}}. \tag{5}\]
So, \(\chi(M_{aa})=\chi(0)=1\). When \(b=n+1\), \(M_{ab}\) is projective with support \([a,n+1)=[a,n]\). So,
\[\chi(P_{a})=\chi(M_{a,n+1})=\sum_{i=a}^{n+1}\frac{x_{a-1}}{x_{i}x_{i-1}}\]
where \(x_{n+1}=1\). This yields:
\[\chi(M_{ab})=x_{b}\chi(P_{a})-x_{a-1}\chi(P_{b+1}).\]
Then, the mutation equation (4) becomes the Plucker relation for the \(2\times 4\) matrix:
\[\begin{bmatrix}x_{1}&x_{4}&x_{6}&x_{7}\\ \chi(P_{2})&\chi(P_{5})&\chi(P_{7})&\chi(P_{8})\end{bmatrix}.\]
## 2. Continuous stability conditions
### Continuous quivers of type \(\mathbb{A}\)
Recall that in a partial order \(\preceq\), a element \(x\) is a **sink** if \(y\preceq x\) implies \(y=x\). Dually, \(x\) is a **source** if \(x\preceq y\) implies \(y=x\).
**Definition 2.1**.: Let \(\preceq\) be a partial order on \(\overline{\mathbb{R}}=\mathbb{R}\cup\{-\infty,+\infty\}\) with finitely many sinks and sources such that, between sinks and sources, \(\preceq\) is either the same as the usual order or the opposite. Let \(Q=(\mathbb{R},\preceq)\) where \(\preceq\) is the same partial
order on \(\mathbb{R}\subseteq\overline{\mathbb{R}}\). We call \(Q\) a **continuous quiver of type \(\mathbb{A}\)**. We consider \(Q\) as a category where the objects of \(Q\) are the points in \(\mathbb{R}\) and
\[\operatorname{Hom}_{Q}(x,y)=\begin{cases}\{*\}&y\preceq x\\ \emptyset&\text{otherwise}.\end{cases}\]
**Definition 2.2**.: Let \(Q\) be a continuous quiver of type \(\mathbb{A}\). A **pointwise finite-dimensional \(Q\) module** over the field \(\Bbbk\) is a functor \(V:Q\to\operatorname{vec}(\Bbbk)\). Let \(I\subset\mathbb{R}\) be an interval. An **interval indecomposable module**\(M_{I}\) is given by
\[M_{I}(x): =\begin{cases}\Bbbk&x\in I\\ 0&x\notin I\end{cases} M_{I}(x,y): =\begin{cases}1_{\Bbbk}&y\preceq x,\,x,y\in I\\ 0&\text{otherwise},\end{cases}\]
where \(I\subseteq\mathbb{R}\) is an interval.
By results in [4, 10] we know that every pointwise finite-dimensional \(Q\) module is isomorphic to a direct sum of interval indecomposables. In particular, this decomposition is unique up to isomorphism and permutation of summands. In [10] it is shown that the category of pointwise finite-dimensional modules is abelian, interval indecomposable modules are indecomposable, and there are indecomposable projectives \(P_{a}\) for each \(a\in\mathbb{R}\) given by
\[P_{a}(x)=\begin{cases}\Bbbk&x\preceq a\\ 0&\text{otherwise}\end{cases} P_{a}(x,y) =\begin{cases}1_{\Bbbk}&y\preceq x\preceq a\\ 0&\text{otherwise}.\end{cases}\]
These projectives are representable as functors.
**Definition 2.3**.: Let \(Q\) be a continuous quiver of type \(\mathbb{A}\). We say \(V\) is **representable** if there is a finite direct sum \(P=\bigoplus_{i=1}^{n}P_{a_{i}}\) and an epimorphism \(P\twoheadrightarrow V\) whose kernel is a direct sum \(\bigoplus_{j=1}^{m}P_{a_{j}}\).
By [10, Theorem 3.0.1], \(V\) is isomorphic to a finite direct sum of interval indecomposables. By results in [16], the subcategory of representable modules is abelian (indeed, a wide subcategory) but has no injectives. When \(\preceq\) is the standard total order on \(\mathbb{R}\), the representable modules are the same as those considered in [12].
**Notation 2.4**.: We denote the abelian subcategory of representable modules over \(Q\) by \(\operatorname{mod}^{\operatorname{r}}(Q)\). We denote the set of isomorphism classes of indecomposables in \(\operatorname{mod}^{\operatorname{r}}(Q)\) by \(\operatorname{Ind}^{\operatorname{r}}(Q)\).
**Definition 2.5**.: Let \(Q\) be a continuous quiver of type \(\mathbb{A}\), \(s\in\overline{\mathbb{R}}\) a sink, and \(s^{\prime}\in\overline{\mathbb{R}}\) an adjacent source.
* If \(s<s^{\prime}\) and \(x\in(s,s^{\prime})\) we say \(x\) is **red** and \((s,s^{\prime})\) is **red**.
* If \(s^{\prime}<s\) and \(x\in(s^{\prime},s)\) we say \(x\) is **blue** and \((s^{\prime},s)\) is **blue**.
Let \(I\) be an interval in \(\mathbb{R}\) such that neither \(\inf I\) nor \(\sup I\) is a source. We will need to refer to the endpoints of \(I\) as being red or blue in the following way.
* If \(\inf I\) is a sink and \(\inf I\in I\) we say \(\inf I\) is **blue**.
* If \(\inf I\) is a sink and \(\inf I\notin I\) we say \(\inf I\) is **red**.
* If \(\sup I\) is a sink and \(\sup I\in I\) we say \(\sup I\) is **red**.
* If \(\sup I\) is a sink and \(\sup I\notin I\) we say \(\sup I\) is **blue**.
* If \(\inf I\) is not a sink (\(\sup I\) is not a sink) then we say \(\inf I\) (\(\sup I\)) is red or blue according to the first part of the definition.
Note that \(\inf I\) could be \(-\infty\), in which case it is red. Similarly, if \(\sup I=+\infty\) then it is blue.
**Definition 2.6**.: We say \(I\) is **left red** (respectively, **left blue**) if \(\inf I\) is red (respectively, if \(\inf I\) is blue).
We say \(I\) is **right red** (respectively, **right blue**) if \(\sup I\) is red (respectively, if \(\sup I\) is blue).
We have the following characterization of support intervals.
**Proposition 2.7**.: _Let \(I\subset\mathbb{R}\) be the support of an indecomposable representable module \(M_{I}\in\operatorname{Ind}^{r}(Q)\). Then an endpoint of \(I\) lies in \(I\) if and only if it is either left blue or right red (or both, as in the case \(I=[s,s]\) where \(s\) is a sink)._
### Half-\(\delta\) functions and red-blue function pairs
To define continuous stability conditions we need to introduce half-\(\delta\) functions. A half-\(\delta\) function \(\delta_{x}^{-}\) at \(x\in\overline{\mathbb{R}}\) has the following property. Let \(f\) be some integrable function on \([a,b]\subset\overline{\mathbb{R}}\) where \(a<x<b\). Then the following equations hold:
\[\int_{a}^{x}\left(f(t)+\delta_{x}^{-}\right)\mathrm{d}t=\left(\int_{a}^{x}f(t )\,\mathrm{d}t\right)+1,\hskip 14.226378pt\int_{x}^{b}\left(f(t)+\delta_{x}^{-} \right)\mathrm{d}t=\int_{x}^{b}f(t)\,\mathrm{d}t.\]
The half-\(\delta\) function \(\delta_{x}^{+}\) at \(x\in\mathbb{R}\) has a similar property for an \(f\) integrable on \([a,b]\subset\overline{\mathbb{R}}\) with \(a<x<b\):
\[\int_{a}^{x}\left(f(t)+\delta_{x}^{+}\right)\mathrm{d}t=\int_{a}^{x}f(t)\, \mathrm{d}t,\hskip 14.226378pt\int_{x}^{b}\left(f(t)+\delta_{x}^{+}\right) \mathrm{d}t=\left(\int_{x}^{b}f(t)\,\mathrm{d}t\right)+1.\]
Consider \(f+\delta_{x}^{-}-\delta_{x}^{+}\). Then we have
\[\int_{a}^{x}\left(f(t)+\delta_{x}^{-}-\delta_{x}^{+}\right) \mathrm{d}t =\left(\int_{a}^{x}f(t)\,\mathrm{d}t\right)+1,\] \[\int_{x}^{b}\left(f(t)+\delta_{x}^{-}-\delta_{x}^{+}\right) \mathrm{d}t =\left(\int_{x}^{b}f(t)\,\mathrm{d}t\right)-1,\] \[\int_{a}^{b}\left(f(t)+\delta_{x}^{-}-\delta_{x}^{+}\right) \mathrm{d}t =\int_{a}^{b}f(t)\,\mathrm{d}t.\]
For each \(x\in\mathbb{R}\), denote the functions
\[\Delta_{x}^{-}(z) =\int_{-\infty}^{z}\delta_{x}^{-}\,\mathrm{d}t=\begin{cases}0&z<x \\ 1&z\geq x\end{cases}\] \[\Delta_{x}^{+}(z) =\int_{-\infty}^{z}\delta_{x}^{+}\,\mathrm{d}t=\begin{cases}0&z \leq x\\ 1&z>x.\end{cases}\]
Though not technically correct, we write that a function \(f+u_{x}^{-}\Delta_{x}^{-}+u_{x}^{+}\Delta_{x}^{+}\) is from \(\mathbb{R}\) to \(\mathbb{R}\). See Figure 4 for an example. We also allow \(\delta_{+\infty}^{-}\) and \(\delta_{-\infty}^{+}\), which satisfy the relevant parts of the equations above. We don't allow the other half-\(\delta\) functions at \(\pm\infty\) because they do not make sense in terms of integration.
Our stability conditions will consist of equivalence classes of pairs of useful functions.
**Definition 2.8**.: We call a function \(F:\mathbb{R}\to\mathbb{R}\)**useful** if it satisfies the following.
1. \(F=f+\sum_{x\in\mathbb{R}\cup\{+\infty\}}u_{x}^{-}\Delta_{x}^{-}+\sum_{x\in \mathbb{R}\cup\{-\infty\}}u_{x}^{+}\Delta_{x}^{+}\), where \(f:\mathbb{R}\to\mathbb{R}\) is a continuous function of bounded variation and each \(u_{x}^{-},u_{x}^{+}\) are in \(\mathbb{R}\).
2. The sums \(\sum_{x\in\mathbb{R}\cup\{+\infty\}}|u_{x}^{-}|\) and \(\sum_{x\in\mathbb{R}\cup\{-\infty\}}|u_{x}^{+}|\) both converge in \(\mathbb{R}\).
**Remark 2.9**.: Note Definition 2.8(2) implies the set \(\{u_{x}^{-}\mid u_{x}^{-}\neq 0\}\cup\{u_{x}^{+}\mid u_{x}^{+}\neq 0\}\) is at most countable. Combining (1) and (2) in Definition 2.8, we may think of \(F\) as a function of bounded variation.
We think of the value of a useful function \(F\) at \(x\) as being "the integral from \(-\infty\) to \(x\)" where the integrand is some function that includes at most countably-many half-\(\delta\) functions.
**Proposition 2.10**.: _Let \(F\) be a useful function and let \(a\in\overline{\mathbb{R}}\)._
1. _If_ \(a>-\infty\) _then_ \(\lim_{x\to a^{-}}F(x)\) _exists._
2. _If_ \(a<+\infty\) _then_ \(\lim_{x\to a^{+}}F(x)\) _exists._
3. _If_ \(a\in\mathbb{R}\) _then_ \(F(a)=\lim_{x\to a^{-}}F(x)+u_{a}^{-}\) _and_ \(F(a)+u_{a}^{+}=\lim_{x\to a^{+}}F(x)\)_._
Proof.: (1) and (2). Straightforward computations show that
\[\lim_{x\to a^{-}}F(x) =\lim_{x\to a^{-}}f(x)+\sum_{-\infty<x<a}u_{x}^{-}+\sum_{- \infty\leq x<a}u_{x}^{+} \text{if }a>-\infty\] \[\lim_{x\to a^{+}}F(x) =\lim_{x\to a^{+}}f(x)+\sum_{-\infty<x\leq a}u_{x}^{-}+\sum_{- \infty\leq x\leq a}u_{x}^{+} \text{if }a<+\infty.\]
Thus, (1) and (2) hold.
(3). By definition, we see that
\[F(a)=f(a)+\sum_{-\infty<x\leq a}u_{x}^{-}+\sum_{-\infty\leq x<a}u_{x}^{+}.\]
Thus, using (1) and (2), we see that (3) holds.
**Notation 2.11**.: Let \(F\) be a useful function. For each \(a\in\mathbb{R}\), we define
\[F_{\min}(a): =\min\{F(a),\lim_{x\to a^{-}}F(x),\lim_{x\to a^{+}}F(x)\}\] \[F_{\max}(a): =\max\{F(a),\lim_{x\to a^{-}}F(x),\lim_{x\to a^{+}}F(x)\}.\]
We also define
\[F(-\infty): =\lim_{x\to-\infty^{+}}F(x)-u_{-\infty}^{+}\qquad\quad F(+\infty): =\lim_{x\to+\infty^{-}}F(x)+u_{+\infty}^{-}\] \[F_{\min}(-\infty): =\min\{F(-\infty),\lim_{x\to-\infty^{+}}F(x)\}\] \[F_{\min}(+\infty): =\min\{F(+\infty),\lim_{x\to+\infty^{-}}F(x)\}\] \[F_{\max}(-\infty): =\max\{F(-\infty),\lim_{x\to-\infty^{+}}F(x)\}\] \[F_{\max}(+\infty): =\max\{F(+\infty),\lim_{x\to+\infty^{-}}F(x)\}.\]
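For a useful function with only finitely many jumps, Proposition 2.10 and Notation 2.11 can be made concrete. The sketch below is our own illustration (the names and the particular choice of \(f\) and jump data are ours); it evaluates such an \(F\) together with its one-sided limits and \(F_{\min}\), \(F_{\max}\).

```python
# Sketch (our own setup): a useful function with finitely many jumps,
#   F = f + sum_x u_x^- Delta_x^- + sum_x u_x^+ Delta_x^+,
# together with its one-sided limits and F_min, F_max (Notation 2.11).
import math

f = math.sin                        # continuous part of bounded variation
u_minus = {0.0: 0.5}                # nonzero coefficients u_x^-
u_plus = {0.0: -0.25, 2.0: 1.0}     # nonzero coefficients u_x^+

def F(z):
    return f(z) + sum(u for x, u in u_minus.items() if x <= z) \
                + sum(u for x, u in u_plus.items() if x < z)

def F_left(a):   # lim_{x -> a^-} F(x)
    return f(a) + sum(u for x, u in u_minus.items() if x < a) \
                + sum(u for x, u in u_plus.items() if x < a)

def F_right(a):  # lim_{x -> a^+} F(x)
    return f(a) + sum(u for x, u in u_minus.items() if x <= a) \
                + sum(u for x, u in u_plus.items() if x <= a)

def F_min(a):
    return min(F(a), F_left(a), F_right(a))

def F_max(a):
    return max(F(a), F_left(a), F_right(a))

a = 0.0
# Proposition 2.10(3):  F(a) = F_left(a) + u_a^-   and   F(a) + u_a^+ = F_right(a)
assert math.isclose(F(a), F_left(a) + u_minus.get(a, 0.0))
assert math.isclose(F(a) + u_plus.get(a, 0.0), F_right(a))
print(F_min(a), F(a), F_max(a))
```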
**Definition 2.12**.: Let \(F\) be a useful function. We define the **graph**\(\mathcal{G}(F)\) of \(F\) to be the following subset of \(\mathbb{R}^{2}\):
\[\left\{(x,y)\,|\,x\in\mathbb{R},\ F_{\min}(x)\leq y\leq F_{\max}(x)\right\}\,.\]
The **completed graph**, denoted \(\overline{\mathcal{G}(F)}\) of \(F\) is the following subset of \(\overline{\mathbb{R}}\times\mathbb{R}\):
\[\left\{(x,y)\,\big{|}\,x\in\overline{\mathbb{R}},\ F_{\min}(x)\leq y\leq F_{ \max}(x)\right\}\,.\]
**Remark 2.13**.: Let \(F=f+\sum_{x\in\mathbb{R}\cup\{+\infty\}}u_{x}^{-}\Delta_{x}^{-}+\sum_{x\in \mathbb{R}\cup\{-\infty\}}u_{x}^{+}\Delta_{x}^{+}\) be a useful function. For any \(a\leq b\in\mathbb{R}\) there exists \(c\leq d\in\mathbb{R}\), such that \(\mathcal{G}(F)\cap([a,b]\times\mathbb{R})=\mathcal{G}(F)\cap([a,b]\times[c,d])\).
We now define red-blue function pairs, which are used to define equivalence classes of pairs of useful functions. The red-blue function pairs are analogs of the red and blue functions from Section 1.
**Definition 2.14**.: Let \(R=r+\sum_{x\in\mathbb{R}}u_{x}^{-}\Delta_{x}^{-}+\sum_{x\in\mathbb{R}}u_{x}^{ +}\Delta_{x}^{+}\) and let \(B=b+\sum_{x\in\mathbb{R}}v_{x}^{-}\Delta_{x}^{-}+\sum_{x\in\mathbb{R}}v_{x}^{ +}\Delta_{x}^{+}\) be useful functions. We say the pair \((R,B)\) is a **red-blue function pair** if the following criteria are satisfied.
1. For all \(x\in\mathbb{R}\), we have \(R_{\max}(x)=R(x)\) and \(B_{\min}(x)=B(x)\).
2. If \(s\) is a source, \(u_{s}^{-}=u_{s}^{+}=v_{s}^{-}=v_{s}^{+}=0\).
3. For all \(x\in\overline{\mathbb{R}}\), \[R(x)\leq B_{\max}(x)\qquad\qquad\text{ and }\qquad\qquad R_{\min}(x)\leq B(x).\]
4. We have \(R(-\infty)=B(-\infty)\) and \(R(+\infty)=B(+\infty)\).
5. The useful function \(R\) is constant on blue intervals. That is: for \(s\leq x<y<s^{\prime}\) in \(\mathbb{R}\) where \((s,s^{\prime})\) is blue, we have \(r(x)=r(y)\) and \(u_{y}^{-}=u_{y}^{+}=0\).
6. The useful function \(B\) is constant on red intervals. That is: for \(s<x<y\leq s^{\prime}\) in \(\mathbb{R}\) where \((s,s^{\prime})\) is red, we have \(b(x)=b(y)\) and \(v_{x}^{-}=v_{x}^{+}=0\).
**Lemma 2.15**.: _Let \((R,B)\) be a red-blue function pair._
1. _For any_ \(a\leq b\) _and_ \(c\leq d\) _in_ \(\mathbb{R}\)_, the set_ \(\mathcal{G}(R)\cap([a,b]\times[c,d])\) _is closed in_ \(\mathbb{R}^{2}\)_._
2. _For any_ \(a\leq b\in\mathbb{R}\) _the useful function_ \(R\) _has a local maximum on_ \([a,b]\) _in the sense that there exists_ \(x\in[a,b]\) _such that for all_ \(y\in[a,b]\)_:_ \(R_{\max}(y)\leq R_{\max}(x)\)_._
3. _For any_ \(a\leq b\in\mathbb{R}\) _the useful function_ \(R\) _has a local minimum on_ \([a,b]\) _in the sense that there exists_ \(x\in[a,b]\) _such that for all_ \(y\in[a,b]\)_:_ \(R_{\min}(x)\leq R_{\min}(y)\)
_Statements (1)-(3) are true when we replace \(R\), \(r\), and \(u\) with \(B\), \(b\), and \(v\), respectively._
Proof.: We first prove (1) for \(R\) as the proof for \(B\) is identical. Let \(\{(x_{i},y_{i})\}\) be a sequence in \(\mathcal{G}(R)\cap([a,b]\times[c,d])\) that converges to \((w,z)\). Since \(\{x_{i}\}\) converges to \(w\) we assume, without loss of generality, that \(\{x_{i}\}\) is monotonic. If there exists \(i\in\mathbb{N}\) such that \(x_{i}=w\) then, assuming monotonicity, we know \((w,z)\in\mathcal{G}(R)\). Thus, assume \(x_{i}\neq w\) for all \(i\in\mathbb{N}\).
Without loss of generality, assume \(\{x_{i}\}\) is increasing. The decreasing case is similar. Since \(\sum_{x\in\mathbb{R}}|u_{x}^{-}|+|u_{x}^{+}|<\infty\), we know that
\[\lim_{i\to\infty}|R_{\max}(x_{i})-R_{\min}(x_{i})|=0.\]
And so,
\[\lim_{i\to\infty}R_{\max}(x_{i})=\lim_{i\to\infty}R_{\min}(x_{i})=\lim_{i\to \infty}R(x_{i})=\lim_{x\to w^{-}}R(x).\]
Then we must have
\[\lim_{i\to\infty}y_{i}=\lim_{x\to w^{-}}R(x).\]
Therefore, \((w,z)\in\mathcal{G}(R)\).
Next, we only prove (2) for \(R\) as the remaining proofs are similar and symmetric. By Remark 2.13 there exists \(c\leq d\in\mathbb{R}\) such that
\[\mathcal{G}(R)\cap([a,b]\times\mathbb{R})=\mathcal{G}(R)\cap([a,b]\times[c,d]).\]
Then there must be a greatest lower bound \(d_{0}\geq c\) for all \(d\) such that the equation above holds. Since \(\mathcal{G}(R)\cap([a,b]\times[c,d_{0}])\) must be closed by Lemma 2.15(1), there must be a point \((x,d_{0})\in\mathcal{G}(R)\) for \(a\leq x\leq b\). This is the desired \(x\).
### Stability conditions
**Definition 2.16**.: Let \((R,B)\) and \((R^{\prime},B^{\prime})\) be red-blue function pairs. We say \((R,B)\) and \((R^{\prime},B^{\prime})\) are **equivalent** if there exists a constant \(\mathfrak{c}\in\mathbb{R}\) such that, for all \(x\in\overline{\mathbb{R}}\) and \(y\in\mathbb{R}\), we have
\[(x,y)\in\overline{\mathcal{G}(R)}\text{ if and only if }(x,y+\mathfrak{c})\in \overline{\mathcal{G}(R^{\prime})}\]
and
\[(x,y)\in\overline{\mathcal{G}(B)}\text{ if and only if }(x,y+\mathfrak{c})\in \overline{\mathcal{G}(B^{\prime})}.\]
A **stability condition on \(Q\)**, denoted \(\sigma\), is an equivalence class of red-blue function pairs. We denote by \(\mathcal{S}(Q)\) the set of stability conditions on \(Q\).
We now define the **modified** versions of a continuous quiver \(Q\) of type \(\mathbb{A}\), an interval \(I\) of a module \(M_{I}\) in \(\operatorname{Ind}^{\mathrm{r}}(Q)\), and graphs of red-blue function pairs. This makes it easier to check whether or not an indecomposable module is semistable with respect to a particular stability condition.
**Definition 2.17**.:
1. Let \(Q\) be a continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources. We define a totally ordered set \(\widehat{Q}\), called the **modified quiver** of \(Q\), in the following way. First we define the elements.
* For each \(x\in\mathbb{R}\) such that \(x\) is not a sink nor a source of \(Q\), \(x\in\widehat{Q}\).
* If \(s\in\mathbb{R}\) is a source of \(Q\), then \(s\notin\widehat{Q}\).
* If \(s\in\mathbb{R}\) is a sink of \(Q\), then \(s_{-},s_{+}\in\widehat{Q}\). These are distinct elements, neither of which is in \(\mathbb{R}\).
* If \(-\infty\) (respectively, \(+\infty\)) is a sink then \(-\infty_{+}\in\widehat{Q}\) (respectively, \(+\infty_{-}\in\widehat{Q}\)).
Now, the partial order on \(\widehat{Q}\) is defined in the following way. Let \(x,y\in\widehat{Q}\).
* Suppose \(x,y\in\mathbb{R}\cap\widehat{Q}\). Then \(x\leq y\) in \(\widehat{Q}\) if and only if \(x\leq y\) in \(\mathbb{R}\).
* Suppose \(x\in\mathbb{R}\) and \(y=s_{\pm}\), for some sink \(s\) of \(Q\) in \(\mathbb{R}\). If \(x<s\) in \(\mathbb{R}\) then \(x<y\) in \(\widehat{Q}\). If \(s<x\) in \(\mathbb{R}\) then \(y<x\) in \(\widehat{Q}\).
* Suppose \(x=s_{\varepsilon}\) and \(y=s^{\prime}_{\varepsilon^{\prime}}\) for two sinks \(s,s^{\prime}\) of \(Q\) in \(\mathbb{R}\). We consider \(-<+\). Then \(x\leq y\) if and only if (i) \(s<s^{\prime}\) or (ii) \(s=s^{\prime}\) and \(\varepsilon\leq\varepsilon^{\prime}\).
* If \(x=-\infty_{+}\in\widehat{Q}\) (respectively, \(y=+\infty_{-}\in\widehat{Q}\)), then \(x\) is the minimal element (respectively, \(y\) is the maximal element) of \(\widehat{Q}\).
Finally, we color the elements of \(\widehat{Q}\). If \(s\in\mathbb{R}\) is a sink of \(Q\) then we say \(s_{-}\) is blue and \(s_{+}\) is red. If \(-\infty_{+}\in\widehat{Q}\) we say \(-\infty_{+}\) is blue. All other \(x\in\widehat{Q}\) are red (respectively, blue) if and only if \(x\in\mathbb{R}\) is red (respectively, blue).
2. Let \(I\) be an interval of \(\mathbb{R}\) such that \(M_{I}\) is in \(\operatorname{Ind}^{\mathrm{r}}(Q)\). The **modified interval** \(\widehat{I}\) of \(\widehat{Q}\) has minimum given by the following conditions.
* If \(\inf I\) is not \(-\infty\) nor a sink of \(Q\) then \(\min\widehat{I}=\inf I\).
* If \(\inf I\) is a sink \(s\) of \(Q\) then (i) \(\min\widehat{I}=s_{-}\) if \(\inf I\in I\) or (ii) \(\min\widehat{I}=s_{+}\) if \(\inf I\notin I\).
* If \(\inf I=-\infty\) then \(\min\widehat{I}=-\infty_{+}\).
The maximal element of \(\widehat{I}\) is defined similarly.
3. Let \((R,B)\) be a red-blue function pair. The **modified graph** \(\widehat{\mathcal{G}}(R,B)\) of \((R,B)\) is a subset of \(\widehat{Q}\times\mathbb{R}\). It is defined as follows.
* For each \(x\in\mathbb{R}\) not a sink nor a source of \(Q\) and each \(y\in\mathbb{R}\), \((x,y)\in\widehat{\mathcal{G}}(R,B)\) if and only if [\((x,y)\in\mathcal{G}(B)\) and \(x\) is blue] or [\((x,y)\in\mathcal{G}(R)\) and \(x\) is red].
* For each \(s\in\mathbb{R}\) a sink of \(Q\) and each \(y\in\mathbb{R}\), \[(s_{-},y)\in\widehat{\mathcal{G}}(R,B)\text{ if and only if }(s,y)\in\mathcal{G}(B)\] \[(s_{+},y)\in\widehat{\mathcal{G}}(R,B)\text{ if and only if }(s,y)\in\mathcal{G}(R).\]
* If \(-\infty_{+}\in\widehat{Q}\), then for all \(y\in\mathbb{R}\), \[(-\infty_{+},y)\in\widehat{\mathcal{G}}(R,B)\text{ if and only if }(-\infty,y)\in\overline{\mathcal{G}(R)}.\]
* If \(+\infty_{-}\in\widehat{Q}\), then for all \(y\in\mathbb{R}\), \[(+\infty_{-},y)\in\widehat{\mathcal{G}}(R,B)\text{ if and only if }(+\infty,y)\in\overline{\mathcal{G}(B)}.\]
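The modified quiver of Definition 2.17(1) admits a simple concrete encoding, sketched below (the encoding and all names are ours): each point of \(\widehat{Q}\) is a pair (value, tag) with tag \(-1\) for \(s_{-}\), \(0\) for an ordinary point, and \(+1\) for \(s_{+}\), and lexicographic order on these pairs realizes the total order described above.

```python
# Sketch (our own encoding): points of the modified quiver Q-hat as pairs
# (value, tag), ordered lexicographically.  A sink s contributes (s, -1) and
# (s, +1); an ordinary point x contributes (x, 0); sources are omitted.
import math

sinks = [-math.inf, 1.0]      # an assumed example: sinks at -infinity and 1
sources = [0.0, math.inf]     # and sources at 0 and +infinity

def modified_points(sample):
    """Points of Q-hat lying over a finite sample of the real line."""
    pts = []
    for s in sinks:
        if s == -math.inf:
            pts.append((-math.inf, +1))        # the point -infinity_+
        elif s == math.inf:
            pts.append((math.inf, -1))         # the point +infinity_-
        else:
            pts.extend([(s, -1), (s, +1)])     # the points s_- and s_+
    pts += [(x, 0) for x in sample if x not in sinks and x not in sources]
    return sorted(pts)                         # lexicographic = the order on Q-hat

print(modified_points([-2.0, 0.5, 1.0, 3.0]))
# [(-inf, 1), (-2.0, 0), (0.5, 0), (1.0, -1), (1.0, 1), (3.0, 0)]
```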
The following proposition follows from straightforward checks.
**Proposition 2.18**.: _There is a bijection between \(\operatorname{Ind}^{\mathrm{r}}(Q)\) and intervals of \(\widehat{Q}\) with distinct minimal and maximal element._
Using the modified definitions, we now define what it means to be semistable.
**Definition 2.19**.: Let \(Q\) be a continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources, \(\sigma\in\mathcal{S}(Q)\), and \((R,B)\) be a representative of \(\sigma\).
We say an indecomposable module \(M_{I}\) in \(\operatorname{Ind}^{\mathrm{r}}(Q)\) is \(\boldsymbol{\sigma}\)**-semistable** if there exists a horizontal line \(\ell=\widehat{I}\times\{h\}\subset\widehat{Q}\times\mathbb{R}\) satisfying the following conditions.
1. The endpoints of \(\ell\) touch \(\widehat{\mathcal{G}}(R,B)\). That is, \((\min\widehat{I},h),(\max\widehat{I},h)\in\widehat{\mathcal{G}}(R,B)\).
2. The line \(\ell\) may touch but not cross \(\widehat{\mathcal{G}}(R,B)\). That is, for each \(x\in\widehat{I}\) such that \(x\notin\{\max\widehat{I},\min\widehat{I}\}\), we have \[R_{\max}(x)\leq h\leq B_{\min}(x),\] where if \(x=s_{\pm}\) then \(R_{\max}(x)=R_{\max}(s)\) and \(B_{\min}(x)=B_{\min}(s)\).
**Remark 2.20**.: Notice that \(M_{I}\) is \(\sigma\)-semistable whenever the following are satisfied:
* We have \([F_{\min}(\inf I),F_{\max}(\inf I)]\cap[{F^{\prime}}_{\min}(\sup I),{F^{\prime}} _{\max}(\sup I)]\neq\emptyset\), where \(F\) is \(R\) if \(\inf I\) is red and is \(B\) if \(\inf I\) is blue and similarly for \(F^{\prime}\) and \(\sup I\).
* For all submodules \(M_{J}\) of \(M_{I}\), \({F^{\prime}}_{\min}(\sup J)\leq F_{\min}(\inf J)\), where \(F,\inf J\) and \(F^{\prime},\sup J\) are similar to the previous point.
Thus, this is a continuous analogue to the semistable condition in the finite case.
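On a finite sample grid, the horizontal-line condition of Definition 2.19(2) can be tested directly. The sketch below is a discretized approximation (all names are ours; \(R_{\max}\) and \(B_{\min}\) are assumed to be supplied as ordinary functions), checking whether a given height \(h\) fits between the red and blue graphs over the interior of an interval.

```python
# Discretized sketch (our own approximation) of the line test in Definition 2.19(2):
# check whether R_max(x) <= h <= B_min(x) on a sample of the interior of (a, b).
def admits_height(R_max, B_min, a, b, h, samples=1000):
    step = (b - a) / (samples + 1)
    xs = [a + step * (i + 1) for i in range(samples)]    # interior sample points
    return all(R_max(x) <= h <= B_min(x) for x in xs)

# Toy example: one red bump under a constant blue function.
R_max = lambda x: max(0.0, 1.0 - abs(x))      # red graph peaks at height 1 at x = 0
B_min = lambda x: 2.0                         # blue graph is constant
print(admits_height(R_max, B_min, -3.0, 3.0, 1.5))   # True:  1 <= 1.5 <= 2
print(admits_height(R_max, B_min, -3.0, 3.0, 0.5))   # False: the bump crosses the line
```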
**Definition 2.21**.: Let \(Q\) be a continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources, \(\sigma\in\mathcal{S}(Q)\), and \((R,B)\) be a representative of \(\sigma\).
We say \(\sigma\) satisfies the **four point condition** if, for any \(\sigma\)-semistable module \(M_{I}\), we have \(|(\widehat{I}\times\{h\})\cap\widehat{\mathcal{G}}(R,B)|\leq 3\), where \(\widehat{I}\times\{h\}\) is as in Definition 2.19. We denote the set of stability conditions that satisfy the four point condition as \(\mathcal{S}_{\mathrm{fpc}}(Q)\).
Recall Definition 2.5.
**Lemma 2.22**.: _Let \(Q\) be a continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources and let \(M_{I}\), \(M_{J}\) be indecomposables in \(\mathrm{Ind}^{r}(Q)\). Let \(a=\inf I\), \(b=\sup I\), \(c=\inf J\), and \(d=\sup J\). Then \(\mathrm{Ext}^{1}(M_{J},M_{I})\cong\Bbbk\cong\mathrm{Hom}(M_{I},M_{J})\) if and only if one of the following holds:_
* \(a<c<b<d\)_, and_ \(b,c\) _are red;_
* \(c<a<d<b\)_, and_ \(a,d\) _are blue;_
* \(c<a\leq b<d\)_, and_ \(a\) _is blue, and_ \(b\) _is red; or_
* \(a<c<d<b\)_, and_ \(c\) _is red, and_ \(d\) _is blue._
Proof.: It is shown in [10] that \(\mathrm{Hom}\) and \(\mathrm{Ext}\) between indecomposables must be \(0\) or \(1\) dimensional.
Since \(\mathrm{Hom}(M_{I},M_{J})\neq 0\) we obtain one of the items in the list where the first or last inequality may not be strict. Since \(\mathrm{Ext}(M_{I},M_{J})\neq 0\) we see all the inequalities must be strict.
The itemized list implies \(\mathrm{Hom}(M_{I},M_{J})\neq 0\). Then there is a short exact sequence \(M_{I}\hookrightarrow M_{I\cup J}\oplus M_{I\cap J}\twoheadrightarrow M_{J}\) and so \(\mathrm{Ext}(M_{J},M_{I})\neq 0\).
**Definition 2.23**.: Let \(M_{I}\) and \(M_{J}\) be indecomposables in \(\mathrm{Ind}^{r}(Q)\) for some continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources. We say \(M_{I}\) and \(M_{J}\) are \(\mathbf{N}_{\pi}\)**-compatible** if both of the following are true:
\[\dim_{\Bbbk}(\mathrm{Ext}(M_{J},M_{I})\oplus\mathrm{Hom}(M_{I},M_ {J})) \leq 1\] \[\dim_{\Bbbk}(\mathrm{Ext}(M_{I},M_{J})\oplus\mathrm{Hom}(M_{J},M _{I})) \leq 1.\]
One can verify this is equivalent to Igusa and Todorov's compatibility condition in [12] when \(Q\) has the straight descending orientation.
In terms of colors and set operations, \(\mathbf{N}_{\pi}\)-compatibility can be expressed as follows.
**Lemma 2.24**.: \(M_{I}\) _and \(M_{J}\) are \(\mathbf{N}_{\pi}\)-compatible if one of the following is satisfied._
1. \(I\cap J=\emptyset\)_,_
2. \(I\subset J\) _and_ \(J\setminus I\) _is connected, or vice versa,_
3. \(I\subset J\) _and both endpoints of_ \(I\) _are the same color, or vice versa,_
4. \(I\cap J\neq I\)_,_ \(I\cap J\neq J\)_, and_ \(I\cap J\) _has endpoints of opposite color._
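Reading the four conditions of Lemma 2.24 as an exhaustive case analysis, compatibility can be decided from the modified intervals alone. The following is a sketch under that reading (the function and its conventions are ours): points of \(\widehat{Q}\) are assumed to be mutually comparable, for instance the pairs from the sketch after Definition 2.17, and `color` maps a point of \(\widehat{Q}\) to its color.

```python
def compatible(I, J, color):
    """Sketch (ours) of the N_pi-compatibility test of Lemma 2.24.  I and J are
    pairs (lo, hi) of comparable points of Q-hat with lo < hi; `color` maps a
    point of Q-hat to 'red' or 'blue'."""
    (a, b), (c, d) = I, J
    lo, hi = max(a, c), min(b, d)
    if lo >= hi:                                    # (1) the supports do not overlap
        return True
    if (a, b) == (lo, hi) or (c, d) == (lo, hi):    # one interval contains the other
        inner = I if (a, b) == (lo, hi) else J
        outer = J if inner == I else I
        # (2) the complement is connected (a shared endpoint), or
        # (3) both endpoints of the inner interval have the same color
        return inner[0] == outer[0] or inner[1] == outer[1] \
            or color(inner[0]) == color(inner[1])
    # (4) a genuine crossing: compatible exactly when the overlap has endpoints
    # of opposite color
    return color(lo) != color(hi)
```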
**Theorem 2.25**.: _Let \(\sigma\in\mathcal{S}(Q)\). The following are equivalent._
* \(\sigma\in\mathcal{S}_{\mathit{fpc}}(Q)\)_._
* _The set of_ \(\sigma\)_-semistable indecomposables is maximally_ \(\mathbf{N}_{\pi}\)_-compatible._
Proof.: Let \((R,B)\) be a representative of \(\sigma\).
\(\Leftarrow\)**.** We prove the contrapositive. Suppose \(\sigma\) does not satisfy the four point condition. Then there are \(a<b<c<d\) in \(\widehat{Q}\) that determine indecomposable modules \(M_{a,b}\), \(M_{a,c}\), \(M_{a,d}\), \(M_{b,c}\), \(M_{b,d}\), \(M_{c,d}\). Here, the notation \(M_{x,y}\) means the interval indecomposable with interval \(I\) such that \(\min\widehat{I}=x\) and \(\max\widehat{I}=y\). Using Lemma 2.24 we see that at least two of these modules must fail to be \(\mathbf{N}_{\pi}\)-compatible with each other.
\(\Rightarrow\)**.** Now suppose \(\sigma\) satisfies the four point condition. By Lemma 2.24 we see that the set of \(\sigma\)-semistable indecomposables is \(\mathbf{N}_{\pi}\)-compatible. We now check maximality.
Let \(M_{J}\) be an indecomposable in \(\operatorname{Ind}^{\mathrm{r}}(Q)\) such that \(M_{J}\) is not \(\sigma\)-semistable. Recall left and right colors from Definition 2.6. There are four cases depending on whether \(J\) is left red or left blue and whether \(J\) is right red or right blue. However, the case where \(J\) is both left and right red is similar to the case where \(J\) is both left and right blue. Furthermore, the case where \(J\) is left red and right blue is similar to the case where \(J\) is left blue and right red. Thus we reduce to two cases where \(J\) is left red: either (1) \(J\) is right blue or (2) \(J\) is right red. (Notice the case where \(M_{J}\) is a simple projective \(M_{[s,s]}\) is similar to the case where \(J\) is left red and right blue.)
**Case (1)**. Since \(M_{J}\) is not \(\sigma\)-semistable, we first consider that \(M_{J}\) fails Definition 2.19(1) but satisfies Definition 2.19(2). Notice that, in this case, it is not possible that \(\inf J=-\infty\) or \(\sup J=+\infty\). Since \(M_{J}\) is left red, right blue, and fails Definition 2.19(1), we must have \(R_{\max}(\inf J)<B_{\min}(\sup J)\). Otherwise, we could create a horizontal line segment in \(\widehat{Q}\times\mathbb{R}\) satisfying Definition 2.19(1). Let \(\varepsilon>0\) such that \(0<\varepsilon<B_{\min}(\sup J)-R_{\max}(\inf J)\). Let
\[\ell=\widehat{Q}\times\{R_{\max}(\inf J)+\varepsilon\}.\]
By Lemma 2.15(1), there exists \(w<\min\widehat{J}\) and \(z>\max\widehat{J}\) in \(\widehat{Q}\) such that the module \(M_{I}\) corresponding to \([w,z]\subset\widehat{Q}\) (Proposition 2.18) is \(\sigma\)-semistable.
Now suppose \(M_{J}\) does not satisfy Definition 2.19(2). First suppose there exists \(x\in J\) such that \(R_{\max}(x)>R_{\max}(\inf J)\). We extend the argument of the proof of Lemma 2.15 to show that \(\overline{\mathcal{G}(R)}\) must have global maxima in the following sense. There is some set \(X\) such that, for all \(x\in X\) and \(y\notin X\), we have \(R_{\max}(y)<R_{\max}(x)\) and, for each \(x,x^{\prime}\in X\), we have \(R_{\max}(x)=R_{\max}(x^{\prime})\). In particular, there is \(z\in\widehat{Q}\) such that \(\min\widehat{J}<z<\max\widehat{J}\) and for all \(x\) such that \(\min\widehat{J}\leq x<z\) we have \(R_{\max}(x)<R_{\max}(z)\). If there is \(x\in[\min\widehat{J},z]\) such that \(B_{\min}(x)<R_{\max}(z)\) then there is \(w\in[\min\widehat{J},z]\) such that the module \(M_{I}\) corresponding to \([w,z]\) is \(\sigma\)-semistable. In particular, \(M_{I}\) is left blue and right red. By Lemma 2.24 we see that \(M_{J}\) and \(M_{I}\) are not \(\mathbf{N}_{\pi}\)-compatible. If no such \(x\in[\min\widehat{J},z]\) exists then there
is a \(w<\min\widehat{J}\) such that the module \(M_{I}\) corresponding to \([w,z]\) is \(\sigma\)-semistable. Since \(M_{I}\) is right red we again use Lemma 2.24 and see that \(M_{I}\) and \(M_{J}\) are not \(\mathbf{N}_{\pi}\)-compatible.
**Case (2)**. If \(M_{J}\) satisfies Definition 2.19(2) but fails Definition 2.19(1), then the function \(R_{\max}(x)\) must be monotonic. If \(R_{\max}(x)\) is decreasing then let \(x^{\prime}=\inf J+\varepsilon\) be red. By Lemma 2.15(1) we can find some \(\widehat{I}\) with left endpoint \(x^{\prime}\) and blue right endpoint \(y^{\prime}\) such that \(y^{\prime}>\sup J\) and \(M_{I}\) is \(\sigma\)-semistable. By Lemma 2.24, \(M_{I}\) and \(M_{J}\) are not \(\mathbf{N}_{\pi}\)-compatible. A similar argument holds if \(R_{\max}(x)\) is monotonic increasing.
Now suppose \(M_{J}\) fails Definition 2.19(2). The argument for the second half of Case (1) does not depend on whether \(J\) is right red or right blue. Therefore, the theorem is true.
Let \(T\) and \(T^{\prime}\) be maximally \(\mathbf{N}_{\pi}\)-compatible sets. We call a bijection \(\mu:T\to T^{\prime}\) a **mutation** if \(T^{\prime}=(T\setminus\{M_{I}\})\cup\{M_{J}\}\), for some \(M_{I}\in T\) and \(M_{J}\in T^{\prime}\), and \(\mu(M_{K})=M_{K}\) for all \(K\neq I\). (Then \(\mu(M_{I})=M_{J}\).)
## 3. Continuous tilting
We construct a continuous version of tilting. Consider a stability condition \(\sigma\) on a continuous quiver of type \(\mathbb{A}\) where \(-\infty\) is a sink and \(s\) is either the smallest source or a real number less than the smallest source. Then continuous tilting at \(s\) will replace the red interval \(K=[-\infty,s)\) with the blue interval \(K^{*}=(-\infty,s]\) and keep the rest of \(Q\) unchanged. Thus, \(\widehat{Q}=K\coprod Z\) is replaced with \(\widehat{Q}^{*}=K^{*}\coprod Z\). We have an order reversing bijection \(\mathfrak{t}:K\to K^{*}\) given by
\[\mathfrak{t}(x)=\tan\left(\tan^{-1}s-\tan^{-1}x-\frac{\pi}{2}\right).\]
This extends, by the identity on \(Z\), to a bijection \(\overline{\mathfrak{t}}:\widehat{Q}\to\widehat{Q}^{*}\).
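The formula for \(\mathfrak{t}\) can be checked numerically: it sends \(K=[-\infty,s)\) into \((-\infty,s]\) and reverses the order. Below is a small sketch under assumptions of ours (NumPy available, the sample grid and the choice \(s=1\) are arbitrary).

```python
# Sketch: a numerical check that t(x) = tan(arctan(s) - arctan(x) - pi/2)
# is order-reversing on [-inf, s) with values in (-inf, s].
import numpy as np

s = 1.0
t = lambda x: np.tan(np.arctan(s) - np.arctan(x) - np.pi / 2)

xs = np.linspace(-50.0, s - 1e-3, 2000)       # an increasing sample of [-inf, s)
ys = t(xs)

assert np.all(np.diff(ys) < 0)                # t reverses the order
assert np.all(ys < s) and np.isclose(t(-1e12), s)   # and t(-inf) = s
```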
### Compatibility conditions
We start with the continuous compatibility conditions for representable modules over the real line. Given a continuous quiver \(Q\) of type \(\mathbb{A}\), we consider intervals \(I\) in \(\mathbb{R}\). Let \(M_{I}\) denote the indecomposable module with support \(I\). We say that \(I\) is **admissible** if \(M_{I}\) is representable. It is straightforward to see that \(I\) is admissible if and only if the following hold.
1. \(\inf I\in I\) if and only if it is blue, and
2. \(\sup I\in I\) if and only if it is red.
By Definition 2.3, neither endpoint of \(I\) can be a source. When \(I=[s,s]\) is a sink, \(\widehat{I}=[s_{-},s_{+}]\). We use notation to state this concisely: For any \(a<b\in\widehat{Q}\), let \(\widehat{I}(a,b)\) be the unique admissible interval in \(\mathbb{R}\) with endpoints \(a,b\). Thus \(a\in\widehat{I}(a,b)\) if and only if \(a\) is blue and \(b\in\widehat{I}(a,b)\) if and only if \(b\) is red. (Recall that every element of \(\widehat{Q}\) is colored red or blue.)
Recall that for each \(\sigma\in\mathcal{S}_{\mathrm{fpc}}(Q)\), the set of \(\sigma\)-semistable modules form a maximally \(\mathbf{N}_{\pi}\)-compatible set (Theorem 2.25).
### Continuous tilting on modules
**Lemma 3.1**.:
1. _Continuous tilting gives a bijection between admissible intervals_ \(I=\widehat{I}(a,b)\) _for_ \(Q\) _and admissible intervals_ \(I^{\prime}\) _for_ \(Q^{\prime}\) _given by_ \(I^{\prime}=\widehat{I}(\overline{\mathfrak{t}}(a),\overline{\mathfrak{t}}(b))\) _if_ \(\overline{\mathfrak{t}}(a)<\overline{\mathfrak{t}}(b)\) _in_ \(\widehat{Q}^{\prime}\) _and_ \(I^{\prime}=\widehat{I}^{\prime}(\overline{\mathfrak{t}}(b),\overline{\mathfrak{t} }(a))\) _if_ \(\overline{\mathfrak{t}}(b)<\overline{\mathfrak{t}}(a)\)_._
2. _Furthermore,_ \(M_{I},M_{J}\) _are_ \(\mathbf{N}_{\pi}\)_-compatible for_ \(Q\) _if and only if_ \(M_{I^{\prime}},M_{J^{\prime}}\) _are_ \(\mathbf{N}_{\pi}\)_-compatible for_ \(Q^{\prime}\)_._
For each admissible interval \(I\) for \(Q\), denote by \(\phi(M_{I})\) the module \(M_{I^{\prime}}\), where \(I^{\prime}\) is the admissible interval of \(Q^{\prime}\) obtained from \(I\) by continuous tilting.
Lemma 3.1 immediately implies the following.
**Theorem 3.2**.: _Continuous tilting gives a bijection \(\Phi\) between maximal compatible sets of representable indecomposable modules over \(Q\) and those of \(Q^{\prime}\). Furthermore if \(\mu:T\to T^{\prime}\) is a mutation then so is \(\Phi(\mu):\Phi T\to\Phi T^{\prime}\) given by \(\phi(M_{I})\mapsto\phi(\mu(M_{I}))\)._
Proof of Lemma 3.1.: (a) Since \(\overline{\mathfrak{t}}:\widehat{Q}\to\widehat{Q}^{\prime}\) is a bijection and \(\widehat{I}(a,b)\) is admissible by notation, we get a bijection with admissible \(Q^{\prime}\) intervals by definition.
(b) Suppose that \(I=\widehat{I}(a,b)\) and \(J=\widehat{I}(c,d)\) with \(a\leq c\) by symmetry. We use Lemma 2.24 to check \(\mathbf{N}_{\pi}\)-compatibility. For this proof, we say "\(I\) and \(J\) are compatible" to mean "\(M_{I}\) and \(M_{J}\) are \(\mathbf{N}_{\pi}\)-compatible".
1. If \(a,b,c,d\) are not distinct then \(\overline{\mathfrak{t}}(a),\overline{\mathfrak{t}}(b),\overline{\mathfrak{t} }(c),\overline{\mathfrak{t}}(d)\) are also not distinct. So, \(I,J\) are compatible for \(Q\) and \(I^{\prime},J^{\prime}\) are compatible for \(Q^{\prime}\) in this case. So, suppose \(S=\{a,b,c,d\}\) has size \(|S|=4\).
2. If \(S\cap K=\emptyset\) then \(I,J\subset Z\). So, \(I^{\prime}=I\) and \(J^{\prime}=J\) are compatible for \(Q^{\prime}\) if and only if \(I,J\) are compatible for \(Q\).
3. If \(|S\cap K|=1\) then \(S\cap K=\{a\}\). Then \(\overline{\mathfrak{t}}\) does not change the order of \(a,b,c,d\) and does not change the colors of \(b,c,d\). So, \(I,J\) are compatible for \(Q\) if and only if \(I^{\prime},J^{\prime}\) are compatible for \(Q^{\prime}\).
4. If \(|S\cap K|=2\) there are three cases: (a) \(a<b<c<d\), (b) \(a<c<b<d\) or (c) \(a<c<d<b\). If \(I,J\) are in case (a) then so are \(I^{\prime},J^{\prime}\) and both pairs are compatible. If \(I,J\) are in case (b) then \(I^{\prime},J^{\prime}\) are in case (c) and vice versa. Since the colors of \(a,c\) change in both cases (from red to blue), \(I,J\) are compatible for \(Q\) if and only if \(I^{\prime},J^{\prime}\) are compatible for \(Q^{\prime}\).
5. If \(|S\cap K|=3\) there are the same three cases as in case (4). If \(I,J\) are in case (a), then \(I^{\prime},J^{\prime}\) are in case (c) and vice versa. Since the middle two vertices are the same color, both pairs are compatible. If \(I,J\) are in case (b) then so are \(I^{\prime},J^{\prime}\) and both pairs are not compatible.
6. If \(S\subset K\) then \(a,b,c,d\) reverse order and all become blue. So, \(I,J\) are compatible if and only if they are in cases (a) or (c) and \(I^{\prime},J^{\prime}\) are in the same case and are also compatible.
In all cases, \(I,J\) are compatible for \(Q\) if and only if \(I^{\prime},J^{\prime}\) are compatible for \(Q^{\prime}\).
We can relate continuous tilting to cluster theories, introduced by the authors and Todorov in [11].
**Definition 3.3**.: Let \(\mathcal{C}\) be an additive, \(\Bbbk\)-linear, Krull-Remak-Schmidt, skeletally small category and let \(\mathbf{P}\) be a pairwise compatibility condition on the isomorphism classes of indecomposable objects in \(\mathcal{C}\). Suppose that for any maximally \(\mathbf{P}\)-compatible set \(T\) and \(X\in T\) there exists at most one \(Y\notin T\) such that \((T\setminus\{X\})\cup\{Y\}\) is \(\mathbf{P}\)-compatible.
Then we call maximally \(\mathbf{P}\)-compatible sets \(\mathbf{P}\)**-clusters**. We call bijections \(\mu:T\to(T\setminus\{X\})\cup\{Y\}\) of \(\mathbf{P}\)-clusters \(\mathbf{P}\)**-mutations**. We call the groupoid whose objects are \(\mathbf{P}\)-clusters and whose morphisms are \(\mathbf{P}\)-mutations (and identity functions) the \(\mathbf{P}\)**-cluster theory of \(\mathcal{C}\).** We denote this groupoid by \(\mathscr{T}_{\mathbf{P}}(\mathcal{C})\) and denote the
inclusion functor into the category of sets and functions by \(I_{\mathcal{C},\mathbf{P}}:\mathscr{T}_{\mathbf{P}}(\mathcal{C})\to\mathscr{S}et\). We say \(\mathbf{P}\)**induces** the \(\mathbf{P}\)-cluster theory of \(\mathcal{C}\).
The isomorphism of cluster theories was introduced by the second author in [17].
**Definition 3.4**.: An **isomorphism of cluster theories** is a pair \((F,\eta)\) with source \(\mathscr{T}_{\mathbf{P}}(\mathcal{C})\) and target \(\mathscr{T}_{\mathbf{Q}}(\mathcal{D})\). Here \(F\) is a functor \(F:\mathscr{T}_{\mathbf{P}}(\mathcal{C})\to\mathscr{T}_{\mathbf{Q}}(\mathcal{D})\) that induces a bijection on objects and morphisms, and \(\eta\) is a natural transformation \(\eta:I_{\mathcal{C},\mathbf{P}}\to I_{\mathcal{D},\mathbf{Q}}\circ F\) such that each component morphism \(\eta_{T}:T\to F(T)\) is a bijection.
We see that, for any continuous quiver \(Q\) of type \(\mathbb{A}\), the pairwise compatibility condition \(\mathbf{N}_{\pi}\) induces the cluster theory \(\mathscr{T}_{\mathbf{N}_{\pi}}(\operatorname{mod}^{\mathrm{r}}(Q))\). The following corollary follows immediately from Theorem 3.2.
**Corollary 3.5** (to Theorem 3.2).: _For any pair of continuous quivers \(Q\) and \(Q^{\prime}\) of type \(\mathbb{A}\) with finitely many sinks and sources, there is an isomorphism of cluster theories \(\mathscr{T}_{\mathbf{N}_{\pi}}(\operatorname{mod}^{\mathrm{r}}(Q))\to\mathscr{T }_{\mathbf{N}_{\pi}}(\operatorname{mod}^{\mathrm{r}}(Q^{\prime}))\)._
### Continuous tilting of stability conditions
Given a stability condition \(\sigma\) for \(Q\), we obtain a stability condition \(\sigma^{\prime}\) for \(Q^{\prime}\) having the property that the \(\sigma^{\prime}\)-semistable modules are related to the \(\sigma\)-semistable modules for \(Q\) by continuous tilting (the bijection \(\Phi\) of Theorem 3.2). Later we will see that these stability conditions give the same measured lamination on the Poincaré disk.
We continue with the notation from sections 3.1 and 3.2 above. If the stability condition \(\sigma\) on \(Q\) is given by the red-blue pair \((R,B)\), the tilted stability condition \(\sigma^{\prime}\) on \(Q^{\prime}\) will be given by \((R^{\prime},B^{\prime})\) given as follows.
1. The pair \((R^{\prime},B^{\prime})\) will be the same as \((R,B)\) on \([s,\infty)\).
2. On \(K^{\prime}=(-\infty,s_{-}]\subseteq\widehat{Q}^{\prime}\), the new red function \(R^{\prime}\) will be constantly equal to \(R_{-}(s)\).
3. On \(K^{\prime}=(-\infty,s_{-}]\), the new blue function \(B^{\prime}\) can be given by "flipping" \(R\) horizontally and flipping each "island" vertically, in either order.
**Notation 3.6**.: Let \(F\) be a useful function. By \(F_{-}(a)\) we denote \(\lim_{x\to a^{-}}F(x)\), for any \(a\in(-\infty,+\infty]\). By \(F_{+}(a)\) we denote \(\lim_{x\to a^{+}}F(x)\), for any \(a\in[-\infty,+\infty)\).
**Definition 3.7**.: A (red) **island** in \(K=[-\infty,s)\subseteq\widehat{Q}\) is an open interval \((x,y)\) in \(K\) which is either:
1. \((x,s)\) where \(x<s\) so that \(R(x)\geq R_{-}(s)\) and \(R(z)<R_{-}(s)\) for all \(x<z<s\), or
2. \((x,y)\) where \(x<y<s\), \(R(x)\geq R(y)\geq R_{-}(s)\), \(R(z)<R(y)\) for all \(x<z<y\) and \(R(w)\leq R(y)\) for all \(y<w<s\).
**Lemma 3.8**.: \(z\in(-\infty,s)\) _is in the interior of some island in \(K\) if and only if there exists \(y\in(z,s)\) so that \(R(z)<R(y)\)._
Proof.: \((\Rightarrow)\) If \(z\) lies in the interior of an island \((x,y)\) there are two cases. (1) For \(y<s\), \(R(z)<R(y)\). (2) For \(y=s\), \(R(z)<R_{-}(s)\). But \(R_{-}(s)\) is a limit, so there is a \(y<s\) arbitrarily close to \(s\) so that \(R(z)<R(y)\) and \(z<y<s\).
\((\Leftarrow)\) Let \(y\in(z,s)\) so that \(R(z)<R(y)\). Let \(r=\sup\{R(y)\,:\,y\in(z,s)\}\). If \(r=R(y)\) for some \(y\in(z,s)\), let \(y\) be minimal. (By the 4 point condition there are at most 2 such \(y\).) Then \(z\) lies in an island \((x,y)\) for some \(x<z\).
If the maximum is not attained, there exists a sequence \(y_{i}\) so that \(R(y_{i})\) converges to \(r\). Then \(y_{i}\) converges to some \(w\in[z,s]\). If \(w\in(z,s)\) then \(R(w)=r\) and we are reduced to the previous case. Since \(R(z)<r\), \(w\neq z\). So, \(w=s\) and \(r=R_{-}(s)\). Then \(z\) lies in an island \((x,s)\) for some \(x<z\). (\(x=\max\{w<z\,:\,R(w)\geq r\}\)) In both cases, \(z\) lies in an island as claimed.
To define the new blue function \(B^{\prime}\), we need a function \(H\) defined as follows.
\[H(z):=\begin{cases}R(y)&\text{if $z\in(x,y]$ for some island $(x,y)$ where $y<s$}\\ R_{-}(s)&\text{if $z\in(x,s)$ and $(x,s)$ is an island}\\ R(z)&\text{for all other $z\in[-\infty,s)$}\end{cases}\]
**Remark 3.9**.: Note that \(H(z)>R(z)\) if \(z\) is in the interior of an island and \(H(z)=R(z)\) otherwise.
**Lemma 3.10**.: \(H\) _is a nonincreasing function, i.e., \(H(x)\geq H(y)\) for all \(x<y<s\). Also, \(H(z)=H_{-}(z)=\lim_{y\to z-}H(y)\) for all \(z<s\) and \(H_{-}(s)=R_{-}(s)\)._
**Remark 3.11**.: Since \(H\) is decreasing and converging to \(R_{-}(s)\) we must have: \(H(x)=H_{-}(x)\geq H_{+}(x)\geq R_{-}(s)\) for all \(x<s\).
Proof.: If \(H(u)<H(z)\) for some \(u<z<s\) then \(R(u)\leq H(u)<H(z)\). But \(H(z)\) is equal to either \(R(z),R_{-}(s)\) or \(R(y)\) for some \(y>z\). So, \(R(u)<R(y)\) for some \(y\in(u,s)\). By Lemma 3.8, \(u\) lies in the interior of some island, say \((x,y)\) and, by definition of \(H\), \(H(u)=R(y)\geq R(w)\) for all \(w\geq y\) and \(H(u)=H(z)=H(y)\) for all \(u\leq z\leq y\). Thus, \(H\) is nonincreasing.
To see that \(H(z)=H_{-}(z)\) suppose first that \(z\in(x,y]\) for some island \((x,y)\). Then \(H(z)=R(y)\) is constant on the interval \((x,y]\). So, \(H(z)=H_{-}(z)=R(y)\). Similarly, \(H(z)=H_{-}(z)\) if \(z\in(x,s)\) and \((x,s)\) is an island. If \(z\) is not in any island, \(H(z)=R(z)\) and \(R(z)=R_{-}(z)\) since, otherwise, \(z\) would be on the right end of an island. And, \(H_{-}(z)\) would be the limit of those \(H(x)\) where \(x<z\) and \(H(x)=R(x)\). So, \(H_{-}(z)=R_{-}(z)=H(z)\) as claimed.
Since \(H(y)\geq R(y)\), we have: \(H_{-}(s)=\lim_{y\to s-}H(y)\geq R_{-}(s)\). If \(H_{-}(s)>R_{-}(s)\), say \(H_{-}(s)=R_{-}(s)+c\) then there is a sequence \(z_{i}\to s-\) so that \(H(z_{i})>R_{-}(s)+c/2\). For each \(z_{i}\) there is \(y_{i}\in[z_{i},s)\) so that \(H(z_{i})=R(y_{i})\). Then \(R(y_{i})>R_{-}(s)+c/2\) for all \(i\) which is not possible since \(y_{i}\to s-\). So, \(H_{-}(s)=R_{-}(s)\).
The monotonicity of \(H\) implies that its variation \(\mathsf{var}_{H}I\) on any interval \(I\) is the difference of its limiting values on the endpoints. The formula is:
\[\mathsf{var}_{H}(a,b)=H_{+}(a)-H_{-}(b).\]
Using \(H=H_{-}\) and \(H_{+}\) we can "flip" the islands up to get \(\widetilde{R}\):
\[\widetilde{R}(z)=H(z)+H_{+}(z)-R(z).\]
**Definition 3.12**.: The new blue function \(B^{\prime}\), shown in Figure 5, is given on \(K^{*}=(-\infty,s]\) by
\[B^{\prime}(z)=\widetilde{R}(\mathfrak{t}(z)).\]
The new red function is constant on \(K^{*}\) with value \(R^{\prime}(x)=R_{-}(s)\) for all \(x\in K^{*}\). On the complement of \(K^{*}\) in \(\widehat{Q}^{\prime}\), the red-blue pair \((R^{\prime},B^{\prime})\) is the same as before.
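By Lemma 3.8 and the definition of \(H\), away from the half-\(\delta\) jump points \(H(z)\) agrees with the supremum of \(R\) on \([z,s)\), and there \(\widetilde{R}=H+H_{+}-R\) reduces to \(2H-R\). On a finite grid the construction of \(\widetilde{R}\) and \(B^{\prime}\) can therefore be approximated directly; the sketch below is such a discretized approximation with sample data of our own (compare Figure 5), ignoring the distinction between \(H\) and \(H_{+}\), which only matters at isolated jump points.

```python
# Discretized sketch (our own approximation) of the island-flipping construction:
# on a finite grid in K = [-inf, s), H is the running maximum of R from the
# right, and R_tilde = 2*H - R reflects each island about its top.
import numpy as np

R = np.array([3.0, 1.0, 2.5, 0.5, 2.0, 1.5, 1.0])   # sampled red function on K

H = np.maximum.accumulate(R[::-1])[::-1]            # H[i] = max(R[i:])
R_tilde = 2 * H - R                                 # islands flipped upward
B_prime = R_tilde[::-1]                             # horizontal mirror: new blue function on K*

print(H)        # [3.  2.5 2.5 2.  2.  1.5 1. ]
print(R_tilde)  # [3.  4.  2.5 3.5 2.  1.5 1. ]
```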
We will now show \(B^{\prime}\) is a useful function with the same variation on \((-\infty,s]\) as \(R\) has on \([-\infty,s)\). More precisely:
**Lemma 3.13**.: _The variation of \(R\) on any open interval \((a,b)\subset[-\infty,s)\) is equal to the variation of \(B^{\prime}\) on \((\mathfrak{t}(b),\mathfrak{t}(a))\)._
Proof.: Since \(B^{\prime}\) is obtained from \(\widetilde{R}\) by reversing the order of the first coordinate, we have \(\mathsf{var}_{B^{\prime}}(\mathfrak{t}(b),\mathfrak{t}(a))=\mathsf{var}_{ \widetilde{R}}(a,b)\). Thus, it suffices to show that \(\mathsf{var}_{\widetilde{R}}(a,b)=\mathsf{var}_{R}(a,b)\).
First, we do the case when \((a,b)\) is an island. Then \(H(z)=H_{+}(z)=R(b)>R(z)\) are constant for all \(z\in(a,b)\). So, \(\widetilde{R}=H+H_{+}-R\) has the same variation as \(R\) on \((a,b)\).
Write \(R=H+(R-H)\). Then we claim that
\[\mathsf{var}_{R}(a,b)=\mathsf{var}_{H}(a,b)+\mathsf{var}_{R-H}(a,b).\]
To see this take any sequence \(a<x_{0}<x_{1}<\cdots<x_{n}<b\). Then the sum
\[\sum_{i=1}^{n}|R(x_{i})-R(x_{i-1})|\]
can be broken up into parts. Let \(A_{1},\cdots,A_{m}\) be the sequence of disjoint subsets of \(S=\{x_{0},\cdots,x_{n}\}\) so that \(A_{j}\) is the intersection of \(S\) with some island \((a_{j},b_{j})\). We may assume that \(a_{j}\) for \(1<j\leq m\) and \(b_{j}\) for \(1\leq j<m\) are in the set \(S\) since they lie in the interval \((a,b)\). For \(1<j\leq m\), if \(x_{i}\) is the smallest element of \(A_{j}\), then \(x_{i-1}=a_{j}\) and the \(x_{i},x_{i-1}\) term in the approximation of \(\mathsf{var}_{H}(a,b)+\mathsf{var}_{H-R}(a,b)\) is
\[|H(a_{j})-H(x_{i})|+|(R-H)(a_{j})-(R-H)(x_{i})|=|R(a_{j})-H(x_{i})|+|H(x_{i})- R(x_{i})|\]
since \(H(a_{j})=R(a_{j})\). This sum is equal to \(|R(a_{j})-R(x_{i})|\), the corresponding term in the approximation of \(\mathsf{var}_{R}(a,b)\), since \(R(a_{j})\geq H(x_{i})>R(x_{i})\). Similarly, \(H(b_{j})=R(b_{j})\) by definition and \(R(b_{j})=H(x_{k})>R(x_{k})\) for any \(x_{k}\in A_{j}\). So,
\[|H(b_{j})-H(x_{k})|+|(R-H)(b_{j})-(R-H)(x_{k})|=|R(b_{j})-R(x_{k})|.\]
If \(x_{i},x_{i+1}\) both lie in \(A_{j}\) then \(H(x_{i})=H(x_{i+1})\). So,
\[|R(x_{i})-R(x_{i+1})|=|(R-H)(x_{i})-(R-H)(x_{i+1})|+|H(x_{i})-H(x_{i+1})|.\]
This equation also holds if \(x_{i},x_{i+1}\) do not lie in any \(A_{j}\) since, in that case, \(R=H\) at both \(x_{i}\) and \(x_{i+1}\). Thus every term in the sum approximating \(\mathsf{var}_{R}(a,b)\) is equal to the sum of the corresponding terms for \(\mathsf{var}_{H}(a,b)\) and \(\mathsf{var}_{R-H}(a,b)\). Taking supremum we get the equation \(\mathsf{var}_{R}(a,b)=\mathsf{var}_{H}(a,b)+\mathsf{var}_{R-H}(a,b)\) as claimed.
Figure 5. The function \(R\) is in red. \(H\), black, flattens the islands of \(R\). When the islands are flipped up, we get \(\widetilde{R}\) in green. The horizontal mirror image of this is the new blue function \(B^{\prime}\) on the right. Figures 8, 10 give another example.
A similar calculation shows that
\[\mathsf{var}_{\widetilde{R}}(a,b)=\mathsf{var}_{H_{+}}(a,b)+\mathsf{var}_{ \widetilde{R}-H_{+}}(a,b).\]
But this is equal to \(\mathsf{var}_{R}(a,b)=\mathsf{var}_{H}(a,b)+\mathsf{var}_{R-H}(a,b)\) since \(H-R=\widetilde{R}-H_{+}\) by definition of \(\widetilde{R}\) and \(\mathsf{var}_{H}(a,b)=H_{+}(a)-H_{-}(b)=\mathsf{var}_{H_{+}}(a,b)\). Thus \(\mathsf{var}_{R}(a,b)=\mathsf{var}_{\widetilde{R}}(a,b)=\mathsf{var}_{B^{\prime}}(\mathfrak{t}(b),\mathfrak{t}(a))\).
For \(x_{0}\) in the interior of the domain of \(f\) let
\[\mathsf{var}_{f}(x_{0}):=\lim_{\delta\to 0}\mathsf{var}_{f}(x_{0}-\delta,x_{0}+ \delta)=\lim_{\delta\to 0}\mathsf{var}_{f}[x_{0}-\delta,x_{0}+\delta]\]
We call this the **local variation** of \(f\) at \(x_{0}\). If \(x_{0}\in(a,b)\) this is equivalent to:
\[\mathsf{var}_{f}(x_{0})=\mathsf{var}_{f}(a,b)-\mathsf{var}_{f}(a,x_{0})- \mathsf{var}_{f}(x_{0},b)\]
since this is the limit of \(\mathsf{var}_{f}(a,b)-\mathsf{var}_{f}(a,x_{0}-\delta)-\mathsf{var}_{f}[x_{0} +\delta,b)=\mathsf{var}_{f}[x_{0}-\delta,x_{0}+\delta]\).
To show that \(B^{\prime}\) is a useful function we need the following lemma.
**Lemma 3.14**.: _A real valued function \(f\) of bounded variation defined in a neighborhood of \(x_{0}\) is continuous at \(x_{0}\) if and only if its local variation, \(\mathsf{var}_{f}(x_{0})=0\). In particular, \(R\) is continuous at \(x\in K\) if and only if \(B^{\prime}\) is continuous at \(\mathfrak{t}(x)\in K^{*}\)._
Proof.: Suppose that \(\mathsf{var}_{f}(x_{0})=0\). Then, for any \(\varepsilon>0\) there is a \(\delta>0\) so that
\[\mathsf{var}_{f}(x_{0}-\delta,x_{0}+\delta)<\varepsilon.\]
Then \(|f(x)-f(x_{0})|<\varepsilon\) for all \(x\in(x_{0}-\delta,x_{0}+\delta)\). So, \(f\) is continuous at \(x_{0}\).
Conversely, suppose \(f\) is continuous at \(x_{0}\). Then, for any \(\varepsilon>0\) there is a \(\delta>0\) so that \(|f(x)-f(x_{0})|<\varepsilon\) for \(|x-x_{0}|<\delta\). Let \(V=\mathsf{var}_{f}[x_{0},x_{0}+\delta)\). By definition of variation there exist \(x_{0}<x_{1}<\cdots<x_{n}<x_{0}+\delta\) so that
\[\sum_{i=1}^{n}|f(x_{i})-f(x_{i-1})|>V-\varepsilon.\]
Since \(|f(x_{1})-f(x_{0})|<\varepsilon\) this implies \(\sum_{i=2}^{n}|f(x_{i})-f(x_{i-1})|>V-2\varepsilon\). So, \(\mathsf{var}_{f}[x_{0},x_{1})<2\varepsilon\). Similarly, there exists \(x_{-1}<x_{0}\) so that \(\mathsf{var}_{f}(x_{-1},x_{0})<2\varepsilon\). So, \(\mathsf{var}_{f}(x_{-1},x_{1})<4\varepsilon\) which is arbitrarily small.
For a useful function \(F\), recall that \(u_{a}^{-}=F(a)-\lim_{x\to a-}F(x)\) and \(u_{a}^{+}=\lim_{x\to a+}F(x)-F(a)\) (Proposition 2.10).
**Proposition 3.15**.: _Let \(F\) be a useful function. Then, the local variation of \(F\) at any point \(a\) is_
\[\mathsf{var}_{F}(a)=|u_{a}^{-}|+|u_{a}^{+}|.\]
Proof.: It follows from the triangle inequality that the variation of \(f+g\) on any open interval is bounded above and below by the sum and differences of the variations of \(f,g\) on that interval. This holds for local variations as well:
\[|\mathsf{var}_{g}(x)-\mathsf{var}_{f}(x)|\leq\mathsf{var}_{f+g}(x)\leq\mathsf{ var}_{f}(x)+\mathsf{var}_{g}(x)\]
Let \(g_{x}=u_{x}^{-}\Delta_{x}^{-}+u_{x}^{+}\Delta_{x}^{+}\). Then
\[\mathsf{var}_{F}(x)=\mathsf{var}_{g_{x}}(x)=|u_{x}^{-}|+|u_{x}^{+}|\]
since \(F-g_{x}\) is continuous at \(x\) and thus, by Lemma 3.14, has \(\mathsf{var}_{F-g_{x}}(x)=0\).
We can say slightly more for the functions \(R\) and \(B^{\prime}\). (See also Figure 6.)
**Lemma 3.16**.: _For any \(a\in K=[-\infty,s)\) let \(b=\mathfrak{t}(a)\in K^{*}\). Then \(v_{b}^{-}=B^{\prime}(b)-B^{\prime}_{-}(b)\leq 0\) and \(v_{b}^{+}=B^{\prime}_{+}(b)-B^{\prime}(b)\geq 0\). In particular, \(B^{\prime}(b)=B^{\prime}_{\min}(b)\)._
Proof.: Since \(B^{\prime}\) is the mirror image of \(\widetilde{R}\), \(v_{b}^{-}\) and \(v_{b}^{+}\) for \(B^{\prime}\) are equal to \(-v_{a}^{+},-v_{a}^{-}\) for \(\widetilde{R}\), respectively, where \(v_{a}^{-}=\widetilde{R}(a)-\widetilde{R}_{-}(a)\) and \(v_{a}^{+}=\widetilde{R}_{+}(a)-\widetilde{R}(a)\). Thus it suffices to show that \(v_{a}^{-}\leq 0\) and \(v_{a}^{+}\geq 0\).
We have \(u_{a}^{-}=R(a)-R_{-}(a)\geq 0\). Also, \(\widetilde{R}_{-}=(H+H_{+}-R)_{-}=2H-R_{-}\). So,
\[v_{a}^{-} =(\widetilde{R}(a)-H_{+}(a))-(\widetilde{R}_{-}(a)-H_{+}(a))\] \[=(H(a)-R(a))+R_{-}(a)-2H(a)+H_{+}(a)\] \[=-u_{a}^{-}-(H(a)-H_{+}(a))\leq 0\]
Similarly, we have \(u_{a}^{+}=R_{+}(a)-R(a)\leq 0\) and \(\widetilde{R}_{+}(a)=2H_{+}(a)-R_{+}(a)\). So,
\[v_{a}^{+} =(\widetilde{R}_{+}(a)-H_{+}(a))-(\widetilde{R}(a)-H_{+}(a))\] \[=(H_{+}(a)-R_{+}(a))-(H(a)-R(a))\] \[=(H_{+}(a)-H(a))-u_{a}^{+}\]
To show that \(v_{a}^{+}\geq 0\), there are two cases. If \(a\) lies in an island \((x,y)\), then \(H_{+}(a)=H(a)=R(y)\) (or \(R_{-}(s)\) if \(y=s\)) and \(v_{a}^{+}=-u_{a}^{+}\geq 0\). If \(a\) does not lie in an island then \(H(a)=R(a)\) and \(H_{+}(a)\geq R_{+}(a)\). So, \(v_{a}^{+}\geq 0\).
**Theorem 3.17**.: _The new pair \((R^{\prime},B^{\prime})\) is a red-blue pair for the quiver \(Q^{\prime}\) and the \(\sigma^{\prime}\)-semistable \(Q^{\prime}\) modules given by this pair are the continuous tilts of the \(\sigma\)-semistable \(Q\)-modules given by the original pair \((R,B)\)._
Proof.: Lemma 3.13 implies that \(R\) and \(B^{\prime}\) have the same local variation at the corresponding points \(x\) and \(\mathfrak{t}(x)\). In particular, \(R\) and \(B^{\prime}\) have discontinuities at corresponding points by Lemma 3.14, and \(B^{\prime}(a)=B^{\prime}_{\min}(a)\) by Lemma 3.16.
The new red function \(R^{\prime}\) is constantly equal to \(R_{-}(s)\) on \(K^{*}\) and equal to the old function \(R\) on the complement \(Z\). So, \(B^{\prime}(x)\geq R^{\prime}(x)\) and they have the same limit as \(x\to-\infty\) by Remark 3.11. Thus \((R^{\prime},B^{\prime})\) form a red-blue pair for \(Q^{\prime}\).
Let \(\sigma,\sigma^{\prime}\) be the stability conditions on \(Q,Q^{\prime}\) given by the red-blue pairs \((R,B)\) and \((R^{\prime},B^{\prime})\), resp. It remains to show that the admissible interval \(I=\widehat{I}(a,b)\) is
Figure 6. This red function \(R\) has a spike on the right end \(b\) of an island \((a,b)\) and a discontinuity at the left end \(a\). When the island is flipped, we get a downward spike at \(a\) and a discontinuity at \(b\). The function \(R\) is the maximum and the tilted functions \(\widetilde{R}\) and \(B^{\prime}\) are minimums on vertical lines.
\(\sigma\)-semistable for \(Q\) if and only if the corresponding interval \(I^{\prime}\) is \(\sigma^{\prime}\)-semistable for \(Q^{\prime}\) where \(I^{\prime}=\widehat{I}^{\prime}(\overline{\mathfrak{t}}(a),\overline{\mathfrak{t }}(b))\) if \(\overline{\mathfrak{t}}(a)<\overline{\mathfrak{t}}(b)\) in \(\widehat{Q}^{\prime}\) and \(I^{\prime}=\widehat{I}^{\prime}(\overline{\mathfrak{t}}(b),\overline{ \mathfrak{t}}(a))\) if \(\overline{\mathfrak{t}}(b)<\overline{\mathfrak{t}}(a)\).
Consider \(a<b\) in \(\overline{\mathbb{R}}\). There are three cases.
1. \(a=\overline{\mathfrak{t}}(a)\) and \(b=\overline{\mathfrak{t}}(b)\) both lie in \(Z\).
2. \(-\infty\leq a<b<s\)\((a,b\in K)\) and \(-\infty<\overline{\mathfrak{t}}(b)<\overline{\mathfrak{t}}(a)\leq s\)\((\overline{\mathfrak{t}}(a),\overline{\mathfrak{t}}(b)\in K^{*})\).
3. \(a\in K\), \(\overline{\mathfrak{t}}(a)\in K^{*}\) and \(b=\overline{\mathfrak{t}}(b)\in Z\).
In Case (1), the stability conditions \(\sigma,\sigma^{\prime}\) given by the red and blue functions are the same on \(Z\). So, \(\widehat{I}(a,b)\) is \(\sigma\)-semistable if and only if \(\widehat{I}^{\prime}(a,b)=\widehat{I}^{\prime}(\overline{\mathfrak{t}}(a), \overline{\mathfrak{t}}(b))\) is \(\sigma^{\prime}\)-semistable for \(Q^{\prime}\).
In Case (2), we claim that \(\widehat{I}(a,b)\) at height \(h\) is \(\sigma\)-semistable if and only if \(\widehat{I}^{\prime}(\overline{\mathfrak{t}}(b),\overline{\mathfrak{t}}(a))\) is \(\sigma^{\prime}\)-semistable at height \(h^{\prime}\) where \(h^{\prime}=2H(b)-h\).
An example can be visualized in Figure 6 by drawing horizontal lines at height \(h<H\) and \(h^{\prime}>H_{+}\) under the line \(H\) on the left and over \(H_{+}\) on the right.
To see this in general, note that if \(\widehat{I}(a,b)\) at height \(h\) is \(\sigma\)-semistable then, for all \(z\in(a,b)\), \(R(z)\leq h\) (with equality holding for at most one value of \(z\), call it \(z=c\)) and \(R(a),R(b)\geq h\). Then for each \(z\in[a,b]\), \(H(z)\geq h\). So, for each \(z\in(a,b)\), \(z\neq c\), we have \(H(z)>R(z)\). By Remark 3.9, \(z\) lies in the interior of an island for \(R\). But \(\widetilde{R}(z)-H_{+}(z)=H(z)-R(z)>0\). So, the same values of \(z\) lie in islands for \(\widetilde{R}\) and \(\widetilde{R}(z)-h^{\prime}=h-R(z)\geq 0\). Also, \(\widetilde{R}(a),\widetilde{R}(b)\leq h^{\prime}\) since:
\[h^{\prime}-\widetilde{R}(b) =2H(b)-h-H(b)-H_{+}(b)+R(b)\] \[=(H(b)-H_{+}(b))+(R(b)-h)\geq 0\]
and, since \(H_{+}(a)=H(b)\) and \(H(a)=\) either \(H(b)\) or \(R(a)\),
\[h^{\prime}-\widetilde{R}(a) =2H(b)-h-H(a)-H_{+}(a)+R(a)\] \[=R(a)-h+H(b)-H(a)\] \[\text{either }=R(a)-h\geq 0\] \[\text{or }=H(b)-h\geq 0\]
Therefore, \([a,b]\times h^{\prime}\) is a chord for \(\widetilde{R}\), making its mirror image \([\overline{\mathfrak{t}}(b),\overline{\mathfrak{t}}(a)]\times h^{\prime}\) a chord for \(B^{\prime}\) and thus \(\widehat{I}^{\prime}(\overline{\mathfrak{t}}(b),\overline{\mathfrak{t}}(a))\) is \(\sigma^{\prime}\)-semistable for \(Q^{\prime}\) at height \(h^{\prime}\). An analogous argument shows the converse. So, \(\widehat{I}(a,b)\) at height \(h\) is \(\sigma\)-semistable for \(Q\) if and only if \(\widehat{I}^{\prime}(\overline{\mathfrak{t}}(b),\overline{\mathfrak{t}}(a))\) is \(\sigma^{\prime}\)-semistable at height \(h^{\prime}\) for \(Q^{\prime}\).
In Case (3), we change notation to match Figure 6. Suppose we have \(b\in K\), \(\overline{\mathfrak{t}}(b)\in K^{*}\) and \(c=\overline{\mathfrak{t}}(c)\in Z\). We claim that \(\widehat{I}(b,c)\) is \(\sigma\)-semistable at height \(h\) if and only if \(\widehat{I}^{\prime}(\overline{\mathfrak{t}}(b),\overline{\mathfrak{t}}(c))\) is \(\sigma^{\prime}\)-semistable at the same height \(h\).
In Figure 6, the chord \([b,c]\times h\) would be a horizontal line starting at any point on the vertical red line at \(b\) and going to the right. For \(\widetilde{R}\), we have \(H(b)\geq h\geq H_{+}(b)\), so a horizontal line at height \(h\) starting anywhere on the vertical segment \(b\times[H_{+}(b),H(b)]\) could go left without hitting the function \(\widetilde{R}\) except at height \(h=H_{+}(a)=H(b)\) where it would touch the function at \((a,H_{+}(a))\) then continue. For \(B^{\prime}\), the horizontal line starting at \((\overline{\mathfrak{t}}(b),h)\) would go right, possibly touch the curve at \(\overline{\mathfrak{t}}(a)\) and continue to the point \((c,h)\).
The situation in general is very similar. \(\widehat{I}(b,c)\) is \(\sigma\)-semistable at height \(h\) for some \(c\in Z\) if and only if \(H_{+}(b)\leq h\leq H(b)=R(b)\). Since \(H_{+}(b)\) is the supremum of \(R(x)\) for all \(b<x<s\), this is equivalent to saying the horizontal line at \(h\) does
not touch the curve \(R\) except possibly at one point (not more by the four point condition). If \(h=H(b)\), this horizontal line might continue to the left of \((b,h)\) and hit at most one point \((a,h)\) on the curve \(R\).
If \(h<H(b)\) then the horizontal line at \((b,h)\) for \(\widetilde{R}\) would go to the left and not hit anything since, for all \(x<b\), we have \(\widetilde{R}(x)\geq H_{+}(x)\geq H(b)>h\). So, the line from \((\overline{\mathfrak{t}}(b),h)\) to \((\overline{\mathfrak{t}}(c),h)\) would not hit \(B^{\prime}\).
If \(h=H(b)\), then, for all \(x<b\), \(\widetilde{R}(x)\geq H_{+}(x)\geq H(b)=h\). So, the line going left from \((b,h)=(b,H(b))\) would stay under \(\widetilde{R}\) possibly touching it at most once, say at \((a,h)\). Then \((a,b)\) would be an island and we have the situation in Figure 6. By the four point condition we cannot have another point \(a^{\prime}\) with the same property since \((a,h),(b,h),(c,h)\) are already on a line. The horizontal line going right from \((\overline{\mathfrak{t}}(b),h)\) would touch the curve \(B^{\prime}\) at \((\overline{\mathfrak{t}}(a),h)\) and continue to \((\overline{\mathfrak{t}}(c),h)\).
So, \(\widetilde{I}(b,c)\) being \(\sigma\)-semistable at height \(h\) implies that \(\widetilde{I}^{\prime}(\widetilde{\mathfrak{t}}(b),\widetilde{\mathfrak{t}}(c))\) is \(\sigma^{\prime}\)-semistable at the same height \(h\). The converse is similar since going from \(B^{\prime}\) to \(R\) is analogous (change \(B^{\prime}\) to \(-B^{\prime}\) and make it red). This concludes the proof in all cases.
## 4. Measured Laminations and Stability Conditions
In this section we connect measured laminations of the hyperbolic plane to stability conditions for continuous quivers of type \(\mathbb{A}\). We first define measured laminations (Definition 4.1) of the hyperbolic plane and prove some basic results we need in Section 4.1. In Section 4.2 we describe the correspondence that connects stability conditions to measured laminations. In Section 4.3 we present a candidate for continuous cluster characters. In Section 4.4 we briefly describe how all maximally \(\mathbf{N}_{\pi}\)-compatible sets come from a stability condition. In Section 4.5 we describe maps between cluster categories of type \(\mathbb{A}_{n}\) that factor through our continuous tilting. We also give an example for type \(\mathbb{A}_{4}\).
### Measured Laminations
We denote by \(\mathfrak{h}^{2}\) the Poincare disk model of the hyperbolic plane and by \(\partial\mathfrak{h}^{2}\) the boundary of the disk such that \(\partial\mathfrak{h}^{2}\) is the unit circle in \(\mathbb{C}\). Recall a **lamination** of \(\mathfrak{h}^{2}\) is a maximal set of noncrossing geodesics and that a geodesic in \(\mathfrak{h}^{2}\) is uniquely determined by a distinct pair of points on \(\partial\mathfrak{h}^{2}\).
Let \(L\) be a lamination of \(\mathfrak{h}^{2}\). Choose two open interval subsets \(A\) and \(B\) of \(\partial\mathfrak{h}^{2}\), each of which may be all of \(\partial\mathfrak{h}^{2}\) or empty. Let \(O_{A,B}\) be the set of geodesics with one endpoint in \(A\) and the other in \(B\). We call \(O_{A,B}\) a **basic open subset** of \(L\). Notice that \(O_{A,B}=O_{B,A}\). The basic open sets define a topology on \(L\).
**Definition 4.1**.: Let \(L\) be a lamination of \(\mathfrak{h}^{2}\) and \(\mathcal{M}:L\to\mathbb{R}_{\geq 0}\) a measure on \(L\). We say \((L,\mathcal{M})\) is a **measured lamination** if \(0<\mathcal{M}(O_{A,B})<\infty\) for every \(O_{A,B}\neq\emptyset\).
Notice that we immediately see any measured lamination \((L,\mathcal{M})\) has finite measure. That is, \(0<\mathcal{M}(L)<\infty\).
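Since laminations and their basic open sets are described entirely by pairs of boundary points, the bookkeeping can be illustrated with a small script. The following sketch (ours, not from the paper) records finitely many weighted geodesics by their endpoint pairs in the indexing of \(\partial\mathfrak{h}^{2}\) by \(\mathbb{R}\cup\{-\infty\}\) used below, checks that they are pairwise noncrossing, and measures a basic open set \(O_{A,B}\); a genuine lamination is maximal and typically uncountable, so this is only a toy illustration.

```python
from itertools import combinations

# A geodesic is recorded by its two boundary points, indexed by real numbers
# (the indexing of the boundary circle introduced later in this section).
def cross(g, h):
    (a, b), (c, d) = sorted(g), sorted(h)
    # Two geodesics cross iff exactly one endpoint of one lies strictly
    # between the endpoints of the other.
    return a < c < b < d or c < a < d < b

# Toy "measured lamination": finitely many noncrossing weighted geodesics.
geodesics = {(-2.0, 1.0): 0.5, (-1.0, 0.0): 0.25, (2.0, 5.0): 1.0}
assert not any(cross(g, h) for g, h in combinations(geodesics, 2))

def measure_O(A, B):
    """Measure of the basic open set O_{A,B}: geodesics with one endpoint
    in the open interval A and the other in the open interval B."""
    inside = lambda x, I: I[0] < x < I[1]
    return sum(w for (a, b), w in geodesics.items()
               if (inside(a, A) and inside(b, B)) or (inside(a, B) and inside(b, A)))

print(measure_O((-3.0, 0.5), (0.6, 2.0)))  # 0.5: only the geodesic (-2, 1) qualifies
```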
We now define some useful pieces of laminations.
**Definition 4.2**.: Let \(L\) be a lamination of \(\mathfrak{h}^{2}\).
1. Let \(\gamma\in L\) be a geodesic determined by \(a,b\in\partial\mathfrak{h}^{2}\). We say \(\gamma\) is a **discrete arc** if there exist non-intersecting open subsets \(A\ni a\) and \(B\ni b\) of \(\partial\mathfrak{h}^{2}\) such that \(O_{A,B}=\{\gamma\}\).
2. Let \(a\in\partial\mathfrak{h}^{2}\). Let \(A\) be some interval subset of \(\partial\mathfrak{h}^{2}\) with more than one element such that for every geodesic \(\gamma\in L\) determined by some \(a^{\prime}\in A\) and \(b\in\partial\mathfrak{h}^{2}\), we have \(b=a\). Then we define the set \(K\) of geodesics determined by the pair \(a,A\) to be called a **fountain**. We say \(K\) is **maximal** if a fountain determined by \(a,A^{\prime}\), where \(A^{\prime}\supseteq A\), is precisely \(K\).
3. Let \(A,B\) be interval subsets of \(\partial\mathfrak{h}^{2}\) whose intersection contains at most one point. Suppose that for every geodesic \(\gamma\in L\) determined by \(a,b\in\partial\mathfrak{h}^{2}\), we have \(a\in A\setminus\partial A\) if and only if \(b\in B\setminus\partial B\). If there is more than one such geodesic, we call the set \(K\) of all such geodesics determined by \(a,b\) with \(a\in A\) and \(b\in B\) a **rainbow**. We say \(K\) is **maximal** if a rainbow determined by \(A^{\prime}\supseteq A\) and \(B^{\prime}\supseteq B\) is precisely \(K\).
From the definitions we have a result about discrete arcs, fountains, and rainbows.
**Proposition 4.3**.: _Let \(L\) be a lamination of \(\mathfrak{h}^{2}\) and let \(K\) be a discrete geodesic, a fountain, or a rainbow. Then \(\mathcal{M}(K)>0\)._
Proof.: By definition, if \(K=\{\gamma\}\) is a discrete arc then \(K=O_{A,B}\) and so \(\mathcal{M}(K)>0\). Additionally, if \(K=L\) then \(K=O_{\partial\mathfrak{h}^{2},\partial\mathfrak{h}^{2}}\) and so \(\mathcal{M}(K)>0\). So we will assume \(K\) is either a fountain or a rainbow and \(K\neq L\); in particular \(K\) has more than one element.
First suppose \(K\) is a fountain determined by \(a\in\partial\mathfrak{h}^{2}\) and \(A\subset\partial\mathfrak{h}^{2}\). By definition \(K\) has more than one element and so \(A\setminus\partial A\neq\emptyset\). If \(a\notin A\) then let \(B\ni a\) be a small open ball around \(a\) in \(\partial\mathfrak{h}^{2}\) such that \(B\cap A=\emptyset\). Now consider \(O_{A\setminus\partial A,B}\). We see \(O_{A\setminus\partial A,B}\subset K\) and \(\mathcal{M}(O_{A\setminus\partial A,B})>0\). If \(a\in A\) then every geodesic determined by an \(a^{\prime}\) and \(b\) with \(a^{\prime}\in A\setminus(\{a\}\cup\partial A)\) has \(b=a\). Let \(A^{\prime}=A\setminus(\{a\}\cup\partial A)\) and let \(B\ni a\) be an open ball such that \(A\setminus\partial A\not\subset B\). Now we have \(O_{A^{\prime},B}\subset K\) and \(\mathcal{M}(O_{A^{\prime},B})>0\). Therefore \(\mathcal{M}(K)>0\).
Now suppose \(K\) is a rainbow determined by \(A\) and \(B\). Again we know \(K\) has more than one element so both \(A\setminus\partial A\) and \(B\setminus\partial B\) are nonempty. Take \(A^{\prime}=A\setminus\partial A\) and \(B^{\prime}=B\setminus\partial B\). Then \(O_{A^{\prime},B^{\prime}}\subset K\) and \(\mathcal{M}(O_{A^{\prime},B^{\prime}})>0\). Therefore, \(\mathcal{M}(K)>0\).
### The Correspondence
In this section, we recall the connection between \(\mathbf{N}_{\pi}\)-clusters and (unmeasured) laminations of \(\mathfrak{h}^{2}\) for the straight descending orientation of a continuous quiver of type \(\mathbb{A}\), from [12]. We then extend this connection to measured laminations and stability conditions that satisfy the four point condition, obtaining a "2-bijection" (Theorem 4.12). Then we further extend this "2-bijection" between measured laminations and stability conditions to all continuous quivers of type \(\mathbb{A}\) with finitely many sinks and sources (Corollary 4.13). We conclude that section with an explicit statement that tilting a stability condition \(\sigma\in\mathcal{S}_{\mathrm{fpc}}(Q)\) to a stability condition \(\sigma^{\prime}\in\mathcal{S}_{\mathrm{fpc}}(Q^{\prime})\) yields the _same_ measured lamination, for continuous quivers \(Q,Q^{\prime}\) of type \(\mathbb{A}\) (Theorem 4.14).
**Theorem 4.4** (from [12]).: _There is a bijection \(\Phi\) from maximally \(\mathbf{N}_{\pi}\)-compatible sets to laminations of \(\mathfrak{h}^{2}\). For each maximally \(\mathbf{N}_{\pi}\)-compatible set \(T\) and corresponding lamination \(\Phi(T)\), there is a bijection \(\phi_{T}:T\to\Phi(T)\) that takes objects in \(T\) to geodesics in \(\Phi(T)\)._
Before we proceed we introduce some notation to make some remaining definitions and proofs in this section more readable. First, we fix an indexing on \(\partial\mathfrak{h}^{2}\) in
the following way. To each point \(x\in\mathbb{R}\cup\{-\infty\}\) we assign the point \(e^{i\arctan(x)}\) in \(\partial\mathfrak{h}^{2}\). We now refer to points in \(\partial\mathfrak{h}^{2}\) as points in \(\mathbb{R}\cup\{-\infty\}\).
**Notation 4.5**.: Let \((L,\mathcal{M})\) be a measured lamination of \(\mathfrak{h}^{2}\).
* For each \(\gamma\in L\) we denote by \(\gamma_{a}\) and \(\gamma_{b}\) the unique points in \(\partial\mathfrak{h}^{2}\) that determine \(\gamma\) such that \(\gamma_{a}<\gamma_{b}\) in \(\mathbb{R}\cup\{-\infty\}\).
* For each \(x\in\partial\mathfrak{h}^{2}\) such that \(x\neq-\infty\), \[\frac{L}{x}:= \{\gamma\in L\mid\gamma_{a}<x<\gamma_{b}\}\] \[L\cdot x:= \{\gamma\in L\mid\gamma_{b}=x\}\] \[x\cdot L:= \{\gamma\in L\mid\gamma_{a}=x\}.\]
* For \(-\infty\), \[\frac{L}{-\infty}:= \emptyset\] \[L\cdot(-\infty):= \emptyset\] \[(-\infty)\cdot L:= \{\gamma\in L\mid\gamma_{a}=-\infty\}.\]
* Finally, for some interval \(I\subset\mathbb{R}\), \[I\cdot L:= \bigcup_{x\in I}x\cdot L=\{\gamma\in L\mid\gamma_{a}\in I\}\] \[L\cdot I:= \bigcup_{x\in I}L\cdot x=\{\gamma\in L\mid\gamma_{b}\in I\}.\]
We denote by \(\mathcal{L}\) the set of measured laminations of \(\mathfrak{h}^{2}\) and by \(\overline{\mathcal{L}}\) the set of laminations of \(\mathfrak{h}^{2}\) (without a measure).
Now we define how to obtain a useful function \(F\) from any measured lamination \(L\in\mathcal{L}\). We will use this to define a function \(\mathcal{L}\to\mathcal{S}_{\text{fpc}}(Q)\), where \(Q\) is the continuous quiver of type \(\mathbb{A}\) with straight descending orientation.
**Definition 4.6**.: Let \((L,\mathcal{M})\in\mathcal{L}\). We will define a useful function \(F\) on \(-\infty\), \(+\infty\), and then all of \(\mathbb{R}\). For \(-\infty\), define
\[u_{-\infty}^{-}:= 0 u_{-\infty}^{+}:= -\mathcal{M}((-\infty)\cdot L)\] \[F(-\infty):= 0 f(-\infty):= 0.\]
For \(+\infty\), define
\[u_{+\infty}^{-}=u_{+\infty}^{+}=F(+\infty)=f(+\infty)=0.\]
For each \(a\in\mathbb{R}\), define
\[u_{a}^{-}:= \mathcal{M}(L\cdot a) u_{a}^{+}:= -\mathcal{M}(a\cdot L)\] \[F(a):= -\mathcal{M}\left(\frac{L}{a}\right) f(a):= F(a)-\left(\sum_{x\leq a}u_{x}^{-}\right)-\left(\sum_{x<a}u_{x}^{+} \right).\]
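For a toy measured lamination with finitely many weighted geodesics, all of the quantities in Notation 4.5 and Definition 4.6 can be computed directly. The following sketch (our own illustration, with made-up data) does so; note that in such a purely atomic example the function \(f\) vanishes identically, since all of the variation of \(F\) is carried by the jumps \(u_{x}^{-}\) and \(u_{x}^{+}\).

```python
# Toy measured lamination: weighted geodesics (a, b) with a < b.
L = {(-2.0, 1.0): 0.5, (-1.0, 1.0): 0.25, (1.0, 4.0): 1.0}

M_over     = lambda x: sum(w for (a, b), w in L.items() if a < x < b)   # M(L/x)
M_ending   = lambda x: sum(w for (a, b), w in L.items() if b == x)      # M(L . x)
M_starting = lambda x: sum(w for (a, b), w in L.items() if a == x)      # M(x . L)

u_minus = lambda x: M_ending(x)      # u_x^-
u_plus  = lambda x: -M_starting(x)   # u_x^+
F       = lambda x: -M_over(x)

def f(x):
    ends   = {b for (_, b) in L}
    starts = {a for (a, _) in L}
    return (F(x)
            - sum(u_minus(y) for y in ends if y <= x)
            - sum(u_plus(y) for y in starts if y < x))

for x in (-2.0, 0.0, 1.0, 2.0, 5.0):
    print(x, F(x), f(x))   # f is identically 0 here: the measure is purely atomic
```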
First, note that since \(\mathcal{M}(L)<\infty\), each of the assignments is well-defined. It remains to show that \(F\) is a useful function.
**Proposition 4.7**.: _Let \((L,\mathcal{M})\in\mathcal{L}\) and let \(F\) be as in Definition 4.6. Then \(F\) is useful._
Proof.: Since \(\mathcal{M}(L)<\infty\), we see \(\sum_{x\in\mathbb{R}\cup\{+\infty\}}|u_{x}^{-}|+\sum_{x\in\mathbb{R}\cup\{-\infty \}}|u_{x}^{+}|<\infty\). Now we show \(f\) is continuous. Consider \(\lim_{x\to a^{-}}f(x)\) for any \(a\in\mathbb{R}\):
\[\lim_{x\to a^{-}}f(x) =\lim_{x\to a^{-}}\left[F(x)-\left(\sum_{y\leq x}u_{y}^{-}\right)-\left(\sum_{y<x}u_{y}^{+}\right)\right]\] \[=\lim_{x\to a^{-}}\left[-\mathcal{M}\left(\frac{L}{x}\right)-\left(\sum_{y\leq x}\mathcal{M}(L\cdot y)\right)+\left(\sum_{y<x}\mathcal{M}(y\cdot L)\right)\right]\] \[=-\mathcal{M}\left(\frac{L}{a}\right)-\mathcal{M}(L\cdot a)-\left(\sum_{x<a}\mathcal{M}(L\cdot x)\right)+\left(\sum_{x<a}\mathcal{M}(x\cdot L)\right)\] \[=F(a)-\left(\sum_{x\leq a}u_{x}^{-}\right)-\left(\sum_{x<a}u_{x}^{+}\right)\] \[=f(a).\]
A similar computation shows \(\lim_{x\to a^{+}}f(x)=f(a)\). Therefore, \(f\) is continuous on \(\mathbb{R}\). We also note that \(\lim_{x\to\pm\infty}f(x)=0\), using similar computations.
It remains to show that \(f\) has bounded variation. Let \(a<b\in\mathbb{R}\) and let \(F_{0}=f\). Denote by \(\mathsf{var}_{f}([a,b))\) the variation of \(f\) over \([a,b)\). We see that
\[\mathsf{var}_{f}([a,b))=\mathcal{M}(([a,b)\cdot L)\cup(L\cdot[a,b)))-\sum_{x\in[a,b)}(\mathcal{M}(x\cdot L)+\mathcal{M}(L\cdot x)).\]
That is, \(\mathsf{var}_{f}([a,b))\) is the measure of the geodesics with an endpoint in \([a,b)\) that are not discrete and do not belong to a fountain. So,
\[\mathsf{var}_{f}([a,b))\leq\mathcal{M}(([a,b)\cdot L)\cup(L\cdot[a,b))).\]
Then we have
\[\mathsf{var}_{f}(\mathbb{R})=\sum_{i\in\mathbb{Z}}\mathsf{var}_{f}([i,i+1)) \leq\sum_{i\in\mathbb{Z}}\mathcal{M}(([i,i+1)\cdot L)\cup(L\cdot[i,i+1)))<\infty.\]
Thus, \(f\) has bounded variation.
We state the following lemma without proof, since the proof follows directly from Definitions 2.14 and 4.6 and Proposition 4.7.
**Lemma 4.8**.: _Let \((L,\mathcal{M})\in\mathcal{L}\) and let \(F\) be as in Definition 4.6. Then \((F,0)\) is a red-blue function pair for the continuous quiver \(Q\) of type \(\mathbb{A}\) with straight descending orientation._
We now define the function \(\mathcal{L}\to\mathcal{S}(Q)\).
**Definition 4.9**.: Let \((L,\mathcal{M})\in\mathcal{L}\), let \(F\) be as in Definition 4.6, and let \(Q\) be the continuous quiver of type \(\mathbb{A}\) with straight descending orientation. The map \(\Phi:\mathcal{L}\to\mathcal{S}(Q)\) is defined by setting \(\Phi((L,\mathcal{M}))\) equal to the equivalence class of \((F,0)\).
**Lemma 4.10**.: _Let \(L\in\mathcal{L}\) and let \(\partial\mathfrak{h}^{2}\) be indexed as \(\mathbb{R}\cup\{-\infty\}\), as before. Suppose there are points \(a,b\in\partial\mathfrak{h}^{2}\) such that for all \(x\in(a,b)\) we have \(\mathcal{M}(\frac{L}{x})\geq\mathcal{M}(\frac{L}{a})\) and \(\mathcal{M}(\frac{L}{x})\geq\mathcal{M}(\frac{L}{b})\). Then the geodesic in \(\mathfrak{h}^{2}\) uniquely determined by \(a\) and \(b\) is in \(L\)._
Proof.: For contradiction, suppose there is \(\alpha\in L\) such that \(\alpha\) is uniquely determined by \(c\) and \(d\), where \(a<c<b<d\). Then, we must have \(\beta\in L\) uniquely determined by \(c\) and \(b\), or else there is a set \(K\) with positive measure such that \(K\subset\frac{L}{b}\) but \(K\not\subset\frac{L}{c}\). Similarly, we must have \(\gamma\in L\) uniquely determined by \(a\) and \(c\). Now, we cannot have a fountain at \(c\) or else we will have a set with positive measure \(K\) such that \(K\subset\frac{L}{b}\) or \(K\subset\frac{L}{a}\) but \(K\not\subset\frac{L}{c}\). Since \(c\) has a geodesic to both the left and right, \(\alpha\) must be discrete. But then \(\{\alpha\}\) has positive measure, with \(\{\alpha\}\subset\frac{L}{b}\) and \(\{\alpha\}\not\subset\frac{L}{c}\), a contradiction. Thus, there is no \(\alpha\in L\) such that \(\alpha\) is uniquely determined by \(c\) and \(d\), where \(a<c<b<d\). Similarly, there is no \(\alpha\in L\) such that \(\alpha\) is uniquely determined by \(c\) and \(d\), where \(c<a<d<b\). Therefore, since \(L\) is maximal, we must have the geodesic uniquely determined by \(a\) and \(b\) in \(L\).
**Proposition 4.11**.: _Let \((L,\mathcal{M})\in\mathcal{L}\), let \(F\) be as in Definition 4.6, and let \(Q\) be the continuous quiver of type \(\mathbb{A}\) with straight descending orientation. Then \(\Phi((L,\mathcal{M}))\in\mathcal{S}_{\text{fpc}}(Q)\)._
Proof.: For contradiction, suppose there exists a \(\Phi((L,\mathcal{M}))\)-semistable module \(M_{I}\) with a stability indicator at some height \(h\) such that \(|(\widehat{I}\times\{h\})\cap\widehat{\mathcal{G}}(F,0)|\geq 4\). Choose \(4\) points \(a<b<c<d\) in \(\widehat{Q}\) corresponding to four intersection points.
For the remainder of this proof, write \(x\)-\(y\) to mean the geodesic in \(\mathfrak{h}^{2}\) uniquely determined by \(x\neq y\in\partial\mathfrak{h}^{2}\). By Lemma 4.10, we have the following geodesics in \(L\): \(a\)-\(b\), \(a\)-\(c\), \(a\)-\(d\), \(b\)-\(c\), \(b\)-\(d\), and \(c\)-\(d\). However, this is a quadrilateral with _both_ diagonals, as shown in Figure 7. Since \(L\) is a lamination, this is a contradiction.
**Theorem 4.12**.: _Let \(Q\) be the continuous quiver of type \(\mathbb{A}\) with straight descending orientation. Then \(\Phi:\mathcal{L}\to\mathcal{S}_{\text{fpc}}(Q)\) is a bijection. Furthermore, for a measured lamination \(L\) and stability condition \(\Phi(L)\), there is a bijection \(\phi_{L}\) from \(L\) to \(\Phi(L)\)-semistable indecomposable modules._
Proof.: By the proof of Proposition 4.11, we see that the second claim follows. Thus, we now show \(\Phi\) is a bijection.
**Injectivity.** Consider \((L,\mathcal{M})\) and \((L^{\prime},\mathcal{M}^{\prime})\) in \(\mathcal{L}\). Let \(\sigma=\Phi(L,\mathcal{M})\) and \(\sigma^{\prime}=\Phi(L^{\prime},\mathcal{M}^{\prime})\). If \(L\neq L^{\prime}\) then we see that the set of \(\sigma\)-semistable modules is different from the set of \(\sigma^{\prime}\)-semistable modules. Thus, \(\sigma\neq\sigma^{\prime}\). If \(L=L^{\prime}\) but \(\mathcal{M}\neq\mathcal{M}^{\prime}\), there must be some \(x\in\mathbb{R}\cup\{-\infty\}\) such that \(\mathcal{M}(\frac{L}{x})\neq\mathcal{M}^{\prime}(\frac{L^{\prime}}{x})\), and hence \(F(x)\neq F^{\prime}(x)\), where \(F\) and \(F^{\prime}\) are the functions obtained from \(L\) and \(L^{\prime}\), respectively, using Definition 4.6. But \(F\) and \(F^{\prime}\) both have the same limits at \(\pm\infty\). Thus, \(\widehat{\mathcal{G}}(F,0)\) is not a vertical translation of \(\widehat{\mathcal{G}}(F^{\prime},0)\) in \(\mathbb{R}^{2}\). Therefore, \(\sigma\neq\sigma^{\prime}\).
Figure 7. The geodesics \(a\)-\(b\), \(a\)-\(c\), \(a\)-\(d\), \(b\)-\(c\), \(b\)-\(d\), and \(c\)-\(d\) used in the proof of Proposition 4.11. Notice \(a\)-\(b\), \(b\)-\(c\), \(c\)-\(d\), and \(a\)-\(d\) form a quadrilateral and its diagonals, \(a\)-\(c\) and \(b\)-\(d\), cross.
**Surjectivity.** Let \(\sigma\) be a stability condition. Let \(T\) be the maximal \(\mathbf{N}_{\pi}\)-compatible set of indecomposable modules determined by \(\sigma\) (Theorem 2.25). Let \(L\) be the lamination of \(\mathfrak{h}^{2}\) uniquely determined by \(T\) (Theorem 4.4). In particular, the indecomposable \(M_{I}\) corresponds to the geodesic uniquely determined by \(\inf I\) and \(\sup I\).
Let \((R,B)\) be the representative of \(\sigma\) such that \(B=0\); that is, \((R,B)=(R,0)\). For each \(x\in\partial\mathfrak{h}^{2}\), let
\[\mathcal{M}(L\cdot x) =u_{x}^{-} \mathcal{M}(x\cdot L) =-u_{x}^{+}\] \[\mathcal{M}\left(\frac{L}{x}\right) =-R(x).\]
Since \(R\) must have bounded variation and \(\sum_{x\in\overline{\mathbb{R}}}|u_{x}^{-}|+|u_{x}^{+}|<\infty\), we see \(\mathcal{M}(L)<\infty\).
Let \(O_{A,B}\subset L\) be a basic open subset. If \(O_{A,B}=\emptyset\) then we're done.
Now we assume \(O_{A,B}\neq\emptyset\) and let \(\gamma\in O_{A,B}\). If there exist two stability indicators for the indecomposable \(M_{I}\) corresponding to \(\gamma\), with heights \(h_{0}<h_{1}\), then we know \(\mathcal{M}(\{\gamma\})>|h_{1}-h_{0}|>0\) and so \(\mathcal{M}(O_{A,B})>0\).
We now assume there is a unique stability indicator of height \(h\) for the indecomposable \(M_{I}\) corresponding to \(\gamma\). Without loss of generality, since \(O_{A,B}=O_{B,A}\), assume \(a=\gamma_{a}\in A\) and \(b=\gamma_{b}\in B\). We know that, for all \(a<x<b\), we have \(R(x)\leq R(a)\) and \(R(x)\leq R(b)\). There are two cases: (1) \(R(x)<R(a)\) and \(R(x)<R(b)\), for all \(x\in(a,b)\), and (2) there exists \(x\in(a,b)\) such that \(R(x)=R(a)\) or \(R(x)=R(b)\).
**Case (1).** Let \(e=\tan(\frac{1}{2}(\tan^{-1}(a)+\tan^{-1}(b)))\). Let \(\{h_{i}\}_{i\in\mathbb{N}}\) be a strictly increasing sequence such that \(h_{0}=R(e)\) and \(\lim_{i\to\infty}h_{i}=h\). By Lemma 2.15(1) and our assumption that \(\mathcal{M}(\{\gamma\})=0\), for each \(i>0\), there is a stability indicator with height \(h_{i}\) and endpoints \(a_{i},b_{i}\) such that \(a<a_{i}<b_{i}<b\). Then \(\lim_{i\to\infty}a_{i}=a\) and \(\lim_{i\to\infty}b_{i}=b\), again by Lemma 2.15(1). Since \(A\) and \(B\) are open, there is some \(N\in\mathbb{N}\) such that, for all \(i\geq N\), we have \(a_{i}\in A\) and \(b_{i}\in B\). Let \(C=(a,a_{N})\) and \(D=(b_{N},b)\). Then, \(\mathcal{M}(O_{C,D})\geq|h-h_{N}|\) and so \(\mathcal{M}(O_{A,B})>0\).
**Case (2).** Assume there exists \(x\in(a,b)\) such that \(R(x)=R(a)\) or \(R(x)=R(b)\). Let \(e\) be this \(x\). If \(R(x)=R(a)\) and \(a=-\infty\), then \(R(b)=0\) (or else \(\gamma\notin O_{A,B}\subset L\)). Then we use the technique from Case (1) with \(b\) and \(+\infty\) to obtain some \(C=(b,d)\) and \(D=(c,+\infty)\) such that \(\mathcal{M}(O_{C,D})>0\). Thus, \(\mathcal{M}(O_{A,B})>0\).
Now we assume \(a>-\infty\) and \(R(x)=R(a)\) or \(R(x)=R(b)\). We consider \(R(x)=R(b)\) as the other case is similar. Since \(\sigma\) satisfies the four point condition, we know that for any \(\varepsilon>0\) such that \(R(b+\varepsilon)<R(b)\) we must have \(0<\lambda<\varepsilon\) such that \(R(b+\lambda)>R(b)\). Similarly, for any \(\varepsilon>0\) such that \(R(a-\varepsilon)<R(b)\) we must have \(0\leq\lambda<\varepsilon\) such that \(R(a-\lambda)>R(b)\). Notice the strict inequality in the statement about \(R(b+\lambda)\) and the weak inequality in the statement about \(R(a-\lambda)\).
Let \(\{h_{i}\}\) be a strictly decreasing sequence such that \(h_{0}=0\) and \(\lim_{i\to\infty}h_{i}=h\). By Lemma 2.15(1) and our assumption that \(\mathcal{M}(\{\gamma\})=0\), for each \(i>0\), there is a stability indicator with height \(h_{i}\) and endpoints \(a_{i},b_{i}\) such that \(a_{i}\leq a<b<b_{i}\). Since \(\sigma\) satisfies the four point condition, and again by Lemma 2.15(1), \(\lim_{i\to\infty}b_{i}=b\). Since \(A\) and \(B\) are open, there is \(N\in\mathbb{N}\) such that, if \(i\geq N\), we have \(a_{i}\in A\) and \(b_{i}\in B\). If \(a_{i}=a\) for any \(i>N\), let \(C\) be a tiny epsilon ball around \(a\) that does not include \(b\). Otherwise, let \(C=(a_{N},a)\). Let \(D=(b,b_{N})\). Then \(\mathcal{M}(O_{C,D})\geq|h_{N}-h|\) and so \(\mathcal{M}(O_{A,B})>0\).
**Conclusion.** Since \(\mathcal{M}(L)<\infty\), we know \(\mathcal{M}(O_{A,B})<\infty\) for each \(O_{A,B}\). This proves \((L,\mathcal{M})\) is a measured lamination. By the definition of \(\Phi\), we see that \(\Phi(L,\mathcal{M})=\sigma\). Therefore, \(\Phi\) is surjective and thus bijective.
**Corollary 4.13** (to Theorems 3.17 and 4.12).: _Let \(Q\) be a continuous quiver of type \(\mathbb{A}\). Then there is a bijection \(\Phi:\mathcal{L}\to\mathcal{S}_{\text{fpc}}(Q)\). Furthermore, for a measured lamination \(L\) and stability condition \(\Phi(L)\), there is a bijection \(\phi_{L}\) from \(L\) to \(\Phi(L)\)-semistable indecomposable modules._
**Theorem 4.14**.: _Let \(\sigma\in\mathcal{S}_{\text{fpc}}(Q)\) be the stability condition given by \((R,B)\) and let \(\sigma^{\prime}\in\mathcal{S}_{\text{fpc}}(Q^{\prime})\) be the stability condition given by the pair \((R^{\prime},B^{\prime})\) obtained from \((R,B)\) by continuous tilting. Then \(\sigma,\sigma^{\prime}\) give the same measured lamination on the Poincare disk._
Proof.: The set of geodesics going from intervals \((a,b)\) to \((x,y)\) has the same measure as those going from \((\mathfrak{t}(a),\mathfrak{t}(b))\) to \((\mathfrak{t}(x),\mathfrak{t}(y))\) where we may have to reverse the order of the ends. We can break up the intervals into pieces and assume that \((a,b),(x,y)\) are either both in \(K\), both in \(Z\) or one is in \(K\) and the other in \(Z\). The only nontrivial case is when \((a,b)\) is in \(K\) and \((x,y)\) is in \(Z\). In that case, the measure of this set of geodesics for \(\sigma\) is equal to the variation of \(H\) on \((a,b)\) since the islands don't "see" \(Z\). Similarly, the measure of the same set of geodesics for \(\sigma^{\prime}\), now parametrized as going from \((\mathfrak{t}(b),\mathfrak{t}(a))\) to \((x,y)\) is equal to the variation of \(H^{\prime}\) on \((\mathfrak{t}(b),\mathfrak{t}(a))\) where \(H^{\prime}(z)=H_{+}(\mathfrak{t}(z))\).
There is one other case that we need to settle: We need to know that the local variation of \(H\) at \(-\infty\) is equal to the local variation of \(H^{\prime}\) at \(r\). But this holds by definition of \(H,H^{\prime}\).
An example of a stability condition \(\sigma\) and corresponding measured lamination are shown in Figures 8, 9. The continuously tilted stability condition \(\sigma^{\prime}\) is shown in Figure 10.
Figure 8. The modified graph of the red-blue function pair \((R,B)\). Horizontal lines indicate semistable indecomposable representations. The rectangle labeled \(X\) represents one object with positive measure. The measure of a region is given by its height.
### Continuous cluster character
We present a candidate for the continuous cluster character using the formula from [15] which applies in the continuous case. This lives in a hypothetical algebra having a variable \(x_{t}\) for every real number \(t\). In this algebra, which we have not defined, we give a simple formula for the cluster
Figure 10. This is the continuous tilting of Figure 8. There are two islands \(F1\) and \(R3\) which have been flipped up. The measured lamination (Figure 9) is unchanged, only relabeled.
Figure 9. The lamination of the hyperbolic plane corresponding to the stability condition shown in Figure 8. The thick arc labeled \(X\) is an isolated geodesic with positive measure. The measure is equal to the height of the rectangle labeled \(X\) in Figure 8.
variable of an admissible module \(M_{ab}\) where \(a<b\) and the quiver \(Q\) is oriented to the left (is red) in a region containing \((a,b]\) in its interior. In analogy with the cluster character in the finite case (5) or [15], replacing summation with integration, we define \(\chi(M_{ab})\) to be the formal expression:
\[\chi(M_{ab})=\int_{a}^{b}\frac{x_{a}x_{b}\,\mathrm{d}t}{x_{t}^{2}}. \tag{6}\]
This could be interpreted as an actual integral of some function \(x_{t}\). For example, if we let \(x_{t}=t\) then we get \(\chi(M_{ab})=b-a\), the length of the support of \(M_{ab}\). The constant function \(x_{t}=1\) gives the same result.
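As a quick sanity check of these two evaluations (our own verification, using sympy), one can compute the integral in (6) symbolically; we take \(0<a<b\) so that the choice \(x_{t}=t\) keeps the integrand finite on \([a,b]\).

```python
import sympy as sp

a, b, t = sp.symbols('a b t', positive=True)  # assume 0 < a < b

chi_linear   = sp.integrate(a * b / t**2, (t, a, b))  # x_t = t
chi_constant = sp.integrate(1, (t, a, b))             # x_t = 1

print(sp.simplify(chi_linear), sp.simplify(chi_constant))  # both equal b - a
```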
The same cluster character formula will be used for modules with support \([a,b)\) in the blue region (where the quiver is oriented to the right).
This can also be written as
\[\chi(M_{ab})=x_{a}\chi(P_{b})-x_{b}\chi(P_{a})\]
where \(P_{b}\) is the projective module at \(b\) with cluster character
\[\chi(P_{b})=\int_{-\infty}^{b}\frac{x_{b}\,\mathrm{d}t}{x_{t}^{2}}.\]
Then the cluster mutation equation
\[\chi(M_{ac})\chi(M_{bd})=\chi(M_{ab})\chi(M_{cd})+\chi(M_{bc})\chi(M_{ad})\]
follows, as in the finite \(A_{n}\) case, from the Plucker relation on the matrix:
\[\begin{bmatrix}x_{a}&x_{b}&x_{c}&x_{d}\\ \chi(P_{a})&\chi(P_{b})&\chi(P_{c})&\chi(P_{d})\end{bmatrix}.\]
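The identity above is exactly the three-term Plucker relation among the \(2\times 2\) minors of this matrix, and it can be checked symbolically. In the following sketch (ours), the entries \(\chi(P_{a}),\ldots,\chi(P_{d})\) are treated as independent symbols, which is all the identity requires.

```python
import sympy as sp

x_a, x_b, x_c, x_d = sp.symbols('x_a x_b x_c x_d')
P_a, P_b, P_c, P_d = sp.symbols('P_a P_b P_c P_d')  # stand-ins for chi(P_a), ..., chi(P_d)

# chi(M_uv) = x_u * chi(P_v) - x_v * chi(P_u): the 2x2 minor on columns u, v.
chi = lambda xu, Pu, xv, Pv: xu * Pv - xv * Pu

lhs = chi(x_a, P_a, x_c, P_c) * chi(x_b, P_b, x_d, P_d)
rhs = (chi(x_a, P_a, x_b, P_b) * chi(x_c, P_c, x_d, P_d)
       + chi(x_b, P_b, x_c, P_c) * chi(x_a, P_a, x_d, P_d))

print(sp.expand(lhs - rhs))  # 0: the Plucker relation for 2x4 minors
```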
In Figures 8 and 9, if the measure of \(X=M_{df}\) is decreased to zero, the height of the rectangle in Figure 8 will go to zero, the four point condition will be violated and we can mutate \(X\) to \(X^{*}=M_{bh}\). Then the cluster characters are mutated by the Ptolemy equation:
\[\chi(X)\chi(X^{*})=\chi(M_{fh})\chi(M_{bd})+\chi(M_{dh})\chi(M_{bf})\]
where \(\chi(M_{bd})\) and \(\chi(M_{fh})\) are given by (6) and the other four terms have a different equation since there is a source (0) in the middle (\(d<0<f\)):
\[\chi(X)=\chi(M_{df})=\int_{d}^{0}\int_{0}^{f}\frac{x_{d}x_{0}x_{f}}{x_{s}^{2} x_{t}^{2}}\,\mathrm{d}s\,\mathrm{d}t+\frac{x_{d}x_{f}}{x_{0}}.\]
The double integral counts the proper submodules \(M_{ds}\oplus M_{tf}\subset X\) and there is one more term for the submodule \(X\subseteq X\).
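As a consistency check (ours) of this mutation, one can verify the Ptolemy equation above symbolically for the constant function \(x_{t}=1\), assuming the ordering \(b<d<0<f<h\) suggested by Figures 8 and 9: modules supported on one side of the source then have \(\chi(M_{uv})=v-u\), while modules whose support crosses \(0\) have \(\chi(M_{uv})=-uv+1\) by the two-region formula.

```python
import sympy as sp

b, d, f, h = sp.symbols('b d f h')  # assume b < d < 0 < f < h as in Figure 8

chi_one_side = lambda u, v: v - u          # x_t = 1, support on one side of 0
chi_across   = lambda u, v: -u * v + 1     # x_t = 1, support crossing the source 0

X, X_star = chi_across(d, f), chi_across(b, h)
rhs = (chi_one_side(f, h) * chi_one_side(b, d)
       + chi_across(d, h) * chi_across(b, f))

print(sp.expand(X * X_star - rhs))  # 0: the Ptolemy exchange relation holds
```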
The continuous cluster character will be explained in more detail in another paper.
### Every \(\mathbf{N}_{\pi}\)-cluster comes from a stability condition
Let \(L\) be a lamination of \(\mathfrak{h}^{2}\). Then there exists a measured lamination \((L,\mathcal{M})\) in the following way. There are at most countably many discrete arcs in \(L\). Assign each discrete arc a natural number \(n\). Then, set \(\mathcal{M}(\{\gamma_{n}\})=\frac{1}{1+n^{2}}\), for \(n\in\mathbb{N}\). Let \(K\) be the set of all discrete geodesics in \(L\). On \(L\setminus K\), give each \(O_{A,B}\) its transversal measure. Thus, we have given \(L\) a finite measure satisfying Definition 4.1. Therefore, \((L,\mathcal{M})\) is a measured lamination. This means the set of measured laminations, \(\mathcal{L}\), surjects onto the set of laminations, \(\overline{\mathcal{L}}\), by "forgetting" the measure. Then, the set \(\mathcal{S}_{\mathrm{fpc}}(Q)\)
for some continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources, surjects onto the set of \(\mathbf{N}_{\pi}\)-clusters, \(\mathcal{T}_{\mathbf{N}_{\pi}}\), in the following way.
Essentially, there is a surjection \(\mathcal{S}_{\mathrm{fpc}}(Q)\twoheadrightarrow\mathcal{T}_{\mathbf{N}_{\pi}}\) defined using the surjection \(\mathcal{L}\twoheadrightarrow\overline{\mathcal{L}}\). If we follow the arrows around, we see that each stability condition \(\sigma\) is sent to the set of \(\sigma\)-semistable modules, which form an \(\mathbf{N}_{\pi}\)-cluster.
### Maps between cluster categories of type \(\mathbb{A}_{n}\)
Let \(Q\) be a quiver of type \(\mathbb{A}_{n}\), for \(n\geq 2\). Label the vertices \(1,\ldots,n\) in \(Q\) such that there is an arrow between \(i\) and \(i+1\) for each \(1\leq i<n\).
For each \(i\in\{-1,0,\ldots,n,n+1,n+2\}\) let
\[x_{i}=\tan\left(\frac{i+1}{n+3}\pi-\frac{\pi}{2}\right).\]
We define a continuous quiver \(\mathcal{Q}\) of type \(\mathbb{A}\) based on \(Q\), called the **continuification** of \(Q\). If \(1\) is a sink (respectively, source) then \(-\infty\) is a sink (respectively, source) in \(\mathcal{Q}\). If \(n\) is a sink (respectively, source) then \(+\infty\) is a sink (respectively, source) in \(\mathcal{Q}\). For all \(i\) such that \(2\leq i\leq n-1\), we have \(x_{i}\) is a sink (respectively, source) in \(\mathcal{Q}\) if and only if \(i\) is a sink (respectively, source) in \(Q\).
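The boundary points \(x_{i}\) can be computed directly; the following sketch (ours) prints them for \(n=4\), recovering \(\pm\infty\) together with \(\tan(\pm\pi/14)\), \(\tan(\pm 3\pi/14)\) and \(\tan(\pm 5\pi/14)\), the endpoints that appear in the \(\mathbb{A}_{4}\) example below.

```python
import math

def continuification_points(n):
    # x_i = tan((i + 1)/(n + 3) * pi - pi/2) for i in {-1, 0, ..., n + 2};
    # i = -1 and i = n + 2 give -infinity and +infinity respectively.
    pts = {}
    for i in range(-1, n + 3):
        if i == -1:
            pts[i] = float('-inf')
        elif i == n + 2:
            pts[i] = float('inf')
        else:
            pts[i] = math.tan((i + 1) / (n + 3) * math.pi - math.pi / 2)
    return pts

print(continuification_points(4))
```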
Define a map \(\Omega:\operatorname{Ind}(\mathcal{C}(Q))\to\operatorname{Ind}^{\mathrm{r}}(\mathcal{Q})\) as in Figure 11.
Let \(1\leq m<n\) such that there is a path \(1\to m\) in \(Q\) or a path \(m\to 1\) in \(Q\) (possibly trivial). Let \(Q^{\prime}\) be obtained from \(Q\) by reversing the path between \(1\) and \(m\) (if \(m=1\) then \(Q=Q^{\prime}\)). It is well known that \(\mathcal{D}^{b}(Q)\) and \(\mathcal{D}^{b}(Q^{\prime})\) are equivalent as triangulated categories. Let \(F:\mathcal{D}^{b}(Q)\to\mathcal{D}^{b}(Q^{\prime})\) be a triangulated equivalence determined by sending \(P_{n}[0]\) to \(P_{n}[0]\). Furthermore, we know \(\tau\circ F(M)\cong F\circ\tau(M)\) for every object \(M\) in \(\mathcal{D}^{b}(Q)\), where \(\tau\) is the Auslander-Reiten translation. Then this induces a functor \(\overline{F}:\mathcal{C}(Q)\to\mathcal{C}(Q^{\prime})\). Overloading notation, we denote by \(\overline{F}:\operatorname{Ind}(\mathcal{C}(Q))\to\operatorname{Ind}(\mathcal{C}(Q^{\prime}))\) the induced map on isomorphism classes of indecomposable objects.
Let \(\mathcal{Q}^{\prime}\) be the continuification of \(Q^{\prime}\) and \(\Omega^{\prime}:\operatorname{Ind}(\mathcal{C}(Q^{\prime}))\to\operatorname{Ind}^{\mathrm{r}}(\mathcal{Q}^{\prime})\) the inclusion defined in the same way as \(\Omega\). Notice that the orientations of \(\mathcal{Q}\) and \(\mathcal{Q}^{\prime}\) agree above \(x_{m}\). Furthermore, if \(m>1\), the interval \((-\infty,x_{m})\) is blue in \(\mathcal{Q}\) if and only if it is red in \(\mathcal{Q}^{\prime}\) and vice versa. Using Theorem 3.2, there is a map \(\phi:\operatorname{Ind}^{\mathrm{r}}(\mathcal{Q})\to\operatorname{Ind}^{\mathrm{r}}(\mathcal{Q}^{\prime})\) such that \(\{M,N\}\subset\operatorname{Ind}^{\mathrm{r}}(\mathcal{Q})\) are \(\mathbf{N}_{\pi}\)-compatible if and only if \(\{\phi(M),\phi(N)\}\subset\operatorname{Ind}^{\mathrm{r}}(\mathcal{Q}^{\prime})\) are \(\mathbf{N}_{\pi}\)-compatible. Following tedious computations, we have the following commutative diagram that preserves compatibility:
\[\begin{array}{ccc}\operatorname{Ind}(\mathcal{C}(Q))&\xrightarrow{\ \overline{F}\ }&\operatorname{Ind}(\mathcal{C}(Q^{\prime}))\\ {\scriptstyle\Omega}\big\downarrow&&\big\downarrow{\scriptstyle\Omega^{\prime}}\\ \operatorname{Ind}^{\mathrm{r}}(\mathcal{Q})&\xrightarrow{\ \phi\ }&\operatorname{Ind}^{\mathrm{r}}(\mathcal{Q}^{\prime})\end{array}\]
#### 4.5.1. An example for \(\mathbb{A}_{4}\) quivers
Let \(Q,Q^{\prime}\) be the following quivers and let \(\mathcal{Q},\mathcal{Q}^{\prime}\) be the respective continuifications defined above with functions \(\Omega,\Omega^{\prime}\).
Let \(\overline{F}:\mathcal{C}(Q)\to\mathcal{C}(Q^{\prime})\) be defined as above. A visualization of the commutative diagram above in \(\mathfrak{h}^{2}\) is contained in Figure 12 on page 37.
For \(\Omega:\operatorname{Ind}(\mathcal{C}(Q))\to\operatorname{Ind}^{\mathrm{r}}( \mathcal{Q})\):
\[A =\Omega(P_{4}) F =\Omega(M_{23}) K =\Omega(P_{3}[1])\] \[B =\Omega(P_{3}) G =\Omega(I_{3}) L =\Omega(I_{1})\] \[C =\Omega(P_{2}) H =\Omega(P_{4}[1])) M =\Omega(P_{2}[1])\] \[D =\Omega(P_{1}) I =\Omega(S_{2}) N =\Omega(P_{1}[1])\] \[E =\Omega(S_{3}) J =\Omega(I_{2}).\]
To save space we will indicate an indecomposable module by its support interval.
\[A =[\tan(3\pi/14),+\infty) F =[\tan(-\pi/14),\tan(5\pi/14)) K =[\tan(-5\pi/14),\tan(3\pi/14))\] \[B =[\tan(\pi/14),+\infty) G =[\tan(-3\pi/14),\tan(5\pi/14)) L =[\tan(-3\pi/14),\tan(\pi/14))\] \[C =[\tan(-\pi/14),+\infty) H =[\tan(-5\pi/14),\tan(5\pi/14)) M =[\tan(-5\pi/14),\tan(\pi/14))\] \[D =[\tan(-3\pi/14),+\infty) I =[\tan(-\pi/14),\tan(3\pi/14)) N =[\tan(-5\pi/14),\tan(-\pi/14))\] \[E =[\tan(\pi/14),\tan(5\pi/14)) J =[\tan(-3\pi/14),\tan(3\pi/14)).\]
For \(\Omega^{\prime}:\operatorname{Ind}(\mathcal{C}(Q^{\prime}))\to \operatorname{Ind}^{\mathrm{r}}(\mathcal{Q}^{\prime})\):
\[A =\Omega^{\prime}(P_{4}^{\prime}) F =\Omega^{\prime}(I_{2}^{\prime}) K =\Omega^{\prime}(P_{3}^{\prime}[1])\] \[B =\Omega^{\prime}(P_{3}^{\prime}) G =\Omega^{\prime}(I_{3}^{\prime}) L =\Omega^{\prime}(P_{1}^{\prime})\] \[C =\Omega^{\prime}(M_{23}^{\prime}) H =\Omega^{\prime}(P_{4}^{\prime}[1]) M =\Omega^{\prime}(P_{2}^{\prime})\] \[D =\Omega^{\prime}(I_{4}^{\prime}) I =\Omega^{\prime}(P_{1}^{\prime}[1]) N =\Omega^{\prime}(S_{2}^{\prime})\] \[E =\Omega^{\prime}(I_{1}^{\prime}) J =\Omega^{\prime}(P_{2}^{\prime}[1]).\]
In \(\operatorname{mod}^{\mathrm{r}}(\mathcal{Q}^{\prime})\):
\[A =[\tan(3\pi/14),+\infty) F =(\tan(-5\pi/14),\tan(5\pi/14)) K =(\tan(-\pi/14),\tan(3\pi/14))\] \[B =(-\infty,+\infty) G =(\tan(-3\pi/14),\tan(5\pi/14)) L =(-\infty,\tan(-3\pi/14)]\] \[C =(\tan(-5\pi/14),+\infty) H =(\tan(-\pi/14),\tan(5\pi/14)) M =(-\infty,\tan(\pi/14))\] \[D =(\tan(-3\pi/14),+\infty) I =(\tan(-5\pi/14),\tan(3\pi/14)) N =(\tan(-5\pi/14),\tan(-\pi/14)]\] \[E =(-\infty,\tan(5\pi/14)) J =(\tan(-3\pi/14),\tan(3\pi/14)).\]
The orange highlights changes due to tilting. The purple highlights a _coincidental_ fixed endpoint (but notice the change in open/closed).
## Future Work
There are a few questions that naturally arise from our results. What is the connection between our tilting and the reflection functors introduced in [14]? What if we considered _all_ modules over a continuous quiver of type \(\mathbb{A}\), instead of just those that are representable? Can we expand Section 4.3 and describe a continuous cluster algebra? The authors plan to explore some of these questions in future research.
There is still much work to do with general continuous stability, as well. What can we learn by studying measured laminations of other surfaces? For example, can we connect a continuous type \(\mathbb{D}\) quiver to measured laminations of the punctured (Poincare) disk? In the present paper, we consider stability conditions in the sense of King. What about other kinds of stability conditions? Furthermore, can the connections between stability conditions and moduli spaces be generalized to the continuous case?
Figure 12. \(\mathbb{A}_{4}\) example – arcs in \(\mathfrak{h}^{2}\). Continuous tilting doesn’t move arcs in the hyperbolic plane. We can see this by relabeling the boundary of \(\mathfrak{h}^{2}\) accordingly. We also see how the diagonals of the heptagon (which models the cluster combinatorics for \(\mathcal{C}(Q)\) and \(\mathcal{C}(Q^{\prime})\)) are preserved by \(\overline{F}\). |
2309.11598 | A theory satisfying a strong version of Tennenbaum's theorem | We answer a question of Pakhomov by showing that there is a consistent, c.e.
theory $T$ such that no theory which is definitionally equivalent to $T$ has a
computable model. A key tool in our proof is the model-theoretic notion of
mutual algebraicity. | Patrick Lutz, James Walsh | 2023-09-20T19:21:11Z | http://arxiv.org/abs/2309.11598v1 | # A theory satisfying a strong version of Tennenbaum's theorem
###### Abstract.
We answer a question of Pakhomov by showing that there is a consistent, c.e. theory \(T\) such that no theory which is definitionally equivalent to \(T\) has a computable model. A key tool in our proof is the model-theoretic notion of mutual algebraicity.
## 1. Introduction
Tennenbaum's theorem states that there is no computable nonstandard model of \(\mathsf{PA}\)[15]. Often, this result is viewed as giving us one reason the standard model of \(\mathsf{PA}\) is special--it is the only computable model--but another perspective is possible: Tennenbaum's theorem is a source of examples of consistent, c.e. theories with no computable models.
To explain this perspective, let us say that a theory \(T\) has the **Tennenbaum property** if \(T\) has no computable models. Tennenbaum's theorem implies that there are many consistent extensions of \(\mathsf{PA}\) with the Tennenbaum property. For example, the theory \(\mathsf{PA}+\neg\mathsf{Con}(\mathsf{PA})\) (which asserts that \(\mathsf{PA}\) is inconsistent) is a consistent extension of \(\mathsf{PA}\) with only nonstandard models and hence, by Tennenbaum's theorem, with no computable models. Furthermore, a slight extension of the proof of Tennenbaum's theorem can be used to prove that many other theories have the Tennenbaum property. For example, it is not hard to show that \(\mathsf{ZFC}\) has no computable models [Ham] and likewise for much weaker theories like \(\mathsf{Z}_{2}\) (the theory of full second order arithmetic), or even \(\mathsf{RCA}_{0}\) (at least if "model" is understood in the usual sense of first order logic). More generally, it seems to be an empirical fact that every natural theory which interprets even a small fragment of second order arithmetic has the Tennenbaum property.
Recently, however, Pakhomov showed that this phenomenon is somewhat fragile: it depends on the specific language in which the theory is presented [14]. To make this idea precise, Pakhomov used the notion of **definitional equivalence** (also known as **synonymy**), a strong form of bi-interpretability introduced by de Bouvere in [1]. Roughly speaking, theories \(T\) and \(T^{\prime}\) in languages \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\) are definitionally equivalent if they can be viewed as two instances of a single theory, but with different choices of which notions to take as primitive.
**Theorem 1.1** (Pakhomov).: _There is a theory \(T\) which is definitionally equivalent to \(\mathsf{PA}\) such that any consistent, c.e. extension of \(T\) has a computable model._
This theorem implies that every consistent, c.e. extension of \(\mathsf{PA}\) is definitionally equivalent to a theory with a computable model. Moreover, the techniques used by Pakhomov are not restricted to extensions of \(\mathsf{PA}\). For example, Pakhomov notes that they are sufficient to prove that \(\mathsf{ZF}\) is definitionally equivalent to a theory with a computable model. More generally, Pakhomov's techniques seem sufficient to prove that each example we have given so far of a theory with the Tennenbaum property is definitionally equivalent to a theory without the Tennenbaum property.
In light of these observations, Pakhomov asked how general this phenomenon is [10]. In particular, does it hold for every consistent, c.e. theory?
**Question 1** (Pakhomov).: _Is every consistent, c.e. theory definitionally equivalent to a theory with a computable model?_
The purpose of this paper is to answer this question in the negative. In other words, to give an example of a consistent, c.e. theory which satisfies a strong version of the Tennenbaum property.
**Theorem 1.2**.: _There is a consistent, c.e. theory \(T\) such that no theory which is definitionally equivalent to \(T\) has a computable model._
To prove this theorem, we construct a theory \(T\) which has no computable models but is also model-theoretically tame. A key observation in our proof is that if a theory \(T\) is sufficiently tame then any theory definitionally equivalent to \(T\) must also be fairly tame. In particular, if \(T\) is sufficiently tame then every theory which is definitionally equivalent to \(T\) satisfies a weak form of quantifier elimination.
Here's why this is useful. Suppose that \(M\) is a model of a theory \(T^{\prime}\) which is definitionally equivalent to \(T\). It follows from the definition of "definitionally equivalent" that within \(M\), we can define a model of \(T\). If \(T^{\prime}\) had quantifier elimination then we could assume that this definition is quantifier free and thus \(M\) can compute a model of \(T\). Since \(T\) has no computable models, this would imply that \(M\) itself is not computable. Unfortunately, we can't quite follow this strategy: we don't know that \(T^{\prime}\) has full quantifier elimination, but only a weak version of it. However, using this weak form of quantifier elimination we can show that \(M\) can computably approximate a model of \(T\) and, by picking \(T\) so that its models cannot even be computably approximated, this is enough to show that \(M\) is not computable.
The specific form of model-theoretic tameness that we use in our proof is known as **mutual algebraicity**, first defined in [1] and subsequently developed by Laskowski and collaborators (e.g. [12, 13, 14]). The main result we need from the theory of mutual algebraicity is a quantifier elimination theorem proved by Laskowski in [12].
Our use of tame model theory in this paper is somewhat reminiscent of techniques used by Emil Jerabek in the paper [15]. In that paper, Jerabek separated two conditions which imply that a theory \(T\) is essentially undecidable: the condition that \(T\) can represent all partial recursive functions and the condition that \(T\) interprets Robinson's \(R\). To accomplish this, he used the fact that the model completion of the empty theory in an arbitrary language is model-theoretically tame--in particular, it eliminates \(\exists^{\infty}\) and is \(\mathsf{NSOP}\). He ended the paper by asking whether there are more connections between formal arithmetic and tame model theory. We believe our results constitute a partial answer to his question.
### Acknowledgements
We thank Peter Cholak, Nick Ramsey, Charlie McCoy, Andrew Marks, Forte Shinko, Mariana Vicaria and Kyle Gannon for helpful conversations, James Hanson for pointing us to the literature on mutual algebraicity and Chris Laskowski for help in understanding that literature.
## 2. Preliminaries on definitional equivalence and mutual algebraicity
In this section we will give the formal definition of definitional equivalence, fix some notation related to it and review the facts about mutual algebraicity that we need.
### Definitional equivalence
To define definitional equivalence, we first need the concept of a definitional extension of a theory.
**Definition 2.1**.: Given a theory \(T\) in language \(\mathcal{L}\), a **definitional extension** of \(T\) is a theory \(T^{\prime}\supseteq T\) in a language \(\mathcal{L}^{\prime}\supseteq\mathcal{L}\) such that
1. \(\boldsymbol{T^{\prime}}\) **is conservative over \(\boldsymbol{T}\):** for each sentence \(\varphi\in\mathcal{L}\), \(T^{\prime}\vdash\varphi\) if and only if \(T\vdash\varphi\).
2. **The symbols in \(\boldsymbol{\mathcal{L}^{\prime}}\) are definable in \(\boldsymbol{\mathcal{L}}\):** for each constant symbol \(c\), relation symbol \(R\) and function symbol \(f\) in \(\mathcal{L}^{\prime}\), there is a corresponding formula \(\varphi_{c}\), \(\varphi_{R}\), or \(\varphi_{f}\) in \(\mathcal{L}\) such that \[T^{\prime}\vdash\forall x\,(x=c\leftrightarrow\varphi_{c}(x))\] \[T^{\prime}\vdash\forall\overline{x}\,(R(\overline{x}) \leftrightarrow\varphi_{R}(\overline{x}))\] \[T^{\prime}\vdash\forall\overline{x},y\,(f(\overline{x})=y \leftrightarrow\varphi_{f}(\overline{x},y)).\]
**Definition 2.2**.: Theories \(T\) and \(T^{\prime}\) in disjoint signatures are **definitionally equivalent** if there is a single theory which is a definitional extension of both \(T\) and \(T^{\prime}\).
More generally, theories \(T\) and \(T^{\prime}\) are definitionally equivalent if they are definitionally equivalent after renaming their symbols to make their signatures disjoint. However, there is no loss of generality from ignoring theories with overlapping signatures, so we will do that for the rest of this paper.
**Example 2.3**.: The theories of the integers with plus and with minus--i.e. \(T=\operatorname{Th}(\mathbb{Z},+)\) and \(T^{\prime}=\operatorname{Th}(\mathbb{Z},-)\)--are definitionally equivalent because plus and minus can both be defined in terms of the other. More formally, the theory \(T^{\prime\prime}=\operatorname{Th}(\mathbb{Z},+,-)\) is a definitional extension of both \(T\) and \(T^{\prime}\). In contrast, it is well-known that the theories \(\operatorname{Th}(\mathbb{Z},+)\) and \(\operatorname{Th}(\mathbb{Z},\times)\) are _not_ definitionally equivalent, because neither plus nor times can be defined in terms of the other.
A key point about definitional equivalence is that if \(T\) and \(T^{\prime}\) are definitionally equivalent theories in languages \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\), respectively, then every model of \(T\) can be viewed as a model of \(T^{\prime}\) and vice-versa. Likewise, every \(\mathcal{L}\)-formula can be viewed as an \(\mathcal{L}^{\prime}\)-formula and vice-versa. It will be useful to us to make this idea precise and to fix some notation.
**Translating models.** Suppose that \(T\) and \(T^{\prime}\) are definitionally equivalent theories in languages \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\), respectively. Let \(T^{\prime\prime}\) be an \(\mathcal{L}^{\prime\prime}\)-theory witnessing the definitional equivalence of \(T\) and \(T^{\prime}\)--i.e. \(\mathcal{L}\cup\mathcal{L}^{\prime}\subseteq\mathcal{L}^{\prime\prime}\) and \(T^{\prime\prime}\) is a definitional extension of both \(T\) and \(T^{\prime}\).
Suppose that \(R\) is a relation symbol in \(\mathcal{L}\). Since \(T^{\prime\prime}\) is a definitional extension of \(T^{\prime}\), there is an \(\mathcal{L}^{\prime}\)-formula, \(\varphi_{R}\), which \(T^{\prime\prime}\) proves is equivalent \(R\). We will refer to this formula as the \(\boldsymbol{\mathcal{L}^{\prime}}\)**-definition of \(\boldsymbol{R}\)**. Similarly, every other constant, relation and function symbol of \(\mathcal{L}\) has an \(\mathcal{L}^{\prime}\)-definition and vice-versa.
Given a model \(M\) of \(T^{\prime}\), we can turn \(M\) into an \(\mathcal{L}\)-structure by interpreting each constant, relation and function symbol of \(\mathcal{L}\) according to its \(\mathcal{L}^{\prime}\)-definition.1 Furthermore, it is not hard to check that the resulting \(\mathcal{L}\)-structure is always a model of \(T\). We will denote the model produced in this way by \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\). Likewise, if \(M\) is a model of \(T\) then we can transform it into a model of \(T^{\prime}\), which we will denote \(M^{\mathcal{L}\to\mathcal{L}^{\prime}}\).
It is important to note that for any model \(M\) of \(T^{\prime}\), \(M\) and \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) have the same underlying set and \((M^{\mathcal{L}^{\prime}\to\mathcal{L}})^{\mathcal{L}\to\mathcal{L}^{\prime}}=M\). Thus we may think of \(M\) and \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) as two different ways of viewing the same structure.
**Translating formulas.** A similar transformation is possible for formulas. Suppose \(\varphi\) is an \(\mathcal{L}\)-formula. Then by replacing each constant, relation and function symbol in \(\varphi\) by the corresponding \(\mathcal{L}^{\prime}\)-definition, we obtain an \(\mathcal{L}^{\prime}\)-formula, which we will denote \(\varphi^{\mathcal{L}\to\mathcal{L}^{\prime}}\). Likewise we can transform any \(\mathcal{L}^{\prime}\)-formula \(\varphi\) into an \(\mathcal{L}\)-formula, which we will denote \(\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}}\).
**Example 2.4**.: Suppose \(f\) is a unary function symbol in \(\mathcal{L}\), \(\varphi_{f}(x,y)\) is its \(\mathcal{L}^{\prime}\)-definition and \(\psi\) is the \(\mathcal{L}\)-formula \(\forall x,y\left(f(f(x))=f(y)\right)\). Then \(\psi^{\mathcal{L}\to\mathcal{L}^{\prime}}\) is the formula \(\forall x,y\left(\exists z_{1},z_{2},z_{3}\left(\varphi_{f}(x,z_{1})\wedge\varphi_{f}(z_{1},z_{2})\wedge\varphi_{f}(y,z_{3})\wedge z_{2}=z_{3}\right)\right)\).
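The translation in Example 2.4 is purely mechanical: each application of \(f\) is replaced by a fresh existentially quantified variable constrained by \(\varphi_{f}\). The following sketch (ours) carries this out for equations between terms built from variables by a single unary function symbol, writing \(\varphi_{f}\) as `phi_f` and rendering the result as a plain string.

```python
from itertools import count

# A term is a variable name (str) or ('f', subterm).
def translate_equation(lhs, rhs):
    fresh, atoms, bound = count(1), [], []

    def unfold(term):
        """Return a variable standing for the value of `term`, recording
        phi_f-atoms that define each intermediate application of f."""
        if isinstance(term, str):
            return term
        inner = unfold(term[1])
        z = f"z{next(fresh)}"
        bound.append(z)
        atoms.append(f"phi_f({inner}, {z})")
        return z

    u, v = unfold(lhs), unfold(rhs)
    return f"exists {', '.join(bound)} ({' & '.join(atoms + [f'{u} = {v}'])})"

# f(f(x)) = f(y) becomes
# exists z1, z2, z3 (phi_f(x, z1) & phi_f(z1, z2) & phi_f(y, z3) & z2 = z3)
print(translate_equation(('f', ('f', 'x')), ('f', 'y')))
```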
It is not hard to check that our translations of models and of formulas are compatible with each other. In particular, if \(M\) is a model of \(T^{\prime}\), \(\varphi\) is an \(\mathcal{L}^{\prime}\)-formula and \(\overline{a}\) is a tuple in \(M\) then \(M\vDash\varphi(\overline{a})\) if and only if \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\vDash\varphi^{\mathcal{L}^{\prime}\to \mathcal{L}}(\overline{a})\). Note that this implies that \(M\) and \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) have the same algebra of definable sets.
### Mutual algebraicity
As mentioned in the introduction, we will use the model-theoretic notion of mutual algebraicity. The key definitions are of mutually algebraic formulas and mutually algebraic structures.
**Definition 2.5**.: Given a structure \(M\), a formula \(\varphi(\overline{x})\) with parameters from \(M\) is **mutually algebraic over \(M\)** if there is some number \(k\in\mathbb{N}\) such that for every nontrivial partition \(\overline{x}=\overline{x}_{0}\cup\overline{x}_{1}\) and every tuple \(\overline{a}_{0}\) in \(M\), there are at most \(k\) tuples \(\overline{a}_{1}\) such that \(M\vDash\varphi(\overline{a}_{0},\overline{a}_{1})\).
Note that the mutual algebraicity depends on what the free variables of the formula are. In particular, it is not preserved by adding dummy variables. Also note that any formula with at most one free variable is mutually algebraic.
**Example 2.6**.: If \(M\) is the structure \((\mathbb{N},+)\) then the formula \(x=y+5\) is mutually algebraic over \(M\) because if we fix \(x\) there is at most one \(y\) satisfying the formula, and vice-versa. On the other hand, the formula \(x=y+z+5\) is not mutually algebraic over \(M\) because when we fix \(z\) there are infinitely many pairs \(x,y\) which satisfy the formula.
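Mutual algebraicity of a quantifier-free formula can be probed by brute force on an initial segment of \((\mathbb{N},+)\); the following sketch (ours) is only a finite check, not a proof, but it exhibits the bound \(k=1\) for \(x=y+5\) and the absence of any uniform bound for \(x=y+z+5\) once \(z\) is fixed.

```python
from itertools import product

N = 60  # sample an initial segment of (N, +)

def max_solutions(formula, arity, fixed_var):
    """Largest number of satisfying tuples sharing a single value of the
    fixed variable (all coordinates drawn from {0, ..., N-1})."""
    counts = {}
    for tup in product(range(N), repeat=arity):
        if formula(*tup):
            counts[tup[fixed_var]] = counts.get(tup[fixed_var], 0) + 1
    return max(counts.values(), default=0)

phi = lambda x, y: x == y + 5          # mutually algebraic: bound k = 1
psi = lambda x, y, z: x == y + z + 5   # not mutually algebraic

print(max_solutions(phi, 2, 0), max_solutions(phi, 2, 1))  # 1 1
print(max_solutions(psi, 3, 2))  # grows with N, so no uniform bound exists
```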
**Definition 2.7**.: A structure \(M\) is **mutually algebraic** if every formula is equivalent to a Boolean combination of formulas which are mutually algebraic over \(M\) (and which are allowed to have parameters from \(M\)).
**Example 2.8**.: The structure \((\mathbb{N},\operatorname{Succ})\) of natural numbers with the successor function has quantifier elimination and thus every formula is equivalent to a Boolean combination of atomic formulas. It is easy to check that the atomic formulas are all mutually algebraic and thus that the structure itself is. In contrast, it is possible to show that the structure \((\mathbb{Q},\leq)\), despite having quantifier elimination, is not mutually algebraic (for example, one can show that the formula \(x\leq y\) is not equivalent to a Boolean combination of mutually algebraic formulas).
### Quantifier elimination for mutually algebraic structures
We will make use of two quantifier elimination theorems for mutually algebraic structures. The first is due to Laskowski.
**Theorem 2.9** ([10], Theorem 4.2).: _If \(M\) is mutually algebraic then every formula \(\varphi(\overline{x})\) is equivalent over \(M\) to a Boolean combination of formulas of the form \(\exists\overline{z}\,\theta(\overline{y},\overline{z})\) (which may have parameters from \(M\)) where \(\theta\) is quantifier free and mutually algebraic over \(M\) and \(\overline{y}\) is a subset of \(\overline{x}\)._
**Theorem 2.10**.: _If \(M\) is a mutually algebraic structure and \(\varphi(\overline{x})\) is mutually algebraic over \(M\), then there is a quantifier free formula \(\theta(\overline{x},\overline{y})\) (which may have parameters from \(M\)) such that \(\exists\overline{y}\,\theta(\overline{x},\overline{y})\) is mutually algebraic over \(M\) and \(M\vDash\varphi(\overline{x})\to\exists\overline{y}\,\theta(\overline{x}, \overline{y})\)._
The second theorem is a relatively straightforward consequence of the first one, together with some facts from the theory of mutual algebraicity. Our goal for the rest of this section is to give the proof. To do so, we will need a lemma about mutually algebraic formulas, due to Laskowski and Terry.
**Lemma 2.11** ([10], Lemma A.1).: _Suppose \(M\) is a structure and_
\[\varphi(\overline{x}):=\bigwedge_{i}\alpha_{i}(\overline{x}_{i})\wedge \bigwedge_{j}\neg\beta_{j}(\overline{x}_{j})\]
_is a formula such that_
1. \(\varphi(\overline{x})\) _is mutually algebraic over_ \(M\)_._
2. \(\{\overline{a}\mid M\vDash\varphi(\overline{a})\}\) _contains an infinite set of pairwise disjoint tuples._
3. _Each_ \(\alpha_{i}(\overline{x}_{i})\) _and_ \(\beta_{j}(\overline{x}_{j})\) _is mutually algebraic over_ \(M\)_._
_Then \(\alpha(\overline{x})=\bigwedge_{i}\alpha_{i}(\overline{x}_{i})\) is mutually algebraic over \(M\)._
Actually we need a slightly stronger version of this lemma. In particular, we need to replace the second condition on \(\varphi\) with the apparently weaker assumption that \(\{\overline{a}\mid M\vDash\varphi(\overline{a})\}\) is infinite. The next lemma, also due to Laskowski, tells us that since \(\varphi\) is mutually algebraic, the two conditions are actually equivalent.
**Lemma 2.12** ([11], Lemma 3.1).: _Suppose \(M\) is a structure and \(\varphi(\overline{x})\) is a formula which is mutually algebraic over \(M\). If \(\{\overline{a}\mid M\vDash\varphi(\overline{a})\}\) is infinite then it contains an infinite set of pairwise disjoint tuples._
We can now prove Theorem 2.10.
Proof of Theorem 2.10.: By applying Laskowski's theorem and writing the resulting formula in disjunctive normal form, we get
\[M\vDash\varphi(\overline{x})\leftrightarrow\bigvee_{i}\left(\bigwedge_{j} \alpha_{i,j}(\overline{x}_{i,j})\wedge\bigwedge_{k}\neg\beta_{i,k}(\overline{ x}_{i,k})\right)\]
where each \(\alpha_{i,j}(\overline{x}_{i,j})\) and each \(\beta_{i,k}(\overline{x}_{i,k})\) is existential and mutually algebraic over \(M\).
For each \(i\), define
\[\varphi_{i}(\overline{x}) :=\bigwedge_{j}\alpha_{i,j}(\overline{x}_{i,j})\wedge\bigwedge_{k }\neg\beta_{i,k}(\overline{x}_{i,k})\] \[\alpha_{i}(\overline{x}) :=\bigwedge_{j}\alpha_{i,j}(\overline{x}_{i,j})\] \[A_{i} =\{\overline{a}\mid M\vDash\varphi_{i}(\overline{a})\}\]
Note that since \(\varphi(\overline{x})\) is mutually algebraic and \(M\vDash\varphi_{i}(\overline{x})\to\varphi(\overline{x})\), \(\varphi_{i}(\overline{x})\) is also mutually algebraic. Thus by Lemma 2.11 above (or rather, its slightly strengthened version), we have that either \(A_{i}\) is finite or \(\alpha_{i}(\overline{x})\) is mutually algebraic.
In the former case, define \(\gamma_{i}(\overline{x}):=\bigvee_{\overline{a}\in A_{i}}\overline{x}= \overline{a}\) and in the latter case, define \(\gamma_{i}(\overline{x}):=\alpha_{i}(\overline{x})\). In either case, note that \(\gamma_{i}\) is existential and mutually algebraic over \(M\) and that
\(M\vDash\varphi_{i}(\overline{x})\to\gamma_{i}(\overline{x})\). Since \(\varphi(\overline{x})\) and \(\bigvee_{i}\varphi_{i}(\overline{x})\) are equivalent in \(M\), this gives us
\[M\vDash\varphi(\overline{x})\to\bigvee_{i}\gamma_{i}(\overline{x}).\]
Since each \(\gamma_{i}(\overline{x})\) is mutually algebraic, so is their disjunction. Pulling the existential quantifiers to the front, we have the desired formula.
## 3. The counterexample
In this section we will describe the theory we use to answer Pakhomov's question. In order to do so, we need to fix a computable infinite binary tree \(R\) with the property that none of its paths can be computably approximated. More precisely, say that a sequence \(x\in 2^{\omega}\) is **guessable** if there is an algorithm which, for each number \(n\), enumerates a list of at most \(O(n^{2})\) strings of length \(n\), one of which is \(x\!\upharpoonright\!n\). We need a computable infinite binary tree \(R\), none of whose paths are guessable.
It is not hard to directly construct such a tree \(R\) but we can also simply pick a computable infinite binary tree whose paths are all Martin-Löf random. Such a tree is known to exist2 and it is also easy to check that Martin-Löf random sequences are not guessable. See the book _Algorithmic Randomness and Complexity_ by Downey and Hirschfeldt for more details about Martin-Löf randomness [1].
Footnote 2: For example we can simply take the complement of any of the levels of the universal Martin-Löf test.
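Guessability is a purely combinatorial condition, and it may help to see it phrased operationally. The following minimal sketch (ours, not part of the formal development) treats a guesser as a total function that returns its finite list of guesses outright, whereas the definition above only requires the list to be enumerated.

```python
# Illustrative sketch of the guessability definition. A "guesser" is a
# function which, on input n, returns at most C * n**2 binary strings of
# length n; a sequence is guessable if some guesser always includes the
# sequence's length-n prefix in its list.

from typing import Callable, List

def is_consistent_guesser(guesser: Callable[[int], List[str]],
                          x_prefix: Callable[[int], str],
                          C: int,
                          max_n: int) -> bool:
    """Check, up to length max_n, that `guesser` witnesses guessability of
    the sequence whose length-n prefix is x_prefix(n)."""
    for n in range(1, max_n + 1):
        guesses = guesser(n)
        if len(guesses) > C * n * n:      # too many guesses for this n
            return False
        if x_prefix(n) not in guesses:    # the correct prefix is missing
            return False
    return True

# Example: the all-zeros sequence is trivially guessable with one guess per n.
if __name__ == "__main__":
    zeros = lambda n: "0" * n
    guesser = lambda n: ["0" * n]
    print(is_consistent_guesser(guesser, zeros, C=1, max_n=20))  # True
```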
Essentially, our theory is the simplest theory all of whose models code an infinite path through \(R\). We now give a more precise description.
**The language.** Let \(\mathcal{L}\) be the language whose signature consists of:
1. A constant symbol, \(0\).
2. Two unary function symbols, \(S\) and \(P\).
3. A unary relation symbol, \(A\).
Also, although it is not officially part of the language \(\mathcal{L}\), we will often use the following notation. Given any \(n\in\mathbb{N}\),
* \(\underline{n}\) denotes the \(\mathcal{L}\)-term \(S^{n}(0)\), e.g. \(\underline{3}\) denotes \(S(S(S(0)))\).
* \(\underline{-n}\) denotes the \(\mathcal{L}\)-term \(P^{n}(0)\), e.g. \(\underline{-3}\) denotes \(P(P(P(0)))\).
* \(x+\underline{n}\) denotes the \(\mathcal{L}\)-term \(S^{n}(x)\) and \(x+\underline{-n}\) denotes the \(\mathcal{L}\)-term \(P^{n}(x)\). We will also sometimes use \(x-\underline{n}\) to denote \(x+\underline{-n}\).
* We will often refer to \(S\) as "successor" and \(P\) as "predecessor."
**The theory.** Fix a computable infinite binary tree \(R\), none of whose infinite paths are guessable, and let \(T\) be the \(\mathcal{L}\)-theory consisting of:
1. The theory of the integers with \(0\), successor and predecessor, i.e. \(\operatorname{Th}(\mathbb{Z},0,\operatorname{Succ},\operatorname{Pred})\).
2. Axioms stating that \(A\) (restricted to the elements \(\underline{0},\underline{1},\underline{2},\ldots\)) describes a path through \(R\). More precisely, for each \(n\in\mathbb{N}\), \(T\) contains the sentence \[\bigvee_{\sigma\in R_{n}}\left[\bigg{(}\bigwedge_{\sigma(i)=0}\neg A(\underline {i})\bigg{)}\wedge\bigg{(}\bigwedge_{\sigma(i)=1}A(\underline{i})\bigg{)}\right]\] where \(R_{n}\) denotes the set of strings in \(R\) of length \(n\).
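The axioms in the second group can be produced uniformly from \(R\); this is one reason \(T\) is a computably enumerable theory, as used in Section 4. The following rough sketch (ours; the string rendering of formulas is an arbitrary choice made purely for illustration) produces the level-\(n\) axiom from the finite set \(R_{n}\).

```python
# Illustrative sketch only: building (a string rendering of) the level-n
# axiom from R_n, the set of strings of length n in the tree R. This is meant
# to show that the second group of axioms is computable from R; the rendering
# of formulas as strings is an ad hoc choice made here.

from typing import List

def level_n_axiom(R_n: List[str]) -> str:
    disjuncts = []
    for sigma in R_n:
        literals = []
        for i, bit in enumerate(sigma):
            term = f"A(S^{i}(0))"
            literals.append(term if bit == "1" else f"~{term}")
        disjuncts.append("(" + " & ".join(literals) + ")")
    return " | ".join(disjuncts)

# Example with a toy level: R_2 = {00, 01, 11}.
if __name__ == "__main__":
    print(level_n_axiom(["00", "01", "11"]))
```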
The second set of axioms ensures that from any model of \(T\), we can computably recover a path through the tree \(R\). We will now explain how this works.
Given a sentence \(\varphi\) and a model \(M\), let's use the notation \(\llbracket\varphi\rrbracket^{M}\) to denote **the truth-value of \(\varphi\) in \(M\)**. We will often identify sequences of truth values with binary sequences by
thinking of "true" as \(1\) and "false" as \(0\). Now suppose that \(M\) is a model of \(T\). We claim that the sequence \(\llbracket A(\underline{0})\rrbracket^{M},\llbracket A(\underline{1})\rrbracket^{M},\llbracket A(\underline{2})\rrbracket^{M},\ldots\) is an infinite path through \(R\). The point is that the axioms above guarantee that, for each \(n\in\mathbb{N}\), the length \(n\) initial segment of this sequence agrees with some _specific_ length \(n\) string in \(R\). Since all of its proper initial segments are in \(R\), the sequence \(\llbracket A(\underline{0})\rrbracket^{M},\llbracket A(\underline{1})\rrbracket^{M},\llbracket A(\underline{2})\rrbracket^{M},\ldots\) is indeed a path through \(R\).
Note that this immediately implies that no model of \(T\) is computable: any such model computes an infinite path through \(R\), but no such path is computable. In spite of this, we will see later that models of \(T\) have quantifier elimination and so are very well-behaved in model-theoretic terms.
### Models of \(T\)
It will help to have a clear picture of the structure of models of \(T\) and to fix some terminology for later. Since \(T\) includes the theory of the integers with successor and predecessor, \(T\) proves that \(S\) and \(P\) are injective functions with no cycles and that they are inverses. Thus any model of \(T\) consists of a disjoint union of one or more \(\mathbb{Z}\)-chains, with \(S\) moving forward along each chain, \(P\) moving backward and the constant \(0\) sitting in the middle of one of the chains. There is also a well-defined notion of distance: the distance between two elements of the same chain is simply the number of steps apart they are on the chain (and the distance between elements of two different chains is \(\infty\)).
Furthermore, each element of each chain is labelled with a truth value (corresponding to whether the predicate \(A\) holds of that element or not) and thus each chain gives rise to a bi-infinite binary sequence. If we start at the element \(0\) and move forward along its chain, then, as we saw above, the binary sequence we get is guaranteed to be a path through the tree \(R\).
Given a model \(M\) of \(T\) and elements \(a,b\in M\), we will use the following terminology.
* The **signed distance** from \(a\) to \(b\) is the unique integer \(k\) (if it exists) such that \(b=a+\underline{k}\). If no such \(k\) exists then the signed distance is \(\infty\).
* The **distance between**\(a\) and \(b\) is the absolute value of the signed distance (where the absolute value of \(\infty\) is \(\infty\)).
* For \(k\in\mathbb{N}\), the \(\boldsymbol{k}\)**-neighborhood** of \(a\) is the set \(\{a-\underline{k},a-\underline{k-1},\ldots,a+\underline{k}\}\).
Note that if the signed distance from \(a\) to \(b\) is \(k<\infty\), the signed distance from \(b\) to \(a\) is \(-k\).
**Remark 3.1**.: By choosing a somewhat more complicated theory, it is possible to simplify some of the proofs later in this paper. In particular, we can add axioms to \(T\) which state that \(A\) behaves _generically_, in the sense that every finite pattern of values of \(A\) occurs somewhere. More precisely, for every finite binary string \(\sigma\in 2^{<\omega}\) we add the axiom
\[\exists x\bigg{[}\bigg{(}\bigwedge_{\sigma(i)=0}\neg A(x+\underline{i}) \bigg{)}\wedge\bigg{(}\bigwedge_{\sigma(i)=1}A(x+\underline{i})\bigg{)}\bigg{]}.\]
Equivalently, we can replace \(T\) with its model completion. Making this change would allow us to simplify the proofs of Propositions 4.1 and 4.4 and Lemma 4.7.
## 4. Proof of the main theorem
Let \(\mathcal{L}\) and \(T\) be the language and theory described in the previous section. In this section, we will prove that no theory which is definitionally equivalent to \(T\) has a computable model. Since \(T\) is a consistent, c.e. theory, this is enough to prove Theorem 1.2.
In order to prove this, let's fix a language \(\mathcal{L}^{\prime}\) and an \(\mathcal{L}^{\prime}\)-theory \(T^{\prime}\) which is definitionally equivalent to \(T\). Note that since the language \(\mathcal{L}\) has finite signature, we may assume that \(\mathcal{L}^{\prime}\) does as well.3 Now fix a model \(M\) of \(T^{\prime}\). Our goal is to prove that \(M\) is not computable.4
Footnote 3: The point is that if a theory \(T\) is in a language with finite signature and \(T^{\prime}\) is any theory definitionally equivalent to \(T\) then \(T^{\prime}\) has a subtheory in a language with finite signature which is also definitionally equivalent to \(T\).
Footnote 4: Recall that a model is computable if its underlying set is \(\mathbb{N}\) and all of its functions and relations are computable as functions or relations on \(\mathbb{N}\). Note that since we are assuming \(\mathcal{L}^{\prime}\) has finite signature, we don’t need to worry about whether these functions and relations are uniformly computable.
Before beginning, it will be useful to fix a few conventions. First, recall from section 2.1 that \(M\) gives rise to a model \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) of \(T\) which has the same underlying set and the same algebra of definable sets as \(M\). We will often abuse notation slightly and use \(M\) to refer to both \(M\) itself and \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\). For example, if \(\varphi\) is an \(\mathcal{L}\)-formula, we will use \(M\vDash\varphi(\overline{a})\) to mean \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\vDash\varphi(\overline{a})\). Also, we will say things like "\(b\) is the successor of \(a\)" to mean \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\vDash b=S(a)\). Second, unless explicitly stated otherwise, we assume that formulas do not contain parameters.
### Proof strategy
To prove that \(M\) is not computable, we will show that the sequence \(\llbracket A(\underline{0})\rrbracket^{M},\llbracket A(\underline{1}) \rrbracket^{M},\ldots\) is guessable (in the sense of section 3) relative to an oracle for \(M\). Since the axioms of \(T\) ensure that this sequence is a path through the tree \(R\), and hence not guessable, this is enough to show that \(M\) is not computable.
To show that the sequence \(\llbracket A(\underline{0})\rrbracket^{M},\llbracket A(\underline{1}) \rrbracket^{M},\ldots\) is guessable from an oracle for \(M\), we will first prove that \(M\) is mutually algebraic. To do so, we will essentially show that models of \(T\) have quantifier elimination and use this to prove that \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) is mutually algebraic. The mutual algebraicity of \(M\) itself follows because mutual algebraicity is preserved under definitional equivalence (because mutual algebraicity depends only on the algebra of definable sets, which is itself preserved under definitional equivalence).
Once we know that \(M\) is mutually algebraic, we can apply the quantifier elimination results of section 2.3 to infer that \(S\) and \(A\) are close to being quantifier-free definable in \(M\). In particular, the formula \(S(x)=y\) is mutually algebraic and so, by Theorem 2.10, there is an existential \(\mathcal{L}^{\prime}\)-formula \(\psi_{S}(x,y)\) such that \(\psi_{S}\) is mutually algebraic and \(M\vDash S(x)=y\to\psi_{S}(x,y)\).
We can think of \(\psi_{S}\) as a multi-valued function which takes each element \(a\in M\) to the set of elements \(b\in M\) such that \(M\vDash\psi_{S}(a,b)\). Since \(\psi_{S}\) is an existential formula, the graph of this multi-valued function is computably enumerable from an oracle for \(M\). Since \(\psi_{S}\) is mutually algebraic, there are only finitely many elements in the image of each \(a\). And since \(M\vDash S(x)=y\to\psi_{S}(x,y)\), the successor of \(a\) is always in the image of \(a\). Putting this all together, we can think of this multi-valued function as giving us, for each \(a\in M\), a finite list of guesses for \(S(a)\) which is computably enumerable relative to an oracle for \(M\).
To finish the proof, we can leverage our ability to enumerate a finite list of guesses for the successor of each element to enumerate a short list of guesses for each initial segment of the sequence \(\llbracket A(\underline{0})\rrbracket^{M},\llbracket A(\underline{1}) \rrbracket^{M},\ldots\). To accomplish this, we will have to make use of our understanding of the structure of definable subsets of \(M\), which we first develop in order to prove mutual algebraicity.
### Model-theoretic tameness of \(M\)
Our first goal is to prove that \(M\) is mutually algebraic. One way to do this is to show that models of \(T\) satisfy quantifier elimination and then note that all atomic \(\mathcal{L}\)-formulas are mutually algebraic over \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\)--this implies that \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) is mutually algebraic and hence that \(M\) is as well. However, it will be helpful for
us later to have a more detailed understanding of the structure of definable subsets of \(M\). Thus, instead of just proving quantifier elimination for models of \(T\), we will prove a stronger statement, which is essentially a quantitative version of quantifier elimination.
To explain this stronger statement, let's first consider the meaning of quantifier elimination in models of \(T\). By examining the atomic formulas of \(\mathcal{L}\), we can see that it means that for every \(\mathcal{L}\)-formula \(\varphi(\overline{x})\) and tuple \(\overline{a}\), the truth of \(\varphi(\overline{a})\) depends only on which elements of \(\overline{a}\) are close to each other (and to \(0\)), how close they are, and the values of the predicate \(A\) in a small neighborhood of each element. In our stronger statement, we will quantify exactly what "close" and "small" mean in this description. We will also extend this to \(\mathcal{L}^{\prime}\)-formulas. We will refer to the resulting statement as the **indiscernability principle** for \(M\). In order to make all of this precise, we first need to introduce some terminology.
**The radius of a formula.** For any \(\mathcal{L}\)-formula \(\varphi\) written in prenex normal form, inductively define the **radius** of \(\varphi\), written \(\operatorname{rad}(\varphi)\), as follows.
1. If \(\varphi\) is quantifier free then \(\operatorname{rad}(\varphi)\) is the total number of occurrences of \(S\) and \(P\) in \(\varphi\).
2. If \(\varphi\) has the form \(\exists x\,\psi\) or \(\forall x\,\psi\) then \(\operatorname{rad}(\varphi)=2\cdot\operatorname{rad}(\psi)\).
If \(\varphi\) is an \(\mathcal{L}^{\prime}\)-formula in prenex normal form then we define \(\operatorname{rad}(\varphi)\) in a similar way except that we change the case where \(\varphi\) is quantifier free to define \(\operatorname{rad}(\varphi)\) to be \(\operatorname{rad}(\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}})\) (after first putting \(\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}}\) in prenex normal form). The idea of the radius of a formula is that in the description of quantifier elimination for \(M\) above, we should interpret "close" to mean "within distance \(\operatorname{rad}(\varphi)\)."
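For example, the \(\mathcal{L}\)-formula \(\exists x\,\big(A(S(S(x)))\wedge y=S(x)\big)\) has a quantifier free matrix containing three occurrences of \(S\), so its radius is \(2\cdot 3=6\).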
**The \(r\)-type of a tuple.** Given a tuple \(\overline{a}=(a_{1},\dots,a_{n})\) in \(M\) and a number \(r\in\mathbb{N}\), define:
* The \(r\)**-distance table** of \(\overline{a}\) records the signed distances between the coordinates of \(\overline{a}\) and between each coordinate of \(\overline{a}\) and \(0\), treating any distance greater than \(r\) as \(\infty\). More precisely, it is the function \(f\colon\{0,1,\dots,n\}^{2}\to\{-r,-(r-1),\dots,r,\infty\}\) such that if the distance between \(a_{i}\) and \(a_{j}\) is at most \(r\) then \(f(i,j)\) is the signed distance from \(a_{i}\) to \(a_{j}\) and otherwise \(f(i,j)=\infty\) (and where we interpret \(a_{0}\) as \(0\)).
* The \(r\)**-neighborhood type** of any element \(a\in M\) is the sequence of truth values \([\![A(a-\underline{r})]\!]^{M},[\![A(a-\underline{r-1})]\!]^{M},\dots,[\![A(a+ \underline{r})]\!]^{M}\).
* The \(r\)**-type** of \(\overline{a}\) is the \(r\)-distance table of \(\overline{a}\) together with the sequence recording the \(r\)-neighborhood type of each coordinate of \(\overline{a}\).
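To make this bookkeeping concrete, the following small sketch (ours, purely illustrative and not part of the proof) computes these data in the simplest possible setting: a single \(\mathbb{Z}\)-chain in which an element is just an integer, \(0\) is the constant, \(S(x)=x+1\), \(P(x)=x-1\) and \(A\) is an arbitrary predicate on the integers (ignoring, for the purposes of illustration, the requirement that \(A\) follow a path through \(R\) on the non-negative part).

```python
# Illustrative sketch: the r-type of a tuple in a toy one-chain structure,
# where elements are integers, 0 is the constant, S(x) = x + 1, P(x) = x - 1
# and A is an arbitrary predicate on the integers.

from typing import Callable, Dict, List, Tuple

INF = None  # stands for "distance greater than r"

def r_distance_table(tup: Tuple[int, ...], r: int) -> Dict[Tuple[int, int], object]:
    pts = (0,) + tup                       # index 0 plays the role of the constant 0
    table = {}
    for i, a in enumerate(pts):
        for j, b in enumerate(pts):
            d = b - a                      # signed distance from a to b
            table[(i, j)] = d if abs(d) <= r else INF
    return table

def r_neighborhood_type(a: int, r: int, A: Callable[[int], bool]) -> List[bool]:
    return [A(a + k) for k in range(-r, r + 1)]

def r_type(tup: Tuple[int, ...], r: int, A: Callable[[int], bool]):
    return (r_distance_table(tup, r),
            [r_neighborhood_type(a, r, A) for a in tup])

# Example: A holds exactly at the even integers; the 2-type of the pair (3, 5).
if __name__ == "__main__":
    A = lambda n: n % 2 == 0
    table, nbhd_types = r_type((3, 5), 2, A)
    print(table[(1, 2)])   # signed distance from 3 to 5, i.e. 2
    print(nbhd_types[0])   # [False, True, False, True, False]
```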
**The indiscernability principle.** We can now state a formal version of the indiscernability principle described above.
**Proposition 4.1**.: _If \(\varphi\) is an \(\mathcal{L}\)-formula in prenex normal form and of radius \(r\) and \(\overline{a},\overline{b}\) are tuples in \(M\) with the same \(r\)-type then \(M\vDash\varphi(\overline{a})\) if and only if \(M\vDash\varphi(\overline{b})\)._
Proof.: By induction on the number of quantifiers in \(\varphi\). For quantifier free formulas, this is easy to verify. If \(\varphi\) has quantifiers then it suffices to assume \(\varphi=\exists x\,\psi\) since the case of a universal quantifier is symmetric (i.e. by considering \(\neg\varphi\) instead of \(\varphi\) and pushing the negation past the quantifiers to get it into prenex normal form). Also, it's enough to assume \(M\vDash\varphi(\overline{a})\) and prove \(M\vDash\varphi(\overline{b})\)--the other direction also follows by symmetry.
So let's assume that \(M\vDash\exists x\,\psi(\overline{a},x)\). Thus there is some \(c\) such that \(M\vDash\psi(\overline{a},c)\). We need to find some \(d\) such that \(M\vDash\psi(\overline{b},d)\). Note that it is enough to find \(d\) such that \(\overline{a}c\) and \(\overline{b}d\) have the same \(r/2\)-type, because if this holds then we can apply the induction hypothesis to \(\psi\) to get that \(M\vDash\psi(\overline{b},d)\).
There are two cases depending on whether \(c\) is close to any element of \(\overline{a}\) or not. Also to reduce casework, we adopt the convention that \(a_{0}=b_{0}=0\) (note that this does not change the fact that \(\overline{a}\) and \(\overline{b}\) have the same \(r\)-type).
**Case 1.** First suppose that \(c\) is distance at most \(r/2\) from some coordinate of \(\overline{a}\). In particular, there is some \(i\leq n\) and \(-r/2\leq k\leq r/2\) such that \(c=a_{i}+\underline{k}\). In this case, we can pick \(d\) to be close to the corresponding element of \(\overline{b}\), i.e. \(d=b_{i}+\underline{k}\). We claim that \(\overline{a}c\) and \(\overline{b}d\) have the same \(r/2\)-type.
First, we need to check that the \(r/2\)-distance tables are the same. It suffices to check that for each \(j\), either \(a_{j},c\) and \(b_{j},d\) have the same signed distance or both have distance greater than \(r/2\). Suppose that \(a_{j}=c+\underline{k}^{\prime}\) for some integer \(-r/2\leq k^{\prime}\leq r/2\). By substitution, \(a_{j}=(a_{i}+\underline{k})+\underline{k}^{\prime}=a_{i}+\underline{k}+k^{ \prime}\). Since \(|k+k^{\prime}|\leq r\) and since \(\overline{a},\overline{b}\) have the same \(r\)-distance table, this implies that \(b_{j}=b_{i}+\underline{k}+k^{\prime}\) and hence that \(b_{j}=d+\underline{k}^{\prime}\). The other cases can be handled similarly.
Second, we need to check that the \(r/2\)-neighborhood type of \(c\) is the same as that of \(d\). This follows from the fact that the \(r/2\)-neighborhood of \(c\) is contained in the \(r\)-neighborhood of \(a_{i}\), the \(r/2\)-neighborhood of \(d\) is contained in the \(r\)-neighborhood of \(b_{i}\) and the \(r\)-neighborhood types of \(a_{i}\) and \(b_{i}\) are the same.
**Case 2.** Now suppose that \(c\) is distance more than \(r/2\) from every coordinate of \(\overline{a}\). It is enough to find some \(d\) which has the same \(r/2\)-neighborhood type as \(c\) and which is distance more than \(r/2\) from every coordinate of \(\overline{b}\). The point is that for such a \(d\), it is easy to see that \(\overline{a}c\) and \(\overline{b}d\) have the same \(r/2\)-type.
We now claim that some such \(d\) must exist.5 Suppose for contradiction that this is false. Then every element of \(M\) with the same \(r/2\)-neighborhood type as \(c\) must be contained in the \(r/2\) neighborhood of some element of \(\overline{b}\). In particular, this implies that there are a finite number of such elements and they all have the form \(b_{i}+\underline{k}\) for some \(i\leq n\) and \(-r/2\leq k\leq r/2\).
Footnote 5: This case becomes more or less trivial if \(T\) is modified in the way described in Remark 3.1. This is because the existence of such an element \(d\) is guaranteed by the extra axioms described in that remark.
Suppose there are exactly \(m\) such elements and they are equal to \(b_{i_{1}}+\underline{k_{1}},\ldots,b_{i_{m}}+\underline{k_{m}}\) (where for each \(j\), \(-r/2\leq k_{j}\leq r/2\)). It follows from the fact that \(\overline{a}\) and \(\overline{b}\) have the same \(r\)-type that the corresponding elements \(a_{i_{1}}+\underline{k_{1}},\ldots,a_{i_{m}}+\underline{k_{m}}\) are also all distinct and have the same \(r/2\)-neighborhood type as \(c\). However, since only \(m\) elements of \(M\) have this \(r/2\)-neighborhood type, \(c\) must be among this list of elements, which contradicts the assumption that \(c\) is not within distance \(r/2\) of any coordinate of \(\overline{a}\).
**Corollary 4.2**.: _Proposition 4.1 also holds for all \(\mathcal{L}^{\prime}\)-formulas in prenex normal form._
Proof.: Suppose \(\varphi\) is an \(\mathcal{L}^{\prime}\)-formula of radius \(r\) and that \(\overline{a},\overline{b}\) are tuples in \(M\) with the same \(r\)-type. In the case where \(\varphi\) is quantifier free, the radius of \(\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}}\) is also \(r\), for the trivial reason that the radius of a quantifier-free \(\mathcal{L}^{\prime}\)-formula is defined to be the radius of its \(\mathcal{L}\)-translation. Hence, we can apply the indiscernability principle to \(\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}}\) to get
\[M\vDash\varphi(\overline{a}) \iff M\vDash\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}}(\overline{a})\] \[\iff M\vDash\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}}(\overline{b}) \iff M\vDash\varphi(\overline{b}).\]
When \(\varphi\) has quantifiers, the inductive argument that we gave in the proof of Proposition 4.1 still works.
### \(M\) is mutually algebraic
For a fixed \(r\)-type, the assertion that a tuple \(\overline{x}=(x_{1},\ldots,x_{n})\) has that \(r\)-type is expressible as a Boolean combination of \(\mathcal{L}\)-formulas of the following forms.
1. \(x_{i}=x_{j}+\underline{k}\) for some indices \(i,j\leq n\) and some \(-r\leq k\leq r\).
2. \(x_{i}=\underline{k}\) for some index \(i\leq n\) and some \(-r\leq k\leq r\).
3. \(A(x_{i}+\underline{k})\) for some index \(i\leq n\) and some \(-r\leq k\leq r\).
It is easy to check that each type of formula listed above is mutually algebraic over \(M\) (for the second and third there is actually nothing to check because they both involve only one free variable). Furthermore, for any fixed \(r\), there are a finite number of possible \(r\)-types. Thus the indiscernability principle implies that every \(\mathcal{L}\)-formula \(\varphi\) is equivalent to a finite disjunction of Boolean combinations of mutually algebraic \(\mathcal{L}\)-formulas (namely a disjunction over all \(r\)-types whose realizing tuples satisfy \(\varphi\)).
This shows that \(M\) is mutually algebraic when considered as an \(\mathcal{L}\)-structure (i.e. that \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) is mutually algebraic). However, it is easy to conclude that \(M\) is also mutually algebraic when considered as an \(\mathcal{L}^{\prime}\)-structure. For a given formula \(\varphi\), we know from our reasoning above that \(\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}}\) is equivalent to a Boolean combination of mutually algebraic \(\mathcal{L}\)-formulas. Next, we can replace each formula in this Boolean combination by its corresponding \(\mathcal{L}^{\prime}\)-formula. Since the mutual algebraicity of a formula only depends on the set that it defines, and since this is invariant under translating between \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\), we conclude that \(\varphi\) is equivalent to a Boolean combination of mutually algebraic \(\mathcal{L}^{\prime}\)-formulas.
**Remark 4.3**.: The reasoning above also shows that \(M\) has quantifier elimination when considered as an \(\mathcal{L}\)-structure (i.e. \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) has quantifier elimination). The point is just that a tuple having a certain \(r\)-type is expressible as a quantifier free \(\mathcal{L}\)-formula.
### The satisfaction algorithm
We will now explain how the indiscernability principle implies that the satisfaction relation for \(\mathcal{L}^{\prime}\)-formulas over \(M\) is very nearly computable relative to an oracle for \(M\). At the end of this subsection, we will explain why this is useful.
The main idea (of computing the satisfaction relation) is that to check whether \(M\vDash\exists x\,\varphi(\overline{a},x)\), we don't need to try plugging in every element of \(M\) for \(x\), just those elements which are close to some coordinate of \(\overline{a}\) (or to \(0\)), plus one element of each possible \(\operatorname{rad}(\varphi)\)-neighborhood type which is far from all the coordinates of \(\overline{a}\). In other words, checking the truth of an existential formula can be reduced to checking the truth of a finite number of atomic formulas. This intuition is formalized by the next proposition, whose proof essentially just consists of this idea, but with a number of messy details in order to make precise the idea of trying all the different \(\operatorname{rad}(\varphi)\)-neighborhood types which are far from elements of \(\overline{a}\).
**Proposition 4.4** (Satisfaction algorithm for existential formulas).: _Suppose \(\varphi(\overline{x})\) is an existential \(\mathcal{L}^{\prime}\)-formula with radius \(r\). There is an algorithm which, given a tuple \(\overline{a}\) in \(M\) and the following data_
_(1) an oracle for \(M\)_
_(2) and a finite set \(U\subseteq M\),_
_tries to check whether \(M\vDash\varphi(\overline{a})\). Furthermore, if \(U\) contains the \(r\)-neighborhood of every coordinate of \(\overline{a}\) then the output of the algorithm is correct._
Proof.: Let \(\theta(\overline{x},\overline{y})\) be a quantifier free formula such that \(\varphi(\overline{x})=\exists\overline{y}\,\theta(\overline{x},\overline{y})\) and let \(n=|\overline{x}|\) and \(m=|\overline{y}|\) (i.e. the number of free and bound variables in \(\varphi\), respectively). Next, fix a finite set \(V\) such that for each possible \(r\)-neighborhood type \(p\), \(V\) contains at least \((2r+1)(n+m+1)\) points of type \(p\) (or if fewer than \((2r+1)(n+m+1)\) points have \(r\)-neighborhood type \(p\) then \(V\) contains every such point).6 Also \(V\) should contain \(0\). Let \(V^{\prime}\) be the set consisting
of all elements within distance \(r\) of some element of \(V\). Note that since \(V^{\prime}\) is finite, we can "hard-code" it into our algorithm.
_Algorithm description._ To check if \(M\vDash\varphi(\overline{a})\), look at each tuple \(\overline{b}\) of elements of \(U\cup V^{\prime}\) and check if \(M\vDash\theta(\overline{a},\overline{b})\). If this occurs for at least one such \(\overline{b}\) then output "true." Otherwise, output "false." Note that checking the truth of a quantifier free formula (such as \(\theta\)) is computable from an oracle for \(M\).
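In rough code, the check just described might look as follows. This is only a sketch under simplifying assumptions: we pretend that the oracle for \(M\) is packaged as a function `models_theta` which evaluates the quantifier free matrix \(\theta\) on concrete tuples, and that \(U\) and \(V^{\prime}\) are given as finite lists; the names are ours.

```python
# Sketch of the satisfaction check for an existential formula
# phi(x) = ∃y θ(x, y). We assume, purely for illustration, that the oracle
# for M is packaged as models_theta(a_tuple, b_tuple) -> bool, evaluating the
# quantifier free matrix θ, and that U and V_prime are finite lists of
# elements of M.

from itertools import product
from typing import Callable, List, Sequence

def check_existential(a_tuple: Sequence,
                      m: int,                     # number of bound variables
                      models_theta: Callable[[Sequence, Sequence], bool],
                      U: List, V_prime: List) -> bool:
    candidates = list(U) + list(V_prime)          # elements of U ∪ V'
    for b_tuple in product(candidates, repeat=m):
        if models_theta(a_tuple, b_tuple):
            return True
    return False
```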
_Verification._ Let's assume that \(U\) contains the \(r\)-neighborhood of each coordinate of \(\overline{a}\) and check that the output of the algorithm is correct. It is obvious that the algorithm has no false positives: if \(M\vDash\theta(\overline{a},\overline{b})\) for some \(\overline{b}\) then \(M\vDash\varphi(\overline{a})\). Thus it suffices to assume that \(M\vDash\varphi(\overline{a})\) and show that there is some tuple \(\overline{b}\) in \(U\cup V^{\prime}\) such that \(M\vDash\theta(\overline{a},\overline{b})\).
To accomplish this, we will pick elements of \(\overline{b}\) one at a time and, at each step, ensure that all the elements we have picked so far come from the set \(U\cup V^{\prime}\). More precisely, we will pick elements \(b_{1},\ldots,b_{m}\) such that for each \(i\leq m\),
\[M\vDash\exists y_{i+1}\ldots\exists y_{m}\,\theta(\overline{a},b_{1},\ldots,b_ {i},y_{i+1},\ldots,y_{m})\]
and we will try to ensure that for each \(i\), \(b_{i}\in U\cup V^{\prime}\). However, in order to do this, we will need a somewhat stronger inductive assumption.
Let's first explain on an informal level how the induction works and why we need a stronger inductive assumption. On the first step of the induction, things work pretty well. It is possible to use the indiscernability principle to show that we can pick some \(b_{1}\) which satisfies the condition above and which is close to some element of either \(\overline{a}\) or \(V\). Since \(U\) contains a reasonably large neighborhood around each element of \(\overline{a}\) and \(V^{\prime}\) contains a reasonably large neighborhood around each element of \(V\), this means we can pick \(b_{1}\) from \(U\cup V^{\prime}\). On the second step of the induction, however, things start to go wrong. We can again use the indiscernability principle to show that we can pick some \(b_{2}\) which satisfies the condition above and which is close to either \(b_{1}\) or to some element of either \(\overline{a}\) or \(V\). In the latter case, there is no problem: we can still pick \(b_{2}\) from \(U\cup V^{\prime}\). But in the former case, there may be a problem. If the element \(b_{1}\) we picked on the first step happens to be near the "boundary" of \(U\cup V^{\prime}\) then even a \(b_{2}\) which is relatively close to it might no longer be inside \(U\cup V^{\prime}\).
We can fix this problem by requiring not just that \(b_{1}\) is in \(U\cup V^{\prime}\), but also that it is far from the "boundary" of \(U\cup V^{\prime}\). In other words, we need to require that \(b_{1}\) is close to \(\overline{a}\) or \(V\) in some stronger way than simply requiring that it be in \(U\cup V^{\prime}\). In fact, it is enough to require that \(b_{1}\) be within distance \(r/2\) of some element of \(\overline{a}\) or \(V\) and more generally, that each \(b_{i}\) is within distance \(r/2+\ldots+r/2^{i}\) of some element of \(\overline{a}\) or \(V\).
To state this formally, we define sets \(W_{0}\subseteq W_{1}\subseteq W_{2}\subseteq\ldots\subseteq W_{m}\) as follows. \(W_{0}\) consists of the coordinates of \(\overline{a}\) together with the elements of \(V\). For each \(0<i\leq m\), \(W_{i}\) consists of all points in \(M\) which are within distance \(r/2^{i}\) of some element of \(W_{i-1}\) (note that this is equivalent to being within distance \(r/2+r/4+\ldots+r/2^{i}\) of some element of \(W_{0}\)). Note that by assumption, \(U\cup V^{\prime}\) contains the \(r\)-neighborhood of each element of \(W_{0}\). It follows that each \(W_{i}\) is contained in \(U\cup V^{\prime}\).
Also, define a sequence of formulas \(\varphi_{0},\varphi_{1},\ldots,\varphi_{m}\) by removing the quantifiers from \(\varphi\) one at a time. More precisely, define
\[\varphi_{i}(\overline{x},y_{1},\ldots,y_{i}):=\exists y_{i+1}\,\ldots,\exists y _{m}\theta(\overline{x},\overline{y}).\]
So, for example,
* \(\varphi_{0}(\overline{x})=\exists y_{1}\ldots\exists y_{m}\theta(\overline{x},\overline{y})=\varphi(\overline{x})\)
* \(\varphi_{1}(\overline{x},y_{1})=\exists y_{2}\ldots\exists y_{m}\theta( \overline{x},\overline{y})\)
* \(\varphi_{2}(\overline{x},y_{1},y_{2})=\exists y_{3}\ldots\exists y_{m}\theta( \overline{x},\overline{y})\)
* \(\ldots\)
* \(\varphi_{m}(\overline{x},y_{1},\ldots,y_{m})=\theta(\overline{x},\overline{y})\).
We will now inductively construct a sequence of points \(b_{1},\ldots,b_{m}\) such that for each \(i\), \(b_{i}\in W_{i}\) and \(M\vDash\varphi_{i}(\overline{a},b_{1},\ldots,b_{i})\). Since \(W_{m}\subseteq U\cup V^{\prime}\) and \(\varphi_{m}=\theta\), this is sufficient to finish the proof.
The base case of this induction is simply the assertion that \(M\vDash\varphi(\overline{a})\) which we assumed above. Now assume that we have already found \(b_{1},\ldots,b_{i}\) and we will show how to find \(b_{i+1}\). Since \(M\vDash\varphi_{i}(\overline{a},b_{1},\ldots,b_{i})\), there is some \(c\) such that \(M\vDash\varphi_{i+1}(\overline{a},b_{1},\ldots,b_{i},c)\). The idea is that we can pick \(b_{i+1}\) by mimicking \(c\). If \(c\) is within distance \(r/2^{i+1}\) of some coordinate of \(\overline{a}\), \(0\) or some \(b_{j}\) for \(j\leq i\) then we set \(b_{i+1}=c\). Otherwise, we can pick \(b_{i+1}\) to be some element of \(V\) with the same \(r\)-neighborhood type as \(c\) and which is also distance more than \(r/2^{i+1}\) from all coordinates of \(\overline{a}\), \(0\) and all \(b_{j}\). We can do this because either \(V\) contains many points of that \(r\)-neighborhood type (more than all the points within distance \(r/2^{i+1}\) of \(\overline{a}\), \(0\) and \(b_{1},\ldots,b_{i}\); this is why we chose the number \((2r+1)(n+m+1)\)) or there are not very many such points and \(V\) contains \(c\) itself. Note that in the first case, \(b_{i+1}\) is within distance \(r/2^{i+1}\) of some element of \(W_{i}\), and in the second case, \(b_{i+1}\in V\). Thus in either case \(b_{i+1}\in W_{i+1}\).
Also, note that in either case \(\overline{a}b_{1}\ldots b_{i}c\) and \(\overline{a}b_{1}\ldots b_{i}b_{i+1}\) have the same \(r/2^{i+1}\)-type. Since the radius of \(\varphi_{i+1}\) can be seen to be \(r/2^{i+1}\) and \(M\vDash\varphi_{i+1}(\overline{a},b_{1},\ldots,b_{i},c)\), the indiscernability principle implies that \(M\vDash\varphi_{i+1}(\overline{a},b_{1},\ldots,b_{i+1})\), as desired.
We now want to give an algorithm to compute the satisfaction relation of an arbitrary formula. One way to do this is to recursively apply the idea of Proposition 4.4 to reduce checking the truth of a formula with an arbitrary number of quantifiers to checking the truth of a finite number of atomic formulas. However, if we invoke the quantifier elimination results of section 2.3 then we can do something simpler. Recall that Theorem 2.9 tells us every formula is equivalent over \(M\) to a Boolean combination of existential formulas. Thus the algorithm for existential formulas almost immediately yields an algorithm for arbitrary formulas.
**Proposition 4.5** (Satisfaction algorithm for arbitrary formulas).: _Suppose \(\varphi(\overline{x})\) is an \(\mathcal{L}^{\prime}\)-formula. There is a number \(r\in\mathbb{N}\) and an algorithm which, given any tuple \(\overline{a}\) in \(M\) and the following data_
_(1) an oracle for \(M\)_
_(2) and a finite set \(U\subseteq M\),_
_tries to check whether \(M\vDash\varphi(\overline{a})\). Furthermore, if \(U\) contains the \(r\)-neighborhood of every coordinate of \(\overline{a}\) then the algorithm is correct._
**Definition 4.6**.: For convenience, we will refer to the number \(r\) in the statement of this proposition as the **satisfaction radius** of \(\varphi\).
Proof.: By Theorem 2.9, \(\varphi(\overline{x})\) is equivalent over \(M\) to a Boolean combination of existential \(\mathcal{L}^{\prime}\)-formulas, \(\psi_{1}(\overline{x}),\ldots,\psi_{m}(\overline{x})\) (which may have parameters from \(M\)). Let \(r_{1},\ldots,r_{m}\) denote the radii of these formulas and let \(r=\max(r_{1},\ldots,r_{m})\).
The algorithm is simple to describe, but is made slightly more complicated by the fact that the formulas \(\psi_{i}\) may contain parameters from \(M\). For clarity, we will first assume that they do not contain such parameters and then explain how to modify the algorithm in the case where they do.
Here's the algorithm (in the case where there are no parameters). For each \(i\leq m\), use the algorithm for existential formulas and the set \(U\) to check the truth of \(\psi_{i}(\overline{a})\). Then assume all the reported truth values are correct and use them to compute the truth value of \(\varphi(\overline{a})\).
If \(U\) contains an \(r\)-neighborhood around every coordinate of \(\overline{a}\) then for each \(i\leq m\), it contains an \(r_{i}\)-neighborhood around each coordinate of \(\overline{a}\). So in this case, the truth values we compute for \(\psi_{1}(\overline{a}),\ldots,\psi_{m}(\overline{a})\) are guaranteed to be correct and thus the final truth value for \(\varphi(\overline{a})\) is also correct.
Now suppose that the formulas \(\psi_{i}\) contain parameters from \(M\). Let \(\overline{b}_{i}\) be the tuple of parameters of \(\psi_{i}\). Let \(V\) be the set containing the \(r\)-neighborhood of each element of each tuple of parameters \(\overline{b}_{i}\). The only modification that is needed to the algorithm described above is that instead of using \(U\) itself, we should use \(U\cup V\) when applying the satisfaction algorithm for existential formulas (and note that since \(V\) is finite, we can simply hard-code it into our algorithm).
Here's why this algorithm is useful. Note that if we had some way of computably generating the set \(U\) then we would be able to outright compute the satisfaction relation for \(\varphi\) using just an oracle for \(M\). In turn, this would allow us to use an oracle for \(M\) to compute the sequence \([\![A(\underline{0})]\!]^{M},[\![A(\underline{1})]\!]^{M},[\![A(\underline{2}) ]\!]^{M},\ldots\), which is a path through \(R\). Since \(R\) has no computable paths, this would imply \(M\) is not computable. Thus to finish our proof of the uncomputability of \(M\), it is enough to find an algorithm for generating the set \(U\) needed by the satisfaction algorithm. Actually, we can't quite do this in general, but we can do something almost as good: we can enumerate a short list of candidates for \(U\). This is enough to show that the sequence \([\![A(\underline{0})]\!]^{M},[\![A(\underline{1})]\!]^{M},[\![A(\underline{2}) ]\!]^{M},\ldots\) is guessable from an oracle for \(M\). Since \(R\) has no guessable paths, this is still enough to imply that \(M\) is not computable.
### The guessing algorithm
We will now prove that \(M\) is not computable. As discussed above, we will do so by proving that the sequence \([\![A(\underline{0})]\!]^{M},[\![A(\underline{1})]\!]^{M},[\![A(\underline{2}) ]\!]^{M},\ldots\) is guessable relative to an oracle for \(M\). Since the axioms of \(T\) ensure that this sequence is a path through \(R\) and since no path through \(R\) is guessable, this implies that \(M\) is not computable.
In other words, we can complete our proof by constructing an algorithm which, given an oracle for \(M\) and a number \(n\), enumerates a list of at most \(O(n^{2})\) guesses (at least one of which is correct) for the finite sequence \([\![A(\underline{0})]\!]^{M},[\![A(\underline{1})]\!]^{M},\ldots,[\![A( \underline{n})]\!]^{M}\).
The rest of this section is devoted to constructing this algorithm. Since it would become annoying to append the phrase "relative to an oracle for \(M\)" to every other sentence that follows, we will adopt the convention that we always implicitly have access to an oracle for \(M\), even if we do not say so explicitly. Thus whenever we say that something is computable or computably enumerable, we mean relative to an oracle for \(M\).
**Warm-up: when \(S\) has a quantifier free definition.** We will begin by constructing an algorithm for one especially simple case. Note that this case is included only to demonstrate how the satisfaction algorithm can be used and to motivate the rest of the proof; it can be skipped without missing any essential details.
The "especially simple case" we are referring to is the case in which \(S\) has a quantifier free \(\mathcal{L}^{\prime}\)-definition. We will see that in this case, the sequence \([\![A(\underline{0})]\!]^{M},[\![A(\underline{1})]\!]^{M},\ldots\) is not only guessable, but actually computable.
To begin, let \(\varphi_{S}(x,y)\) be the \(\mathcal{L}^{\prime}\)-definition of \(S\)--i.e. for every \(a,b\in M\), \(M\vDash S(a)=b\) if and only if \(M\vDash\varphi_{S}(a,b)\). Note that since \(\varphi_{S}\) is quantifier-free, the successor function in \(M\) is computable: to find \(S(a)\) we can just enumerate elements of \(M\) until we see an element \(b\) such that \(M\vDash\varphi_{S}(a,b)\) (which we can check because \(\varphi_{S}\) is quantifier-free). Likewise, we
can also compute the predecessor function: instead of waiting for an element \(b\) such that \(M\vDash\varphi_{S}(a,b)\), we wait for an element \(b\) such that \(M\vDash\varphi_{S}(b,a)\).
We can now explain how to compute \(\llbracket A(\underline{0})\rrbracket^{M},\llbracket A(\underline{1}) \rrbracket^{M},\ldots\). Let \(\varphi_{A}(x)\) be the \(\mathcal{L}^{\prime}\)-definition of \(A\) and let \(r\) be the satisfaction radius of \(\varphi_{A}\). Given a number \(n\), do the following.
1. First use the fact that the successor function is computable to compute \(\underline{n}=S^{n}(0)\).
2. Next, use the fact that the successor and predecessor functions are computable to compute the \(r\)-neighborhood of \(\underline{n}\). Let \(U\) denote the set of elements in this \(r\)-neighborhood.
3. Finally, use the satisfaction algorithm for \(\varphi_{A}\), along with the set \(U\), to check whether \(M\vDash\varphi_{A}(\underline{n})\) and output the result as the truth value of \(A(\underline{n})\). Note that since \(U\) really does contain the \(r\)-neighborhood of \(\underline{n}\), the outcome of this step is guaranteed to be correct.
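The three steps above can be sketched in code as follows. This is purely illustrative: `successor_of` and `predecessor_of` stand for the witness searches described above, and `satisfies_phi_A` stands for the satisfaction algorithm of Proposition 4.5 applied to \(\varphi_{A}\) with a supplied auxiliary set; all of these names are ours.

```python
# Sketch of the warm-up computation of [[A(n)]]^M in the special case where
# S has a quantifier free definition. The functions successor_of,
# predecessor_of and satisfies_phi_A are stand-ins for the procedures
# described in the text (the first two are computable from an oracle for M in
# this special case; the third is the satisfaction algorithm for phi_A).

from typing import Callable, List

def truth_value_of_A_at_n(n: int, r: int,
                          zero,                        # the element 0 of M
                          successor_of: Callable,
                          predecessor_of: Callable,
                          satisfies_phi_A: Callable[[object, List], bool]) -> bool:
    # Step 1: compute the element n = S^n(0).
    a = zero
    for _ in range(n):
        a = successor_of(a)
    # Step 2: compute the r-neighborhood U of that element.
    U = [a]
    fwd, bwd = a, a
    for _ in range(r):
        fwd, bwd = successor_of(fwd), predecessor_of(bwd)
        U.extend([fwd, bwd])
    # Step 3: run the satisfaction algorithm for phi_A with the set U.
    return satisfies_phi_A(a, U)
```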
### Idea of the full algorithm
We have just seen an algorithm that computes the sequence \(\llbracket A(\underline{0})\rrbracket^{M},\llbracket A(\underline{1}) \rrbracket^{M},\ldots\) (without needing to make guesses) in the special case where \(S\) is definable by a quantifier-free \(\mathcal{L}^{\prime}\)-formula. We can no longer assume that there is a quantifier-free definition of \(S\), but by applying the quantifier elimination theorem for mutually algebraic formulas over mutually algebraic structures from section 2.3, we have something almost as good. Namely, let \(\varphi_{S}(x,y)\) be the \(\mathcal{L}^{\prime}\)-definition of \(S\). It is easy to see that \(\varphi_{S}\) is mutually algebraic and so, by Theorem 2.10, there is a mutually algebraic existential formula \(\psi_{S}(x,y)\) (possibly with parameters from \(M\)) such that \(M\vDash\varphi_{S}(x,y)\to\psi_{S}(x,y)\).
The formula \(\psi_{S}(x,y)\) should be thought of as an "approximation" to the successor relation in \(M\). In particular, for a fixed element \(a\), any \(b\) such that \(M\vDash\psi_{S}(a,b)\) holds should be thought of as a candidate for \(S(a)\) and any \(b\) such that \(M\vDash\psi_{S}(b,a)\) holds should be thought of as a candidate for \(P(a)\). This is justified by the following two facts.
1. Since \(M\vDash\varphi_{S}(x,y)\to\psi_{S}(x,y)\), we have \(M\vDash\psi_{S}(a,S(a))\) and \(M\vDash\psi_{S}(P(a),a)\). In other words, the candidates for the successor and predecessor of \(a\) include the true successor and predecessor of \(a\), respectively.
2. Since \(\psi_{S}\) is mutually algebraic, there are not very many such candidates.
The core idea of the algorithm is that since \(\psi_{S}(x,y)\) is existential, the set of candidates for \(S(a)\) and \(P(a)\) is computably enumerable: to check if \(M\vDash\psi_{S}(a,b)\), we simply wait until we see some tuple in \(M\) which can serve as a witness. Thus we have an algorithm which, given any \(a\in M\), enumerates a short list of candidates for \(S(a)\) and \(P(a)\).
Next, we can bootstrap this into an algorithm which, for any \(a\in M\) and any number \(n\in\mathbb{N}\), enumerates a list of guesses for the sequence \(a-\underline{n},a-(n-1),\ldots,a+\underline{n}\): basically, enumerate guesses for the successor and predecessor of \(a\), then enumerate guesses for the successor and predecessor of each of those guesses and so on, for \(n\) rounds. This puts us in a situation much like the previous subsection (where the successor and predecessor functions were computable). In particular, we can enumerate guesses for the sequence \(\llbracket A(\underline{0})\rrbracket^{M},\ldots,\llbracket A(\underline{n}) \rrbracket^{M}\) as follows.
1. First, let \(\varphi_{A}(x)\) be the \(\mathcal{L}^{\prime}\)-definition of \(A\) and let \(r_{A}\) be the satisfaction radius of \(\varphi_{A}\).
2. Given a number \(n\), enumerate guesses for the sequence \(\underline{-r_{A}},\ldots,\underline{n+r_{A}}\).
3. For each such guess, use the satisfaction algorithm to compute a guess for the sequence \(\llbracket A(\underline{0})\rrbracket^{M},\ldots,\llbracket A(\underline{n}) \rrbracket^{M}\).
Note that if the guess from the second step is correct then the guess from the last step will be too because in this case we have correctly identified \(\underline{0},\ldots,\underline{n}\), along with the \(r_{A}\)-neighborhood of each one.
There is only one problem with this algorithm: we may enumerate too many guesses. Suppose that our algorithm for enumerating guesses for the successor of an element of \(M\) enumerates \(k\) guesses. Then it seems that we might end up enumerating up to \(k^{n}\) guesses for \(a+\underline{n}\): \(k\) guesses for \(a+\underline{1}\), \(k^{2}\) guesses for \(a+\underline{2}\) (since each guess for \(a+\underline{1}\) gives rise to \(k\) guesses for \(a+\underline{2}\)), and so on. Thus in the algorithm above, we might end up with about \(k^{n}\) guesses for the sequence \([\![A(\underline{0})]\!]^{M},\ldots,[\![A(\underline{n})]\!]^{M}\), which is not enough to show that the sequence \([\![A(\underline{0})]\!]^{M},[\![A(\underline{1})]\!]^{M},\ldots\) is guessable.
The second key idea of our algorithm is that we actually don't end up with so many guesses. It is possible to show that since \(\psi_{S}\) is mutually algebraic, if \(M\vDash\psi_{S}(a,b)\) then, almost always, \(a\) and \(b\) are close to each other. In particular, if the radius of \(\psi_{S}\) is \(r\) then with only finitely many exceptions, \(a\) and \(b\) must be distance at most \(r\) apart (this will be proved in Lemma 4.7 below). If we ignore the finitely many exceptions, then this implies that for any \(a\), every candidate for \(S(a)\) is within distance \(r\) of \(a\). By induction, this implies that every candidate for \(a+\underline{n}\) is within distance \(rn\) of \(a\). The point is that this means there are at most \(2rn+1\) such candidates (rather than \(k^{n}\)).
This does not quite solve our problem: even if there are only about \(rn\) candidates for \(a+\underline{n}\), there could still be exponentially many candidates for the sequence \(a-\underline{n},\ldots,a+\underline{n}\). However, it can be combined with other tricks to reduce the number of guesses to \(O(n^{2})\). This will be explained in detail in the proof of Lemma 4.9.
### Details of the algorithm
We will now describe the details of the algorithm and verify that it works correctly. We will break the algorithm (and its verification) into three parts, which work as follows.
1. **The successor and predecessor guessing algorithm:** an algorithm which takes as input an element \(a\in M\) and uses the existential formula approximating the successor relation to enumerate candidates for the successor and predecessor of \(a\). This is described in Lemma 4.8.
2. **The neighborhood guessing algorithm:** an algorithm which takes as input an element \(a\in M\) and a number \(n\) and uses the ideas discussed above to enumerate candidates for the sequence \(a-\underline{n},\ldots,a+\underline{n}\). This is described in Lemma 4.9.
3. **The \(A\) guessing algorithm:** an algorithm which takes as input a number \(n\) and uses the neighborhood guessing algorithm together with the satisfaction algorithm to enumerate candidates for the sequence \([\![A(\underline{0})]\!]^{M},\ldots,[\![A(\underline{n})]\!]^{M}\). This is described in Lemma 4.10.
Before describing these algorithms and proving their correctness, we need to prove one technical lemma (which is related to our comment above stating that if \(M\vDash\psi_{S}(a,b)\) then \(a\) and \(b\) are usually close together).
**Lemma 4.7**.: _Suppose that \(\varphi(x,y)\) is a formula (possibly with parameters from \(M\)) of radius \(r\) which is mutually algebraic over \(M\). There is a finite set \(X\) of elements of \(M\) such that if \(M\vDash\varphi(a,b)\) then either \(a\) and \(b\) are distance at most \(r\) apart or at least one of \(a,b\) is in \(X\).7_
Footnote 7: Note that if \(T\) is modified in the way described in Remark 3.1 then both the statement and proof of this lemma can be simplified somewhat. In particular, we can replace the set \(X\) with the \(r\)-neighborhood of \(0\).
Proof.: It will help to first make explicit the parameters of \(\varphi\). Let \(\overline{c}\) denote the tuple of parameters and write \(\varphi^{\prime}(x,y,\overline{z})\) to denote the version of \(\varphi\) with the parameters exposed, i.e. \(\varphi(x,y)\) is \(\varphi^{\prime}(x,y,\overline{c})\).
Call a pair \((a,b)\) **exceptional** if \(a\) and \(b\) are more than distance \(r\) apart, both are more than distance \(r\) from \(0\) and from every coordinate of \(\overline{c}\), and \(M\vDash\varphi(a,b)\). We will show that if
\((a,b)\) is exceptional then the \(r\)-neighborhood type of \(a\) occurs only finitely often in \(M\), and likewise for \(b\). Since there are only finitely many \(r\)-neighborhood types, this shows that there are only finitely many exceptional pairs. This is sufficient to finish the proof since we can take \(X\) to consist of all elements which are part of some exceptional pair, together with the \(r\)-neighborhood of \(0\) and of each coordinate of \(\overline{c}\).
The claim about exceptional pairs follows from the indiscernability principle. Suppose \((a,b)\) is an exceptional pair. If \(a^{\prime}\) is any element of \(M\) which is distance more than \(r\) from \(b\), from \(0\) and from every coordinate of \(\overline{c}\) and which has the same \(r\)-neighborhood type as \(a\) then by the indiscernability principle we have
\[M\vDash\varphi(a,b)\implies M\vDash\varphi^{\prime}(a,b,\overline{c})\implies M \vDash\varphi^{\prime}(a^{\prime},b,\overline{c})\]
and hence \(M\vDash\varphi(a^{\prime},b)\). Since \(\varphi\) is mutually algebraic, there can only be finitely many such \(a^{\prime}\). Thus, outside of the \(r\)-neighborhoods of \(b\), \(0\) and the coordinates of \(\overline{c}\), there are only finitely many elements with the same \(r\)-neighborhood type as \(a\). Since these \(r\)-neighborhoods are themselves finite, they also contain only finitely many elements with the same \(r\)-neighborhood type as \(a\) and thus we have shown that the \(r\)-neighborhood type of \(a\) only occurs finitely often in \(M\). Symmetric reasoning establishes the same result for \(b\).
**Lemma 4.8** (Guessing algorithm for successors and predecessors).: _There is an algorithm which, given any \(a\in M\) enumerates two lists of elements of \(M\) such that_
1. \(S(a)\) _is in the first list and_ \(P(a)\) _is in the second list._
2. _There is a constant upper bound (independent of_ \(a\)_) on the distance between any enumerated element and_ \(a\)_._
Proof.: Let \(\varphi_{S}(x,y)\) be the \(\mathcal{L}^{\prime}\)-definition of \(S\) (i.e. \(M\vDash S(a)=b\) if and only if \(M\vDash\varphi_{S}(a,b)\)). Since \(\varphi_{S}(x,y)\) is mutually algebraic, we can apply Theorem 2.10 to obtain a mutually algebraic existential \(\mathcal{L}^{\prime}\)-formula \(\psi_{S}(x,y)\) (which may contain parameters from \(M\)) such that \(M\vDash\varphi_{S}(x,y)\to\psi_{S}(x,y)\). Let \(r\) be the radius of \(\psi_{S}\). By Lemma 4.7, there is a finite set \(X\) such that if \(M\vDash\psi_{S}(b,c)\) then either \(b\) and \(c\) are distance at most \(r\) apart or at least one of \(b,c\) is in \(X\). We will hard-code into our algorithm the elements of \(X\), along with the identity of their successors and predecessors.
Note that since \(\psi_{S}(x,y)\) is an existential formula, it follows that for a fixed \(a\), the set of elements \(b\) such that \(M\vDash\psi_{S}(a,b)\) is computably enumerable (to see why, note that we can simply enumerate tuples in \(M\) until we find one that witnesses the existential formula \(\psi_{S}(a,b)\)), and likewise for the set of elements \(b\) such that \(M\vDash\psi_{S}(b,a)\). Thus our algorithm may work as follows.
1. Begin enumerating elements \(b\) such that \(M\vDash\psi_{S}(a,b)\) or \(M\vDash\psi_{S}(b,a)\).
2. For each element \(b\) such that \(M\vDash\psi_{S}(a,b)\), check if either \(a\) or \(b\) is in \(X\). If so, use the hard-coded list of successors and predecessors of elements of \(X\) to check if \(b\) is a successor of \(a\). If this is true, enumerate \(b\) into the first list. If \(a\) and \(b\) are both not in \(X\) then enumerate \(b\) into the first list with no extra checks.
3. Do the same thing for each element \(b\) such that \(M\vDash\psi_{S}(b,a)\), but enumerate \(b\) into the second list instead of the first.
Since \(M\vDash\varphi_{S}(x,y)\to\psi_{S}(x,y)\), the true successor and predecessor of \(a\) will be successfully enumerated. Also, if \(b\) is some element of \(M\) which is distance more than \(r\) from \(a\) then either \(M\nvDash\psi_{S}(a,b)\) and \(M\nvDash\psi_{S}(b,a)\), in which case \(b\) will not be enumerated, or one of \(a,b\) is in \(X\), in which case \(b\) will still not be enumerated (because it is not a true successor or predecessor of \(a\)).
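The enumeration described in this proof can be sketched roughly as follows. The sketch is ours and makes simplifying assumptions: the oracle for \(M\) is packaged as a generator `elements_of_M()` over the domain, and the check \(M\vDash\psi_{S}(a,b)\) is treated as a total boolean function `models_psi_S`, whereas in the proof it is only computably enumerable (the honest version dovetails the witness searches instead). The dictionaries `succ_in_X` and `pred_in_X` represent the hard-coded successor and predecessor data for the exceptional set \(X\) from Lemma 4.7.

```python
# Sketch of the successor-candidate enumeration in Lemma 4.8 (illustrative
# only). For simplicity the check M |= psi_S(a, b) is treated here as a total
# function; in the actual proof it is only computably enumerable.

from typing import Callable, Dict, Iterable, Iterator, Set

def successor_candidates(a,
                         elements_of_M: Callable[[], Iterable],
                         models_psi_S: Callable[[object, object], bool],
                         X: Set,
                         succ_in_X: Dict,
                         pred_in_X: Dict) -> Iterator:
    """Enumerate candidates for S(a); under the stated assumptions the true
    successor always appears, and every candidate is within a bounded
    distance of a."""
    for b in elements_of_M():
        if not models_psi_S(a, b):
            continue
        if a in X:
            if succ_in_X[a] == b:        # a's successor is hard-coded
                yield b
        elif b in X:
            if pred_in_X[b] == a:        # b's predecessor is hard-coded
                yield b
        else:
            yield b                      # no extra check needed
```

Predecessor candidates are enumerated symmetrically, using \(\psi_{S}(b,a)\) in place of \(\psi_{S}(a,b)\).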
**Lemma 4.9** (Guessing algorithm for neighborhoods).: _There is an algorithm which, given any \(a\in M\) and number \(n\in\mathbb{N}\), enumerates a list of at most \(O(n^{2})\) guesses for the sequence \(a-\underline{n},\ldots,a+\underline{n}\), one of which is correct._
Proof.: It is easiest to describe our algorithm in the following way. We will first describe an algorithm which has access to certain extra information (which might not be computable from an oracle for \(M\)) and which uses this extra information to correctly compute the sequence \(a-\underline{n},\ldots,a+\underline{n}\). We then obtain an algorithm for enumerating guesses for the sequence by trying each possible value of the extra information and running the algorithm on each of these values in parallel.8 To finish, we will have to show that there are only \(O(n^{2})\) possible values for the extra information.
Footnote 8: A slightly subtle point here is that the algorithm which uses extra information to compute \(a-\underline{n},\ldots,a+\underline{n}\) might not terminate if the extra information it is given is incorrect. Thus some of the possible values that we try for the extra information will never actually output a guess. This is why we only say that our final algorithm _enumerates_ a list of guesses rather than that it _computes_ a list of guesses.
To begin, let \(r_{1}\) be the constant from the statement of Lemma 4.8 (i.e. the upper bound on the distance between any \(a\) and any element which is enumerated by the algorithm for guessing successors and predecessors of \(a\)). Let \(\varphi_{S}(x,y)\) be the \(\mathcal{L}^{\prime}\)-definition of \(S\) and let \(r_{2}\) be the satisfaction radius of \(\varphi_{S}\).
Suppose we are given an element \(a\in M\) and a number \(n\in\mathbb{N}\) as input. Let \(N=r_{1}n+r_{2}\). Our algorithm proceeds in two phases.
1. In the first phase, we will use the algorithm from Lemma 4.8 to collect candidates for \(a+\underline{i}\) for each \(-N\leq i\leq N\). More precisely, for each such \(i\) we will find a set \(U_{i}\) which contains \(a+\underline{i}\) and which is contained in the \(r_{1}|i|\)-neighborhood of \(a\).
2. In the second phase, we will use the sets of candidates collected in the first stage as input to the satisfaction algorithm (applied to \(\varphi_{S}\)) to determine the exact identities of \(a+\underline{i}\) for each \(-n\leq i\leq n\).
The "extra information" that we alluded to above is needed in the first phase of the algorithm. This is because the sets \(U_{i}\) are not quite computable from an oracle for \(M\), but only computably enumerable. However, since the they are all finite, it is possible to compute them exactly with only a small amount of additional information. Let \(i\) be the index of the last \(U_{i}\) to have a new element enumerated into it and let \(m\) be the size of \(U_{i}\) once all its elements have been enumerated (note that such an \(i\) and \(m\) exist because all the \(U_{j}\) are finite). We claim that the pair \((i,m)\) is enough information to allow us to compute all the sets \(U_{j}\) exactly and that there are only \(O(n^{2})\) possible values for this pair.
To see why we can compute all the \(U_{j}\) exactly, note that given \(i\) and \(m\) we can simply keep enumerating elements into all the \(U_{j}\) until we see that \(U_{i}\) has size \(m\). To see why there are only \(O(n^{2})\) possible values for the pair \((i,m)\), note that there are only \(2N+1\) possible values for \(i\) and at most \(r_{1}(2N+1)\) possible values for \(m\) (since \(U_{i}\) is contained in the \(r_{1}|i|\)-neighborhood of \(a\), which has \(2r_{1}|i|+1\leq r_{1}(2N+1)\) elements). Thus there are at most \(r_{1}(2N+1)^{2}=O(n^{2})\) possible values for \((i,m)\).
_Phase 1: collecting candidates._ The sets \(U_{i}\) for \(-N\leq i\leq N\) can be enumerated as follows. To begin with, set \(U_{0}=\{a\}\) and set all other \(U_{i}=\varnothing\). Then run the following processes in parallel: for each \(-N<i<N\) and each element \(b\) of \(U_{i}\), use the algorithm of Lemma 4.8 to enumerate candidates for the successor and predecessor of \(b\). If \(i\geq 0\) then add each such candidate for the successor of \(b\) to \(U_{i+1}\). If \(i\leq 0\) then add each candidate for the predecessor of \(b\) to \(U_{i-1}\). It is easy to show by induction that for each \(i\), \(a+\underline{i}\) will eventually be enumerated into \(U_{i}\) and that each element enumerated into \(U_{i}\) is distance at most \(r_{1}|i|\) from \(a\).
_Phase 2: computing neighbors exactly._ Given the sets \(U_{i}\) from phase 1, we can compute the exact identities of \(a-\underline{n},\ldots,a+\underline{n}\) as follows. First, let \(U=U_{-N}\cup\ldots\cup U_{N}\) and note that \(a+\underline{0}=a\). Next, loop over \(i=0,1,\ldots,n-1\). On step \(i\), we will compute \(a+\underline{i}+\underline{1}\) and \(a-\underline{(i+1)}\). Suppose that we are on step \(i\) of the algorithm and assume for induction that we have already successfully computed \(a+\underline{i}\) and \(a-\underline{i}\) (note that for \(i=0\) this is trivial). Now do the following:
1. For each \(b\in U_{i+1}\), use the satisfaction algorithm (of Proposition 4.5) with the set \(U\) to check if \(M\vDash\varphi_{S}(a+\underline{i},b)\).
2. For each \(b\in U_{-(i+1)}\), use the satisfaction algorithm with the set \(U\) to check if \(M\vDash\varphi_{S}(b,a-\underline{i})\).
Note that each \(b\in U_{i+1}\) is within distance \(r_{1}(i+1)\) of \(a\). Since \(U\) contains the entire \(N\)-neighborhood of \(a\) and \(N=r_{1}n+r_{2}\geq r_{1}(i+1)+r_{2}\), \(U\) also contains the \(r_{2}\)-neighborhood of \(b\). Thus the conditions of the satisfaction algorithm are fulfilled and so we correctly compute whether \(b\) is the successor of \(a+\underline{i}\) or not. And since \(U_{i+1}\) is guaranteed to contain \(a+\underline{i+1}\), our algorithm will correctly identify \(a+\underline{i+1}\). Completely symmetric reasoning applies to show that our algorithm will correctly identify \(a-\underline{(i+1)}\).
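For readers who prefer pseudocode, the following Python sketch summarizes the two phases just described. The helpers `successor_candidates` and `predecessor_candidates` (the enumeration algorithm of Lemma 4.8) and `satisfies` (the satisfaction algorithm of Proposition 4.5, applied to \(\varphi_{S}\)) are treated as black-box oracles passed in as arguments, and the parameter `steps` stands in for the extra advice \((i,m)\) that tells us when every candidate set is complete; none of these names come from the paper itself.

```python
# Schematic sketch only: the oracles below are assumptions standing in for the
# algorithms of Lemma 4.8 (candidate enumeration) and Proposition 4.5
# (satisfaction checking); `steps` plays the role of the extra advice (i, m).

def collect_candidates(a, N, steps, successor_candidates, predecessor_candidates):
    """Phase 1: U[i] collects candidates for a + i, for every -N <= i <= N."""
    U = {i: set() for i in range(-N, N + 1)}
    U[0].add(a)
    for _ in range(steps):                         # dovetail the enumerations
        for i in range(-N + 1, N):
            for b in list(U[i]):
                if i >= 0:
                    U[i + 1].update(successor_candidates(b))
                if i <= 0:
                    U[i - 1].update(predecessor_candidates(b))
    return U

def identify_neighbors(a, n, U, satisfies):
    """Phase 2: pin down a - n, ..., a + n exactly using the candidate sets."""
    pool = set().union(*U.values())                # contains the N-neighborhood of a
    plus, minus = {0: a}, {0: a}
    for i in range(n):
        # exactly one candidate is the true successor / predecessor
        plus[i + 1] = next(b for b in U[i + 1] if satisfies(plus[i], b, pool))
        minus[i + 1] = next(b for b in U[-(i + 1)] if satisfies(b, minus[i], pool))
    return minus, plus
```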
**Lemma 4.10** (Guessing algorithm for \(A\)).: _There is an algorithm which, given any number \(n\in\mathbb{N}\), enumerates a list of at most \(O(n^{2})\) guesses for the sequence \([\![A(\underline{0})]\!]^{M},\ldots,[\![A(\underline{n})]\!]^{M}\), one of which is correct._
Proof.: Let \(\varphi_{A}(x)\) be the \(\mathcal{L}^{\prime}\)-definition of \(A\) (i.e. \(M\vDash A(a)\) if and only if \(M\vDash\varphi_{A}(a)\)) and let \(r\) be the satisfaction radius of \(\varphi_{A}\). This algorithm essentially just combines the algorithm for guessing neighborhoods with the satisfaction algorithm for \(\varphi_{A}\).
Given a number \(n\in\mathbb{N}\) as input, first use the algorithm from Lemma 4.9 to enumerate guesses for the sequence \(-\underline{(n+r)},\ldots,\underline{n+r}\) (this can be done by simply giving the element \(0\in M\) and the number \(n+r\) as input to that algorithm). Let \(b_{-(n+r)},\ldots,b_{n+r}\) be one such guess and let \(U=\{b_{i}\mid-(n+r)\leq i\leq n+r\}\). For each \(0\leq i\leq n\), use the satisfaction algorithm with the set \(U\) to check if \(M\vDash\varphi_{A}(b_{i})\). If so, report that \([\![A(\underline{i})]\!]^{M}\) is true and otherwise report that it is false.
So for each guess for the sequence \(-\underline{(n+r)},\ldots,\underline{n+r}\) we generate exactly one guess for the sequence \([\![A(\underline{0})]\!]^{M},\ldots,[\![A(\underline{n})]\!]^{M}\) and thus we generate at most \(O((n+r)^{2})=O(n^{2})\) guesses overall. Furthermore, one of the guesses for the sequence \(-\underline{(n+r)},\ldots,\underline{n+r}\) is guaranteed to be correct. For this guess, each \(b_{i}\) is actually equal to \(\underline{i}\) and for each \(i\leq n\), the set \(U\) really does contain the \(r\)-neighborhood of \(b_{i}\). Thus, for this guess, each \([\![A(\underline{i})]\!]^{M}\) is computed correctly.
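Continuing the sketch above, the guessing algorithm of this lemma is a thin wrapper: for each guessed initial segment it simply runs the satisfaction check for \(\varphi_{A}\) at the appropriate points. Again, both helpers are assumed oracles, not code from the paper.

```python
def guess_A(n, r, neighborhood_guesses, satisfies_phi_A):
    """Enumerate guesses for the truth values A(0), ..., A(n).

    `neighborhood_guesses(k)` is assumed to enumerate guesses for the elements
    -k, ..., k of M (Lemma 4.9), each as a dict i -> b_i, and
    `satisfies_phi_A(b, U)` is the satisfaction algorithm for phi_A.
    """
    for guess in neighborhood_guesses(n + r):
        U = set(guess.values())
        yield [satisfies_phi_A(guess[i], U) for i in range(n + 1)]
```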
## 5. Questions
### Bi-interpretability
Since definitional equivalence is a strong form of bi-interpretability, it seems reasonable to ask whether Theorem 1.2 still holds when definitional equivalence is replaced with bi-interpretability.
**Question 2**.: _Is there a consistent, c.e. theory such that no theory bi-interpretable with it has a computable model?_
It seems possible that the theory \(T\) we used in our proof of Theorem 1.2 could also be used to answer this question, but there are a few difficulties. One issue is that while the mutual algebraicity of a structure is preserved under definitional equivalence, it is not always preserved under bi-interpretability.
**Example 5.1** (Bi-interpretability fails to preserve mutual algebraicity).: Let \(\mathcal{L}\) be the language with just equality and let \(\mathcal{L}^{\prime}\) be a language with two sorts \(U\) and \(V\), and two function symbols \(f,g\colon V\to U\). Let \(T\) be the \(\mathcal{L}\)-theory describing an infinite set and let \(T^{\prime}\) be the \(\mathcal{L}^{\prime}\)-theory which states that \(U\) is infinite and \((f,g)\) is a bijection from \(V\) to \((U\times U)\setminus\{(x,x)\mid x\in U\}\).
Given a model of \(T^{\prime}\), we can obtain a model of \(T\) by forgetting the sort \(V\) and the functions \(f\) and \(g\). Given a model of \(T\) we can obtain a model of \(T^{\prime}\) as follows. Take as the underlying set for the model, the set of all pairs \((x,y)\) with pairs of the form \((x,x)\) forming the sort \(U\) and pairs of the form \((x,y)\) for \(x\neq y\) forming the sort \(V\). For the functions \(f\) and \(g\), simply take \(f((x,y))=(x,x)\) and \(g((x,y))=(y,y)\).
It is not hard to check that these two interpretations give a bi-interpretation. However, while every model of \(T\) is clearly mutually algebraic, the same is not true for \(T^{\prime}\). For example, the formula \(f(y)=x\) is not equivalent to any Boolean combination of mutually algebraic formulas.
A second issue (not unrelated to the first) is that, in our proof, we relied on the fact that any model \(M\) of a theory definitionally equivalent to \(T\) carries a notion of distance inherited from \(T\). In particular, we used this to bound the number of guesses required by the neighborhood guessing algorithm of Lemma 4.9. However, if \(M\) is only a model of a theory bi-interpretable with \(T\), it is not clear if there is still a good notion of distance which can play this role.
### Natural theories
Arguably, the theory \(T\) that we used to prove Theorem 1.2 is not very natural. It would be interesting to know if this is necessary.
**Question 3**.: _Is there a natural theory witnessing Theorem 1.2?_
Of course, much depends on what the word "natural" means. In the interests of asking a somewhat more concrete question, let's say that a theory is natural if it has been studied (at least implicitly) by mathematicians who are not logicians.
We can rephrase our question as follows: is there any natural theory which satisfies the robust version of the Tennenbaum property implicit in Theorem 1.2? In light of Pakhomov's results, which seem to show that any theory interpreting a decent amount of arithmetic is definitionally equivalent to a theory without the Tennenbaum property, it seems like a good idea to first ask whether any natural theory satisfies the regular version of the Tennenbaum property but does not interpret any nontrivial fragment of arithmetic. We are not aware of any such theory and would consider it highly interesting.
**Question 4**.: _Is there any natural (consistent) theory \(T\) such that \(T\) has no computable models and does not interpret any nontrivial fragment of arithmetic?_
One can ask a similar question on the level of models rather than theories. In analogy with our definition for theories, let's say that a countable structure is natural if it has been studied by mathematicians who are not logicians.
**Question 5**.: _Is there a natural countable structure with no computable presentation?_
Again, we are not aware of any completely convincing example of such a structure and would consider any such example to be very interesting.
|
2307.00139 | 3D oxygen vacancy order and defect-property relations in multiferroic
(LuFeO$_3$)$_9$/(LuFe$_2$O$_4$)$_1$ superlattices | Oxide heterostructures exhibit a vast variety of unique physical properties.
Examples are unconventional superconductivity in layered nickelates and
topological polar order in (PbTiO$_3$)$_n$/(SrTiO$_3$)$_n$ superlattices.
Although it is clear that variations in oxygen content are crucial for the
electronic correlation phenomena in oxides, it remains a major challenge to
quantify their impact. Here, we measure the chemical composition in
multiferroic (LuFeO$_3$)$_9$/(LuFe$_2$O$_4$)$_1$ superlattices, revealing a
one-to-one correlation between the distribution of oxygen vacancies and the
electric and magnetic properties. Using atom probe tomography, we observe
oxygen vacancies arranging in a layered three-dimensional structure with a
local density on the order of 10$^{14}$ cm$^{-2}$, congruent with the
formula-unit-thick ferrimagnetic LuFe$_2$O$_4$ layers. The vacancy order is
promoted by the locally reduced formation energy and plays a key role in
stabilizing the ferroelectric domains and ferrimagnetism in the LuFeO$_3$ and
LuFe$_2$O$_4$ layers, respectively. The results demonstrate the importance of
oxygen vacancies for the room-temperature multiferroicity in this system and
establish an approach for quantifying the oxygen defects with atomic-scale
precision in 3D, giving new opportunities for deterministic defect-enabled
property control in oxide heterostructures. | K. A. Hunnestad, H. Das, C. Hatzoglou, M. Holtz, C. M. Brooks, A. T. J. van Helvoort, D. A. Muller, D. G. Schlom, J. A. Mundy, D. Meier | 2023-06-30T21:14:47Z | http://arxiv.org/abs/2307.00139v1 | # 3D oxygen vacancy order and defect-property relations in multiferroic (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattices
###### Abstract
Oxide heterostructures exhibit a vast variety of unique physical properties. Examples are unconventional superconductivity in layered nickelates[1] and topological polar order in (PbTiO\({}_{3}\))\({}_{n}\)/(SrTiO\({}_{3}\))\({}_{n}\) superlattices[2, 3]. Although it is clear that variations in oxygen content are crucial for the electronic correlation phenomena in oxides[4], it remains a major challenge to quantify their impact[5]. Here, we measure the chemical composition in multiferroic (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattices, revealing a one-to-one correlation between the distribution of oxygen vacancies and the electric and magnetic properties. Using atom probe tomography, we observe oxygen vacancies arranging in a layered three-dimensional structure with a local density on the order of 10\({}^{14}\) cm\({}^{-2}\), congruent with the formula-unit-thick ferrimagnetic LuFe\({}_{2}\)O\({}_{4}\) layers. The vacancy order is promoted by the locally reduced formation energy and plays a key role in stabilizing the ferroelectric domains and ferrimagnetism in the LuFeO\({}_{3}\) and LuFe\({}_{2}\)O\({}_{4}\) layers, respectively. The results demonstrate the importance of oxygen vacancies for the room-temperature multiferroicity in this system and establish an approach for quantifying the oxygen defects with atomic-scale precision in 3D, giving new opportunities for deterministic defect-enabled property control in oxide heterostructures.
## I Introduction
The concentration and distribution of oxygen in strongly correlated electron systems is essential for the material's response[5]. By introducing oxygen vacancies or interstitials, electronic and magnetic properties can be controlled, and even entirely new functional properties can be obtained[6]. For example, redox reactions can change the oxygen-stoichiometry and drive topotactic transitions[7], resistive switching[8], and ferroelectric self-poling[9]. In structured materials, the oxygen diffusion length is usually comparable to dimensions of the system[10] and local variations in oxygen content naturally arise due to varying defect formation energies[11]. The latter plays a crucial role for property-engineering in oxide heterostructures, where atomically precise interfaces in combination with defect engineering are used to tailor, e.g., polar order[12], magnetic exchange interactions[13], and the onset of superconductivity[14, 15].
Quantifying emergent spatial variations in oxygen content at the atomic level, however, is extremely challenging [5, 16]. Enabled by the remarkable progress in high-resolution transmission electron microscopy, it is possible to image individual oxygen defects in heterostructures [17] and, for sufficiently high defect densities, chemical fingerprints associated with their accumulation or depletion at interfaces/interlayers can be detected [18, 19, 20, 21]. Despite their outstanding capabilities, these electron-microscopy based methods are not quantitative and inherently restricted to 2D projections along specific zone axes. This restriction prevents the full three-dimensional (3D) analysis of oxygen defects and limits the microscopic understanding of the interplay between oxygen defects and the material's physical properties. An experimental approach that, in principle, facilitates the required chemical accuracy and sensitivity to overcome this fundamental challenge is atom probe tomography (APT). The potential of APT is demonstrated by previous work on bulk oxide superconductors [4] and ferroelectrics [22], measuring stoichiometric variations at the nanoscale and lattice positions occupied by individual dopant atoms, respectively.
Here, we quantify the distribution of oxygen vacancies in (LuFeO\({}_{3}\))/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattices and demonstrate its importance for the electric and magnetic orders that lead to room-temperature multiferroicity in this system. Using APT, we show that oxygen vacancies (\(\nu_{0}\)) have a propensity to accumulate in the LuFe\({}_{2}\)O\({}_{4}\) monolayers, forming a layered 3D structure with an average density of about \((7.8\pm 1.8)\cdot 10^{13}\) cm\({}^{-2}\). The oxygen vacancies facilitate the electrical screening that is essential for stabilizing the ferroelectric order and control the oxidation state of the iron (Fe), which is responsible for the emergent ferrimagnetism. The results clarify the defect-property relation and show that the multiferroic behavior in (LuFeO\({}_{3}\))/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) is intertwined with - and promoted by - the 3D oxygen vacancy order.
Figure 1a shows a high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) image of the (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice. The system exhibits spontaneous electric and magnetic order, facilitating magnetoelectric multiferroicity at room temperature [23]. The ferroelectricity relates to the displacement of the Lu atoms in the LuFeO\({}_{3}\) layers (up-up-down: +_P_; down-down-up: -_P_, see Fig. 1a), whereas the ferrimagnetism has been explained based on Fe\({}^{2+}\)\(\rightarrow\) Fe\({}^{3+}\) charge-transfer excitations in the LuFe\({}_{2}\)O\({}_{4}\) layers [24]. Interestingly, the multiferroic (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice develops an unusual ferroelectric domain state with extended positively charged domain walls in the LuFeO\({}_{3}\) layers, where the polarization meets head-to-head (\(\rightarrow\)\(\leftarrow\)) [25]. The formation of charged head-to-head domain walls is surprising as they have high electrostatic costs, which raises the question of how the material stabilizes them.
To understand the microscopic mechanism that leads to the distinct magnetic and electric order in (LuFeO\({}_{3}\))/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\), we map the 3D chemical composition of the superlattice using APT. For the APT
Figure 1: **3D imaging of the (LuFeO\({}_{3}\))/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattices structure.****a**, HAADF-STEM image recorded along the [100] zone axis and schematic showing the atomic structure of the superlattice. Ferroelectric +_P_ and -_P_ domains are colored blue and red, respectively. **b**, SEM image of an APT needle. Three different layers are visible, corresponding to the Cr protection layer (dark grey), the (LuFeO\({}_{3}\))/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice (bright), and the substrate. **c**, 3D reconstruction of the APT data. Superlattice and substrate are represented by the Fe and ZrO ionic species, respectively. The dark lines in the superlattice correspond to double-Fe columns of the LuFe\({}_{2}\)O\({}_{4}\) layers. **d**, Zoom-in to one of LuFe\({}_{2}\)O\({}_{4}\) layers in **c**, resolving the double-Fe columns.
analysis, we deposit a protective capping layer (Pt, Cr or Ti) and prepare needle-shaped specimens using a focused ion beam (FiB, see Methods) as shown in Fig. 1b. The needle-like shape is a requirement in APT experiments and allows for producing the high electric fields required for field evaporation of surface atoms when a voltage > 2 kV is applied. The capping layer ensures that the (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice is located below the tip of the needle, which enables us to analyze a larger volume and, hence, improve the chemical precision of the experiment. Figure 1c shows the 3D reconstruction of the measured volume, where Fe and ZrO ionic species are presented to visualize the superlattice and substrate, respectively (mass spectrum and bulk chemical composition are presented in Supplementary Fig. S1). The LuFe\({}_{2}\)O\({}_{4}\) layers are visible as darker lines due to their higher concentration of Fe atoms compared to LuFeO\({}_{3}\). The 3D reconstruction shows that the spacing between the LuFe\({}_{2}\)O\({}_{4}\) layers varies within the analyzed volume of the superlattice, ranging from approximately 2 nm to 6 nm. At the atomic scale, the LuFe\({}_{2}\)O\({}_{4}\) layers exhibit the characteristic double-Fe layers (Fig. 1d), consistent with the HAADF-STEM data in Fig. 1a. Furthermore, enabled by the 3D APT imaging, we observe step-like discontinuities in individual LuFe\({}_{2}\)O\({}_{4}\) layers in Fig. 1c. The observation of such growth-related imperfections leads us to the conclusion that the multiferroic response of the material is rather robust and resilient against such local disorder.
Most importantly for this work, the APT measurement provides information about the local chemical composition of the superlattice. Figure 2a displays the concentration of the different atomic species evaluated for the region marked by the white dashed line in Fig. 1c. The line plots are derived by integrating the data in the direction perpendicular to the long axis of the needle-shaped sample, showing pronounced anomalies at the position of the LuFe\({}_{2}\)O\({}_{4}\) layers (marked by dashed lines). In total, seven peaks are resolved labelled 1 to 7; two peaks correspond to the discontinuous LuFe\({}_{2}\)O\({}_{4}\) layer (represented by the double-peak 1/2) and five peaks to the continuous LuFe\({}_{2}\)O\({}_{4}\) layers resolved in Fig. 1c (3) to 7). In all cases, we consistently find an enhancement in Fe concentration and a decrease in Lu and O concentration in the LuFe\({}_{2}\)O\({}_{4}\) layers relative to LuFeO\({}_{3}\). A more detailed analysis of the chemical composition of one of the continuous LuFe\({}_{2}\)O\({}_{4}\) layers (i.e., layer 3) is presented in Fig. 2b. Figure 2b compares measured and calculated concentration profiles for Lu, Fe, and O. The calculated concentration profile corresponds to a stoichiometric (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice, assuming a realistic experimental resolution of about 0.6 nm, showing a good agreement
Figure 2: **3D oxygen vacancy order.****a**, Profiles of the relative chemical composition, with the surface to the left in the plot. Anomalies are observed at all the LuFe\({}_{2}\)O\({}_{4}\) layers, numbered 1 to 7. **b**, Measured (data points) and theoretically expected (solid line) chemical concentration profile at LuFe\({}_{2}\)O\({}_{4}\) layer 3. The shaded area highlights that the measured oxygen content is lower than in stoichiometric LuFe\({}_{2}\)O\({}_{4}\), indicating an accumulation of oxygen vacancies. **c**, 3D visualization of the oxygen stoichiometry based on the APT data set. Oxygen vacancies arrange in a layered three-dimensional structure congruent with the formula-unit-thick ferrimagnetic LuFe\({}_{2}\)O\({}_{4}\) layers. Within the LuFe\({}_{2}\)O\({}_{4}\) layers, oxygen vacancies form puddle-like regions of reduced LuFe\({}_{2}\)O\({}_{4-\delta}\) (blue).
with the experimental data for Lu and Fe. In contrast, the measured concentration of O is lower than expected, indicating an accumulation of oxygen vacancies, v\({}_{0}\). By integrating over the layer, we find a v\({}_{0}\) density of \((7.8\pm 1.8)*10^{13}\) cm\({}^{-2}\), which corresponds on average to an oxygen deficient state in LuFe\({}_{2}\)O\({}_{4-\delta}\) with \(\delta\)= 0.5.
The same trend is observed for other LuFe\({}_{2}\)O\({}_{4}\) layers with minor layer-to-layer variations in the v\({}_{0}\) density (see Supplementary Fig. S2), indicating that the oxygen vacancies form a layered 3D structure within the (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice, congruent with the arrangement of the LuFe\({}_{2}\)O\({}_{4}\) layers. It is important to note, however, that within the different layers the distribution of v\({}_{0}\) is inhomogeneous as shown by the 3D map in Fig. 2c. This map presents the local chemical composition and reflects the periodic variation in v\({}_{0}\) density in the LuFeO\({}_{3}\) and LuFe\({}_{2}\)O\({}_{4}\) layers, consistent with the integrated data in Fig. 2a,b. Furthermore, it reveals a puddle-like distribution of the oxygen vacancies with puddle sizes in the order of a few nanometers and a maximum local v\({}_{0}\) density of up to \(\approx\) 10\({}^{14}\) cm\({}^{-2}\) (i.e., a reduction to LuFe\({}_{2}\)O\({}_{3.25}\)).
To better understand the propensity of the oxygen vacancies to accumulate at the LuFe\({}_{2}\)O\({}_{4}\) layers, we calculate and compare the v\({}_{0}\) defect formation energies for LuFeO\({}_{3}\) and LuFe\({}_{2}\)O\({}_{4}\) using density functional theory (DFT) calculations (Methods). Possible vacancy sites are located in the Lu- or Fe-layers ( v\({}_{0}^{\text{LuO}_{2}}\) or v\({}_{0}^{\text{FeO}}\) ) as illustrated in Fig. 3a,b. The respective defect formation energies as function of temperature are plotted in Fig. 3c, showing that the formation energy for oxygen vacancies at v\({}_{0}^{\text{FeO}}\) sites is in general lower than at v\({}_{0}^{\text{LuO}_{2}}\) sites. In addition, the comparison of the data for LuFe\({}_{2}\)O\({}_{4}\) and LuFeO\({}_{3}\) indicates that it is energetically favorable to accommodate oxygen vacancies in LuFe\({}_{2}\)O\({}_{4}\), yielding an energy reduction of 0.5 eV per v\({}_{0}^{\text{FeO}}\) relative to oxygen vacancies in LuFeO\({}_{3}\). Thus, by accumulating oxygen vacancies in the LuFe\({}_{2}\)O\({}_{4}\) layers, the (LuFeO\({}_{3}\))/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice can substantially reduce its energy, which promotes the formation of
Figure 3: **Defect formation energy for oxygen vacancies in LuFe\({}_{2}\)O\({}_{4}\) and LuFeO\({}_{3}\).****a,b**, Schematics illustrating possible defect sites for oxygen vacancies in LuFe\({}_{2}\)O\({}_{4}\) and LuFeO\({}_{3}\), respectively. **c**, Comparison of oxygen vacancy formation energy as a function of temperature for an oxygen partial pressure of 10\({}^{-10}\) atm. The lowest energy is found for oxygen vacancies in the Fe layers of LuFe\({}_{2}\)O\({}_{4}\) (v\({}_{0}^{\text{FeO}}\)). **d**, Schematic of the superlattice structure, summarizing the APT and DFT results. Oxygen vacancies accumulate in the LuFe\({}_{2}\)O\({}_{4}\) layers due to the locally reduced formation energy. The oxygen vacancies stabilize a ferroelectric tail-to-tail configuration at these layers and provide the electrons that are needed to screen the head-to-head domain walls that form in the LuFeO\({}_{3}\) layers.
\(\nu_{0}^{\text{FeO}}\)-rich LuFe\({}_{2}\)O\({}_{4}\) layers and, hence, a layered 3D v\({}_{0}\) order consistent with the APT results.
The observed 3D oxygen vacancy order has a direct impact on electric and magnetic properties and provides insight into their microscopic origin. The accumulation of v\({}_{0}\) effectively leads to electron-doping of the LuFe\({}_{2}\)O\({}_{4}\) layers. We find that the locally measured v\({}_{0}\) density (Fig. 2) is equivalent to a positive charge of 25 \(\pm\) 5 \(\mu\)C/cm\({}^{2}\). The latter explains why the (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice develops the unusual ferroelectric tail-to-tail configuration at the LuFe\({}_{2}\)O\({}_{4}\) layers (seen in Fig. 1a and 3d). These tail-to-tail configurations carry a negative charge of about 12 \(\mu\)C/cm\({}^{2}\), which partially compensates the positive charge associated with the oxygen vacancies. As a consequence of the energetically favored tail-to-tail configuration at the LuFe\({}_{2}\)O\({}_{4}\) layers, the formation of positively charged head-to-head domain walls within the LuFeO\({}_{3}\) layers is enforced; these walls, in turn, are screened by a redistribution of the mobile electrons generated by the v\({}_{0}\). The importance of this redistribution of electrons, i.e., the charge transfer of free electrons to the head-to-head domain walls, goes beyond the ferroelectric properties. It also drives the change in the oxidation state of Fe in the LuFe\({}_{2}\)O\({}_{4}\) layers (Fe\({}^{2+}\)\(\rightarrow\) Fe\({}^{3+}\)) [23, 24, 25], which is crucial for the ferrimagnetic order of the material.
In conclusion, both the electric and magnetic properties of (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) are closely related to oxygen vacancy ordering. The results clarify the microscopic origin of the unusual ferroelectric domain structure and the enhancement of the magnetic response, revealing the importance of extrinsic defect-driven mechanisms for the emergence of room-temperature multiferroicity. Quantitative 3D imaging of oxygen defects and chemical profiling at the atomic scale is of interest beyond the defect-property relations discussed in this work and can provide insight into defect-driven effects and the general role that oxygen vacancies (or interstitials) play in emergent phenomena in oxide heterostructures. The approach shown demonstrates the benefit of quantitative atomic-scale characterization of oxygen at interfaces in oxides, which is crucial for a better understanding of their complex chemistry and physics, as well as improved property engineering and their utilization in nanoelectronic and oxitronic technologies.
## Methods
**Sample preparation and characterization:** Thin films of (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) were grown by reactive-oxide molecular-beam epitaxy in a Veeco GEN10 system on (111) (ZrO\({}_{2}\))\({}_{0.905}\)(Y\({}_{2}\)O\({}_{3}\))\({}_{0.095}\) (9.5 mol% yttria-stabilized zirconia, YSZ) substrates, as described in Ref. [23]. A 300 nm Ti or Cr protective layer was deposited on top of the film by e-beam evaporation using a Pfeiffer Vacuum Classic 500, at a rate of 1 Å/s. The characteristic needle-shaped specimens for APT were prepared with a Helios NanoLab DualBeam FIB as described in Ref. [26]. Cross-sectional TEM specimens were prepared using an FEI Strata 400 FIB with a final milling step of 2 keV to reduce surface damage.
**Transmission electron microscopy:** Selected specimens were inspected with TEM to ensure adequate sample quality, using a JEOL JEM-2100F field-emission electron microscope operating at 200 kV. The high-resolution HAADF-STEM image in Fig. 1 was acquired on a 100-keV Nion UltraSTEM, a fifth-order aberration-corrected microscope. The lutetium distortions were quantified from HAADF-STEM images, as described in Ref. [23].
**Atom probe tomography:** APT measurements were recorded with a Cameca LEAP 5000XS instrument, operating in laser-pulsed mode. Data was collected at cryogenic temperature (\(T\) = 25 K) with an applied bias between 2 kV and 10 kV. Laser pulses with 30 pJ energy and 250 kHz frequency were used, and the detection rate was set to 0.5%, i.e., 2 ions detected every 1000 pulses. The raw APT data was reconstructed into 3D datasets with the Cameca IVAS 3.6.12 software, using the voltage profile to determine the radial evolution. The image compression factor and field reduction factor were adjusted to make the thin film flat relative to the substrate.
**First-principles calculations of oxygen vacancy formation.** To understand the tendency of an oxygen vacancy (v\({}_{0}\)) to form in the (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice, we studied the formation energy (\(\Delta E_{f}\)) of an oxygen vacancy as a function of temperature (\(T\)) and oxygen partial pressure (\(p\)) by considering the bulk ferroelectric state of LuFeO\({}_{3}\) (space group \(P6_{3}cm\)) and the bulk ferroelectric Fe\({}^{2+}\)/Fe\({}^{3+}\) charge-ordered state of LuFe\({}_{2}\)O\({}_{4}\) (space group \(Cm\)). This is a reasonable consideration as in the (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattices the improper ferroelectric signature
trimer distortions are induced in the LuFe\({}_{2}\)O\({}_{4}\) layer by the LuFeO\({}_{3}\) layers [23]. The formation of oxygen vacancies was studied by extracting one oxygen atom from the supercell of the ferrite systems. We used the following equation to calculate \(\Delta E_{f}\), [27, 28]
\[\Delta E_{f}=E(\nu_{0})-E_{0}+\Delta x\mu_{0}\]
where \(E(\nu_{0})\) and \(E_{0}\) represent the total energies of the supercell with and without an oxygen vacancy, respectively, and \(\Delta x\) denotes the number of \(\nu_{0}\) created in the supercell. As we considered a neutral oxygen vacancy, \(\Delta E_{f}\) does not depend on the charge state of \(\nu_{0}\) and the Fermi level of the system [28]. The chemical potential of oxygen atom, denoted as \(\mu_{0}\), was calculated by the following equation [29],
\[\mu_{0}(p,T)=\mu_{0}(p_{0},T_{0})+\mu_{0}(p_{1},T)+\frac{1}{2}k_{B}T\ln\left( \frac{p}{p_{1}}\right)\]
Here, \(\mu_{0}(p_{0},T_{0})\) represents the oxygen chemical potential at zero pressure (\(p_{0}=0\)) and zero temperature (\(T_{0}=0\)). According to the first-principles calculations, \(\mu_{0}(p_{0},T_{0})=\frac{1}{2}E(\text{O}_{2})\), where \(E(\text{O}_{2})\) denotes the total energy of an O\({}_{2}\) molecule. The second term, \(\mu_{0}(p_{1},T)\), which denotes the contribution of the temperature to the oxygen chemical potential at a particular pressure of \(p_{1}=1\) atm, was obtained from the experimental data [30]. The third term, \(\frac{1}{2}k_{B}T\ln\left(\frac{p}{p_{1}}\right)\), represents the contribution of pressure to the chemical potential of oxygen. Here, \(k_{B}\) is the Boltzmann constant. In the present study, we considered two kinds of oxygen vacancies, located in the Lu- (\(\nu_{0}^{\text{LuO}_{2}}\)) or Fe- (\(\nu_{0}^{\text{FeO}}\)) layers, as illustrated in Fig. 3a,b. The oxygen vacancy formation was calculated in supercells consisting of 12 formula units and 24 formula units of LuFe\({}_{2}\)O\({}_{4}\) and LuFeO\({}_{3}\), respectively.
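As a small numerical illustration of the two equations above, the following Python snippet evaluates \(\mu_{0}(p,T)\) and \(\Delta E_{f}\) for user-supplied DFT energies; the specific input values in the example are placeholders, not results from this work.

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def mu_oxygen(E_O2, dmu_T_1atm, T, p, p1=1.0):
    """mu_O(p, T) = 1/2 E(O2) + dmu_O(p1, T) + 1/2 k_B T ln(p / p1), in eV.

    E_O2        : DFT total energy of an isolated O2 molecule (placeholder input)
    dmu_T_1atm  : tabulated temperature contribution at p1 = 1 atm (from Ref. [30])
    T, p        : temperature in K and oxygen partial pressure in atm
    """
    return 0.5 * E_O2 + dmu_T_1atm + 0.5 * K_B * T * math.log(p / p1)

def vacancy_formation_energy(E_with_vacancy, E_pristine, n_vacancies, mu_O):
    """Delta E_f = E(v_O) - E_0 + Delta x * mu_O."""
    return E_with_vacancy - E_pristine + n_vacancies * mu_O

# Example with made-up numbers, only to show the call pattern:
mu = mu_oxygen(E_O2=-9.86, dmu_T_1atm=-1.05, T=1000.0, p=1e-10)
print(vacancy_formation_energy(E_with_vacancy=-512.3, E_pristine=-520.1, n_vacancies=1, mu_O=mu))
```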
**Computational details.** We calculated the total energies by performing first-principles calculations employing the density functional theory (DFT) method and the projector augmented-wave (PAW) basis as implemented in VASP (Vienna _Ab initio_ Simulation Package) [31, 32]. The Perdew-Burke-Ernzerhof (PBE) form of the generalized gradient approximation (GGA) was used for the exchange-correlation functional [33]. A kinetic energy cut-off value of 500 eV and appropriate k-point meshes were selected so that total ground-state energies were converged to \(10^{-6}\) eV and the Hellmann-Feynman forces were converged to 0.001 eV Å\({}^{-1}\). For each structure, the coordinates of all atoms and the lattice vectors were fully relaxed. The GGA+U method as developed by Dudarev _et al._[34] was employed to deal with electron correlation in the Fe \(3d\) states. All calculations were performed by considering \(U_{\text{eff}}=U-J_{\text{H}}=4.5\) eV for the Fe \(3d\) states, where \(U\) and \(J_{\text{H}}\) represent the spherically averaged matrix elements of the on-site Coulomb interactions. Additionally, we cross-checked the value of \(\Delta E_{f}\) by varying the value of \(U_{\text{eff}}\) from 3.5 to 5.5 eV, as was used in previous studies [35, 36, 23]. We considered the Lu \(4f\) states in the core. All calculations of total energies were performed with a ferromagnetic collinear arrangement of Fe spins and without spin-orbit coupling.
**Estimation of O vacancy density:** Due to a change in unit cell composition at the LuFe\({}_{2}\)O\({}_{4}\) layer, the O vacancy density cannot directly be extracted from the profile in Fig. 2. Instead, a simulation based on the ideal superlattice structure without any defects was made (solid line in Fig. 2). Using a DFT-based structure of the superlattice, the ideal atomic distribution was simulated. The atoms were then shifted around randomly to simulate the spatial resolution of the experiment, which was done with a Gaussian distribution with 0.55 nm standard deviation. A chemical profile across the simulated structure was then computed to obtain the expectation for an ideal superlattice structure. The difference between the simulated profile and the real data (shaded area in Fig. 2) represents a measure for the \(\nu_{0}\) concentration. This concentration was converted into a \(\nu_{0}\) density by multiplying it with the oxygen density of the simulated data so that the limited APT detection efficiency does not affect the final value. The 3D map of the oxygen depletion (Fig. 2c) is derived by displaying the chemical composition in the lateral dimension within five 20 x 20 x 1.5 nm\({}^{3}\) volumes. The chemical composition is converted into formula units (i.e., LuFe\({}_{2}\)O\({}_{4}\)) by measuring the local chemical composition and compensating for the spatial resolution of the instrument, as the oxygen depletion is spread out over regions larger than the actual LuFe\({}_{2}\)O\({}_{4}\) layer.
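A highly simplified sketch of this deficit-integration step is given below. It assumes the ideal and measured oxygen densities are already available on a common depth grid (in nm and nm\(^{-3}\)); the function name and the use of a 1D Gaussian blur are our illustration of the procedure, not the authors' analysis code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def vacancy_areal_density(z_nm, ideal_O_per_nm3, measured_O_per_nm3, resolution_nm=0.55):
    """Broaden the ideal O profile to the APT resolution and integrate the
    deficit relative to the measurement across the layer; returns cm^-2."""
    dz = z_nm[1] - z_nm[0]                                         # uniform grid spacing in nm
    expected = gaussian_filter1d(ideal_O_per_nm3, sigma=resolution_nm / dz)
    deficit = np.clip(expected - measured_O_per_nm3, 0.0, None)    # count only missing O
    areal_per_nm2 = deficit.sum() * dz                             # nm^-3 * nm = nm^-2
    return areal_per_nm2 * 1e14                                    # 1 nm^-2 = 1e14 cm^-2
```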
|
2306.00210 | PERFOGRAPH: A Numerical Aware Program Graph Representation for
Performance Optimization and Program Analysis | The remarkable growth and significant success of machine learning have
expanded its applications into programming languages and program analysis.
However, a key challenge in adopting the latest machine learning methods is the
representation of programming languages, which directly impacts the ability of
machine learning methods to reason about programs. The absence of numerical
awareness, aggregate data structure information, and improper way of presenting
variables in previous representation works have limited their performances. To
overcome the limitations and challenges of current program representations, we
propose a graph-based program representation called PERFOGRAPH. PERFOGRAPH can
capture numerical information and the aggregate data structure by introducing
new nodes and edges. Furthermore, we propose an adapted embedding method to
incorporate numerical awareness. These enhancements make PERFOGRAPH a highly
flexible and scalable representation that effectively captures programs
intricate dependencies and semantics. Consequently, it serves as a powerful
tool for various applications such as program analysis, performance
optimization, and parallelism discovery. Our experimental results demonstrate
that PERFOGRAPH outperforms existing representations and sets new
state-of-the-art results by reducing the error rate by 7.4% (AMD dataset) and
10% (NVIDIA dataset) in the well-known Device Mapping challenge. It also sets
new state-of-the-art results in various performance optimization tasks like
Parallelism Discovery and NUMA and Prefetchers Configuration prediction. | Ali TehraniJamsaz, Quazi Ishtiaque Mahmud, Le Chen, Nesreen K. Ahmed, Ali Jannesari | 2023-05-31T21:59:50Z | http://arxiv.org/abs/2306.00210v2 | Perfograph: A Numerical Aware Program Graph Representation for Performance Optimization and Program Analysis
###### Abstract
The remarkable growth and significant success of machine learning have expanded its applications into programming languages and program analysis. However, a key challenge in adopting the latest machine learning methods is the representation of programming languages, which directly impacts the ability of machine learning methods to reason about programs. The absence of numerical awareness, composite data structure information, and improper way of presenting variables in previous representation works have limited their performances. To overcome the limitations and challenges of current program representations, we propose a novel graph-based program representation called Perfograph. Perfograph can capture numerical information and the composite data structure by introducing new nodes and edges. Furthermore, we propose an adapted embedding method to incorporate numerical awareness. These enhancements make Perfograph a highly flexible and scalable representation that can effectively capture programs' intricate dependencies and semantics. Consequently, it serves as a powerful tool for various applications such as program analysis, performance optimization, and parallelism discovery. Our experimental results demonstrate that Perfograph outperforms existing representations and sets new state-of-the-art results by reducing the error rate by 7.4% (AMD dataset) and 10% (NVIDIA dataset) in the well-known Device Mapping challenge. It also sets new state-of-the-art results in various performance optimization tasks like Parallelism Discovery and Numa and Prefetchers Configuration prediction.
## 1 Introduction
In recent years, the remarkable success of machine learning has led to transformative advancements across numerous fields, including compiler optimization and program analysis. The applications include compiler heuristics prediction, optimization decisions, parallelism detection, etc. [4; 14]. The training process generally involves feeding program data as input and transforming it into a representation suitable for machine learning models. The selection of program representation is crucial, as it can significantly impact the performance of the machine learning model [11]. With the development of graph neural networks (GNNs), an increasing number of graph representations of programs have been incorporated into GNN models for program analysis [3; 26]. One of the pioneering efforts in developing a comprehensive graph representation for programs is ProGraML [13]. ProGraML incorporates control, data, and call dependencies as integral components of a program's representation. In contrast to prior sequential learning systems for code, ProGraML closely resembles the intermediate representations used by compilers, and the propagation of information through these graphs
mimics the behavior of typical iterative data-flow analyses. Despite the success that ProGraML has achieved, there are shortcomings in this current state-of-the-art program representation, especially in performance-oriented downstream tasks. These limitations stem from neglecting numerical values available at compile time and the inadequate representation of composite data types.
In this paper, we present Perfograph* to address the limitations of the current state-of-the-art program representation. Additionally, we propose a novel, elegant way to embed numbers in programs so that our DL model will not face unknown numbers during inference. Our experiments demonstrate that Perfograph sets new state-of-the-art results in numerous downstream tasks. For example, in the Device Mapping downstream task, Perfograph yields error rates as low as 6% and 10% depending on the target hardware. Moreover, Perfograph even outperforms the tools and models specially designed for specific tasks such as parallelism discovery.
Footnote *: Code available at: [https://github.com/tehranixyz/perfograph](https://github.com/tehranixyz/perfograph)
Overall, the main contributions of this paper are:
* A new compiler and language agnostic program representation based on LLVM-IR that represents programs as graphs.
* The proposed representation supports composite data types and provides numerical awareness, making it highly effective for performance optimization tasks.
* Evaluation of the proposed representation on common downstream tasks and outperforming state-of-the-art representations such as ProGraML.
* Quantification of the proposed approach on a new set of downstream tasks such as parallelism discovery and configuration of NUMA systems.
The rest of the paper is structured as follows: Section 2 presents the related works. In Section 3, we provide a motivational example showing the limitations of ProGraML, the state-of-the-art program representation. This section is followed by Section 4, where we present our proposed representation Perfograph along with the novel way of embedding numerical values. In Section 5, experimental results on downstream tasks are provided, and finally, Section 6 concludes the paper and discusses some future works.
## 2 Related Works
Machine learning has brought significant advancements in many fields, and program analysis and software engineering are no exceptions. However, Machine Learning (ML) and Deep Learning (DL) models cannot directly process raw source code to reason about programs. Therefore, researchers have explored different approaches to represent applications in a format suitable for DL models. Generally, there are three types of commonly used program representations: sequence of tokens, Abstract Syntax Tree (AST), and Intermediate Representation (IR).
**Sequence of tokens:** The initial attempts [15; 20; 25] represented source code as a sequence of tokens, such as identifiers, variable names, or operators. This approach intuitively treats programming languages similarly to natural languages. It allows for the utilization of advanced natural language processing (NLP) techniques, particularly with the advent of large language models [18; 21; 22]. However, this token-based representation overlooks the inherent dependency information within the program's structure. It fails to capture the unique relationships and dependencies between different elements of the code, which can limit its effectiveness in tasks such as compiler optimization and code optimization.
**AST:** An AST represents the structure of a program by capturing its hierarchical organization. It is constructed based on the syntactic rules of the programming language and provides a high-level abstraction of the code. Previous works have leveraged ASTs as inputs to tree-based models for various code analysis tasks like software defect prediction [16] and code semantic study [9]. Moreover, there have been efforts to augment ASTs into graphs that incorporate program analysis flows such as control flow and data flow. These AST-based graph representations capture more comprehensive code dependency information and have shown superior results compared to traditional approaches in previous works [2; 3].
**IR:** IR is an intermediate step between the source code and the machine code generated by a compiler. Previous work [36] has utilized IR to train an encoding infrastructure for representing programs as a distributed embedding in continuous space. It augments the Symbolic encodings with the flow of information to capture the syntax as well as the semantics of the input programs. However, it
generates embeddings at the program or function level and also requires the data-flow analysis type for generating the embedding. In contrast, our approach derives embeddings from the representation and works at the more fine-grained instruction level. More recent works [6; 13; 8] have leveraged IR-based graph representations to better capture essential program information, such as control flow, data flow, and dependencies. However, despite their success, IR-based graph representations have certain limitations. For example, these representations may not be numeric-aware or may lack the ability to adequately represent composite data types. In this work, we propose Perfograph, a graph representation based on IR, to address these limitations.
## 3 Motivation
As stated in the related work section, program representations based on the intermediate representation are very effective in enabling DL models to automate the process of various optimizations. One such representation is ProGraML, whose performance surpasses other code representations, making it state-of-the-art in various optimizations and downstream tasks. However, despite its potential, it suffers from several limitations. To name a few: it is incapable of properly conveying information about read and write operations to a memory location, has no support for composite data types, and discards numerical values. Listing 1 shows a code snippet where a 3-dimensional array is defined. Figure 1 shows the ProGraML representation of this code snippet. For illustration purposes, instruction nodes and control flow edges are shown in blue, whereas red represents variable and constant nodes and data flow edges. Green edges show the call graph. As can be seen, ProGraML fails to represent some critical information. For instance, the code float arr[2][3][4] is converted to the LLVM IR type [2 x [3 x [4 x float]]]*, which is used to construct a node in ProGraML. This eliminates the composite data's structure information, like the array's dimensions, leaving it up to the DL model to infer the meaning behind the numbers in [2 x [3 x [4 x float]]]*. Moreover, in this representation, only the types of numbers (e.g., int8, float) are considered, and the actual values of the numbers are not given attention. The absence of numerical awareness limits the performance of ProGraML in applications where numerical values play an important role. A numerically aware representation can help understand and optimize operations involving numeric data types, constants, and expressions. There are also some anomalies in the way temporary variables are depicted. For example, in Figure 1, we see that the fourth alloca node allocates memory for a variable, and two store instructions are applied to two separate nodes representing the variable. Thus, the information about the first store instruction is not carried over properly when the second store instruction happens. In the following section, we will see how Perfograph effectively addresses many limitations in the current state-of-the-art program representation. Perfograph uses the ProGraML representation as its initial graph and reconstructs the graph by addressing the aforementioned limitations.
## 4 Perfograph: A fine-grained numerical aware graph representation
Perfograph is a graph representation based on LLVM IR. In fact, Perfograph is built on top of ProGraML; however, it does not suffer from the limitations that ProGraML has, helping DL models to reason over the complex structure of programs and enabling them to help compilers
Figure 1: ProGraML representation of Listing 1.
make more accurate optimization decisions, especially in terms of performance. Figure 2 shows how various enhancements and improvements are applied to construct a more precise representation. Consider a simple code example of defining a variable and increasing it by one: {int i = 0; i++;}. Figure 2(a) shows the ProGraML representation of this code example. In the following subsections, we will explain how Perfograph is constructed by addressing the limitations shown in Figure 2(a).
### Representing Local Identifiers and store instruction
**Local Identifiers:** Local identifiers' names are preceded by % in the LLVM intermediate representation. Memory allocation on the stack is done by the alloca instruction. One of the limitations of the current state-of-the-art program representation, ProGraML, is that it is unable to convey information about the operations that happen to a memory location. For instance, in Figure 2(a), the two store nodes represent storing values of 0 and 1 to variable i. However, as shown, each store instruction accesses a separate memory location, making it difficult for the graph neural network to reason over the operations that happen to a memory location. For the embedding vector of the second store node in Figure 2(a) to represent the fact that some information regarding the variable i has changed, one has to increase the number of GNN layers to 3 to support up to 3 hops when propagating the messages in the GNN. This can potentially limit the ability of the GNN model if there is a greater number of hops between the two store nodes shown in Figure 2(a). To address this limitation, instead of having more than one variable node (oval-shaped nodes) per identifier, Perfograph only considers one variable node in its graph representation. Any load or store instruction will refer to the same variable node. These changes are shown in Figure 2(b). We see that the store nodes in Figure 2(b) access the same memory location, thus representing the fact that those store instructions are modifying the same memory location.
**Store instruction:** LLVM uses the store instruction to write into memory. The store instruction has two arguments: a value to store and the address at which it will store the value. ProGraML differentiates between these two arguments by adding a position feature to the edges, as shown in Figure 2(a). However, since the store instruction modifies the contents at the corresponding memory address, we posit that it is better to reflect the fact that the content of the identifier has changed. To present this information, Perfograph adds an extra edge from the store node to the node representing the identifier whose value is modified by the store instruction. Figure 2(c) shows these changes in the graph constructed by Perfograph.
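To make the two adjustments concrete, the toy snippet below builds the corresponding piece of graph with networkx (our illustration only, not the released Perfograph implementation): every load/store refers to a single variable node per identifier, and each store additionally points back to the variable node it modifies.

```python
import networkx as nx

# Hypothetical node names; the attribute names are ours as well.
g = nx.MultiDiGraph()
g.add_node("var_i", kind="variable", text="i32* %i")
g.add_node("alloca", kind="instruction", text="%i = alloca i32")
g.add_node("store0", kind="instruction", text="store i32 0, i32* %i")
g.add_node("store1", kind="instruction", text="store i32 1, i32* %i")
g.add_node("const0", kind="constant", text="i32", value=0)
g.add_node("const1", kind="constant", text="i32", value=1)

g.add_edge("alloca", "var_i", flow="data")               # definition of %i
for store, const in [("store0", "const0"), ("store1", "const1")]:
    g.add_edge(const, store, flow="data", position=0)    # value operand
    g.add_edge("var_i", store, flow="data", position=1)  # address operand: same node for both stores
    g.add_edge(store, "var_i", flow="data")              # extra edge: this store modifies %i
```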
### Numerical Awareness
Numbers can be a significant factor in optimization decisions. For example, they can show the loop bound, and different optimizations can be considered depending on the loop bound. Perfograph, unlike ProGraML, considers not only the type of numbers, such as i32, i64, and float, but also the actual values of the numbers. Perfograph presents numbers as constant nodes (shown as diamond nodes in Figure 2(d)). As illustrated in Figure 2(d), numerical constant nodes have the actual value of the number in their feature set in addition to the type of the number. Even though numerical constant nodes have the value of the number as one of their features, there is a need to embed the numbers in a way that no unknown number will be seen in the test phase.
Figure 2: Perfograph addresses the existing limitations in program representation.
Unlike other tokens, numbers are harder to embed as infinitely many numbers exist, and to handle all ranges of numbers, we would need a very large vocabulary set. However, even with a very large vocabulary size, the DL models may still encounter numbers in the inference phase that they have not seen in the training phase. We propose a novel way of embedding numbers called Digit Embedding. Figure 3 shows our approach. To embed a number, we first break down the number into its digits; then, we consider a position for each one of the digits. The goal is to let DL models realize the place value of each digit. Then, each digit and its corresponding position are embedded and summed together. Therefore, we will have an embedding representing the information about the digits and their positions. For instance, in Figure 3, we embed each digit and its corresponding position with an output dimension of 3. Since the number has four digits, the result would be a vector/tensor of size \(4\times 3\). To make sure the Digit Embedding has the same length across numbers with varying numbers of digits, we apply an aggregation function across the digits. Since the output embedding dimension is three in this example, we would have one vector of length three representing the number after aggregation. The aggregation function can be of any type (Max, Mean, etc.).
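A minimal PyTorch sketch of this digit embedding is shown below; the 120-dimensional output and the mean aggregation mirror the configuration reported later in the Experimental Setup section, but the module itself is our illustration rather than the released implementation (it also only handles non-negative integer literals).

```python
import torch
import torch.nn as nn

class DigitEmbedding(nn.Module):
    def __init__(self, dim=120, max_digits=32):
        super().__init__()
        self.digit_emb = nn.Embedding(10, dim)        # one row per digit 0-9
        self.pos_emb = nn.Embedding(max_digits, dim)  # place value of each digit

    def forward(self, number: str) -> torch.Tensor:
        digits = torch.tensor([int(c) for c in number])               # "2023" -> [2, 0, 2, 3]
        positions = torch.arange(len(digits))                         # [0, 1, 2, 3]
        per_digit = self.digit_emb(digits) + self.pos_emb(positions)  # (n_digits, dim)
        return per_digit.mean(dim=0)                                  # aggregate across digits

embedder = DigitEmbedding(dim=120)
vec = embedder("2023")   # one 120-dimensional vector, regardless of how many digits the number has
```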
### Composite Data Types
Composite data types, such as arrays and vectors, are an essential part of applications. They play an important role in many applications, such as matrix multiplications. Thus, presenting these data types helps the DL models better understand programs. Current LLVM IR-based program representations fail to present composite data types appropriately. For example, consider a three-dimensional integer array. In LLVM IR, this array is shown as [3 x [2 x [3 x i32]]]*. As can be seen, the lengths of the arrays and their data types are inferable. However, without proper representation, the DL model's capacity will be spent on learning these deterministic facts (i.e., the length of the arrays and their type). Perfograph considers composite data types as a new node type in its representations. Figure 4(b) shows how composite data types are supported by Perfograph. Unlike other LLVM IR-based representations, Perfograph supports multi-dimensional arrays and vectors. Perfograph creates a chain of nodes to present the different dimensions of the arrays. In Figure 4(a), we see there is a node representing the three-dimensional array [3 x [2 x [3 x i32]]]*. Perfograph breaks down the corresponding node into three white nodes (since it is a three-dimensional array) as shown in Figure 4(b). Then each node has a context representing that specific dimension of the array. For example, the context for the third dimension is [3 x i32], whereas for the second dimension, the context is [2 x [3 x i32]]. For each composite type node, in addition to the context of the node, we specifically add the length of the array and its type as additional features. For composite data types whose lengths are not known at compile time, we follow the LLVM conventions by considering the length of those data types as vscale. These enhancements will help the DL models to reason over the dimensions and types of composite data types. As a result, Perfograph will ultimately enable the DL models to have more accurate predictions for applications that deal with arrays and vectors.
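As an illustration of the kind of preprocessing this requires, the short sketch below parses an LLVM array type string into the per-dimension (context, length) chain described above; the function is hypothetical and does not cover vector types or vscale-sized types.

```python
import re

def array_dims(llvm_type: str):
    """Return (context, length) pairs, one per array dimension, outermost first."""
    dims, t = [], llvm_type.rstrip("*").strip()
    while (m := re.fullmatch(r"\[(\d+) x (.*)\]", t)):
        dims.append((t, int(m.group(1))))   # context string and length of this dimension
        t = m.group(2)
    return dims, t                           # `t` ends up as the element type, e.g. "i32"

print(array_dims("[3 x [2 x [3 x i32]]]*"))
# ([('[3 x [2 x [3 x i32]]]', 3), ('[2 x [3 x i32]]', 2), ('[3 x i32]', 3)], 'i32')
```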
Figure 4: Perfograph supports composite data types.
Figure 3: Overview of the digit embedding.
## 5 Experimental Results and Downstream Tasks
In this section, we evaluate Perfograph on 6 downstream tasks. For each downstream task, we will explain the task itself, the dataset, and the baselines. More details regarding the dataset, the models that are experimented with, and the baselines can be found in the supplementary material section.
### Experimental Setup
In our experiments, we use DGL's [37] RGCN [34] implementation for Perfograph representation. The graphs from Perfograph are treated as heterogeneous and managed via the HeteroGraphConv module. We use a hardware setup of two 18-Core Intel Skylake 6140 CPUs and two NVIDIA Tesla V100-32GB GPUs. The embedding space for numbers is generated by extracting digits and positions from a numeric token of an IR statement, then passed to a PyTorch [29] embedding layer for digit and position embeddings. These are combined for the final numeric token embedding. Non-numeric tokens directly go through the PyTorch embedding layer. Each Perfograph heterogeneous node converts to a 120-dimensional vector via this embedding. We use the Adam [24] Optimizer, relu [1] activation function, a learning rate of \(0.01\), and hidden_dim parameter of \(60\). Mean aggregation is applied to combine different node type results before a linear classification layer, which outputs a probability distribution for each class. The class with the highest probability is the prediction.
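The following is a minimal sketch of the kind of heterogeneous RGCN classifier this setup describes, written with DGL's HeteroGraphConv; the layer sizes follow the numbers above, but the class itself, its two-layer depth, and the pooling details are our assumptions rather than the released code, which should be consulted for the exact architecture.

```python
import torch
import torch.nn as nn
import dgl.nn as dglnn

class PerfographClassifier(nn.Module):
    def __init__(self, rel_names, in_dim=120, hidden_dim=60, n_classes=2):
        super().__init__()
        # one GraphConv per edge (relation) type, combined by HeteroGraphConv
        self.conv1 = dglnn.HeteroGraphConv(
            {rel: dglnn.GraphConv(in_dim, hidden_dim) for rel in rel_names}, aggregate="sum")
        self.conv2 = dglnn.HeteroGraphConv(
            {rel: dglnn.GraphConv(hidden_dim, hidden_dim) for rel in rel_names}, aggregate="sum")
        self.classify = nn.Linear(hidden_dim, n_classes)

    def forward(self, graph, feats):
        # feats: dict mapping node type -> (num_nodes, in_dim) feature tensor
        h = {k: torch.relu(v) for k, v in self.conv1(graph, feats).items()}
        h = {k: torch.relu(v) for k, v in self.conv2(graph, h).items()}
        # mean-pool each node type, then average across node types
        pooled = torch.stack([v.mean(dim=0) for v in h.values()]).mean(dim=0)
        return self.classify(pooled)          # class logits for this graph
```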
### Device Mapping
**Problem Definition:** We apply Perfograph to the challenging heterogeneous device mapping [12] problem. In this task, given a set of OpenCL kernels, we need to predict for each kernel which accelerator (CPU or GPU) yields higher performance. We compare Perfograph against DeepTune [12], Inst2Vec [6], and ProGraML [13]. The results of the baselines are quoted from [13].
**Dataset:** For this task, we use the dataset published in [12]. In this dataset, there are 256 OpenCL kernels available, and 680 LLVM IR instances are extracted from them. There are two types of GPUs: AMD and NVIDIA. For each of the GPUs, the runtimes of the kernels are recorded in the dataset. For AMD, 276 kernels show better performance in GPU, and 395 kernels show better performance in CPU. Whereas for NVIDIA, 385 kernels have better runtimes with GPU, and 286 kernels have better runtimes with CPU. We consider this as a binary CPU or GPU classification problem.
**Results:** As the dataset is small, we use the same 10-fold validation (with 80% training, 10% validation, and 10% testing) as ProGraML [13] and choose the model with the highest validation accuracy. The hand-crafted features of [19] are also used as graph-level features in our model to enhance the performance, following the approach in [13]. Tables 1 and 2 show the final precision, recall, F1-score, and accuracy for AMD and NVIDIA devices. Figure 5 compares Perfograph with state-of-the-art models on the Device Mapping dataset. We can see that Perfograph sets new state-of-the-art results by achieving the lowest error rate among the baselines both for AMD and NVIDIA, indicating the effectiveness of Perfograph.
### Parallelism Discovery
**Problem Definition:** In this problem, given a sequential loop, we try to predict whether a loop can be executed in parallel. We treat this problem as a binary classification problem with two classes: Parallel and Non-Parallel.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Precision & Recall & F1-score & Accuracy \\ \hline CPU & 0.94 & 0.94 & 0.94 & 0.94 \\ GPU & 0.94 & 0.94 & 0.94 & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Perfograph results for AMD devices.
Figure 5: Performance comparison on the device mapping task with state-of-the-art models [lower is better].
**Dataset:** The OMP_Serial dataset [10] is used for this task. It contains around 6k compilable source C files with Parallel and Non-Parallel loops. The training dataset contains around 30k IR files. The OMP_Serial dataset has three test subsets to compare the performance with three traditional parallelism assistant tools: Pluto (4032 IR files), autoPar (3356 IR files), and DiscoPoP (1226 IR files).
**Results:** We evaluate Perfograph on all three subsets and compare it with traditional rule-based tools: Pluto [7], autoPar [31], and DiscoPoP [27], as well as deep-learning-based tools: Graph2Par [10] and ProGraML. Table 3 shows the results. The results of Pluto and Graph2par are reported from [10]. As ProGraML does not report this downstream task in its paper, we used the ProGraML representation in our pipeline to generate the results. The results show that the traditional rule-based tools have the highest precision but the lowest accuracy, because they are overly conservative when predicting parallel loops and therefore miss many parallelization opportunities. Perfograph achieves consistently good precision scores across all the test subsets. In terms of accuracy, Perfograph surpasses the current state-of-the-art approaches by 2% on the Pluto and autoPar subsets. On the DiscoPoP subset, it achieves an impressive 99% accuracy and surpasses ProGraML by 9%.
### Parallel Pattern Detection
**Problem Definition:** Parallel loops often follow some specific patterns. Identifying parallel patterns is important because it helps developers understand how to parallelize a specific program since each parallel pattern needs to be treated differently. As a result, we apply Perfograph to identify potential parallel patterns in sequentially written programs. Only the three most common parallel patterns are considered: Do-all (Private), Reduction, and Stencil [32]. Given a loop, the task is to predict the pattern.
**Dataset:** For this experiment, we also use the OMP_Serial dataset [10]. This dataset contains source codes of different parallel patterns. These programs are collected from well-known benchmarks like NAS Parallel Benchmark [23], PolyBench [30], BOTS benchmark [17], and the Starbench benchmark [5]. Then, template programming packages like Jinja [33] are used to create synthetic programs from the templates collected from the mentioned benchmarks. The dataset contains 200 Do-all (Private), 200 Reduction, and 300 Stencil loops.
**Results:** We used 80% of the dataset for training and 20% for testing. Table 4 presents our findings. The results of Pragformer and Graph2par are reported from [10]; we compare against these two approaches as they were specifically developed for this problem. To generate the results for ProGraML, we used the ProGraML representation in our pipeline. Perfograph achieves an impressive 99% accuracy on the OMP_Serial Parallel Pattern dataset, surpassing the state-of-the-art ProGraML model by 3%. This indicates the strength of Perfograph in capturing the syntactic and structural patterns embedded in source programs. From Table 4, we can also see that Perfograph has high precision for all three patterns, with perfect precision for the Do-all and Stencil patterns, while maintaining very good accuracy.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Subset & Approach & Precision & Recall & F1-score & Accuracy \\ \hline \multirow{3}{*}{Pluto} & Pluto & 1 & 0.39 & 0.56 & 0.39 \\ & Graph2par & 0.88 & 0.93 & 0.91 & 0.86 \\ & ProGraML & 0.88 & 0.88 & 0.87 & 0.89 \\ & **Perfograph** & 0.91 & 0.90 & 0.89 & **0.91** \\ \hline \multirow{3}{*}{autoPar} & autoPar & 1 & 0.14 & 0.25 & 0.38 \\ & Graph2par & 0.90 & 0.79 & 0.84 & 0.80 \\ & ProGraML & 0.92 & 0.69 & 0.67 & 0.84 \\ & **Perfograph** & 0.85 & 0.91 & 0.85 & **0.86** \\ \hline \multirow{3}{*}{DiscoPoP} & DiscoPoP & 1 & 0.54 & 0.70 & 0.63 \\ & Graph2par & 0.90 & 0.79 & 0.84 & 0.81 \\ \cline{1-1} & ProGraML & 0.92 & 0.94 & 0.92 & 0.91 \\ \cline{1-1} & **Perfograph** & 0.99 & 1 & 0.99 & **0.99** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison of Perfograph on the OMP_Serial dataset.
### Numa and Prefetchers Configuration Prediction
**Problem Definition:** An appropriate configuration of Non-Uniform Memory Access (NUMA) and Hardware Prefetchers significantly impacts program performance. In this experiment, we define the task of NUMA and prefetcher selection as predicting the right configuration within a given tuning parameter search space. We evaluate the performance of both ProGraML and Perfograph for this task by converting each program in the dataset to ProGraML and Perfograph graphs following the approach in [35].
**Dataset:** We use the dataset in [35], which includes a diverse set of intermediate representation files coupled with the optimal configuration. The dataset incorporates various LLVM compiler optimization flags to produce different forms of the same program. There are 57 unique kernels (IR files) in this dataset, and around 1000 optimization flags are applied, resulting in 57000 IR files in total. Each IR file within the dataset is accompanied by its runtime on two architectures, Sandy Bridge and Skylake, across thirteen different NUMA and prefetcher configurations.
**Results:** Following the approach of TehraniJamsaz _et al._ [35], we partition the dataset into ten folds for cross-validation. Figures 6(a) and 6(b) illustrate the performance results in terms of error rates. On average, Perfograph outperforms ProGraML by 3.5% and 1.8% for the Sandy Bridge and Skylake architectures, respectively. These improvements demonstrate the effectiveness of Perfograph compared to the state-of-the-art ProGraML.
### Thread Coarsening Factor (TCF) Prediction
**Problem Definition:** Thread coarsening is an optimization technique for parallel programs that fuses the work of two or more threads together. The number of threads that can be fused together is known as the Thread Coarsening Factor (TCF). For a given program, the task is to predict the coarsening factor value (1, 2, 4, 8, 16, 32) that leads to the best runtime. The running time with coarsening factor 1 is used as the baseline for calculating speedups. For this task, we compare Perfograph against DeepTune [12], Inst2Vec [6], and ProGraML [13]. The results of the baselines are quoted from [6]. However, since ProGraML has not been evaluated on this task in the past, we apply the ProGraML representation in our setup for comparison.
**Dataset:** We use the dataset of Ben-Nun et al. [12]. The dataset contains only 17 OpenCL kernels. For each kernel, the dataset has the runtime information on four different GPUs for the different
\begin{table}
\begin{tabular}{c c c c c c} \hline Approach & Pattern & Precision & Recall & F1-score & Accuracy \\ \hline \multirow{3}{*}{Pragformer} & Do-all & 0.86 & 0.85 & 0.86 & \\ & Reduction & 0.89 & 0.87 & 0.87 & 0.86 \\ & Stencil & N/A & N/A & N/A & \\ \hline \multirow{3}{*}{Graph2Par} & Do-all & 0.88 & 0.87 & 0.87 & \\ & Reduction & 0.9 & 0.89 & 0.91 & 0.9 \\ & Stencil & N/A & N/A & N/A & \\ \hline \multirow{3}{*}{ProGraML} & Do-all & 0.92 & 0.90 & 0.91 & \\ & Reduction & 0.92 & 0.92 & 0.92 & 0.96 \\ & Stencil & 0.98 & 1 & 0.99 & \\ \hline \multirow{3}{*}{**Perfograph**} & Do-all & 1 & 0.97 & 0.99 & \\ & Reduction & 0.97 & 1 & 0.99 & 0.99 \\ \cline{1-1} & Stencil & 1 & 1 & 1 & \\ \hline \end{tabular}
\end{table}
Table 4: Performance comparison for the parallel pattern detection task with Perfograph on the OMP_Serial dataset.
Figure 6: Breakdown of the NUMA and prefetchers configuration prediction per fold [lower is better].
thread coarsening factor values. Hence, for each kernel, we have the runtime corresponding to each thread coarsening factor value on a specific GPU device.
**Results:**
We design the problem as a multi-class classification problem: given a kernel, we try to predict which thread coarsening factor provides the highest performance. As the dataset is very small, we apply a 17-fold cross-validation approach: in each fold, we train our model on 16 data points and test it on the one unseen data point left out of the training set. Figure 7 compares the number of kernels for which ProGraML and Perfograph find the correct Thread Coarsening Factor (TCF). Across the four platforms, Perfograph correctly predicts the TCF in 17 cases in total, whereas ProGraML finds only 9. On the two platforms where ProGraML fails to find any kernel with the correct TCF (AMD Radeon HD 5900 and NVIDIA GTX 480), Perfograph finds three kernels with the correct TCF value on each. As shown in Table 5, even though Perfograph outperforms ProGraML on most computing platforms, it falls behind inst2vec. We posit the reason is that inst2vec has a pretraining phase where it is trained using skip-gram, whereas a dataset of only 17 kernels is too small for a DL-based model to generalize well. However, we can see that even on this small dataset, Perfograph achieves speedups comparable to the current state-of-the-art models.
### Algorithm Classification
**Problem Definition:** The previous downstream tasks, which were mostly performance oriented, showed that Perfograph outperforms the baselines in most cases. We go further by applying Perfograph to a different downstream task, algorithm classification. The task involves classifying a source code into 1 of 104 classes. For this task, we compare the results of Perfograph to those of inst2vec and ProGraML. The results of the baselines are quoted from [13].
**Dataset:** We use the POJ-104 dataset [28] in a similar setup as [13] that contains around 240k IR files for training and 10k files for testing.
**Results:** For this task, inst2vec has an error rate of 5.17, whereas ProGraML has an error rate of 3.38. Perfograph yields an error rate of 5.00, which is better than inst2vec and slightly behind ProGraML. One reason is that ProGraML already has a very small error rate on this task, leaving little room for improvement; still, Perfograph's result is very close to that of ProGraML. We could not reproduce the results of the ProGraML paper in our setup: when we applied ProGraML in our setup, its error rate was 6.00. Moreover, we posit that numbers are not a significant factor for algorithm classification, so numerical awareness may slightly confuse the models. Nevertheless, this experiment shows that Perfograph comes very close to ProGraML's performance on this task and demonstrates the applicability of Perfograph to a wider range of downstream tasks.
## 6 Conclusion and Future Work
In this paper, we presented Perfograph, an LLVM IR-based graph representation of programs that supports composite data types such as arrays and vectors and is numerically aware. Moreover, it addresses several limitations of previous IR-based graph representations. Perfograph is
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Computing Platform & DeepTune & inst2vec & ProGraML & Perfograph \\ \hline AMD Radeon HD 5900 & 1.1 & **1.37** & 1.15 & 1.19 \\ AMD Tahiti 7970 & 1.05 & 1.1 & 1.00 & **1.14** \\ NVIDIA GTX 480 & **1.1** & 1.07 & 0.98 & 1.03 \\ NVIDIA Tesla K20c & 0.99 & **1.06** & 1.03 & 1.01 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Speedups achieved by coarsening threads
Figure 7: Correct TCF found by ProGraML vs Perfograph [higher is better].
evaluated on various downstream tasks, and experimental results indicate that Perfograph is indeed effective, outperforming the state of the art in most of the downstream tasks. Perfograph's numerical awareness is limited to the numerical values that are available at compile time. For future work, we intend to augment our representation by adding support for dynamic information and by investigating the possibility of integrating hardware performance counters into our representation. Moreover, we plan to develop a pre-trained embedding model using our representation. Having a pre-trained model will help address the problem of limited training samples in some downstream tasks.
|
2309.03188 | Leo T Dissected with the MUSE-Faint Survey | Leo T is the lowest mass galaxy known to contain neutral gas and to show
signs of recent star formation, which makes it a valuable laboratory for
studying the nature of gas and star formation at the limits of where galaxies
are found to have rejuvenating episodes of star formation. Here we discuss a
novel study of Leo T that uses data from the MUSE integral field spectrograph
and photometric data from HST. The high sensitivity of MUSE allowed us to
increase the number of Leo T stars observed spectroscopically from 19 to 75. We
studied the age and metallicity of these stars and identified two populations,
all consistent with similar metallicity of [Fe/H] $\sim$ -1.5 dex, suggesting
that a large fraction of metals were ejected. Within the young population, we
discovered three emission line Be stars, supporting the conclusion that rapidly
rotating massive stars are common in metal-poor environments. We find
differences in the dynamics of young and old stars, with the young population
having a velocity dispersion consistent with the kinematics of the cold
component of the neutral gas. This finding directly links the recent star
formation in Leo T with the cold component of the neutral gas. | Daniel Vaz, Jarle Brinchmann, The MUSE Collaboration | 2023-09-06T17:51:47Z | http://arxiv.org/abs/2309.03188v1 | # Leo T Dissected with the MUSE-Faint Survey
###### Abstract
Leo T is the lowest mass galaxy known to contain neutral gas and to show signs of recent star formation, which makes it a valuable laboratory for studying the nature of gas and star formation at the limits of where galaxies are found to have rejuvenating episodes of star formation.
Here we discuss a novel study of Leo T that uses data from the MUSE integral field spectrograph and photometric data from HST. The high sensitivity of MUSE allowed us to increase the number of Leo T stars observed spectroscopically from 19 to 75. We studied the age and metallicity of these stars and identified two populations, all consistent with similar metallicity of [Fe/H] \(\sim\) -1.5 dex, suggesting that a large fraction of metals were ejected. Within the young population, we discovered three emission line Be stars, supporting the conclusion that rapidly rotating massive stars are common in metal-poor environments. We find differences in the dynamics of young and old stars, with the young population having a velocity dispersion consistent with the kinematics of the cold component of the neutral gas. This finding directly links the recent star formation in Leo T with the cold component of the neutral gas.
Spectroscopy, Galaxies, Leo T, Stars, Kinematics, Star Formation, Be Stars 1
## 1 Introduction
Ultra-Faint Dwarf galaxies (UFDs) represent a fascinating enigma in the study of the Universe. These elusive objects are characterised by their extremely low metallicities, simple assembly histories, and dominant dark matter content (Simon 2019), making them an essential piece of the puzzle in understanding galaxy formation and evolution.
Among the faint and ultra-faint dwarf sample, Leo T stands out as a particularly intriguing object that has received significant attention from astronomers. Leo T is the faintest and least massive dwarf galaxy known to contain neutral gas and exhibit signs of recent star formation. This unique set of characteristics makes Leo T a valuable testing ground for galaxy formation models, as they present a challenge to current theories that have struggled to reproduce similar galaxies. Further observations of Leo T will enable astronomers to refine their models and determine whether they are on the right track towards a comprehensive and predictive theory of galaxy formation.
Leo T was discovered using SDSS imaging by Irwin _et al._ (2007). The stellar mass of Leo T is estimated to be \(M\,=\,1.3\,\times 10^{5}\) M\({}_{\odot}\) (McConnachie 2012). The star formation history (SFH) of Leo T has been extensively studied (Irwin _et al._ 2007; de Jong _et al._ 2008; Weisz _et al._ 2012; Clementini _et al._ 2012). These studies show that 50% of the total stellar mass was formed prior to 7.6 Gyr ago, with star formation beginning over 10 Gyr ago and continuing until recent times. They also show evidence of a quenching of star formation in Leo T about 25 Myr ago. None of the studies found evidence of an evolution in isochronal metallicity: over the course of its lifetime, Leo T is consistent with a constant value of \([M/H]\sim\,-1.6\).
The only previous spectroscopic observations of Leo T are those of Simon & Geha (2007). They derive a mean radial velocity of \(v_{rad}\,\)=\(\,38.1\,\pm 2\) km s\({}^{-1}\), and velocity dispersion of \(\sigma_{v_{rad}}=7.5\,\,\pm 1.6\) km s\({}^{-1}\), which corresponds to a total dynamical mass of \(8.2\,\,\times 10^{6}\) M\({}_{\odot}\).
Ryan-Weber _et al._ (2008) and Adams & Oosterloo (2018) concluded that Leo T contains neutral gas. The Hi mass is almost four times the stellar mass, with \(M_{H1}~{}=~{}4.1~{}\times 10^{5}\) M\({}_{\odot}\)(Adams & Oosterloo, 2018). They show that the gas is present in a Cold Neutral Medium (CNM) (T \(\sim\) 800K) and a Warm Neutral Medium (WNM) (T \(\sim\) 6000K) Hi, with the CNM corresponding to almost 10% of the total Hi mass. Relevantly, the CNM was found to have a lower velocity dispersion (\(\sigma_{\rm CNM}=2.5~{}\pm 0.1\) km s\({}^{-1}\)) than the WNM (\(\sigma_{\rm WNM}=7.1~{}\pm 0.4\) km s\({}^{-1}\)) (Adams & Oosterloo, 2018). The presence of this large cold component raises the question of whether this component is related to the recent star formation observed in Leo T.
To further investigate Leo T, we obtained spectroscopic observations with the Multi-Unit Spectroscopic Explorer (MUSE, Bacon _et al._, 2010) integral field spectrograph (IFS). Succinctly, we densely map the stellar content and use the stellar spectra to measure the stellar metallicity and stellar kinematics. We find indicators of a young population, namely Be stars. The data and results presented here are discussed further in Zoutendijk _et al._ (2021) and Vaz _et al._ (submitted to \(A\&A\)).
## 2 Results and Discussion
The central region of Leo T was observed as part of MUSE-Faint (Zoutendijk _et al._, 2020), a MUSE GTO survey of UFDs (PI Brinchmann). MUSE (Bacon _et al._, 2010) is a large-field medium-spectral-resolution integral field spectrograph installed at Unit Telescope 4 of the Very Large Telescope (VLT). We extracted spectra from the final data cube using PampelMuse (Kamann _et al._, 2013). As a general rule, the extracted spectra have a modest signal-to-noise ratio (S/N) and spectral resolution (R \(\sim\) 3000). We used the spexxy full-spectrum fitting code (Husser _et al._, 2016) together with the PHOENIX (Husser _et al._, 2013) model spectra to estimate the physical parameters, namely line-of-sight velocities and [Fe/H].
Figure 1: Color-magnitude diagram of the 58 Leo T stars detected with MUSE, plotted against PARSEC isochrones drawn for constant \(\rm[Fe/H]=-1.6\). Three representative isochrones are plotted, two in blue, with ages of 0.2 and 0.8 Gyr, and one in gray for age of 9 Gyr. The stars that were found consistent with the younger isochrones are shown as dark blue squares, with the emission line stars shown as blue squares. The stars consistent with the older isochrones are shown as red diamonds.
### Different Populations in Leo T
We were able to identify 58 stars as members of Leo T based on their kinematics. For 55 of these stars we were also able to obtain an estimate of [Fe/H]. The three stars without [Fe/H] estimates are emission line Be stars, which are discussed further below. By combining with the data from Simon & Geha (2007) we have measurements of the kinematics for 75 stars.
We complemented these data with HST ACS F606W and F814W photometry2. We fit PARSEC stellar isochrones (Bressan _et al._, 2012) to the resulting colour-magnitude diagram for the 58 stars (Figure 1). The best-fit [M/H] for the isochrone is [M/H]=-1.6, which is consistent with the value found in Weisz _et al._ (2012), and therefore we use this value to draw the representative isochrones shown in Figure 1. The first clear conclusion is that the sample covers a wide range of ages, with some stars consistent with ages as high as \(>10\) Gyr and others as low as 200 Myr. As such, we divided the stars into two populations: a young population of 20 stars consistent with ages \(<1\) Gyr, and an old population of 38 stars consistent with ages \(>5\) Gyr. Both populations are displayed using different colors in Figure 1. To assign each star to a population, we covered the colour-magnitude space with isochrones of different ages, all with [M/H]=-1.6, and assigned each star the age of the nearest isochrone. Because there is a degeneracy between the different isochrones in certain parts of the colour-magnitude space, we repeated all analyses discussed below after reassigning the stars that fall in a degenerate region, and we find that this does not affect our results.
Footnote 2: HST Proposals 12914, Principal Investigator Tuan Do and 14224, Principal Investigator Carmen Gallart
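The nearest-isochrone assignment can be sketched as follows; using an unweighted Euclidean distance in the colour-magnitude plane is an assumption made for this illustration, and the function is not the exact pipeline used.

```python
import numpy as np

def assign_ages(star_color, star_mag, isochrones):
    """Assign to each star the age of the nearest isochrone in the
    colour-magnitude diagram.  `isochrones` maps an age (in Gyr) to arrays
    (iso_color, iso_mag) of points along that isochrone.  Illustrative sketch."""
    ages = np.empty(len(star_color))
    for i, (c, m) in enumerate(zip(star_color, star_mag)):
        best_age, best_d2 = None, np.inf
        for age, (iso_c, iso_m) in isochrones.items():
            d2 = np.min((iso_c - c) ** 2 + (iso_m - m) ** 2)
            if d2 < best_d2:
                best_age, best_d2 = age, d2
        ages[i] = best_age
    return ages

# Stars whose nearest isochrone is younger than 1 Gyr form the young
# population; stars nearest to isochrones older than 5 Gyr form the old one.
```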
Within the young population we identified three emission line stars, which we tentatively identify as Be stars. Due to their peculiar spectra, spexxy failed to fit for metallicity and, therefore, these stars were not included in our metallicity analysis. Of relevance is the fact that they make up 15% of the young sample, which is comparable to Milky Way studies that reported rates at a level of \(10-20\)% in stellar clusters (Mathew _et al._, 2007). More recent studies, such as Schootemeijer _et al._ (2022), show that OBe stars and, therefore, rapidly rotating massive stars, are common in metal-poor environments; the detection of Be stars in Leo T supports this conclusion and extends it to even lower metallicity.
Figure 2: The distribution of the metallicity ([Fe/H]) of 55 Leo T stars, estimated using spexxy. The younger population, consisting of 17 stars, is represented with a lighter color. Also plotted is the result of an MCMC fit for the mean and standard deviation of the distribution.
### Chemical Evolution of Leo T
The histogram of the metallicity estimates for the 55 stars is shown in Figure 2. To quantify this distribution, we implemented an MCMC model, assuming that the underlying distribution is a Gaussian. Therefore, we fit the mean metallicity and the metallicity dispersion of the distribution, which are also shown in Figure 2. We obtained a metallicity of \(\rm[Fe/H]=-1.53\pm 0.05\), which is in good agreement with our photometric analysis. We repeated the analysis by resampling and removing outliers, without consequences on the results (the outliers are usually low S/N). We find a metallicity dispersion of \(\sigma_{\rm[Fe/H]}=0.21\pm 0.06\), which is low, implying that all stars have similar metallicity and that Leo T underwent almost no metallicity evolution throughout its history. This, in conjunction with the extended history of star formation of Leo T, suggests that a large fraction of metals have been ejected, keeping metallicity constant. In fact, this is consistent with theoretical expectations for low-mass dwarf galaxies (Emerick _et al._, 2018).
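The fit can be sketched with a simple Metropolis sampler as below; treating each measurement as drawn from a Gaussian whose intrinsic dispersion is broadened by the individual measurement uncertainty is a standard assumption, and the step sizes, starting point, and function names are illustrative rather than the actual analysis code.

```python
import numpy as np

def log_likelihood(params, x, err):
    """Gaussian with intrinsic mean mu and dispersion sigma, broadened by the
    per-star measurement uncertainties err (a common modelling choice)."""
    mu, sigma = params
    if sigma <= 0:
        return -np.inf
    var = sigma ** 2 + err ** 2
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def metropolis(x, err, n_steps=50000, step=(0.05, 0.05), start=(-1.5, 0.2)):
    """Minimal Metropolis sampler for (mu, sigma); illustrative only."""
    rng = np.random.default_rng(0)
    current = np.array(start, dtype=float)
    logp = log_likelihood(current, x, err)
    chain = []
    for _ in range(n_steps):
        proposal = current + rng.normal(0.0, step)
        logp_new = log_likelihood(proposal, x, err)
        if np.log(rng.uniform()) < logp_new - logp:  # accept/reject step
            current, logp = proposal, logp_new
        chain.append(current.copy())
    return np.array(chain[n_steps // 2:])  # discard burn-in

# chain = metropolis(feh_values, feh_errors)
# chain.mean(axis=0)  -> posterior mean of ([Fe/H], sigma_[Fe/H])
```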
We repeated this analysis separately for each population. Even though the results for the two populations are consistent with each other, they are not conclusive because the sample of young stars is too small (only 17 stars). In addition, the distribution is asymmetric, especially for the young stars, with the low-S/N spectra preferring a lower metallicity, so our constraints favour a somewhat lower metallicity for the younger population. However, the uncertainties here are too high to draw any conclusions.
### Stellar Kinematics vs Neutral Gas Kinematics
The histogram of the radial velocity estimates for 75 stars is shown in Figure 3. To fit the distribution, we applied the same MCMC model as before to obtain the mean barycentric velocity and the velocity dispersion of the sample. The fit is also shown in Figure 3. We find a mean velocity \(v_{\rm los}=39.4^{+1.3}_{-1.3}\) km s\({}^{-1}\), and an intrinsic velocity dispersion of \(\sigma_{v}=7.1^{+1.3}_{-1.1}\) km s\({}^{-1}\), which is consistent with what was found by Simon & Geha (2007). We repeated this analysis for each population. In this case, we used the same young population as before, but now the old population consists of 55 stars, including 17 stars from Simon & Geha (2007). These distributions and the respective fits are shown in Figures 4 and 5 for the young and old population, respectively. It is worth noting that the best fit plotted does not include uncertainties.
Figure 3: The distribution of the line-of-sight velocity of 75 Leo T stars, estimated using spexxy. The younger population, consisting of 20 stars, is represented with a lighter color. Also plotted is the result of an MCMC fit for the mean and standard deviation of the distribution.
For the younger population we obtain a mean velocity of \(v_{\rm los}=39.3^{+2.1}_{-2.1}\) km s\({}^{-1}\), and a velocity dispersion of \(\sigma_{v}=2.3^{+2.7}_{-1.7}\) kms\({}^{-1}\). For the older population we find a mean velocity of \(v_{los}=39.7^{+1.6}_{-1.6}\) km s\({}^{-1}\), and a velocity dispersion of \(\sigma_{v}=8.2^{+1.7}_{-1.4}\) km s\({}^{-1}\). Notably, we find that both populations have different kinematics, with the younger population having a significantly smaller velocity dispersion than the older stars. This is comparable to what was found for the two components of the gas in Leo T, where the cold component was found to have a velocity dispersion smaller than that of the warm component. We compare the differences in kinematics between young and old stars with what was found for the Hi kinematics of warm and cold neutral gas in Figure 6. We find a good match when comparing the velocity dispersion of the young population with the cold component of the Hi gas, and between the kinematics of old Leo T stars and the warm component of the Hi gas. The natural inference from these results is that the most recent star formation in Leo T is linked to the CNM. The results presented here combined with the results from Weisz _et al._ (2012) of no star formation in Leo T in the last \(\sim 25\) Myr are consistent with recent models that suggest that star formation in low mass
Figure 4: The distribution of the line-of-sight velocity of 20 young Leo T stars, estimated using spexxy. Also plotted is the result of an MCMC fit for the mean and standard deviation of the distribution.
Figure 5: The distribution of the line-of-sight velocity of 55 old Leo T stars, estimated using spexxy. Also plotted is the result of an MCMC fit for the mean and standard deviation of the distribution.
galaxies should be bursty with short quiescent periods (Collins & Read, 2022): due to stellar feedback, the star formation is momentarily quenched and metals are ejected from the environment and, after a short quiescent period, the gas is allowed to cool down and re-ignite star formation.
|
2310.20632 | Constrained Planarity in Practice -- Engineering the Synchronized
Planarity Algorithm | In the constrained planarity setting, we ask whether a graph admits a planar
drawing that additionally satisfies a given set of constraints. These
constraints are often derived from very natural problems; prominent examples
are Level Planarity, where vertices have to lie on given horizontal lines
indicating a hierarchy, and Clustered Planarity, where we additionally draw the
boundaries of clusters which recursively group the vertices in a crossing-free
manner. Despite receiving significant amount of attention and substantial
theoretical progress on these problems, only very few of the found solutions
have been put into practice and evaluated experimentally.
In this paper, we describe our implementation of the recent quadratic-time
algorithm by Bl\"asius et al. [TALG Vol 19, No 4] for solving the problem
Synchronized Planarity, which can be seen as a common generalization of several
constrained planarity problems, including the aforementioned ones. Our
experimental evaluation on an existing benchmark set shows that even our
baseline implementation outperforms all competitors by at least an order of
magnitude. We systematically investigate the degrees of freedom in the
implementation of the Synchronized Planarity algorithm for larger instances and
propose several modifications that further improve the performance. Altogether,
this allows us to solve instances with up to 100 vertices in milliseconds and
instances with up to 100 000 vertices within a few minutes. | Simon D. Fink, Ignaz Rutter | 2023-10-31T17:01:32Z | http://arxiv.org/abs/2310.20632v1 | # Constrained Planarity in Practice
###### Abstract.
In the constrained planarity setting, we ask whether a graph admits a planar drawing that additionally satisfies a given set of constraints. These constraints are often derived from very natural problems; prominent examples are Level Planarity, where vertices have to lie on given horizontal lines indicating a hierarchy, and Clustered Planarity, where we additionally draw the boundaries of clusters which recursively group the vertices in a crossing-free manner. Despite receiving significant amount of attention and substantial theoretical progress on these problems, only very few of the found solutions have been put into practice and evaluated experimentally.
In this paper, we describe our implementation of the recent quadratic-time algorithm by Blasius et al. (Blasius et al., 2017) for solving the problem Synchronized Planarity, which can be seen as a common generalization of several constrained planarity problems, including the aforementioned ones. Our experimental evaluation on an existing benchmark set shows that even our baseline implementation outperforms all competitors by at least an order of magnitude. We systematically investigate the degrees of freedom in the implementation of the Synchronized Planarity algorithm for larger instances and propose several modifications that further improve the performance. Altogether, this allows us to solve instances with up to 100 vertices in milliseconds and instances with up to 100 000 vertices within a few minutes.
## 1. Introduction
In many practical graph drawing applications we not only seek any drawing that maximizes legibility, but also want to encode additional information via certain aspects of the underlying layout. Examples are _hierarchical_ drawings like organizational charts, where we encode a hierarchy among vertices by placing them on predefined levels, _clustered_ drawings, where we group vertices by enclosing them in a common region, and _animated_ drawings, where changes to a graph are shown in steps while keeping a static part fixed. In practice, clustered drawings are for example UML diagrams, where classes are grouped according to the package they are contained in, computer networks, where devices are grouped according to their subnetwork, and integrated circuits, where certain components should be placed close to each other. As crossings negatively affect the readability of drawings (Steintein and Steintein, 1998; Steintein and Stein, 1998), we preferably seek planar, i.e. crossing-free, drawings. The combination of these concepts leads to the field of constrained planarity problems, where we ask whether a graph admits a planar drawing that satisfies a given set of constraints. This includes the problems Level Planarity(Steintein and Steintein, 1998; Steintein and Steintein, 1998), Clustered Planarity(Stein and Steintein, 1998; Steintein and Steintein, 1998), and Simultaneous Embedding with Fixed Edges (SEFE) (Steintein and Steintein, 1998; Steintein and Steintein, 1998; Stein and Steintein, 1998), which respectively model the aforementioned applications; see Figure 1. Formally, these problems are defined as follows.
Figure 1. Examples of constrained planarity problems: Level Planarity (a), Clustered Planarity (b), SEFE (c).
In Section 3 we review existing practical approaches to constrained planarity. In Section 4 we describe our implementation of Synchronized Planarity and evaluate its performance in comparison with the two other available Clustered Planarity implementations. We tune the running time of our implementation to make it practical even on large instances in Section 5. We analyze the effects of our engineering in greater detail in Section 6.
### Preliminaries.
We rely on some well-known concepts from the fields of graph drawing and planar graphs. We only briefly define the most important terms here and refer to the theoretical description of the implemented algorithm (Brandt, 1997) for more comprehensive definitions. A more gentle introduction to the concepts can also be found in Chapter 1 of the Handbook of Graph Drawing and Visualization (Krishnan, 2001). We consider two planar (i.e., crossing-free) drawings equivalent if they define the same _rotation system_, which specifies for each vertex its _rotation_, i.e., the cyclic order of the edges around the vertex. An _embedding_ is an equivalence class of planar drawings induced by this relation. An _embedding tree_(Brandt, 1997) is a PQ-tree (Krishnan, 2001) that describes all possible rotations of a vertex in a planar graph; see Figure 2d. Its leaves correspond to the incident edges, while its inner nodes are either Q-nodes, which dictate a fixed ordering of their incident subtrees that can only be reversed, or are P-nodes, which allow arbitrary permutation. A BC-tree describes the decomposition of a connected graph into its _biconnected_ components, which cannot be disconnected by the removal of a so-called _cut-vertex_. Each node of a BC-tree represents either a cut-vertex or a maximal biconnected _block_. We refer to a vertex that is not a cut-vertex as _block-vertex_. An SPQR-tree (Krishnan, 2001) describes the decomposition of a biconnected graph into its _triconnected_ components, which cannot be disconnected by the removal of a so-called _split-pair_ of two vertices. Each inner node represents a _skeleton_, which is either a triconnected _'rigid'_ minor whose planar embedding can only be mirrored, a split-pair of two _pole_ vertices connected by multiple _'parallel'_ subgraphs that can be permuted arbitrarily, or a cycle formed by split-pairs separating a _'series'_ of subgraphs; see Figure 2c. All three kinds of trees can be computed in time linear in the size of the given graph (Krishnan, 2001; Krishnan, 2001; Krishnan, 2001).
## 2. Constrained Planarity
Schaefer (Schaefer, 1997, Figure 2) introduced a hierarchy on the various variants of constrained planarity that have been studied in the past. Figure 3 shows a subset of this hierarchy, incorporating updates up to 2015 by Da Lozzo (Da Lozzo, 2015, Figure 0.1). Arrows indicate that the target problem either generalizes the source problem or solves it via a reduction. In the version of Da Lozzo, the problems Strip, Clustered and Synchronized Planarity as well as (Connected) SEFE still formed a frontier of problems with unknown complexity, separating efficiently solvable problems from those that are NP-hard. Since then many of these problems were settled in P, especially due to the Clustered
Figure 2. A planar graph (a), its SPQR-tree (b) and the corresponding skeletons (c). Rigids are highlighted in red, parallels in green, and series in blue. The embedding tree of the vertex marked in blue (d). Small black disks are P-nodes, larger white disks are Q-nodes.
Planarity solution from 2019 by Fulek and Toth [31]. The only problem from this hierarchy that remains with an unknown complexity is SEFE. In this section, we want to give a short summary of the history of Clustered Planarity and SEFE, which we see central to the field of constrained planarity and which also serve as a motivation for Synchronized Planarity. Afterwards, we will give a short summary of the algorithm we implement for solving the latter problem. We point the interested reader to the original description [8] for full details.
Recall that in SEFE, we are given two graphs that share some common part and we want to embed both graphs individually such that their common parts are embedded the same way [9; 15; 46]. More general SEFE variants are often NP-complete, e.g., the case with three given graphs [32], even if all share the same common part [6; 47]. In contrast, more restricted variants are often efficiently solvable, e.g., when the shared graph is biconnected, a star, a set of cycles, or has a fixed embedding [2; 3; 10]. The case where the shared graph is connected, which is called Connected SEFE, was shown to be equivalent to the so-called Partitioned \(\mathcal{T}\)-coherent 2-page Book Embedding problem [3] and to be reducible to Clustered Planarity[5], all of which were recently shown to be efficiently solvable [31]. In contrast to these results, the complexity of the general SEFE problem with two graphs sharing an arbitrary common graph is still unknown.
Recall that in Clustered Planarity, the embedding has to respect a laminar family of clusters, that is, every vertex is assigned to some (hierarchically nested) cluster and an edge may only cross the border of a cluster's region if it connects a vertex from the inside with one from the outside [11; 41]; see Figure 4 for an example. Lengauer [41] studied and solved this problem as early as 1989 in the setting where the clusters are connected. Feng et al. [25], who coined the term Clustered Planarity, rediscovered this algorithm and asked the general question where disconnected clusters are allowed. This question remained open for 25 years. In that time, polynomial-time algorithms were found for many special cases [4; 20; 29; 34] before Fulek and Toth [31] found an \(O((n+d)^{8})\) solution in 2019, where \(d\) is the number of crossings between a cluster-border and an edge leaving the cluster. Shortly thereafter, Blasius et al. [8] gave a solution with running time in \(O((n+d)^{2})\) that works via a linear-time reduction to Synchronized Planarity.
In Synchronized Planarity, we are given a graph together with a set of _pipes_, each of which pairs up two distinct vertices of the graph. Each pipe synchronizes the rotation of its two paired-up vertices (its _endpoints_) in the following sense: We seek a planar embedding of the graph where for each pipe \(\rho\), the rotations of its endpoints line up under the bijection \(\varphi_{\rho}\) associated with \(\rho\)[8]. Formally, this problem is defined as follows.
Figure 3: Constrained planarity variants related to Synchronized Planarity, updated selection from [21]. Problems and reductions marked in blue are used for generating test instances.
**Problem** Synchronized Planarity\({}^{a}\)
**Given**: graph \(G\) and a set \(\mathcal{P}\), where each _pipe_\(\rho\in\mathcal{P}\) consists of two distinct vertices \(v_{1},v_{2}\in V(G)\) and a bijection \(\varphi_{\rho}\) between the edges incident to \(v_{1}\) and those incident to \(v_{2}\), and each vertex is part of at most one pipe
**Question**: Is there a drawing of \(G\) where for each pipe \(\rho=(v_{1},v_{2},\varphi_{\rho})\), the cyclic order of edges incident to \(v_{1}\) lines up with the order of edges incident to \(v_{2}\) under the bijection \(\varphi_{\rho}\)?
* Note that we disregard the originally included Q-vertices here, as they can also be modeled using pipes [8, Section 5].
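To make the constraint concrete, the following sketch checks whether a given rotation system satisfies all pipes. The data layout is purely illustrative; note also that, depending on the sign convention used when reading off the two rotations, the reversed cyclic order of the second endpoint may be the one that has to match, so the check accepts either orientation.

```python
def cyclically_equal(a, b):
    """True if list b is a cyclic rotation of list a."""
    if len(a) != len(b):
        return False
    doubled = a + a
    return any(doubled[i:i + len(b)] == b for i in range(len(a)))

def pipes_satisfied(rotation, pipes):
    """rotation: dict mapping each vertex to the cyclic list of its incident
    edges; pipes: list of (v1, v2, phi) with phi a dict mapping edges at v1
    to edges at v2.  Illustrative checker, not part of the implementation."""
    for v1, v2, phi in pipes:
        mapped = [phi[e] for e in rotation[v1]]
        if not (cyclically_equal(mapped, rotation[v2])
                or cyclically_equal(mapped, rotation[v2][::-1])):
            return False
    return True
```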
The motivation for this "synchronization" can best be seen by considering the reduction from Clustered to Synchronized Planarity. At each cluster boundary, we split the graph into two halves: one where we contract the inside of the cluster into a single vertex and one where we contract the outside into a single vertex. In a clustered planar embedding, the order of the edges "leaving" one cluster (i.e. the rotation of its contracted vertex in the one half) needs to match the order in which they "enter" the parent cluster (i.e. the rotation of the corresponding contracted vertex in the other half). The graph resulting from separately contracting each side of a cluster boundary is called the CD-tree [11]; see Figure 4 and [8, Figure 6] for an example. Using this graph, the synchronization of rotations can easily be modeled via Synchronized Planarity by pairing the two contracted vertices corresponding to the same cluster boundary with a pipe.
In the quadratic algorithm for solving Synchronized Planarity, a pipe is _feasible_ if one of the following three operations can be applied to remove it.
**EncapsulateAndJoin**: If both endpoints of the pipe are cut-vertices, they are "encapsulated" by collapsing each incident block to a single vertex to obtain two stars with paired-up centers. Additionally, we split the original components at the two cut-vertices, so that each of their incident blocks is retained as separate component with its own copy of the cut-vertex. These copies are synchronized with the respective vertex incident to the original cut-vertex representing the collapsed block. Now the cut-vertices can be removed by "joining" both stars at their centers, i.e, by identifying their incident edges according to the given bijection; see the top row of Figure 5.
**PropagatePQ**: If one endpoint of the pipe is a block-vertex and has an embedding tree that not only consists of a single P-node (i.e., it is _non-trivial_), a copy of this embedding tree is inserted ("propagated") in place of each respective pipe endpoint. The inner nodes of the embedding trees are synchronized by pairing corresponding vertices with a pipe; see the middle row of Figure 5. Note that, as Q-nodes only have a binary embedding decision, they can more easily be synchronized via a 2-SAT formula instead of using pipes.
Figure 4: A Clustered Planarity instance (a), its cluster tree (b), and its CD-tree representation (c).
**SimplifyMatching**: In the remaining case, at least one of the endpoints of the pipe is a block-vertex but has a trivial embedding tree. If the vertex (or, more precisely, the parallel in the SPQR-tree that completely defines its rotation) can respect arbitrary rotations, we can simply remove the pipe. When the other pole of the parallel is also paired-up and has a trivial embedding tree, we "short-circuit" the pipe across the parallel; see the bottom row of Figure 5. One exception is if the pipe matches the poles of the same parallel, where we can again remove the pipe without replacement.
The algorithm then works by simply applying a suitable operation on an _arbitrary_ feasible pipe each step. Moreover, it can be shown that if a pipe is not feasible, then this is directly caused by a close-by pipe with endpoints of higher degree [8]. Especially, this means that maximum-degree pipes are always feasible.
Each of the three operations runs in time linear in the degree of the removed pipe once the embedding trees it depends on have been computed. This is dominated by the time spent on computing the embedding tree, which is linear in the size of the considered biconnected component. Every applied operation removes a pipe, but potentially introduces new pipes of smaller degree. Blasius et al. [8] show that the progress made by the removal of a pipe always dominates the overhead of the newly-introduced pipes and that the number of operations needed to remove all pipes is limited by the total degree of all paired-up vertices. Furthermore, the resulting instance without pipes can be solved and embedded in linear time. An embedding of the input graph can then be obtained by undoing all changes made to the graph in reverse order while maintaining the found embedding. The algorithm thus runs in the following three simple phases (a high-level sketch of this driver loop is given after the list):
1. While pipes are left, choose and remove an arbitrary feasible pipe by applying an operation.
2. Solve and embed the resulting pipe-free (_reduced_) instance.
3. Undo all applied operations while maintaining the embedding.
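The sketch below illustrates phase 1 as a generic driver that always picks a pipe of maximum degree via a heap with lazy deletion; the `classify` and `operations` callables stand in for the feasibility tests and the EncapsulateAndJoin, PropagatePQ, and SimplifyMatching operations of the real implementation, so the interface shown here is purely illustrative.

```python
import heapq
import itertools

def remove_all_pipes(pipes, degree, classify, operations):
    """Phase 1: repeatedly remove a maximum-degree pipe (always feasible).

    pipes:      iterable of pipe identifiers
    degree:     callable pipe -> degree of the pipe
    classify:   callable pipe -> 'encapsulate_join' | 'propagate_pq' | 'simplify'
    operations: dict mapping these labels to callables that remove the pipe
                and return (new_pipes, undo_callback)
    All of these are placeholders for the corresponding parts of the real
    implementation; only the control flow is shown."""
    counter = itertools.count()  # tie-breaker keeps heap entries comparable
    heap = [(-degree(p), next(counter), p) for p in pipes]
    heapq.heapify(heap)
    removed, undo_log = set(), []
    while heap:
        _, _, pipe = heapq.heappop(heap)
        if pipe in removed:
            continue  # stale entry from lazy deletion
        removed.add(pipe)
        new_pipes, undo = operations[classify(pipe)](pipe)
        undo_log.append(undo)
        for p in new_pipes:  # newly created pipes have strictly smaller degree
            heapq.heappush(heap, (-degree(p), next(counter), p))
    # Phase 2 embeds the now pipe-free instance; phase 3 replays undo_log in
    # reverse order while maintaining that embedding.
    return undo_log
```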
Figure 5. The operations for solving Synchronized Planarity[8]. Pipes are indicated by orange dashed lines, their endpoints are shown as larger disks. **Top:** Two cut-vertices paired-up by a pipe (left), the result of encapsulating their incident blocks (middle) and the bipartite graph resulting from joining both cut-vertices (right). **Middle:** A block-vertex pipe endpoint (left) that has a non-trivial embedding tree (middle) that is propagated to replace both the vertex and its partner (right). **Bottom:** Three different cases of paired-up vertices with trivial embedding trees (blue) and how their pipes can be removed or replaced (red).
## 3. Related Work
Surprisingly, in contrast to their intense theoretical consideration, constrained planarity problems have only received little practical attention so far. Of all variants, practical approaches to Clustered Planarity were studied the most, although all implementations predate the first fully-correct polynomial-time solution and thus either have an exponential worst-case running time or cannot solve all instances. Chimani et al. (Chimani et al., 2017) studied the problem of finding maximal cluster planar subgraphs in practice using an Integer Linear Program (ILP) together with a branch-and-cut algorithm. A later work (Gutwenger et al., 2018) strengthened the ILP for the special case of testing Clustered Planarity, further improving the practical running time. The work by Gutwenger et al. (Gutwenger et al., 2018) takes a different approach by using a Hanani-Tutte-style formulation of the problem based on the work by Schaefer (Schaefer, 2018). Unfortunately, their polynomial-time testing algorithm cannot solve all instances and declines to make a decision for some instances. The Hanani-Tutte-approach solved instances with up to 60 vertices and 8 clusters in up to half a minute, while the ILP approach only solves roughly 90 % of these instances within 10 minutes (Gutwenger et al., 2018).
The only other constrained planarity variant for which we could find experimental results is Partitioned 2-page Book Embedding. Angelini et al. (Angelini et al., 2017) describe an implementation of the SPQR-tree-based linear-time algorithm by Hong and Nagamochi (Hong and Nagamochi, 2018), which solves instances with up to 100 000 vertices and two clusters in up to 40 seconds. Unfortunately, their implementation is not publicly available. For (Radial) Level Planarity, prototypical implementations were described in the dissertations by Leipert (Leippert, 2017) and Bachmaier (Bachmaier, 2018), although in both cases neither further information, experimental results, nor source code is available. The lack of an accessible and correct linear-time implementation may be due to the high complexity of the linear-time algorithms (Chimani et al., 2017). Simpler algorithms with a super-linear running time have been proposed (Gutwenger et al., 2018; Gutwenger et al., 2018; Schaefer et al., 2018). For these, we could only find an implementation by Estrella-Balderrama et al. (Estrella-Balderrama et al., 2018) for the quadratic algorithm by Harrigan and Healy (Harrigan and Healy, 2018). Unfortunately, this implementation has not been evaluated experimentally and we were also unable to make it usable independently of its Microsoft Foundation Classes GUI, with which it is tightly intertwined.
We are not aware of further practical approaches for constrained planarity variants. Note that while the problems Partitioned 2-page Book Embedding and Level Planarity have linear-time solutions, they are much more restricted than Synchronized Planarity (see Figure 3) and have no usable implementations available. We thus focus our comparison on solutions to the Clustered Planarity problem which, besides being a common generalization of both other problems, fortunately also has all relevant implementations available.
\begin{table}
\begin{tabular}{l|c|c c|c c|c c|c c} Dataset & \# & \multicolumn{2}{c|}{Vertices} & \multicolumn{1}{c|}{Density} & \multicolumn{1}{c|}{Components} & \multicolumn{2}{c|}{Clusters/Pipes} & \multicolumn{1}{c}{\(d\)} \\ \hline C-OLD & 1643 & \(\leq\)59 & ( 17.2) & 0.9–2.2 (1.4) & =1 & \(\leq\)19 & ( 4.2) & \(\leq\)256 & ( 34.0) \\ C-NCP & 13834 & \(\leq\)500 & ( 236.8) & 0.6–2.9 (1.9) & \(\leq\)48 & (21.7) & \(\leq\)50 & ( 16.8) & \(\leq\)5390 & ( 783.3) \\ C-MED & 5171 & \(\leq\)10\({}^{3}\) & ( 311.6) & 0.9–2.9 (2.3) & \(\leq\)10 & ( 5.1) & \(\leq\)53 & ( 16.1) & \(\leq\)7221 & ( 831.8) \\ \hline C-LRG & 5096 & \(\leq\)10\({}^{5}\) & (15 214.1) & 0.5–3.0 (2.4) & \(\leq\)100 & (29.8) & \(\leq\)989 & ( 98.8) & \(\leq\)2380 013 & (44 788.7) \\ SEFE-LRG & 1008 & \(\leq\)10\({}^{4}\) & ( 3800.0) & 1.1–2.4 (1.7) & =1 & \(\leq\)20 000 & (7600.0) & \(\leq\)113 608 & (34 762.4) \\ SP-LRG & 1587 & \(\leq\)10\({}^{5}\) & (25 496.6) & 1.3–2.5 (2.0) & \(\leq\)100 & (34.5) & \(\leq\)20 000 & (1467.4) & \(\leq\)139 883 & ( 9627.5) \\ \end{tabular}
\end{table}
Table 1. Statistics for our different datasets, values in parentheses are averages. Column # shows the number of instances while column \(d\) shows the total number of cluster–border edge crossings or the total degree of all pipes, depending on the underlying instances.
## 4. Clustered planarity in practice
In this section, we briefly describe our C++ implementation of the Synchronized Planarity algorithm by Blasius et al. (Blasius et al., 2017) and compare its running time and results on instances derived from Clustered Planarity with those of the two existing implementations by Chimani et al. (Chimani et al., 2017; Gutwenger et al., 2018) and by Gutwenger et al. (Gutwenger et al., 2019). We base our implementation on the graph data structures provided by the OGDF (Gutwenger et al., 2018) and, as the only other dependency, use the PC-tree implementation by Fink et al. (Fink et al., 2019) for the embedding trees. The PC-tree is a data structure that is conceptually equivalent to the PQ-tree we use as embedding tree, but is faster in practice (Fink et al., 2019).
The algorithm for Synchronized Planarity makes no restriction on how the next feasible pipe should be chosen. For now, we will use a heap to always use a pipe of maximum degree, as this ensures that the pipe is feasible. The operations used for solving Synchronized Planarity heavily rely on (bi-)connectivity information while also making changes to the graph that may affect this information. As recomputing the information before each step would pose a high overhead, we maintain this information in the form of a BC-forest (i.e. a collection of BC-trees). To generate the embedding trees needed by the PropagatePQ and SimplifyMatching operations, we implement the Booth-Lueker algorithm for testing planarity (Blasius et al., 2017; Fink et al., 2019) using PC-trees. We use that, after processing all vertices of a biconnected component, the resulting PC-tree corresponds to the embedding tree of the vertex that was processed last.
### Evaluation Set-Up
We compare our implementation of Synchronized Planarity with the Clustered Planarity implementations ILP by Chimani et al. (Chimani et al., 2017; Gutwenger et al., 2018) and HT by Gutwenger et al. (Gutwenger et al., 2019). Both are written in C++ and are part of the OGDF. The ILP implementation by Chimani et al. (Chimani et al., 2017; Gutwenger et al., 2018) uses the ABACUS ILP solver (Blasius et al., 2017) provided with the OGDF. We refer to our Synchronized Planarity implementation processing pipes in descending order of their degree as SP[d]. We use the embedding it generates for yes-instances as a certificate to validate all positive answers. For the Hanani-Tutte algorithm, we give the running times for the mode with embedding generation and verification (HT) and the one without (HT-f) separately. Note that HT-f only checks an important necessary, but not sufficient, condition and thus may falsely classify negative instances as positive; see (Gutwenger et al., 2019, Figure 3) and (Gutwenger et al., 2018, Figure 16) for examples where this is the case. Variant HT tries to verify a positive answer by generating an embedding, which works by incrementally fixing parts of a partial embedding and subsequently re-running the test. This process may fail at any point, in which case the algorithm can make no statement about whether the instance is positive or negative (Gutwenger et al., 2019, Section 3.3). We note that, in none of our datasets did we find a case of HT-f yielding a false-positive result or of an HT verification failing. The asymptotic running time of HT-f is bounded by \(O(n^{6})\) and the additional verification of HT adds a further factor of \(n\) (Gutwenger et al., 2019).
We combine the Clustered Planarity datasets that were previously used for evaluations on HT and ILP to form the set C-OLD(Chimani et al., 2017; Guthwenger et al., 2018; Guthwenger et al., 2019). We apply the preprocessing rules of Gutwenger at al. (Gutwenger et al., 2019) to all instances and discard instances that become trivial, non-planar or cluster-connected, since the latter are easy to solve (Guthwenger et al., 2018). This leaves 1643 instances; see Table 1. To create the larger dataset C-NCP, we used existing methods from the OGDF to generate instances with up to 500 vertices and up to 50 clusters. This yields 15 750 instances, 13 834 out of which are non-trivial after preprocessing. As this dataset turned out to contain only 10 % yes-instances, we implemented a new clustered-planar instance generator that is guaranteed to yield yes-instances. We use it on random planar graphs with up to 1000 vertices to generate 6300 clustered-planar instances with up to 50 clusters. Out of these, 5171 are non-trivial after preprocessing and make up our dataset C-MED. We provide full details on the generation of our dataset at the end of this section.
We run our experiments on Intel Xeon E5-2690v2 CPUs (3.00 GHz, 10 Cores, 25 MB Cache) with a memory usage limit of 6 GB. As all implementations are single-threaded, we run multiple experiments in parallel using one core per experiment. This allows us to test more instances while causing a small overhead which affects all implementations in the same way. The machines run Debian 11 with a 5.10 Linux Kernel. All binaries are compiled statically using gcc 10.2.1 with flags -O3 -march=native and link-time optimization enabled. We link against slightly modified versions of OGDF 2022.02 and the PC-tree implementation by Fink et al. [26]. The source code of our implementation and all modifications are available at github.com/N-Coder/syncplan,1 while our dataset is on Zenodo with DOI 10.5281/zenodo.7896021.
Footnote 1: It is also archived at Software Heritage with ID swh:1:snp:0date4960cc1303cc3575cf04924e19d664f8ad87.
Details on Dataset Generation. The dataset C-OLD is comprised of the datasets P-Small, P-Medium, P-Large by Chimani and Klein [19] together with PlanarSmallR (a version of PlanarSmall[17] with preprocessing applied), PlanarMediumR and PlanarLargeR by Gutwenger et al. [36]. The preprocessing reduced the dataset of Chimani and Klein [19] to 64 non-trivial instances, leading to dataset C-OLD containing 1643 instances in total.
The OGDF library can generate an entirely random clustering by selecting random subsets of vertices. It can also generate a random clustered-planar and cluster-connected clustering on a given graph by running a depth-first search that is stopped at random vertices, forming new clusters out of the discovered trees. To generate non-cluster-connected but clustered-planar instances, we temporarily add the edges necessary to make a disconnected input graph connected. For the underlying graphs of C-NCP, we use the OGDF to generate three instances for each combination of \(n\in\{100,200,300,400,500\}\) nodes, \(m\in\{n,1.5n,2n,2.5n,3n-6\}\) edges, and \(d\in\{10,20,30,40,50\}\) distinct connected components. For each input graph, we generate six different clusterings, three entirely random and three random clustered-planar, with \(c\in\{3,5,10,20,30,40,50\}\) clusters. This yields 15 750 instances, 13 834 out of which are non-trivial after preprocessing.
It turns out that roughly 90 % of these instances are not clustered-planar (see Table 2), even though half of them are generated by a method claiming to only generate clustered-planar instances. This is because the random DFS-subtree used for clusters by the OGDF only ensures that the generated cluster itself, but not its complement, is connected. Thus, if the subgraph induced by the selected vertices contains a cycle, this cycle may separate the outside of the cluster; see Figure 6(a). To reliably generate yes-instances, we implemented a third method for generating random clusterings. We first add temporary edges to connect and triangulate the given input graph. Afterwards, we also generate a random subtree and contract it into a cluster. Each visited vertex is added to the tree with a probability set according to the desired number of vertices per cluster. To ensure the non-tree vertices remain connected, we only add vertices to the tree whose contraction leaves the graph
Figure 6: **(a) Converting the subtree \(\{a,b,c,d\}\) with root \(a\) (shown in orange) into a cluster will separate vertices \(u\) and \(v\), as the edge \(bd\) (dashed) will also be part of the cluster. (b) A clustered-planar graph with two clusters (in addition to the root cluster) that HT classifies as “nonCPlanarVerified”.**
triangulated, i.e., that have at most two neighbors that are already selected for the tree. We convert the selected random subtrees into clusters and contract them for the next iterations until all vertices have been added to a cluster.
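The following Python sketch illustrates the core of this cluster-growing step on a plain adjacency-list graph; it is our own simplified illustration (the actual implementation works on a temporarily triangulated OGDF graph and contracts the resulting tree), and all function and variable names are ours.

```python
import random

def grow_cluster(adj, start, p_add, rng=random):
    """Grow a random subtree from `start` in the (triangulated) graph `adj`.

    A visited vertex joins the tree with probability `p_add`, but only if at
    most two of its neighbours are already selected, so that contracting the
    tree into a cluster keeps the outside of the cluster connected.
    """
    tree = {start}
    frontier = list(adj[start])
    rng.shuffle(frontier)
    while frontier:
        v = frontier.pop()
        if v in tree:
            continue
        if sum(u in tree for u in adj[v]) > 2:   # would disconnect the complement
            continue
        if rng.random() > p_add:                 # tuned to the desired cluster size
            continue
        tree.add(v)
        frontier.extend(adj[v])
        rng.shuffle(frontier)
    return tree

# Example: adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}; grow_cluster(adj, 0, 0.5)
```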
As we do not need multiple connected components to ensure the instance is not cluster-connected for our Clustered Planarity instance generator, we used fewer steps for the corresponding parameter, but extended the number of nodes up to 1000 for C-MED. The underlying graphs are thus comprised of three instances for each combination of \(1\leq n\leq 1000\) nodes with \(0\equiv n\mod 100\) (i.e. \(n\) is a multiple of 100), \(m\in\{n,1.5n,2n,2.5n,3n-6\}\) edges, and \(d\in\{1,10,25,50\}\) distinct connected components. For each input graph, we generate three random clustered-planar clusterings with an expected number of \(c\in\{3,5,10,20,30,40,50\}\) clusters. This yields 6300 instances which are guaranteed to be clustered-planar, 5171 out of which are non-trivial after preprocessing and make up our dataset C-MED.
### Results.
Table 2 shows the results of running the different algorithms. The dataset C-OLD is split into roughly equal halves of yes- and no-instances and all algorithms yield the same results, except for the 111 instances for which the ILP ran into our 5-minute timeout. The narrow inter-quartile ranges in Figure 7 show that the running time for HT and SP[d] clearly depends on the number of crossings between cluster boundaries and edges in the given instance, while it is much more scattered for ILP. Still, all instances with fewer than 20 such crossings could be solved by ILP. For HT, we can see that the verification and embedding of yes-instances has an overhead of at least an order of magnitude over the non-verifying HT-f. The running times for HT on no-instances as well as the times for HT-f on any type of instance are the same, showing that the overhead is solely caused by
\begin{table}
\begin{tabular}{r|r r r r|r r r r|r r r r}
 & \multicolumn{4}{c|}{C-OLD} & \multicolumn{4}{c|}{C-NCP} & \multicolumn{4}{c}{C-MED} \\
 & ILP & HT & HT-f & SP[d] & ILP & HT & HT-f & SP[d] & ILP & HT & HT-f & SP[d] \\ \hline
Y & 732 & 792 & 792 & 792 & 181 & 1327 & 1534 & 1535 & 953 & 762 & 2696 & 5170 \\
N & 800 & 851 & 851 & 851 & 946 & 6465 & 6463 & 12 308 & 0 & 85 & 85 & 0 \\
ERR & 0 & 0 & 0 & 0 & 5214 & 0 & 0 & 0 & 1263 & 0 & 0 & 0 \\
TO & 111 & 0 & 0 & 0 & 7502 & 6051 & 5846 & 0 & 2955 & 4324 & 2390 & 1 \\
\end{tabular}
\end{table}
Table 2. Counts of the results ‘yes’, ‘no’, ‘error’, and ‘timed out’ on C-OLD, C-NCP and C-MED.
Figure 7. Median running times on dataset C-OLD **(a)** together with the underlying scatter plot **(b)**. For each algorithm, we show running times for yes- and no-instances separately. Markers show medians of bins each containing 10 % of the instances. Shaded regions around each line show inter-quartile ranges.
the verification while the base running time is always the same. For the larger instances in this test set, SP[d] is an order of magnitude faster than HT-f. For SP[d], we also see a division between yes- and no-instances, where the latter can be solved faster, but also with more scattered running times. This is probably due to the fact that the test can fail at any (potentially very early) reduction step or when solving the reduced instance. Furthermore, we additionally generate an embedding for positive instances, which may cause the gap between yes- and no-instances.
The running times on dataset C-NCP are shown in Figure 8. The result counts in Table 2 show that only a small fraction of the instances are positive. With only up to 300 cluster-edge crossings these instances are also comparatively small. The growth of the running times is similar to the one already observed for the smaller instances in Figure 7. HT-f now runs into the timeout for almost all yes-instances of size 200 or larger, and both HT and HT-f time out for all instances of size 1500 and larger. The ILP only manages to solve very few of the instances, often reporting an "undefined optimization result for c-planarity computation" as error; see Table 2. The algorithms all agree on the result if they do not run into a timeout or abort with an error, except for one instance that HT classifies as negative while SP[d] found a (positive) solution and also verified its correctness using the generated embedding as certificate. This is even though the Hanani-Tutte approach by Gutwenger et al. (2018) should answer "no" only if the instance truly is negative. Figure 6b shows a minimal minor of the instance for which the results still disagree.
The running times on dataset C-MED with only positive instances shown in Figure 9 are in accordance with the previous results. We now also see more false-negative answers from the HT approach, which points to an error in its implementation; see also Table 2. The plots clearly show that our approach is much faster than all others. As the Synchronized Planarity reduction fails at an arbitrary step for negative instances, the running times of positive instances form an upper bound for those of negative instances. As we consider verifying positive instances to obtain an embedding to be the far more common use case, we focus our subsequent engineering on this case.
## 5. Engineering Synchronized Planarity
In this section, we study how degrees of freedom in the Synchronized Planarity algorithm can be used to improve the running times on yes-instances. The algorithm places few restrictions on the order in which pipes are processed, which gives the implementation great freedom in choosing the pipe it should process next. In Section 5.1 we investigate the effects of deliberately choosing the next pipe depending on its degree and whether removing it requires generation of an embedding tree. As mentioned in the original description of the Synchronized Planarity
Figure 8. Median running times **(a)** and scatter plot **(b)** on dataset C-NCP.
algorithm, there are two further degrees of freedom in the algorithm, both concerning pipes where both endpoints are block-vertices. The first one is that if both endpoints additionally lie in different connected components, we may apply either PropagatePQ or (EncapsulateAnd)Join to remove the pipe. Joining the pipe directly removes it entirely instead of splitting it into multiple smaller ones, although at the cost of generating larger connected components. The second one is for which endpoint of the pipe to compute an embedding tree when applying PropagatePQ. Instead of computing only one embedding tree, we may also compute both at once and then use their intersection. This preempts multiple following operations propagating back embedding information individually for each newly-created smaller pipe. We investigate the effect of these two decisions in Section 5.2. Lastly, we investigate an alternative method for computing embedding trees in Section 5.3, where we employ a more time-consuming algorithm that in return yields embedding trees for all vertices of a biconnected component simultaneously instead of just for a single vertex.
To gain an initial overview over which parts could benefit the most from improvements, Figure 10 shows how the running time is distributed across different operations, averaged over all instances in C-MED. It shows that, at more than 20 ms, i.e. roughly 40 % of the overall running time, the largest fraction of time is spent on generating embedding trees, while the actual operations contribute only a minor part of roughly 18 % of the overall running time. 27 % of time is spent on solving and embedding the reduced instance and 15 % is spent on undoing changes to obtain an embedding for the input graph. Thus, the biggest gains can probably be made by reducing the time spent on generating embedding information in the form of embedding trees. We use this as a rough guideline in our engineering process.
Dataset Generation. To tune the running time of our algorithm on larger instances, we increased the size of the generated instances by a factor of 100 by changing the parameters of our own cluster-planar instance generator to \(n\in\{100,500,1000,5000,10\,000,50\,000,100\,000\}\), \(d\in\{1,10,100\}\), \(c\in\{3,5,10,25,50,100,1000\}\) for dataset C-LRG. This yields 6615 instances, out of which 5096 are non-trivial after preprocessing; see Table 1.
Figure 10. Average time spent on different operations for SP[d] on C-MED.
Figure 9. Median running times **(a)** and scatter plot **(b)** on dataset C-MED.
In addition to the Clustered Planarity dataset we also generate a dataset that uses the reduction from Connected SEFE. We do so by generating a random connected and planar embedded graph as shared graph. Each exclusive graph contains further edges which are obtained by randomly splitting the faces of the embedded shared graph until we reach a desired density. For the shared graphs, we generate three instances for each combination of \(n\in\{100,\,500,\,1000,\,2500,\,5000,\,7500,\,10\,000\}\) nodes and \(m\in\{n,1.5n,2n,2.5n\}\) edges. For \(d\in\{0.25,0.5,0.75,1\}\), we then add \((3n-6-m)\cdot d\) edges to each exclusive graph, i.e., the fraction \(d\) of the number of edges that can be added until the graph is maximal planar. We also repeat this process three times with different initial random states for each pair of shared graph and parameter \(d\). This leads to the dataset SEFE-LRG containing 1008 instances.
We also generate a dataset of Synchronized Planarity instances by taking a random planar embedded graph and adding pipes between vertices of the same degree, using a bijection that matches their current rotation. The underlying graphs are comprised of three instances for each combination of \(n\in\{100,\,500,\,1000,\,5000,\,10\,000,\,50\,000,\,100\,000\}\) nodes, \(m\in\{1.5n,2n,2.5n\}\) edges, and \(d\in\{1,10,100\}\) distinct connected components. Note that we do not include graphs that would have no edges, e.g., those with \(n=100\) and \(d=100\). For each input graph, we generate three random Synchronized Planarity instances with \(p\in\{0.05n,0.1n,0.2n\}\) pipes. This leads to the dataset SP-LRG containing 1587 instances.
Altogether, our six datasets contain 28 339 instances in total. For the test runs on these large instances, we increase the timeout to 1 hour.
Figure 11(a) shows the result of running our baseline variant SP[d] of the Synchronized Planarity algorithm (together with selected further variants of the algorithm from subsequent sections)
Figure 11: C-LRG median absolute running times **(a)** and fraction of timeouts **(b)**. Each marker again corresponds to a bin containing 10 % of the instances.
Figure 12: Scatterplot and estimate for SP[d] running time growth behavior on C-LRG.
on dataset C-LRG. Note that, because the dataset spans a wide range of instance sizes and thus the running times also span a range of different magnitudes, the plot uses a log scale for both axes. Figure 11(b) shows the fraction of runs that timed out for each variant. To estimate the practical runtime growth behavior, we also fit a polynomial to the data shown in Figure 12 and thereby find the running time growth behavior to be similar to \(d^{1.5}\), where \(d\) is the number of crossings between edges and cluster borders.
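Such an estimate can be obtained by a linear fit in log-log space, i.e. fitting \(\log t = k\cdot\log d + \log c\) for a model \(t\approx c\cdot d^{k}\). A minimal numpy sketch with synthetic stand-in data (our own illustration, not the measured C-LRG timings) is:

```python
import numpy as np

# Synthetic (crossings, seconds) pairs standing in for the measured C-LRG data.
rng = np.random.default_rng(0)
d = np.sort(rng.integers(100, 100_000, size=200)).astype(float)
t = 1e-4 * d**1.5 * rng.lognormal(0.0, 0.2, size=d.size)

# Fit log t = k * log d + log c, i.e. t ~ c * d^k.
k, log_c = np.polyfit(np.log(d), np.log(t), deg=1)
print(f"estimated exponent k = {k:.2f}")   # close to 1.5 for this synthetic data
```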
### Pipe Ordering
To be able to deliberately choose the next pipe, we keep a heap of all pipes in the current instance, where the ordering function can be configured. Note that the topmost pipe from this heap may not be feasible, in which case we will give priority to the close-by pipe of higher degree that blocks the current pipe from being feasible (see (Brandt et al., 2016, Lemma 3.5)). We compare the baseline variant SP[d] sorting by descending (i.e. largest first) degree with the variant SP[a] sorting by ascending degree, and SP[r] using a random order. Note that for these variants, the ordering does not depend on which operation is applicable to a pipe or whether this operation requires the generation of an embedding tree. To see whether making this distinction affects the running time, we also compare the variants SP[d+c], which prefers to process pipes on which EncapsulateAndJoin can be applied, and SP[d-c], which defers such pipes to the very end, processing pipes requiring the generation of embedding trees first.
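The different orderings only differ in the key function used for the pipe heap. A small Python sketch of how such keys could look is given below; it is our own illustration, `degree` and `joinable` are assumed pipe attributes, and the real C++ implementation additionally redirects to blocking pipes as described above.

```python
import heapq
import random

def pipe_key(variant):
    """Return a key function; pipes with smaller keys are processed first."""
    def key(pipe):
        if variant == "d":      # baseline SP[d]: descending degree
            return (-pipe.degree,)
        if variant == "a":      # SP[a]: ascending degree
            return (pipe.degree,)
        if variant == "r":      # SP[r]: random order
            return (random.random(),)
        if variant == "d+c":    # prefer pipes removable by EncapsulateAndJoin
            return (0 if pipe.joinable else 1, -pipe.degree)
        if variant == "d-c":    # defer such pipes to the very end
            return (1 if pipe.joinable else 0, -pipe.degree)
        raise ValueError(f"unknown variant {variant!r}")
    return key

def ordered_pipes(pipes, variant):
    key = pipe_key(variant)
    heap = [(key(p), i, p) for i, p in enumerate(pipes)]   # index i breaks ties
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)[2]
```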
To make the variants easier to compare, Figure 13(a) shows running times relative to that of the baseline SP[d]. Note that we do not show the median of the last bin, in which up to 70 % of the runs timed out, while this number is far lower for all previous bins; see Figure 11(b). Figure 13(a) shows that the median running times differ by less than 10 % between these variants. The running time of SP[r] seems to randomly alternate between being slightly slower and slightly faster than SP[d]. SP[d] is slightly slower than SP[a] for all bins except the very first and very last, indicating a slight advantage of processing small pipes before bigger ones on these instances. Interestingly, SP[d] is also slower than both SP[d+c] and SP[d-c] for all bins. The fact that these two variants have the same speed-ups indicates that EncapsulateAndJoin should not be interleaved with the other operations, while it does not matter whether it is handled first or last. Still, the variance in relative running times is high and none of the variants is consistently faster on a larger part of the instances. To summarize, the plots show a slight advantage for not interleaving operation EncapsulateAndJoin with the others or sorting by ascending degree, but this advantage is not significant in the statistical sense; see Section 6.2. We keep SP[d] as the baseline for our further analysis.
Figure 13. Relative running times when **(a)** sorting by pipe degree or applicable operation and **(b)** when handling pipes between block-vertices via intersection or join. Note the different scales on the y-axis.
### Pipes with two Block-Vertex Endpoints
Our baseline always processes pipes where both endpoints are block-vertices by applying PropagatePQ or SimplifyMatching based on the embedding tree of an arbitrary endpoint of the pipe. Alternatively, if the endpoints lie in different connected components, such pipes can also be joined directly by identifying their incident edges as in the second step of EncapsulateAndJoin. This directly removes the pipe entirely instead of splitting it into further smaller pipes, although it also results in larger connected components. We enable this joining in variant SP[d b]. As a second alternative, we may also compute the embedding trees of both block-vertices and then propagate their intersection. This preempts the multiple following operations propagating back embedding information individually for each newly-created smaller pipe. We enable this intersection in variant SP[d i]. Variant SP[d bi] combines both variants, preferring the join and only intersecting if the endpoints are in the same connected component. We compare the effect of differently handling pipes with two block-vertex endpoints in variants SP[d b], SP[d i] and SP[d bi] with the baseline SP[d], which computes the embedding tree for an arbitrary endpoint and only joins pipes where both endpoints are cut-vertices.
Figure 13(b) shows that SP[d b] (and similarly SP[d bi]) is faster by close to 25 % on instances with fewer than 1000 cluster-border edge crossings, but quickly grows 5 times slower than SP[d] for larger instances. This effect is also visible in the absolute values of Figure 11(a). This is probably caused by the larger connected components (see the last column of Table 3), which make the computation of embedding trees more expensive. Only inserting an embedding tree instead of the whole connected component makes the embedding information of the component directly available in a compressed form without the need to later process the component in its entirety again. Figure 13(b) also shows that SP[d i] is up to a third slower than SP[d], indicating that computing both embedding trees poses a significant overhead while not yielding sufficiently more information to make progress faster. We also evaluated combinations of the variants from this section with the different orderings from the previous section, but observed no notable differences in running time behavior. The effects of the variants from this section always greatly outweigh the effects from the different orderings. To summarize, as the plots only show an advantage of differently handling pipes between block-vertices for small instances, but some strong disadvantages especially for larger instances, we keep SP[d] as our baseline.
### Batched Embedding Tree Generation
Our preliminary analysis showed that the computation of embedding trees consumes a large fraction of the running time (see Figure 10), which cannot be reduced significantly by using the
Figure 14. Relative running times for **(a)** SPQR-tree batched embedding tree generation and **(b)** for different variants thereof.
degrees of freedom of the algorithm studied in the previous two sections. To remedy the overhead of recomputing embedding trees multiple times we now change the algorithm to no longer process pipes one-by-one, but to process all pipes of a biconnected component in one batch. This is facilitated by an alternative approach for generating embedding trees not only for a single vertex, but for all vertices of a biconnected component. The embedding tree of a vertex \(v\) can be derived from the SPQR-tree using the approach described by Blasius et al. (Blasius et al., 2017): Each occurrence of \(v\) in a "parallel" skeleton of the SPQR-tree corresponds to a (PQ-tree) P-node in the embedding tree of \(v\), each occurrence in a "rigid" to a (PQ-tree) Q-node. This derivation can be done comparatively quickly, in time linear in the degree of \(v\). Thus, once we have the SPQR-tree of a biconnected component available, we can apply all currently feasible PropagatePQ and SimplifyMatching operations in a single batch with little overhead. The SPQR-tree computation takes time linear in the size of the biconnected component, albeit with a larger linear factor than for the linear-time planarity test that yields only a single embedding tree. In a direct comparison with the planarity test, this makes the SPQR-tree the more time-consuming approach.
We enable the batched embedding tree computation based on SPQR-trees in variant SP[s]. Figures 11(a) and 14(a) show that for small instances, this yields a slowdown of close to a third. Showing a behavior inverse to SP[d b], SP[s] grows faster for larger instances and its speed-up even increases to up to 4 times as fast as the baseline SP[d]. This makes SP[s] the clear champion of all variants considered so far. We will thus use it as baseline for our further evaluation, where we combine SP[s] with other, previously considered flags.
### SPQR-Batch Variations
Figure 14(b) switches the baseline between the two variants shown in Figure 14(a) and additionally contains combinations of the variants from Section 5.2 with the SPQR-batch computation. As in Figure 13(b), the intersection of embedding trees in SP[s i] is consistently slower, albeit with a slightly smaller margin. The joining of blocks in SP[s b] also shows a similar behavior as before, starting out 25 % faster for small instances and growing up to 100 % slower for larger instances. Again, this is probably because too large connected components negatively affect the computation of SPQR-trees. Still, the median of SP[s b] is consistently faster than SP[d]. Different from before, SP[s bi] is now faster than SP[s b], making it the best variant for instances with up to 5000 cluster-border edge crossings. This is probably because in the batched mode, there is no relevant overhead for obtaining a second embedding tree, while the intersection does preempt some following operations. To summarize, for instances up to size 5000, SP[s bi] is the fastest variant, which is outperformed by SP[s] on larger instances. This can also be seen in the absolute running times in Figure 11(a), where SP[s] is more than an order of magnitude faster than SP[d b] on large instances.
## 6. Further Analysis
In this section, we provide further in-depth analysis of the different variants from the previous section and also analyze their performance on the remaining datasets to give a conclusive judgement. To gain more insights into the runtime behavior, we measured the time each individual step of the algorithm takes when using the different variants. An in-depth analysis of this data is given in Section 6.1, where Figure 15 also gives a more detailed visualization of per-step timings. The per-step data corroborates that the main improvement of faster variants is greatly reducing the time spent on the generation of embedding trees, at the cost of slightly increased time spent on the solve and embed phases.
To further verify our ranking of variants' running times from the previous sections, we also use a statistical test to check whether one variant is significantly faster than another. The results
presented in Section 6.2 corroborate our previous results, showing that pipe ordering has no significant effect, while overly large connected components and the batched processing of pipes using SPQR-trees significantly change the running time.
The results of the remaining datasets SEFE-LRG and SP-LRG are presented in Section 6.3 and mostly agree with the results on C-LRG, with SP[d b] clearly being the slowest and SP[s] being the fastest on large instances. The main difference is the magnitude of the overhead generated by large connected components for variants with flag [b].
### Detailed Runtime Profiling
Table 3 shows the per-step running time information aggregated for variants studied in the previous section. Figure 15 shows in greater detail how the running time is split on average across the different steps of the algorithm (Figure 15a) and then also further drills down on the composition of the individual steps that make the instance reduced (Figure 15b), solve the reduced instance (Figure 15d), and then derive a solution and an embedding for the input instance by undoing all changes while maintaining the embedding (Figure 15e). For variants that use the SPQR-tree for embedding information generation, we also analyze the time spent on the steps of this batch operation (Figure 15c). Note that we do not have these measurements available for runs that timed out. To ensure that the bar heights still correspond to the actual overall running times in the topmost plot, we add a bar corresponding to the time consumed by timed-out runs on top. This way, ordering the bars by height yields roughly the same order of variants as we already observed in Figure 11a.
Figure 15b clearly shows that the majority of time during the reduce step is spent on generating embedding information, either in the form of directly computing embedding trees (bars prefixed with "ET") or by computing SPQR trees. This can also be seen by comparing column "Make Reduced" in Table 3 with column "Compute Emb Tree". Only for the fastest variants, those with flag [s] and without [b], does the execution of the actual operations of the algorithm become more prominent over the generation of embedding information in Figure 15c. Here, the terminal case of the SimplifyMatching operation (described in the bottom left part of Figure 5) now takes the biggest fraction of time, and actually also a bigger absolute amount of time than for the other, slower variants with flag [b] enabled. This is probably because, instead of being joined as with flag [b] enabled, here pipes between block-vertices are split by PropagatePQ into multiple smaller pipes, which
\begin{table}
\begin{tabular}{l|c|c c c|c c c c|c c|c} \hline SP[d] & 142.68 & 133.08 & 0.82 & 8.78 & 0.25 & 5.00 & 13.79 & 91.34 & 5.64 & 1811 & 2780 \\ SP[d b] & 197.17 & 194.72 & 0.99 & 1.46 & 0.63 & 1.36 & 1.53 & 186.18 & 0.42 & 652 & 13 021 \\ SP[s] & 86.57 & 57.75 & 1.25 & 27.56 & 0.57 & 9.84 & 22.38 & 7.61 & 18.03 & 2696 & 2890 \\ SP[s b] & 93.07 & 79.25 & 3.55 & 10.26 & 2.92 & 4.29 & 12.74 & 46.31 & 5.46 & 1421 & 22 965 \\ SP[s bi] & 81.32 & 68.90 & 3.09 & 9.32 & 2.51 & 3.79 & 11.52 & 41.16 & 4.84 & 1448 & 23 284 \\ \hline \end{tabular}
\end{table}
Table 3. Average values for different variants of SP on dataset C-LRG. All values, except for the counts in the last two columns, are running times in seconds. The first data column shows the average total running time, followed by how this is split across the three phases. The following four columns show the composition of the running time of the “Make Reduced” step. The last three columns detail information about the “Undo Simplify” step in the “Embed” phase, and the maximum size of biconnected components in the reduced instance.
Figure 15: The average running time of our different Synchronized Planarity variants.
then need to be removed by SimplifyMatching. This leads to the variants without [b] needing, on average, roughly two to three times as many SimplifyMatching applications as those with [b]; see Table 3.
The larger biconnected components caused by [b] may also be the reason why the insertion of wheels takes a larger amount of time for variants with [b] in the solving phase shown in Figure 15d. When replacing a cut-vertex by a wheel, all incident biconnected components with at least two edges incident to the cut-vertex get merged. Updating the information stored with the vertices of the biconnected components probably consumes the most time here, as undoing the changes by contracting the wheels is again very fast. Other than the "MakeWheels" part, most time during the solving phase is spent on computing SPQR trees, although both are negligible in comparison to the overall running time.
The running times of the embedding phase given in Figure 15e show an interesting behavior as they increase when the "Make Reduced" phase running time decreases, indicating a potential trade-off to be made; see also the "Embed" column in Table 3. As the maximum time spent on the "Make Reduced" phase is still slightly larger, variants where this phase is faster while the embedding phase is slower are still overall the fastest. The biggest contribution of running time in the latter phase is the undoing of SimplifyMatching operations, which means copying the embedding of one endpoint of a removed pipe to the other. The time spent here roughly correlates with the time spent on applying the SimplifyMatching operations in the first place (see Table 3).
To summarize, the per-step data corroborates that the main improvement of faster variants is greatly reducing the time spent on the generation of embedding trees, at the cost of slightly increased time spent on the solve and embed phases. Flags [s] and [b] have the biggest impact on running times, while flag [i] and the processing order of pipes do not seem to have a significant influence on the overall running time. While the variants with [s] clearly have the fastest overall running times, there is some trade-off between the amounts of time spent on different phases of the algorithm when toggling the flag [b].
### Statistical Significance
To test whether one variant is (in the statistical sense) significantly faster than another, we use the methodology proposed by Radermacher (Radermacher, 2016, Section 3.2) for comparing the performance of graph algorithms. For a given graph \(G\) and two variants of the algorithm described by their respective running times \(f_{A}(G),f_{B}(G)\) on \(G\), we want to know whether we have a likelihood at least \(p\) that the one variant is faster than the other by at least a factor \(\Delta\). To do so, we use the binomial sign test with advantages as used by Radermacher (Radermacher, 2016), where we fix two values \(p\in[0,1]\) and \(\Delta\geq 1\), and study the following hypothesis given a random graph \(G\) from our dataset: Inequality \(f_{A}(G)\cdot\Delta<f_{B}(G)\) holds with probability \(\pi\), which is at least \(p\). The respective null hypothesis is that the inequality holds with probability less than \(p\). Note that this is an experiment with exactly two outcomes (the inequality holding or not), which we can independently repeat on a sequence of \(n\) graphs and obtain the number of instances \(k\) for which the inequality holds. Using the binomial test, we can check the likelihood of obtaining at least \(k\) successes when drawing \(n\) times from a binomial distribution with probability \(p\). If this likelihood is below a given significance level \(\alpha\in[0,1]\), that is, the obtained result is unlikely under the null hypothesis, we can reject the null hypothesis that the inequality only holds with a probability less than \(p\).
Fixing the significance level to the commonly-used value \(\alpha=0.05\), we still need to fix values for \(p\) and \(\Delta\) to apply this methodology in practice. We will use three different values \(p\in\{0.25,0.5,0.75\}\), corresponding to the advantage on a quarter, half, and three quarters of the dataset. To obtain values for \(\Delta\), we will split our datasets evenly into two halves \(\mathcal{G}_{\text{train}}\) and \(\mathcal{G}_{\text{verify}}\), using \(\mathcal{G}_{\text{train}}\) to obtain an estimate for \(\Delta\) and \(\mathcal{G}_{\text{verify}}\) to verify this value. For a given value of \(p\), we set \(\Delta^{\prime}\)
to the largest value such that \(f_{A}(G)\cdot\Delta^{\prime}<f_{B}(G)\) holds for \(p\cdot|\mathcal{G}_{\text{train}}|\) instances. To increase the likelihood that we can reject the null hypothesis in the verification step on \(\mathcal{G}_{\text{verify}}\), we slightly discount the obtained value of \(\Delta^{\prime}\), using \(\Delta=\max(1,c\cdot\Delta^{\prime})\) instead with \(c\) set to \(0.75\).
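In practice this test amounts to a few lines of Python; the sketch below is our own (using scipy's exact binomial test), the function and variable names are ours, and the Δ estimate is floored at 1 consistent with the requirement \(\Delta\geq 1\).

```python
import math
from scipy.stats import binomtest

def estimate_delta(times_a, times_b, p, c=0.75):
    """Largest Delta' with f_A * Delta' < f_B on a fraction p of the training
    half, discounted by c and floored at 1."""
    ratios = sorted((tb / ta for ta, tb in zip(times_a, times_b)), reverse=True)
    m = math.ceil(p * len(ratios))
    return max(1.0, c * ratios[m - 1])

def advantage_significant(times_a, times_b, p, delta, alpha=0.05):
    """Reject the null hypothesis 'f_A * delta < f_B holds with probability < p'
    on the verification half of the dataset."""
    n = len(times_a)
    k = sum(ta * delta < tb for ta, tb in zip(times_a, times_b))
    # p-value of observing at least k successes under Binomial(n, p)
    return binomtest(k, n, p, alternative="greater").pvalue < alpha
```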
Applying this methodology, Figure 16 compares the pairwise advantages of the variants from Sections 5.1 and 5.2. We see that SP[d i] and especially SP[d b] are significantly slower than the other variants: for the quarter of the dataset with the most extreme differences, the advantage rises up to a 5-fold speed-up for other variants, while slight advantages still persist when considering three quarters of instances. Conversely, not even on a quarter of instances are SP[d i] and SP[d b] faster than other variants. Comparing the remaining variants with each other, we see that each variant has at least a quarter of instances where it is slightly faster than the other variants, but always with no noticeable advantage, that is \(\Delta=1\). This is not surprising as the relative running times are scattered evenly above and below the baseline in Figure 13(a). For half of the dataset, SP[d-c] is still slightly faster than other variants, while no variant from Section 5.1 is faster than another for at least three quarters of instances. To summarize, our results here corroborate the findings from Sections 5.1 and 5.2, with SP[d i] and SP[d b] as the clearly slowest variants. While there is no clear winner among the other variants, at least SP[d-c] is slightly faster than the others on half of the dataset, but still has no noticeable advantage.
Figures 17(a) and 17(b) compare the pairwise advantages of the variants from Sections 5.3 and 5.4 (see also Figure 14(b)) for instances with more and less than 5000 cluster-border edge crossings, respectively. For the larger instances of Figure 17(a), the variants with flag [s] outperform SP[d] on at least \(75\,\%\) of instances, with advantages as high as a factor of 5 on at least a quarter of instances. Furthermore, SP[s] outperforms the variants with additional flags [b] and [i] on at least half of all instances. Considering \(75\,\%\) of all instances, the only significant result is that SP[s bi] outperforms SP[s b] but with no advantage, i.e. \(\Delta=1\). For the smaller instances of Figure 17(b), the comparison looks vastly different. Here, SP[s bi] outperforms all other variants on at least \(75\,\%\) of instances, although its advantage is not large, with only up to \(1.6\) even on the most extreme quarter of the dataset. Furthermore, variants SP[d] and SP[s b] outperform variants SP[s i] and SP[s] on half of the dataset, but again with no noticeable advantage, that is \(\Delta=1\). To summarize, our results are
Figure 16. Advantages of variants without flag [s] on C-LRG instances of size at least 5000. Blue cell backgrounds indicate significant values, while in cells with white background, we were not able to reject the null-hypothesis with significance \(\alpha=0.05\). Empty cells indicate that the fraction where the one algorithm is better than the other is smaller than \(p\).
again in accordance with those from Sections 5.3 and 5.4, where for large instances variant SP[s] is the fastest, whereas for smaller instances SP[s bi] is superior.
### Other Problem Instances
Running the same evaluation on the datasets SEFE-LRG and SP-LRG yielded absolute running times with roughly the same orders of magnitude as for C-LRG, see the left plots in Figures 18 to 20 (but note that the plots show different ranges on the x-axis while having the same scale on the y-axis). The right plots in the figures again detail the running times relative to SP[d]. For SP-LRG, the relative running time behavior is similar to the behavior observed on C-LRG. The two major differences concern variants with flag [b]. Variant SP[d b(i)] is not faster than SP[d] on small instances and also starts to grow slower than SP[d] earlier on large instances. Similarly, SP[s b(i)] is not much faster than SP[d] on small instances, and its speed-up over SP[d] for larger instances has a dent where it returns to having roughly the same speed as SP[d] around size 1000. On a large scale, this behavior indicates that the slowdown caused by large connected components is even worse in dataset SP-LRG. For SEFE-LRG, the instances are less evenly distributed in terms of their total pipe degree, as the total pipe degree directly corresponds to the vertex degrees in the SEFE instance. Regarding the relative running time behavior, we still see that SP[d bi] is much slower and SP[s
Figure 17. Advantages of variants with flag [s] on C-LRG instances of size at least 5000 (a) and at most 5000 (b).
(i)] much faster than SP[d]. For the remaining variants, the difference to SP[d] is much smaller than in the two other datasets. This indicates that the size of connected components does not play as important a role in this dataset as before.
## 7. Conclusion
In this paper, we described the first practical implementation of Synchronized Planarity, which generalizes many constrained planarity problems such as Clustered Planarity and Connected SEFE. We evaluated it on more than \(28\,000\) instances stemming from different problems. Using the quadratic algorithm by Blasius et al. (Blasius et al., 2016), instances with \(100\) vertices are solved in milliseconds, while we can still solve most instances with up to \(100\,000\) vertices within minutes. This makes our implementation at least an order of magnitude faster than all other Clustered Planarity implementations, which also have a worse asymptotic running time. Furthermore, our engineering of the algorithm, in particular the batched generation of embedding trees via SPQR-trees, yields an additional speed-up of up to a factor of 4 on large instances.
Figure 19. Absolute (a) and relative (b) running times with regard to SP[d] for SP-LRG.
Figure 20. Absolute (a) and relative (b) running times with regard to SP[d] for SEFE-LRG.
Figure 18. Absolute (a) and relative (b) running times with regard to SP[d] for C-LRG. |
2309.14392 | Unveiling Fairness Biases in Deep Learning-Based Brain MRI
Reconstruction | Deep learning (DL) reconstruction particularly of MRI has led to improvements
in image fidelity and reduction of acquisition time. In neuroimaging, DL
methods can reconstruct high-quality images from undersampled data. However, it
is essential to consider fairness in DL algorithms, particularly in terms of
demographic characteristics. This study presents the first fairness analysis in
a DL-based brain MRI reconstruction model. The model utilises the U-Net
architecture for image reconstruction and explores the presence and sources of
unfairness by implementing baseline Empirical Risk Minimisation (ERM) and
rebalancing strategies. Model performance is evaluated using image
reconstruction metrics. Our findings reveal statistically significant
performance biases between the gender and age subgroups. Surprisingly, data
imbalance and training discrimination are not the main sources of bias. This
analysis provides insights of fairness in DL-based image reconstruction and
aims to improve equity in medical AI applications. | Yuning Du, Yuyang Xue, Rohan Dharmakumar, Sotirios A. Tsaftaris | 2023-09-25T11:07:25Z | http://arxiv.org/abs/2309.14392v1 | # Unveiling Fairness Biases in Deep Learning-Based Brain MRI Reconstruction
###### Abstract
Deep learning (DL) reconstruction, particularly of MRI, has led to improvements in image fidelity and reductions in acquisition time. In neuroimaging, DL methods can reconstruct high-quality images from undersampled data. However, it is essential to consider fairness in DL algorithms, particularly in terms of demographic characteristics. This study presents the first fairness analysis in a DL-based brain MRI reconstruction model. The model utilises the U-Net architecture for image reconstruction and explores the presence and sources of unfairness by implementing baseline Empirical Risk Minimisation (ERM) and rebalancing strategies. Model performance is evaluated using image reconstruction metrics. Our findings reveal statistically significant performance biases between the gender and age subgroups. Surprisingly, data imbalance and training discrimination are not the main sources of bias. This analysis provides insights into fairness in DL-based image reconstruction and aims to improve equity in medical AI applications.
Keywords: Fairness, Image Reconstruction, Algorithm Bias, Neuroimaging.
## 1 Introduction
Magnetic resonance imaging (MRI) is routinely used to help diagnose or ascertain the pathophysiological state in a noninvasive and harmless manner. However, MRI is characterised by long acquisition times. There is an interest in improving imaging fidelity whilst reducing acquisition time. A solution is to subsample the frequency domain (k-space). This introduces aliasing artefacts in the image domain due to the violation of the Nyquist sampling theorem, causing difficulties in downstream tasks such as biomarker extraction and interpretation in neuroimaging.
Recently, deep learning (DL) methods based on convolutional neural networks (CNNs) have been proposed to reconstruct high-quality images from the undersampled k-space data [5]. By learning complex patterns from large amounts of training data and filling in missing k-space data, these DL models successfully reconstruct images that closely resemble those obtained through fully sampled acquisitions. Advances in DL-based image reconstruction enable both accelerated acquisition and high-quality imaging, providing significant benefits.
Deep learning methods may be subject to biases (e.g., from the training dataset), which can lead to unfairness and a lack of equity. For example, recent studies have shown that image segmentation algorithms can be unfair: Puyol-Anton et al. [8] found that racial bias can exist in DL-based cine CMR segmentation models when training with a race-imbalanced dataset. This leads us to ask: _Could DL-based image reconstruction algorithms also be unfair?_
To date, such a study, even an empirical one, is lacking, and this article addresses precisely this gap. Our primary objective is to investigate the biases in the algorithm resulting from demographic information present in the training data. To the best of our knowledge, this is the first fairness analysis in a DL-based image reconstruction model. We make the following contributions:
* We identify existing bias in performance between gender and age groups using the publicly available OASIS dataset [6].
* We investigate the origin of these biases by mitigating imbalances in the training set and training paradigm with different bias mitigation strategies.
* We discuss the factors that may impact the fairness of the algorithm, including inherent characteristics and spurious correlations.
## 2 Background
### Fairness Definitions
Amongst the various definitions of fairness, since we study fairness across different demographic subgroups, we consider only group fairness in our analysis.
**Group Fairness**: Group fairness aims to ensure equitable treatment and outcomes for different demographic or subpopulation groups. It recognises the potential for biases and disparities in healthcare delivery and seeks to address them to promote fairness and equity [3]. To ensure fairness, equalised odds [2] is used as a criterion that focuses on mitigating bias, stated as: "The predictor \(\hat{Y}\) satisfies equalised odds with respect to protected attribute A and outcome Y, if \(\hat{Y}\) and A are independent conditional on Y." The criterion can be formulated as
\[\forall y\in\{0,1\}:P(\hat{Y}=1|A=0,Y=y)=P(\hat{Y}=1|A=1,Y=y). \tag{1}\]
**Fairness in Image Reconstruction**: It requires the reconstructed image to faithfully represent the original one without distorting or altering its content based on certain attributes such as race, gender, or other protected attributes.
When applying equalised odds as the fairness criterion, note that while the original formulation concerns fairness of predicted labels, image reconstruction tasks typically involve matching pixel values or image representations. Thus, we reformulate the problem based on probabilistic equalised odds, as proposed by [7]. We let \(P\subset\mathbb{R}^{k}\) be the input space of an image reconstruction task, \((\mathbf{x},\mathbf{y})\sim P\) represent a patient, with \(\mathbf{x}\) representing the undersampled image, and \(\mathbf{y}\) representing the fully sampled image or ground truth image. Also, we assume the presence
of two groups \(g_{1},g_{2}\subset P\), which represent the subsets defined by the protected attribute \(\mathbf{A}\). Fairness using probabilistic equalised odds is formulated as:
\[\forall\mathbf{y}\in\mathcal{Y}:\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim g_{1}}[f(\mathbf{x})\mid\mathbf{Y}=\mathbf{y}]=\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim g_{2}}[f(\mathbf{x})\mid\mathbf{Y}=\mathbf{y}]. \tag{2}\]
Here, \(f\) represents the DL-based reconstruction network. With this formulation, we aim to achieve fairness by ensuring that the quality or fidelity of the reconstructed image is consistent across different data distributions irrespective of different demographic characteristics.
### Source of Bias
**Data imbalance** can be a significant source of bias in medical scenarios [17]. It can refer to the imbalanced distribution of demographic characteristics, such as gender and ethnicity, within the dataset. For example, the cardiac magnetic resonance imaging dataset provided by the UK Biobank [10] is unbalanced with respect to race: \(>80\%\) of the subjects are of white ethnicity, resulting in unequal representation of features correlated to ethnicity. This imbalance can introduce bias in the analysis and interpretation of the data.
**Training discrimination** is another source of bias, possibly occurring concurrently with data imbalance [4]. An imbalanced dataset can lead to imbalanced minibatches drawn for training. Hence, the model mainly learns features from the dominant subgroup in each batch, perpetuating bias in the training process.
**Spurious correlations** can also contribute to bias [17]. This refers to the presence of misleading or incorrect correlations between the training data and the features learned by the model. For instance, a model can learn how to classify skin diseases by observing markings made by dermatologists near lesions, rather than fully learning the diseases [15]. This is particularly likely to happen in the minority subgroup due to limited presence in training dataset, leading to overfitting during the training process and further exacerbating bias.
**Inherent characteristics** can also play a role in bias, even when the model is trained with a balanced dataset [17]. Certain characteristics may inherently affect the performance of different subgroups. For instance, in skin dermatology images, lesions are often more challenging to recognise in darker skin due to lower contrast compared to lighter skin. As a result, bias based on ethnicity can still exist even if the dataset is well-balanced in terms of proportions.
## 3 Methods
**Our main goal**: Our goal is to identify the bias in image reconstruction models and any potential sources of bias related to demographic characteristics. To investigate fairness in image reconstruction tasks, we systematically design and conduct experiments that eliminate potential origins of bias w.r.t. various demographic characteristics. We start by establishing a baseline model using Empirical Risk Minimisation (ERM) to assess the presence of bias in relation to diverse
demographic subgroups. Then, we employ a subgroup rebalancing strategy with a dataset that is balanced in terms of demographic attributes, to test the hypothesis that bias is caused by data imbalance. Finally, we use the minibatch rebalancing strategy to evaluate the effects of training discrimination for each subgroup.
**Reconstruction Networks**: We use a U-Net [12] as the backbone for the reconstruction network. The reconstruction network is trained using undersampled MRI brain scans, which are simulated by applying a random Cartesian mask to the fully sampled k-space data. Details of the data and the experimental setup of the reconstruction network are provided in Section 4.
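A minimal numpy sketch of this retrospective undersampling is given below; it is our own illustration, the mask parameters (centre fraction, acceleration factor) are assumptions rather than the paper's exact settings, and the real pipeline operates on the acquired k-space rather than re-transforming magnitude images.

```python
import numpy as np

def random_cartesian_mask(num_cols, center_fraction=0.08, acceleration=4, seed=0):
    """1D random Cartesian mask over phase-encoding lines; the concrete
    fractions used in the paper are not specified here."""
    rng = np.random.default_rng(seed)
    num_low = int(round(num_cols * center_fraction))
    prob = (num_cols / acceleration - num_low) / (num_cols - num_low)
    mask = rng.random(num_cols) < prob
    pad = (num_cols - num_low) // 2
    mask[pad:pad + num_low] = True               # always keep the low-frequency centre
    return mask.astype(np.float32)

def undersample(image, **mask_kwargs):
    """Simulate an undersampled acquisition from a fully sampled 2D image slice."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    mask = random_cartesian_mask(image.shape[-1], **mask_kwargs)
    kspace_us = kspace * mask[None, :]           # zero out unsampled phase-encode lines
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_us)))
    return zero_filled, mask
```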
**Baseline Network**: We follow the principle of Empirical Risk Minimisation (ERM) [14]. ERM seeks to minimise the overall risk of a model by considering the entire population, instead of the composition of specific groups and hence without controlling for the distribution of protected attributes.
**Subgroup Rebalancing Strategy**: This strategy aims to examine the performance when a perfectly balanced dataset of the protected attributes is used. Instead of randomly selecting data from the entire dataset to define a training set, the training set consists of an equal number of subjects from different subgroups according to demographic characteristics. This approach ensures that all subgroups have equal chances during the training phase, helping us identify if data imbalance is the source of bias.
**Minibatch Rebalancing Strategy**: This strategy examines the performance when balanced minibatches in terms of protected attributes are used to eliminate discrepancy before training [9]. Hence, each minibatch has an equal presence of subjects with different demographic characteristics and all subgroups have an equal opportunity during each iteration to influence the model weights.
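One simple way to realise such balanced minibatches in PyTorch is a weighted sampler that gives every age-gender subgroup the same expected share of each batch; this is our own sketch and not necessarily the exact mechanism used in the paper.

```python
from collections import Counter

import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def balanced_sampler(subgroup_labels):
    """subgroup_labels: one age-gender subgroup id per training example."""
    counts = Counter(subgroup_labels)
    weights = torch.tensor([1.0 / counts[g] for g in subgroup_labels], dtype=torch.double)
    # With these weights every subgroup is equally likely per draw, so each
    # minibatch is balanced in expectation.
    return WeightedRandomSampler(weights, num_samples=len(subgroup_labels), replacement=True)

# loader = DataLoader(train_dataset, batch_size=6, sampler=balanced_sampler(labels))
```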
**Evaluation Metrics**: Although several fairness metrics have been proposed, most of the current work is focused on image classification and segmentation tasks, which may not be directly applicable to image reconstruction tasks. Therefore, we analyse the fairness of image reconstruction using image reconstruction metrics and statistical analysis. The performance of the reconstruction is evaluated using the Structural Similarity Index (SSIM, higher is better) and the Peak Signal-to-Noise Ratio (PSNR, higher is better) at the patient level.
To investigate bias between subgroups with different demographic characteristics, we performed the non-parametric Kruskal-Wallis ANOVA test (as available within OriginPro 2023) to test the omnibus hypothesis that there are differences in subgroups, with \(p<0.05\) as the threshold for statistical significance. The test provides a Chi-Square value and a p-value as results. Higher Chi-Square values indicate the presence of more significant differences between subgroups. This approach allows us to assess the potential bias in the image reconstruction process specifically instead of relying on fairness metrics designed for other tasks.
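Although the paper uses OriginPro for this test, the same omnibus check can be reproduced with scipy; the sketch below uses synthetic per-patient SSIM scores whose means roughly follow Table 2 and is purely illustrative.

```python
import numpy as np
from scipy.stats import kruskal

# Synthetic per-patient SSIM scores standing in for the three age subgroups.
rng = np.random.default_rng(0)
ssim_young  = rng.normal(0.876, 0.010, size=30)
ssim_middle = rng.normal(0.874, 0.011, size=30)
ssim_older  = rng.normal(0.867, 0.010, size=30)

stat, p_value = kruskal(ssim_young, ssim_middle, ssim_older)
print(f"Chi-Square = {stat:.2f}, p = {p_value:.4f}")   # p < 0.05 indicates a subgroup difference
```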
## 4 Experimental Analysis
### Dataset and pre-processing
**Dataset**: We select the publicly available Open Access Series of Imaging Studies (OASIS) dataset [6] to evaluate the fairness of the image reconstruction task. The initial dataset consists of a cross-sectional collection of 416 subjects (316 subjects are healthy and 100 subjects are clinically diagnosed with very mild to moderate Alzheimer's disease), and for each subject, three or four individual T1-weighted MRI scans obtained in single imaging sessions are included. To simulate clinical practice with uncertainty about patients' conditions, we used an entire dataset consisting of a mix of patients, including both healthy subjects and patients with Alzheimer's disease (AD), without providing explicit labels for their conditions. To study fairness regarding inherent demographic information, we choose the gender and age information provided in the dataset as the protected attributes. Since the patients are aged 18 to 96, we categorise the patients into young adults (age below 40), middle-aged adults (ages 40 to 65), and older adults (age above 65) according to the criteria proposed by [13]. The statistics of the subgroups are summarised in Table 1.
According to Table 1, there is a clear imbalance in the distribution of demographic characteristics in the OASIS dataset. For the protected attribute gender, females form the dominant group with 256 subjects, while males form the disadvantaged group with only 160 subjects. In terms of age, compared to the middle-aged adult group, the young and older adult groups are dominant.
**Data Pre-processing**: To ensure an equal dataset size for the methods in Section 3, the dataset is first categorised into six age-gender subgroups (e.g., middle-aged female adults) and sampled according to the size of the smallest subgroup, which is 27 subjects for middle-aged male adults, to maintain balance in both the age and gender distributions of the sampled dataset (162 subjects in total). Then, we sample 5 subjects from each of the six age-gender subgroups to form the test set, which contains 30 subjects in total. For the training and validation sets, we sample the remaining 22 subjects from each age-gender subgroup for the rebalancing and minibatch rebalancing strategies, giving 132 subjects in total, while for the baseline network, the training and validation sets are randomly sampled with a size of 132 subjects. The train-validation-test splits all follow the proportions of \(20:2:5\). For each patient, we select the 122 central slices out of the 208 slices in one volume.
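The subgroup-stratified split described above can be expressed as a short Python sketch; it is our own illustration, with the 20:2:5 proportions per subgroup following the text and everything else (names, tuple layout) assumed.

```python
import random
from collections import defaultdict

def stratified_split(subjects, n_train=20, n_val=2, n_test=5, seed=0):
    """subjects: iterable of (subject_id, age_group, gender) tuples.
    Returns a train/val/test split balanced over the six age-gender subgroups."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for sid, age_group, gender in subjects:
        by_group[(age_group, gender)].append(sid)
    train, val, test = [], [], []
    for ids in by_group.values():
        rng.shuffle(ids)
        test += ids[:n_test]
        train += ids[n_test:n_test + n_train]
        val += ids[n_test + n_train:n_test + n_train + n_val]
    return train, val, test
```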
### Implementation Details
We employ a U-Net as backbone. Its first feature map size is 32, with 4 pooling cascades, resulting in a total of 7.8M parameters. We employ the Adam optimiser
\begin{table}
\begin{tabular}{c|c|c c|c c c} \hline
**Category** & **All** & **Female** & **Male** & **Young** & **Middle-aged** & **Older** \\ \hline
**Count** & 416 & 256 & 160 & 156 & 82 & 178 \\
**Proportion (\%)** & 100.0 & 61.5 & 38.5 & 37.5 & 19.7 & 42.8 \\ \hline \end{tabular}
\end{table}
Table 1: Statistics of demographic subgroups in OASIS. Patients are categorised into young adult (below 40), middle-aged adult (40 to 65) and older adult (above 65).
with a learning rate of \(10^{-4}\) and a step-based scheduler with a decay gamma of 0.1. Both the \(\ell_{1}\) loss and the SSIM loss were incorporated into our experiments. Models were trained for 40 epochs with batch size 6.
5-fold cross-validation is used to mitigate sample bias. Our experimental setup uses the PyTorch Lightning framework and we trained on NVIDIA A100 Tensor Core GPUs. The implementation of our code is inspired by the fastMRI repository.1 Our code is publicly available at: [https://github.com/ydu0117/ReconFairness](https://github.com/ydu0117/ReconFairness).
Footnote 1: [https://github.com/facebookresearch/fastMRI/](https://github.com/facebookresearch/fastMRI/)
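A condensed PyTorch sketch of the described training configuration (Adam with learning rate \(10^{-4}\), step decay with gamma 0.1, combined \(\ell_{1}\) + SSIM objective) is shown below; it is our own illustration, the SSIM term is taken from torchmetrics rather than the authors' code, and the U-Net itself as well as the scheduler step size are left as assumptions.

```python
import torch
from torch import nn, optim
from torchmetrics.functional import structural_similarity_index_measure as ssim

def training_step(model, undersampled, target, optimizer):
    """One optimisation step with the combined l1 + (1 - SSIM) objective."""
    recon = model(undersampled)
    loss = nn.functional.l1_loss(recon, target) + (1.0 - ssim(recon, target, data_range=1.0))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# model = ...  # U-Net with 32 initial feature maps and 4 pooling cascades (~7.8M parameters)
# optimizer = optim.Adam(model.parameters(), lr=1e-4)
# scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)  # step size assumed
# Training runs for 40 epochs with batch size 6, as described above.
```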
### Results
Table 2 reports the SSIM and PSNR results (mean and standard deviation) from 5-fold cross-validation under three different strategies. Figures 1 and 2 demonstrate the reconstruction performance of subgroups defined by demographic characteristics. Table 3 offers the results of Kruskal-Wallis ANOVA test between demographic subgroups, including p-values and Chi-Square values.
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{**Baseline ERM**} & \multicolumn{2}{c|}{**Subgroup Rebalancing**} & \multicolumn{2}{c}{**Minibatch Rebalancing**} \\ \cline{2-7} & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR \\ \hline
**Whole** & 0.872 (0.012) & 7.742 (0.112) & 0.867 (0.011) & 7.529 (0.109) & 0.867 (0.011) & 7.529 (0.109) \\ \hline
**Female** & 0.876 (0.010) & 7.999 (0.099) & 0.876 (0.010) & 8.006 (0.095) & 0.871 (0.010) & 7.767 (0.095) \\
**Male** & 0.868 (0.013) & 7.485 (0.118) & 0.870 (0.013) & 7.509 (0.117) & 0.864 (0.013) & 7.292 (0.117) \\ \hline
**Young Adults** & 0.876 (0.010) & 8.690 (0.092) & 0.877 (0.009) & 8.729 (0.090) & 0.872 (0.009) & 8.496 (0.090) \\
**Middle-aged Adults** & 0.874 (0.011) & 7.859 (0.108) & 0.875 (0.011) & 7.877 (0.106) & 0.869 (0.011) & 7.645 (0.106) \\
**Older Adults** & 0.867 (0.010) & 6.676 (0.102) & 0.867 (0.010) & 6.666 (0.099) & 0.861 (0.010) & 6.448 (0.099) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Statistics of Image Reconstruction Performance under three strategies.
Figure 1: Image Reconstruction Performance for Gender Subgroups. In the figure, ‘F’ represents ‘Female’ and ‘M’ represents ‘Male’. This figure indicates performance gap between two gender subgroups in image reconstruction task under different strategies.
**Presence of Bias**: Focusing on the baseline ERM model, our results show that there is a significant performance difference between subgroups categorised by gender and age. Among the gender subgroups, the female group outperforms the male group. This difference is obvious in Figure 1, showing that the baseline model provides better performance for female subjects compared to male subjects. This difference is statistically significant in Table 3.
Among the three age groups, the results demonstrate an obvious performance gap. Referring to Table 2, the young adults group achieves the best performance in both metrics. Furthermore, the results indicate that as age increases, the reconstruction performance worsens (in both metrics). The trend is visually evident in Figure 2 and is statistically significant in Table 3.
**Is dataset imbalance the source of unfairness?** The performance under rebalancing strategies shows that the imbalance of data and the discrimination of training are not the major cause of bias. Specifically, in Table 2, when comparing the performance of the subgroups under different training strategies, the perfor
\begin{table}
\begin{tabular}{c|c c|c c} \hline \multirow{2}{*}{} & \multicolumn{2}{c|}{**Gender**} & \multicolumn{2}{c}{**Age Group**} \\ \cline{2-5} & SSIM & PSNR & SSIM & PSNR \\ \hline
**Baseline ERM** & 13.44\({}^{***}\) & 6.45\({}^{*}\) & 11.64\({}^{**}\) & 78.91\({}^{***}\) \\
**Subgroup Rebalancing** & 10.90\({}^{**}\) & 6.01\({}^{*}\) & 14.64\({}^{**}\) & 81.08\({}^{***}\) \\
**Minibatch Rebalancing** & 10.30\({}^{**}\) & 5.44\({}^{*}\) & 13.70\({}^{**}\) & 78.62\({}^{***}\) \\ \hline \end{tabular}
\end{table}
Table 3: Kruskal-Wallis ANOVA results for the Three Strategies Testing for Influence of “Gender” and “Age Group”. The results include Chi-Square values with statistical significance (p-value) indicated as \({}^{***}\)\(p<0.001\), \({}^{**}\)\(p<0.01\), \({}^{*}\)\(p<0.05\).
Figure 2: Image Reconstruction Performance for Age Subgroups under Three strategies. In the figure, ‘YA’ represents ‘Young Adults’, ‘MA’ represents ‘Middle-aged Adults’ and ‘OA’ represents ‘Older Adults’. This figure indicates performance gaps between age subgroups in image reconstruction task under different strategies.
These biases are also visually illustrated in Figures 1 and 2 and are again statistically significant in Table 3.
However, the Chi-Square values for the gender subgroups under the rebalancing strategies in Table 3 are reduced compared to the baseline ERM network. This reduction indicates that rebalancing either the training set or the minibatch may mitigate part of the bias, suggesting that dataset imbalance and training discrimination with respect to gender are contributing sources of bias, but not the main one. It is also noticeable that the rebalancing strategies reduce the performance of the dominant subgroup.
## 5 Discussion
**What is the source of unfairness**: We find that data imbalance and training discrimination do not significantly contribute to bias. Instead, the bias may stem from spurious correlations and inherent characteristics. Specifically, the model may focus on neuroanatomical features that are associated with demographic factors [11, 1]. In Figure 3, the relations between demographic features and neuroanatomy metrics including estimated Total Intracranial Volume (eTIV) as well as normalised Whole Brain Volume (nWBV) are analysed. Our results show that women tend to have smaller eTIV compared to men, and young adults have the highest nWBV among age subgroups. Thus, these differences in eTIV between gender and nWBV between age may result in spurious correlations that lead to bias, which requires further investigation in future work.
**Clinical Relevance**: It is noticeable that the difference in SSIM among subgroups is in the second or third decimal place in some cases. Although the small difference may not be clinically meaningful in practice, it can lead to additional errors and bias in downstream tasks such as segmentation and classification, ultimately leading to inaccurate diagnoses.
**Limitations**: Previous studies [16] have reported data imbalances among different racial groups due to geographic limitations of the datasets. In our analysis, due to the lack of racial data, the training set may still exhibit an imbalance in terms of race, even if we implement a rebalancing strategy.
Figure 3: Relations between Demographic Features and Neuroanatomy Metrics.
## 6 Conclusion
In this study, we conducted an initial analysis of fairness in DL-based image reconstruction tasks with respect to demographic characteristics, specifically gender and age. We employed three strategies to investigate the bias caused by these characteristics. Through the use of rebalancing strategies, we found that imbalanced training sets and training discrimination were not the major contributors to bias. However, further investigation is needed to identify the sources of bias in image reconstruction tasks. Correspondingly, we need to propose bias mitigation strategies to ensure fairness in DL-based image reconstruction applications.
## Acknowledgements
This work was supported in part by National Institutes of Health (NIH) grant 7R01HL148788-03. Y. Du and Y. Xue thank additional financial support from the School of Engineering, the University of Edinburgh. S.A. Tsaftaris also acknowledges the support of Canon Medical and the Royal Academy of Engineering and the Research Chairs and Senior Research Fellowships scheme (grant RCSRF1819\(\backslash\)8\(\backslash\)25), and the UK's Engineering and Physical Sciences Research Council (EPSRC) support via grant EP/X017680/1. The authors would like to thank Dr. Chen and K. Vilouras for inspirational discussions and assistance. Data used in Sec. 4.1 were provided by OASIS-1: Cross-Sectional: Principal Investigators: D. Marcus, R, Buckner, J, Csernansky J. Morris; P50 AG05681, P01 AG03991, P01 AG026276, R01 AG021910, P20 MH071616, U24 RR021382. |
2305.19563 | Zero-Shot Automatic Pronunciation Assessment | Automatic Pronunciation Assessment (APA) is vital for computer-assisted
language learning. Prior methods rely on annotated speech-text data to train
Automatic Speech Recognition (ASR) models or speech-score data to train
regression models. In this work, we propose a novel zero-shot APA method based
on the pre-trained acoustic model, HuBERT. Our method involves encoding speech
input and corrupting them via a masking module. We then employ the Transformer
encoder and apply k-means clustering to obtain token sequences. Finally, a
scoring module is designed to measure the number of wrongly recovered tokens.
Experimental results on speechocean762 demonstrate that the proposed method
achieves comparable performance to supervised regression baselines and
outperforms non-regression baselines in terms of Pearson Correlation
Coefficient (PCC). Additionally, we analyze how masking strategies affect the
performance of APA. | Hongfu Liu, Mingqian Shi, Ye Wang | 2023-05-31T05:17:17Z | http://arxiv.org/abs/2305.19563v1 | # Zero-Shot Automatic Pronunciation Assessment
###### Abstract
Automatic Pronunciation Assessment (APA) is vital for computer-assisted language learning. Prior methods rely on annotated speech-text data to train Automatic Speech Recognition (ASR) models or speech-score data to train regression models. In this work, we propose a novel zero-shot APA method based on the pre-trained acoustic model, HuBERT. Our method involves encoding speech input and corrupting them via a masking module. We then employ the Transformer encoder and apply k-means clustering to obtain token sequences. Finally, a scoring module is designed to measure the number of wrongly recovered tokens. Experimental results on speechocean762 demonstrate that the proposed method achieves comparable performance to supervised regression baselines and outperforms non-regression baselines in terms of Pearson Correlation Coefficient (PCC). Additionally, we analyze how masking strategies affect the performance of APA.
Hongfu Liu, Mingqian Shi, Ye Wang School of Computing, National University of Singapore, Singapore
{hongfu,m-shi,wangye}@comp.nus.edu.sg
**Index Terms**: automatic pronunciation assessment, zero-shot learning, self-supervised learning, HuBERT
## 1 Introduction
Learning a second language (L2) is a common requirement in bilingual or multilingual communities. However, L2 learners often struggle with achieving good proficiency in pronunciation. Computer-assisted pronunciation training (CAPT) is a notable application that enables language learners to effectively learn the pronunciation of new languages [1, 2]. CAPT provides feedback containing evaluation results, which can be automatically generated based on pronunciation, facilitating L2 learners in adjusting their pronunciation for improvement. Therefore, providing an overall assessment of pronunciation automatically is one of the primary objectives of CAPT.
Automatic pronunciation assessment has been extensively investigated over a prolonged period. Existing pronunciation assessment methods are implemented in the supervised setting. These approaches involve the usage of collected speech data with text annotations for training ASR models. Then the evaluation can be conducted based on the recognition results of ASR models. Goodness of Pronunciation (GoP) is one of the most commonly used metrics, aiming to provide phoneme-level scores for a given utterance. GoP requires calculating the log-posterior probability for each reference phoneme based on the contextual information [3, 4, 5]. On the other hand, there is an alternative research line that involves using speech data from non-native speakers with pronunciation scores annotated by domain experts to train regression models. Various features of speech data have been explored in this line, one of which is the phone-level features of speech [6, 7]. To enhance regression performance, [8] propose to use deep features transferred from the acoustic models of ASR. Using speech representations of pre-trained acoustic models such as wav2vec 2.0 or HuBERT also contributes to improving the regression performance by fine-tuning [9, 10]. Furthermore, multi-aspect pronunciation assessment at multiple granularities [11, 12] has been explored with multi-task supervised learning. However, there is a lack of unsupervised assessment approaches in the literature. All current pronunciation assessment methods require supervised signals to obtain the evaluation results.
Resource-efficient methods have been widely investigated for the low-resource scenario in the speech community [13, 14]. Nevertheless, it remains challenging to evaluate the quality of pronunciation using few or no data samples. Recent advances in Self-Supervised Learning (SSL) pre-trained language models (PLMs) have demonstrated strong few-shot and zero-shot learning abilities in the natural language processing community [15, 16] due to the knowledge acquired during the pre-training stage. PLMs are capable of performing downstream tasks via appropriate prompting with limited or even no data samples. However, the zero-shot ability has not been fully explored for SSL pre-trained acoustic models. This is because they learn at the acoustic level and it is challenging to learn linguistic representations from raw audio [17, 18], making it difficult to adapt them to downstream tasks without fine-tuning. While fine-tuning SSL pre-trained acoustic models with supervised data has been shown to be effective in automatic pronunciation assessment [9, 10], zero-shot pronunciation assessment has yet to be explored. Nevertheless, the acoustic-level knowledge acquired by SSL pre-trained acoustic models presents a viable option for zero-shot pronunciation assessment based on the unlabelled speech data observed during pre-training.
In this work, we propose a zero-shot pronunciation assessment approach that requires no annotated speech data. This is achieved by leveraging the SSL pre-trained acoustic model, HuBERT [19], for conducting the masked token prediction task. Our method involves encoding the waveform speech input into frame sequences and transforming them into corrupted sequences via a masking module. We then employ the Transformer Encoder of HuBERT and apply k-means clustering to obtain tokens of frame sequences and recovered tokens of corrupted sequences. Finally, a scoring module is designed to evaluate the pronunciation of a given speech by measuring the number of wrongly recovered tokens. Our proposed method is unsupervised and requires no fine-tuning. We conduct experiments on the speechocean762 dataset [7]. The experimental results demonstrate that the proposed method achieves comparable performance compared to supervised baselines and outperforms non-regression baselines in terms of the Pearson Correlation Coefficient.
## 2 Method
### Overview
An overview of our proposed method is shown in Figure 1. The method consists of three main steps. The first step is to feed the waveform speech audio into the convolutional neural network (CNN) encoder to obtain a frame sequence; the Transformer encoder then takes the frame sequence as input, and k-means clustering is applied to obtain the token sequences. The second step is to apply the masking module to the frame sequence from Step 1 and feed the masked sequence into the Transformer encoder, followed by k-means clustering, to obtain the recovered tokens of the masked spans. Finally, a scoring module measures the number of wrongly recovered tokens based on the outputs of Steps 1 and 2. The intuition is that for well-pronounced speech, the recovered tokens would be similar to the tokens at the corresponding positions obtained from the uncorrupted input, whereas for mispronounced speech, the recovered tokens would differ substantially from their counterparts.
### HuBERT Module
The HuBERT Module is adapted from the original HuBERT architecture [19]. This module consists of one CNN encoder, one Transformer encoder, one k-means clustering, and one masking module. Let \(X=\{x_{1},...,x_{T}\}\) denote the output of the CNN encoder with \(T\) frames. Then the Transformer encoder is employed to get latent representations of \(X\) that are further utilized to obtain the token sequences \(Z=\{z_{1},...,z_{T}\}\) through k-means clustering, where \(z_{t}\in[C]\) is a \(C\)-class categorical variable and \(t\in[T]\). \(z_{t}\) is also known as the hidden acoustic unit.
### Masking Module
To construct the masked token prediction task, we employ a masking strategy \(r\) on \(X\). If the set of indices to be masked is denoted by \(M\subset[T]\) for a sequence \(X\) with \(T\) frames, then the corrupted version is denoted by \(X^{*}=r(X,M)\), where \(x_{m}\) is replaced by a mask embedding \(x^{*}\) for \(m\in M\). Then we feed \(X^{*}\) into the same Transformer encoder and use the same k-means clustering. As a consequence, the output token sequence of masked spans is denoted by \(Z^{*}=\{z_{m}|m\in M\}\).
Masking strategy is of great importance in the proposed method. Basically, we aim to mask mispronounced segments and expect the SSL pre-trained acoustic model can recover them with correctly pronounced tokens. However, whether the speech is mispronounced and where the mispronunciation occurs are unknown due to our unsupervised setting. To address this issue, we propose two strategies that are used to mask out the mispronunciation segments.
#### 2.3.1 Random Masking
Random masking is a direct approach that is based on the masking strategy employed in pre-training. However, a single instance of random masking may have a lower probability of covering the mispronunciation component. To address this concern, we propose to repeat random masking \(k\) times for a given sequence \(X\). Specifically, we randomly select \(p\%\) of the frames in \(X\) as starting indices, and subsequently mask spans of \(l\) for each start index. These spans are mutually exclusive, with no overlap between them. By increasing the value of \(k\), it is possible to ensure that each frame is masked at least once.
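A minimal sketch of the repeated random masking is given below; the hyper-parameters \(p\) and \(l\) and the mask embedding follow the description above, while the rejection of overlapping spans and the tensor layout are our illustrative choices rather than the exact implementation.

```python
import torch

def random_mask_indices(T, p=0.2, l=5, generator=None):
    """Select roughly p*T non-overlapping start indices and mask spans of length l."""
    num_starts = max(1, int(p * T))
    masked, starts = set(), []
    for s in torch.randperm(T, generator=generator).tolist():
        span = set(range(s, min(s + l, T)))
        if span & masked:                 # keep spans mutually exclusive
            continue
        masked |= span
        starts.append(s)
        if len(starts) == num_starts:
            break
    return sorted(masked)

def corrupt(frames, mask_idx, mask_embedding):
    """Replace the masked frames with the learned mask embedding (frames: [T, D])."""
    corrupted = frames.clone()
    corrupted[mask_idx] = mask_embedding
    return corrupted
```

Repeating this procedure \(k\) times yields the index sets \(M_{1},\ldots,M_{k}\) used later by the scoring module.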
#### 2.3.2 Regular Masking
Regular masking is an alternative approach that masks frames in a rule-based way. This strategy involves segmenting the input into \(k\) slices of equal length. We then proceed to mask one of those segments at a time and perform inference. The process is repeated until every segment has been masked at least once. The number \(k\) of segmented slices determines the granularity of the segmentation.
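A corresponding sketch for regular masking is shown below, assuming segment boundaries are simply rounded to the nearest frame index.

```python
def regular_mask_indices(T, k):
    """Split the T frames into k equal-length slices; yield one slice of indices at a time."""
    bounds = [round(j * T / k) for j in range(k + 1)]
    for j in range(k):
        yield list(range(bounds[j], bounds[j + 1]))
```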
### Scoring Module
In order to assess the quality of speech pronunciation, we introduce the scoring module, which measures the number of incor
Figure 1: Overview of the zero-shot automatic pronunciation assessment.
rectly recovered tokens based on \(Z\) and \(Z^{*}\). Specifically, the average Mis-Recovered Token (aMRT) is proposed as a metric to measure the performance of pronunciation. Formally,
\[\mathbf{aMRT}=\frac{1}{k}\sum_{j=1}^{k}\sum_{i\in M_{j}}\delta(z_{i},z_{i}^{*})\]
where \(M_{j}\subset[T]\) represents the \(j\)-th set of indices to be masked, and function \(\delta\) is defined as:
\[\delta(z,z^{*})=\begin{cases}0,&z=z^{*}\\ 1,&z\neq z^{*}\end{cases}\]
A higher aMRT value corresponds to a greater number of mis-recovered tokens and thus a lower quality of pronunciation. To obtain the PCC results between our proposed metrics and ground-truth scores, we adopt the negative values of aMRT as our final metrics.
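The scoring module can be summarized by the following sketch, which mirrors the aMRT definition above; the token sequences and mask index sets are assumed to come from the HuBERT and masking modules.

```python
def amrt(Z, Z_star_runs, mask_runs):
    """Average number of Mis-Recovered Tokens (aMRT) over the k masking runs.

    Z:            reference token sequence z_1..z_T from the uncorrupted input.
    Z_star_runs:  list of k recovered token sequences, one per corrupted pass.
    mask_runs:    list of k index sets M_j that were masked in each pass.
    """
    k = len(mask_runs)
    total = sum(
        sum(1 for i in M_j if Z[i] != Z_star[i])
        for M_j, Z_star in zip(mask_runs, Z_star_runs)
    )
    return total / k

def pronunciation_score(Z, Z_star_runs, mask_runs):
    # Negative aMRT, so that a higher score means better pronunciation (used for PCC).
    return -amrt(Z, Z_star_runs, mask_runs)
```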
## 3 Experiments
### Dataset
We conduct experiments on the dataset speechocean762 [7], which is specifically designed for pronunciation assessment. This open-source speech corpus is composed of 5,000 English utterances collected from 250 non-native speakers, half of whom are children. The corpus provides rich label information including phoneme, word, and sentence levels, and includes assessment scores ranging from 0 to 10 annotated by five experts. Our proposed approach is evaluated at the sentence level on the test set, which contains 2,500 utterances. We choose this public dataset for easy reproduction and comparison.
### Baseline Models
We compare our proposed method with regression-based and non-regression-based baselines. The regression-based baselines include GoP [3, 20], DeepFeature1[8], and the state-of-the-art GOPT [12], all of which are supervised with human-annotated pronunciation scores. The non-regression-based baseline, on the other hand, utilizes the average phoneme-level GoP over the entire sentence as the measurement, and is referred to as non-reg GoP. This method does not require score annotations but instead uses a supervised ASR model.
Footnote 1: DeepFeature refers to the methods in [8] using deep features of the ASR acoustic model
### Experimental Setup
We utilize the HuBERT-Base2 model and adopt the CNN encoder, Transformer encoder, and k-means clustering in the experiments. HuBERT-Base is pre-trained on LibriSpeech-960 [21], and the k-means with 100 clusters is fitted on the LibriSpeech train-clean-100 split as per [22], using intermediate representations from HuBERT-Base. The output of the 7th layer of the Transformer Encoder is chosen as the default feature for clustering, as the resulting acoustic units perform well in discrimination tests [19, 17, 23]. We set the masking probability \(p=20\%\), masking length \(l=5\), and number of repetitions \(k=50\) as the default. Each experiment is repeated three times with three different random seeds \(\{13,21,100\}\), and the mean and standard deviation of the results are reported. Prior to performing the inference steps, all input audio is resampled to a 16 kHz sampling rate. The non-reg GoP is computed using Kaldi [24] to obtain the average phoneme-level GoP of the entire sentence. The ASR model utilized in this calculation is the Librispeech ASR Chain Model3, as per [12].
Footnote 2: [https://github.com/pytorch/fairseq](https://github.com/pytorch/fairseq)
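For readers who wish to reproduce the acoustic-unit extraction with off-the-shelf tools, the sketch below uses torchaudio's HuBERT-Base checkpoint and a pre-fitted 100-cluster k-means model; this is an assumption-laden substitute for our fairseq-based setup, and the layer-indexing convention should be checked against the specific library version.

```python
import torch
import torchaudio

bundle = torchaudio.pipelines.HUBERT_BASE            # 16 kHz HuBERT-Base checkpoint
model = bundle.get_model().eval()

def acoustic_units(waveform, kmeans, layer=7):
    """waveform: [1, num_samples] at 16 kHz; kmeans: a 100-cluster model fitted on
    layer-7 HuBERT features (e.g. a sklearn.cluster.KMeans object loaded from disk)."""
    with torch.inference_mode():
        feats, _ = model.extract_features(waveform, num_layers=layer)
    layer_feats = feats[-1].squeeze(0)                # [T, 768] features of the chosen layer
    return kmeans.predict(layer_feats.numpy())        # hidden acoustic units z_1..z_T
```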
### Main Results
Two comparative studies are conducted to assess the effectiveness of the proposed method. The first study involves PCC performance comparison between our proposed method with regression-based and non-regression-based baselines, while the second study compares the PCC performance of different masking strategies.
The performances of the regression-based and non-regression-based baselines are presented in Table 1. The results indicate that, compared to the regression-based baselines, the proposed method lags behind the basic supervised baseline by a small margin of 0.04 PCC, although the gap to the state-of-the-art supervised baseline is larger at 0.14 PCC. Notably, the proposed method relies only on the acoustic knowledge of HuBERT-Base acquired during pre-training, without the use of annotated scores.
Furthermore, in comparison with the non-regression-based baseline, our proposed method shows a performance improvement of 0.03 PCC over the non-reg GoP. It is noteworthy that non-reg GoP requires an ASR model, while our method does not, underscoring the effectiveness of our ASR-free approach.
Table 2 presents the performance comparison of two masking strategies employed in this study. The results show that random masking achieves superior performance with an improvement of 0.014 PCC over regular masking. We conjecture that this may be due to the fact that the input distribution with random masking is closer to the input distribution during pretraining, leading to enhanced performance. In addition, the experimental results reveal that random masking exhibits a low variance, indicating the stability of the method.
\begin{table}
\begin{tabular}{c c} \hline \hline
**Model** & **PCC** \\ \hline
**Regression based** & \\ \hline GoP [3] & 0.64 \\ GoP(2BLSTM+MLP) [20] & 0.67 \\ DeepFeature [6] & 0.72 \\ GOPT [12] & 0.74 \\ \hline
**Non-regression based** & \\ \hline non-reg GoP & 0.57 \\ Ours & 0.60 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between our method with regression-based and non-regression-based baselines on speechocean762
\begin{table}
\begin{tabular}{c c} \hline \hline
**Masking Strategy** & **PCC** \\ \hline Random Masking & \(0.595\pm 0.002\) \\ Regular Masking & \(0.581\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of two masking strategies. The standard derivation of Random Masking is reported.
### Impact of masking hyperparameters
#### 3.5.1 Random Masking
In order to further examine the impact of various hyperparameters of random masking, including masking probability, masking length, and feature layers used for clustering on the final results, three additional experiments are carried out. The results are presented in Figure 2.
Figure 2(a) illustrates the impact of mask probability on PCC results, with the mask probability ranging from 0.1 to 0.5 in steps of 0.1. The mask length is set to 5, and the feature layer is set to 7. The results indicate that a mask probability of 0.3 yields the best performance, while both higher and lower mask probabilities produce inferior outcomes. This may be attributed to the fact that a high mask probability can discard essential information required for reconstruction, whereas a low mask probability decreases the chance of masking the mispronounced parts.
Figure 2(b) shows how the length of each masked span affects the PCC results. The mask length ranges from 2 to 10 in steps of 2, while the mask probability is set to 0.2 and the feature layer is set to 7. The curve suggests a linear decrease in performance as the length increases. This phenomenon may stem from the pre-trained HuBERT-Base's limited ability to recover a long masked span from the context.
Apart from the aforementioned factors, this study also investigates the degree to which the features used for clustering contribute to pronunciation assessment. To this end, features from Transformer Encoder layers 7 to 12 are examined. The outcomes presented in Figure 2(c) reveal that using features from the 9th layer results in the best PCC performance. Generally, features from the 7th to 10th layers are useful for pronunciation assessment, whereas deeper features lead to poorer performance.
#### 3.5.2 Regular Masking
For regular masking, we mainly investigate the impact of the slice number on the PCC results, namely how the mask granularity affects the PCC results. The results are presented in Figure 3. Our findings suggest that a finer granularity of a single mask span does not necessarily lead to improved performance. One potential explanation is that the use of a single mask span causes a shift from the input distribution, leading to poor performance. In addition, shorter masked spans may fail to cover entire words or even phonemes, which can adversely affect the results.
## 4 Discussion
While our zero-shot method achieves results comparable to supervised methods, it is essential to acknowledge that our method differs from the canonical-text-based pronunciation assessment. Our method draws on the acoustic knowledge obtained during pre-training, and thus, even if the transcription is different from the canonical text, a speech that is accurately pronounced may still receive a high score. Moreover, our method is limited to sentence-level assessment, and the exploration of unsupervised pronunciation assessment at the phoneme and word levels will be left as future work. The objective of this study is to establish a baseline and provide a pilot study of unsupervised pronunciation assessment.
## 5 Conclusion
In this paper, we present a zero-shot automatic pronunciation assessment approach. Instead of training regression models or using ASR models to compute GoP, we directly utilize a SSL pre-trained acoustic model and use the acoustic knowledge acquired from pre-training. To perform the ASR-free pronunciation assessment, we design two masking strategies and a novel evaluation metric to score the pronunciation of given speeches at the sentence level. Experimental results on speechocean762 achieve comparable performance to the supervised regression-based baseline and outperform the non-regression-based baseline. In the future, we hope to extend this research line of unsupervised pronunciation assessment to phoneme and word levels.
## 6 Acknowledgements
The authors would like to thank anonymous reviewers for their valuable suggestions. This project is funded in part by a research grant MOESOL-2021-0017 from the Ministry of Education in Singapore.
Figure 3: Impact of slice number on PCC results
Figure 2: Impact of (a) mask probability, (b) mask length, and (c) feature layer on PCC results |
2308.16874 | D-VAT: End-to-End Visual Active Tracking for Micro Aerial Vehicles | Visual active tracking is a growing research topic in robotics due to its key
role in applications such as human assistance, disaster recovery, and
surveillance. In contrast to passive tracking, active tracking approaches
combine vision and control capabilities to detect and actively track the
target. Most of the work in this area focuses on ground robots, while the very
few contributions on aerial platforms still pose important design constraints
that limit their applicability. To overcome these limitations, in this paper we
propose D-VAT, a novel end-to-end visual active tracking methodology based on
deep reinforcement learning that is tailored to micro aerial vehicle platforms.
The D-VAT agent computes the vehicle thrust and angular velocity commands
needed to track the target by directly processing monocular camera
measurements. We show that the proposed approach allows for precise and
collision-free tracking operations, outperforming different state-of-the-art
baselines on simulated environments which differ significantly from those
encountered during training. Moreover, we demonstrate a smooth real-world
transition to a quadrotor platform with mixed-reality. | Alberto Dionigi, Simone Felicioni, Mirko Leomanni, Gabriele Costante | 2023-08-31T17:21:18Z | http://arxiv.org/abs/2308.16874v2 | # D-VAT: End-to-End Visual Active Tracking
###### Abstract
Visual active tracking is a growing research topic in robotics due to its key role in applications such as human assistance, disaster recovery, and surveillance. In contrast to passive tracking, active tracking approaches combine vision and control capabilities to detect and actively track the target. Most of the work in this area focuses on ground robots, while the very few contributions on aerial platforms still pose important design constraints that limit their applicability. To overcome these limitations, in this paper we propose D-VAT, a novel end-to-end visual active tracking methodology based on deep reinforcement learning that is tailored to micro aerial vehicle platforms. The D-VAT agent computes the vehicle thrust and angular velocity commands needed to track the target by directly processing monocular camera measurements. We show that the proposed approach allows for precise and collision-free tracking operations, outperforming different state-of-the-art baselines on simulated environments which differ significantly from those encountered during training.
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
## I Introduction
Micro aerial vehicles (MAVs) are gaining increasing interest thanks to their agility and low cost, which make them suitable for a wide variety of robotic tasks, especially those performed in cluttered or dangerous environments. Applications include transportation, exploration, surveillance, and tracking [1]. In this paper, we focus on the visual active tracking (VAT) task, which requires a _tracker_ vehicle to maintain visual contact with a dynamic _target_. In contrast to passive tracking, where the pose of the camera is fixed, active tracking approaches actively regulate the camera pose by suitably controlling the vehicle, in order to keep the target inside the camera field-of-view (FoV). The VAT problem is far more challenging than passive tracking as it requires to directly map high-dimensional image data into suitable control actions. Previous research on this problem combined a dedicated perception module (_e.g._, an object detector) with a separate closed-loop control module for the vehicle motion [2, 3, 4]. This approach has two fundamental limitations: (i) the two modules are designed separately and not jointly optimized; (ii) their combination requires extra effort for tuning and implementation.
A viable alternative to overcome these drawbacks is to adopt end-to-end deep reinforcement learning (DRL), which has already shown impressive results in many fields of robotics [5, 6, 7, 8]. Recently, this paradigm has been explored for VAT [9, 10]. Most of the related works focus on ground robots and take advantage of the physical characteristics of these platforms (_i.e.,_ low dimensionality of the configuration space and limited number of possible actions) to facilitate the design of VAT policies. However, much less attention has been devoted to more complex platforms such as MAVs, which require a more sophisticated policy to be learned by the DRL agent. State-of-the-art (SotA) works have addressed this issue by relying on some simplifying assumptions, _e.g.,_ by ignoring the vehicle dynamics [11] or by constraining the possible control actions to a predefined subset of the action space [12]. Solutions based on these simplifications are, in general, less robust and performing.
In this paper, we aim to remove these assumptions and propose D-VAT, a novel end-to-end DRL-based continuous control model for visual active tracking that is tailored to MAV systems. D-VAT relies on a monocular setup, _i.e.,_ it requires only an RGB image stream collected by an onboard camera to directly compute the thrust and angular velocity commands needed to track the target with high accuracy (see [13] for a justification of such commands). To the best of our knowledge, this is the first end-to-end approach that solves the VAT problem for MAVs without severely constraining the motion of the target or the tracker vehicle. We compare D-VAT to both model-based and data-driven SotA strategies on photorealistic simulated environments considerably different from those employed during training, where it achieves a much better tracking performance than these methods.
The rest of this work is organized as follows: Section II contains literature review and details the paper contribution; Section III provides the preliminary definitions; Section IV formalizes the considered tracking problem; Section V describes the experiments and discusses the results; Section VI
Fig. 1: Overview of the VAT task. The _tracker_ MAV (blue) adjusts its position and orientation so as to keep the _target_ MAV (red) at the center of the camera FoV and at a predefined distance. Our approach exploits an end-to-end DRL-based VAT method that directly maps RGB images into thrust and angular velocity commands that are fed to the tracker.
draws the conclusions and outlines future research directions.
## II Related Work
In recent years, VAT has become a central research topic in robotics. VAT applications consider either pan-tilt-zoom (PTZ) vision sensors attached to a fixed base or cameras mounted on robotic vehicles to meet the goal of keeping the tracked object in sight. For instance, [14] presents a visual tracking solution that enables a PTZ camera to track the behavior of a moving person in surveillance applications. Along the same line, [15] proposes a two layer architecture for real-time human motion tracking. In the context of mobile robots, VAT takes advantage of the control degrees of freedom of the vehicle to maintain the visibility of the tracked object. Most of the related approaches employ modular architectures that combine passive perception and motion control components [2, 3, 4]. In particular, [16] couples the perception module with a low-level controller based on DRL. The former computes semantic segmentation maps from RGB images to obtain an intermediate representation that facilitates the agent in controlling the vehicle. Despite the significant results achieved by modular approaches such as the above ones, the combination of perception and control components poses, in general, important challenges. First, the modules are designed independently and not jointly optimized, reducing the effectiveness of the overall pipeline. Secondly, their integration is usually based on several tuning parameters whose optimal values are non-trivial to determine. Moreover, a performance drop in one module might cause the overall system to fail.
The aforementioned challenges can be addressed by leveraging DRL techniques [8, 17, 18]. A vast literature is available on DRL-based VAT approaches for ground vehicle systems. [19] proposes an end-to-end deep neural network architecture to train a DRL agent in simulated environments and takes advantage of domain randomization in order to favor generalization to real-world scenarios. [20] develops an asymmetric dueling training procedure employing an adversarial target that stimulates the development of an effective policy. In [10], the assumption of having the target within the camera FoV at the beginning of the maneuver is removed, so that the agent is able to explore an unknown environment, find the target and track it. All these approaches feature a discrete action space and therefore they cannot explore the full performance envelope of the vehicle. In fact, the resulting maneuvers are non-smooth and prone to losing visual contact with the target. An end-to-end architecture that exploits continuous actions is presented in [9].
Compared to ground robots, the design of learning-based policies for MAVs is significantly more challenging. In [21], a multi-layer perceptron is coupled with a low-level PID controller in order to stabilize the MAV hovering configuration. This method employs absolute position measurements provided by motion capture system, and does not address the VAT problem. A VAT solution is proposed in [22] to allow a MAV to fly and track a moving object. In particular, the control system of the MAV is designed to track ground targets by processing down-looking images, which precludes the application of the method to scenarios featuring front-looking cameras and flying targets. [11] presents an active tracking module for MAVs equipped with a pan-tilt camera that is able to track a person in various complex scenes. Nonetheless, the MAV dynamics are not fully exploited in the design of the control policy and the action space is discrete, which poses a hard limit on the achievable performance. A continuous action space is considered in [12], where a RL-based policy is coupled with a low-level PID control layer. However, the positioning of the MAV is constrained to a plane and thus the tracker is not free to move in 3D. Very few studies addressed the VAT problem for MAVs without relying on restrictive assumptions on the motion of the target-tracker pair. The recent work [23] tackles this problem by adopting an image-based visual servoing approach that features a modular design similar to those discussed at the beginning of this section. Nevertheless, such a design leads to position and orientation errors in the order of 1 m and 0.1 rad, respectively, and it requires full attitude information.
### _Contribution_
As highlighted by the previous literature review, an increasing number of studies is focusing on VAT in the context of MAV applications. Model-based techniques (see, _e.g.,_[23]) present design and integration issues that inherently limit their performance and entail tracking errors that may limit their applicability. On the other hand, existing learning-based approaches are affected by different constraints: (i) the target lies on a plane [22]; (ii) the tracker is controlled by discrete actions [11]; (iii) the agent is trained with continuous actions that are confined to a subset of the tracker action space [12]. To overcome these limitations, in this paper we provide the following contributions:
* We propose D-VAT, a novel end-to-end DRL continuous control model for VAT applications involving MAVs.
* The proposed DRL policy directly maps RGB image data into thrust and angular velocity commands, and does not make restrictive assumptions on the trajectories of both the tracker and the target.
* We show the benefits of D-VAT by comparing it against different model-based and data-driven SotA approaches. Our approach outperforms the baselines also in scenarios that differ substantially from the training ones, demonstrating remarkable generalization capabilities.
## III Preliminary Definitions
The optimization of RL models requires a significant number of interactions with the environment and this number becomes massive when deep approximators come into play. In practice, this excludes the possibility of using real MAVs to collect interaction episodes, both for efficiency and safety reasons. To overcome this issue, highly photorealistic simulation frameworks can be used to generate an unlimited amount of episodes and train the DRL models without any physical risk to the vehicle. In this work, we follow this practice and optimize our D-VAT model in simulated environments.
Before detailing the characteristics of D-VAT and its training procedure, in this section we describe the dynamic model which is integrated into the simulation engine to generate realistic motions. In particular, we follow [24] and consider a surrogate model in which the tracker is controlled by thrust and angular velocity inputs. The model is given by:
\[\begin{aligned}\ddot{p}&=\frac{f}{m}R_{3}-g,\\ \dot{R}&=R\,[\omega]_{\times},\end{aligned}\tag{1}\]
In system (1), \(p\) and \(R\) are the tracker absolute position and orientation, while \(m\) and \(g=[0\ 0\ 9.8]^{\top}\mathrm{m\,s^{-2}}\) are the vehicle mass and the gravity vector, respectively. Moreover, \(f\) and \(\omega\) indicate the collective thrust and the angular velocity inputs. The notation \([\omega]_{\times}\) refers to the skew-symmetric representation of vector \(\omega=[\omega_{x}\,\omega_{y}\,\omega_{z}]^{T}\). Since our DRL optimization framework is discrete-time, we apply a zero-order-hold discretization to system (1) and denote by \(z(k)\) the value taken by a signal \(z(t)\) at the sampling instant \(t=kt_{s}\), where \(t_{s}\) is the sampling time. The motion of the target is modeled by a parameterized class of trajectories denoted by \(p_{r}(k)\), as detailed in Section IV-D. It is important to highlight that D-VAT is trained in a model-free manner and has no explicit information about the dynamics (1). The simulation model is only used to generate realistic MAV trajectories.
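For concreteness, a possible discrete-time implementation of the surrogate model is sketched below; the Euler translational update and the matrix-exponential attitude update are our illustrative choices and not necessarily the integrator used by the simulation engine.

```python
import numpy as np
from scipy.linalg import expm

G = np.array([0.0, 0.0, 9.8])                        # gravity vector g

def skew(w):
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def zoh_step(p, v, R, f, omega, m, ts):
    """One zero-order-hold step of the surrogate tracker model (1)."""
    a = (f / m) * R[:, 2] - G                        # ddot{p} = (f/m) R_3 - g
    p_next = p + ts * v + 0.5 * ts**2 * a
    v_next = v + ts * a
    R_next = R @ expm(skew(omega) * ts)              # dot{R} = R [omega]_x
    return p_next, v_next, R_next
```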
## IV Approach
### _Problem Formulation_
The goal of VAT is to control the motion of a tracker agent equipped with a vision sensor, so as to maintain the target within the FoV of the camera and at a predefined distance. In this paper, we assume that both the tracker and the target are MAVs that are free to move in 3D. The vision sensor is an RGB camera whose reference frame is coincident with the tracker body-fixed frame. In particular, the optical axis is aligned with the \(x\)-axis direction. At the beginning of the VAT task, the target is located ahead of the tracker (within the camera FoV), and starts moving along a time-varying trajectory. The tracker employs only the image stream coming from its front camera as a source of information and computes the thrust and angular velocity commands needed to meet the control goal. Similarly to other complex navigation and control tasks, VAT can be tackled by formulating a suitable reinforcement learning (RL) problem [25]. In particular, we treat the tracker as an RL agent which repeatedly interacts with an environment over a series of independent episodes. For each discrete timestep, the agent receives an observation \(o(k)\), a reward \(r(k)\), and produces an action \(u(k)\). The observation is given by the aforementioned sequence of camera images, while the action is a continuous command that specifies the thrust and the angular velocity of the tracker MAV, _i.e.,_\(u(k)=(f(k),\omega(k))\). The reward is defined in Section IV-C.
### _Deep Reinforcement Learning Strategy_
The proposed end-to-end VAT strategy relies on a monocular setup and requires only an RGB image stream collected by the onboard camera to directly compute the MAV control commands. RGB images are partial observations of the full MAV state and are composed of a large number of pixels that form a huge observation space. For this reason, it is not viable to train the agent using classical RL algorithms, and more advanced solutions based on Deep Neural Network (DNN) approximators must be applied. In particular, we adopt the _asymmetric actor-critic_ formulation [26, 10]. According to this framework [25], we design two different DNN architectures for the _actor_ (A-DNN) and for the _critic_ (C-DNN). The former learns the optimal policy \(u(k)=\pi(o(k))\) with respect to the given task, while the latter aims to evaluate such a policy during the training phase. The asymmetric structure of this framework allows the critic network to be fed with more privileged information than the actor network, thus stimulating the development of an effective policy evaluation. It is worth remarking that the A-DNN is the only agent operating at inference time.
The A-DNN is a convolutional neural network composed of a ResNet18 [27] and three additional hidden layers, each one characterized by 512 neurons and ReLU activations. In order to learn temporal relations, the proposed A-DNN design processes a sequence of \(H\) front-view camera images. This turned out to play a key role in improving the tracking performance. The image sequence is given by
\[o(k)\!=\!\left[\begin{array}{cc}I(k)&I(k-1)&\ldots&I(k-H+1)\end{array} \right]^{T}, \tag{2}\]
where \(I(k)\) is the RGB frame acquired at the \(k\)-th time step. Moreover, the A-DNN extracts 512 visual features from each image through its convolutional block. Subsequently, the \(H\times 512\) features are concatenated and fed to the linear layers to compute the action. The control actions are saturated to be consistent with the physical characteristics of MAV actuators. In particular, a \(\tanh\) saturation is adopted to confine the action values computed by the A-DNN within prescribed limits (see angular rate and thrust limits in Table I).
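A PyTorch sketch of an actor network with the stated structure is given below (a ResNet18 backbone shared across the \(H\) frames, three 512-neuron ReLU layers, and a \(\tanh\)-saturated output); the output dimension and the scaling constant are placeholders for the thrust and angular-rate limits of Table I.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ActorDNN(nn.Module):
    def __init__(self, H=3, action_dim=4, u_max=1.0):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                  # 512-d visual features per frame
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(H * 512, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, action_dim),
        )
        self.u_max = u_max                           # placeholder for the Table I limits

    def forward(self, images):
        # images: [B, H, 3, height, width]; the backbone is applied to every frame
        B, n_frames = images.shape[:2]
        feats = self.backbone(images.flatten(0, 1)).view(B, n_frames * 512)
        return self.u_max * torch.tanh(self.head(feats))   # saturated (f, omega) command
```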
The C-DNN design consists of a fully connected neural network with three hidden layers, each one composed of 256 neurons and ReLU activations. The correct selection of the inputs to the C-DNN is, in general, nontrivial. In this work, we explored different possibilities and selected the input set that we found to be the most informative without unnecessarily increasing the network complexity. In particular, we define the observation of the C-DNN as a vector \(o_{c}(k)\) representing the relative state as follows:
\[o_{c}(k)\!=\!\left[\begin{array}{c}y(k)\\ v(k)\\ a(k)\end{array}\right]\!=\!\left[\begin{array}{c}R(k)^{T}[p_{r}(k)-p(k)]\\ R(k)^{T}[\dot{p}_{r}(k)-\dot{p}(k)]\\ R(k)^{T}[\ddot{p}_{r}(k)-\ddot{p}(k)]\end{array}\right], \tag{3}\]
where \(y(k)\), \(v(k)\) and \(a(k)\) denote respectively the position, velocity and acceleration of the target relative to the tracker, expressed in the tracker body-fixed frame. The C-DNN output is a scalar representing the estimated _action-value_\(Q_{\pi}(o_{c}(k),u(k))\). The overall design is illustrated in Fig. 2.
### _Optimization_
The A-DNN and the C-DNN are both trained by using the popular RL-based Soft Actor-Critic (SAC) framework
[28], where the reward signal \(r(k)\) is specifically designed to address the VAT problem in MAVs scenarios, taking into account the distinctive characteristics and requirements of the considered control task. In particular, the main control objective is to align the target with the center of the tracker camera FoV while keeping a predefined distance between the two vehicles. To this purpose, the reward is defined as:
\[r_{e}(k)=(r_{x}(k)\,r_{y}(k)\,r_{z}(k))^{\beta}, \tag{4}\]
where \(\beta>0\) is a suitable exponent and
\[r_{x} = \max(0,1-|y_{x}(k)-d_{r}|),\] \[r_{y} = \max\left(0,1-\left|\frac{2}{A_{\text{FoV}}}\arctan\left(\frac{y_ {y}(k)}{y_{x}(k)}\right)\right|\right), \tag{5}\] \[r_{z} = \max\left(0,1-\left|\frac{2}{A_{\text{FoV}}}\arctan\left(\frac{y _{z}(k)}{y_{x}(k)}\right)\right|\right).\]
In Eq. (5), \(r_{x}\) is maximal when the first entry of \(y(k)=[y_{x}(k)\,y_{y}(k)\,y_{z}(k)]^{T}\) matches the desired distance \(d_{r}\) to the target (\(d_{r}\) is specified along the \(x\)-axis of the body-fixed frame, which is assumed coincident with the optical axis). Moreover, \(r_{y}\) and \(r_{z}\) are functions that encourage the agent to keep the target at the center of the image plane and thus away from the camera FoV limits, being \(A_{\text{FoV}}\) the FoV amplitude in radians. The reward term \(r_{e}(k)\) in (4) is clipped in the interval \([0,\ 1]\) to favor the learning process, and it is maximal (\(r_{e}=1\)) when the VAT goal is achieved.
Two additional reward terms are included in the formulation to optimize also the control effort and the MAV linear velocity. In particular, we define a velocity penalty \(r_{v}\) and a control effort penalty \(r_{u}\) as follows:
\[r_{v}(k)=\frac{\|v(k)\|}{1+\|v(k)\|},\ \ r_{u}(k)=\frac{\|u(k)\|}{1+\|u(k)\|}. \tag{6}\]
Collision avoidance constraints are taken into consideration by penalizing the RL agent whenever \(\|y(k)\|<d_{m}\), where \(d_{m}\) is the minimum distance allowed.
The reward function is obtained by adding up all the above contributions, which results in:
\[r(k)=\begin{cases}r_{e}(k)-k_{v}r_{v}(k)-k_{u}r_{u}(k),&\|y(k)\|>d_{m}\\ -k_{c},&\text{otherwise}\end{cases}\tag{7}\]
where \(k_{v}\), \(k_{u}\) and \(k_{c}\) are positive weighting constants.
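A sketch of this reward is given below; the gains, the exponent \(\beta\) and the FoV amplitude are placeholder arguments standing in for the values listed in Table I.

```python
import numpy as np

def reward(y, v, u, d_r, A_fov, d_m, beta=1.0, k_v=0.1, k_u=0.1, k_c=10.0):
    """y, v: target position/velocity relative to the tracker, in the body frame.
    u = (f, omega); the gains and beta are placeholders for the Table I values."""
    if np.linalg.norm(y) <= d_m:                     # collision penalty
        return -k_c
    r_x = max(0.0, 1.0 - abs(y[0] - d_r))
    # arctan2 is a numerically safe equivalent of arctan(y_y / y_x) when y_x > 0
    r_y = max(0.0, 1.0 - abs(2.0 / A_fov * np.arctan2(y[1], y[0])))
    r_z = max(0.0, 1.0 - abs(2.0 / A_fov * np.arctan2(y[2], y[0])))
    r_e = np.clip((r_x * r_y * r_z) ** beta, 0.0, 1.0)
    r_v = np.linalg.norm(v) / (1.0 + np.linalg.norm(v))
    r_u = np.linalg.norm(u) / (1.0 + np.linalg.norm(u))
    return r_e - k_v * r_v - k_u * r_u
```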
### _Experimental Setup_
The A-DNN and C-DNN have been optimized by using the Stable-Baselines3[30] implementation of SAC, which we customize2 to extend it to the _asymmetric actor-critic_ formulation of our approach. The networks have been optimized for approximately 18,000 episodes executed in 6 parallel environments, using the Adam optimizer with a learning rate of 0.0003, a discount factor \(\gamma\) of 0.99, and a batch size of 64. Each training episode has a maximum duration of \(40\) s, and the observation sequence length for the A-DNN is set to \(H=3\). The other hyper-parameters and settings are reported in Table I. The training process is performed on a workstation equipped with 2 x NVIDIA RTX 2080Ti with 11GB of VRAM, an Intel Core processor i7-9800X (3.80GHz x16) and 64 GB of DDR4 RAM.
Footnote 2: The source code will be available upon acceptance.
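For completeness, a minimal sketch of how the stated hyper-parameters map onto the off-the-shelf Stable-Baselines3 SAC interface is given below; our actual training loop uses a customized asymmetric actor-critic variant and a custom simulated environment factory (here a placeholder argument), so this is only an approximation of the setup.

```python
from stable_baselines3 import SAC
from stable_baselines3.common.vec_env import SubprocVecEnv

def train_dvat(make_env, total_timesteps=1_000_000):
    """make_env: zero-argument factory returning the simulated VAT environment
    (placeholder); total_timesteps is likewise an illustrative budget."""
    env = SubprocVecEnv([make_env for _ in range(6)])    # 6 parallel environments
    model = SAC(
        "CnnPolicy",                                     # image observations for the actor
        env,
        learning_rate=3e-4,                              # Adam learning rate
        gamma=0.99,                                      # discount factor
        batch_size=64,
        verbose=1,
    )
    model.learn(total_timesteps=total_timesteps)
    return model
```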
Our approach is tested on two environment classes: the first one contains scenes similar to those used during the training phase, although with different room shapes, objects disposition, and textures (we refer to these scenes as Box Environments). The second is, instead, aimed at testing the generalization capabilities of D-VAT and has more complex and photo-realistic environments, _i.e.,_ an outdoor urban scenario (Urban), an outdoor park environment (Park), and an indoor scene of an office building (Office). These are depicted in Fig. 4 and are significantly different from the ones used to train our model.
We run a total of 20 maneuver realizations for each test environment. In each run, the tracker is spawned at a random initial position, while the target is initially placed in front of the tracker at the optimal distance. To assess the generalization capabilities of our approach, we test also target trajectories that differ from the training ones. In particular, we consider constant setpoints and rectilinear trajectories with different shapes such as ramp-like and cubic. In the following, the D-VAT agent is compared to the SotA baselines described hereafter.
### _Baselines_
**Active Object Tracking (AOT)[19]**. In this approach, the agent is trained to track predefined target trajectories by using discrete actions. To comply with the dynamic model (1), which takes as input the collective thrust and angular velocity of the MAV, we define the action set as \(\{+\Delta\omega_{x},-\Delta\omega_{x},+\Delta\omega_{y},-\Delta\omega_{y},+\Delta\omega_{z},-\Delta\omega_{z},+\Delta f,-\Delta f,no\_op\}\), where the operator \(\Delta\) indicates a fixed increment of thrust or angular velocity and \(no\_op\) prescribes a zero thrust or angular velocity increment. The size of the \(\Delta\) increments has been manually tuned to meet the task specifications.
**AD-VAT+[20]**. The model policy is learned during the adversarial dueling against the target, which is itself an RL agent. This approach employs the same discrete action space as the AOT baseline.
**C-VAT[9]**. The model is optimized using a target that is randomly spawned in the surrounding of the tracker. In particular, a heuristic trajectory generator (HTG) is combined with a suitable set of auxiliary losses in order to facilitate the convergence of the training process. Herein, we implement the HTG with a Linear Quadratic Gaussian (LQG) controller that exploits ground truth pose information to control the tracker so as to achieve the VAT goal. Moreover, the auxiliary losses in [9] have been extended to a 3D environment.
**SiamRPN++ PID**. This modular baseline combines the object tracker SiamRPN++[31] with a standard MAV control architecture featuring two Proportional-Integral-Derivative (PID) feedback loops. In order to achieve the VAT goal, the outer loop processes the bounding box information provided by SiamRPN++ (_i.e.,_ position and size of the bounding box enclosing the target) to compute roll, pitch, yaw, and thrust signals that are fed to the inner (attitude control) loop. The PID parameters have been tuned using a trial-and-error approach on relevant scenarios, so as to achieve a suitable trade-off between reactivity to tracking errors and sensitivity to noise. The inner loop needs attitude information and, in our tests, we provide the ground-truth attitude angles returned by the simulator. This baseline is favored with respect to D-VAT because it has access to privileged information, _i.e.,_ the attitude of the MAV.
Fig. 4: Images from the photo-realistic environments employed to test the generalization capabilities of D-VAT. From left to right: an urban setting (Urban), a park environment (Park), and an office space (Office). It should be noted that the visual appearance of these scenarios differs significantly from the scenes used during training.
Fig. 3: Examples of the training environment randomization. The tracker (blue) and the target (red) MAVs are spawned in a large room with random characteristics including walls height, objects shape and disposition, textures, light conditions, and presence of distracting objects in the background.
**SiamRPN++ LQG**. This modular baseline combines SiamRPN++ with a model-based design that couples feedback linearization and a linear control law (see, _e.g.,_[32]). In particular, we adopt a Linear-Quadratic-Gaussian (LQG) design. The resulting policy uses the bounding box information to regulate directly the thrust and angular velocity of the tracker so as to meet the VAT objective. The LQG weights have been tuned extensively to achieve a fair trade-off between performance and robustness. This baseline requires attitude information (to linearize the MAV dynamics by feedback) and hence it is favored with respect to D-VAT.
### _Metrics_
To evaluate the performance of D-VAT against that of the baselines, we adapted the tracking metrics in [9, 10] to a 3D environment. For convenience, the metrics are defined by expressing the ground-truth position of the target relative to the tracker in a spherical coordinate system, whose axes are aligned with those of the tracker body-fixed frame. The spherical coordinates are denoted by \((\rho,\theta,\varphi)\). The considered metrics are detailed below.
**Distance Score**: measures the ability of the tracker to maintain the desired distance from the target, as follows
\[\tilde{P}_{\rho}(k)=\begin{cases}\max\left(0,1-2|\rho(k)-d_{r}|\right),&\text{if}\ \ |\theta(k)|<\frac{A_{\text{FoV}}}{2}\ \text{and}\ |\varphi(k)|<\frac{A_{\text{FoV}}}{2}\\ 0,&\text{otherwise}\end{cases}\]
**Elevation Score**: measures the ability of the tracker to maintain the target vertically aligned to the center of the FoV, as follows
\[\tilde{P}_{\theta}(k)=\begin{cases}\max\left(0,1-\frac{2|\theta(k)|}{A_{\text{FoV}}}\right),&\text{if}\ \ |\varphi(k)|<\frac{A_{\text{FoV}}}{2}\ \text{and}\ |\rho(k)-d_{r}|<0.5\\ 0,&\text{otherwise}\end{cases}\]
**Azimuth Score**: measures the ability of the tracker to maintain the target horizontally aligned to the center of the FoV, as follows
\[\tilde{P}_{\varphi}(k)=\begin{cases}\max\left(0,1-\frac{2|\varphi(k)|}{A_{\text{FoV}}}\right),&\text{if}\ \ |\theta(k)|<\frac{A_{\text{FoV}}}{2}\ \text{and}\ |\rho(k)-d_{r}|<0.5\\ 0,&\text{otherwise}\end{cases}\]
**Total Score**: it is the arithmetic mean of the above metrics, given by \(\tilde{P}_{c}(k)=(\tilde{P}_{\rho}(k)+\tilde{P}_{\theta}(k)+\tilde{P}_{\varphi }(k))/3\).
Notice that if \(\tilde{P}_{\rho}(k)=1\), then the tracker is at the desired distance from the target. Moreover, if \(\tilde{P}_{\theta}\) and \(\tilde{P}_{\phi}\) are both equal to \(1\), then the target centroid is at the center of the FoV. Summarizing, \(\tilde{P}_{c}(k)=1\) when perfect visual tracking is achieved at step \(k\).
The metrics are averaged with respect to the episode time and across the 20 runs performed in each scenario, resulting in \(P_{m}=\frac{1}{20N_{c}}\sum_{i=1}^{20}\sum_{k=0}^{N_{c}-1}{}^{(i)}\tilde{P}_{m}(k)\), where \(m\in\{\rho,\theta,\varphi,c\}\), \({}^{(i)}\tilde{P}_{m}\) indicates that the performance is evaluated on the \(i\)-th run, and \(N_{c}\) is the number of samples within the episode.
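In code, the per-step scores and their averaging can be computed as in the following sketch, where the spherical coordinates are assumed to be derived from the ground-truth relative position at each step.

```python
import numpy as np

def step_scores(rho, theta, phi, d_r, A_fov):
    """Per-step distance/elevation/azimuth scores; the gating conditions zero out a
    score whenever the target leaves the FoV or the distance error exceeds 0.5 m."""
    in_fov = abs(theta) < A_fov / 2 and abs(phi) < A_fov / 2
    in_range = abs(rho - d_r) < 0.5
    P_rho = max(0.0, 1.0 - 2.0 * abs(rho - d_r)) if in_fov else 0.0
    P_theta = max(0.0, 1.0 - 2.0 * abs(theta) / A_fov) if (abs(phi) < A_fov / 2 and in_range) else 0.0
    P_phi = max(0.0, 1.0 - 2.0 * abs(phi) / A_fov) if (abs(theta) < A_fov / 2 and in_range) else 0.0
    return P_rho, P_theta, P_phi, (P_rho + P_theta + P_phi) / 3.0

def scenario_average(per_run_step_totals):
    """Average the total score over episode time and over the 20 runs of a scenario."""
    return float(np.mean([np.mean(run) for run in per_run_step_totals]))
```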
### _Comparison Results_
The results of the experimental campaign are presented in Tables II and III. Our first important finding is that D-VAT outperforms all the baselines with respect to the performance metrics, and it is able to track the target by producing low-level control commands directly from RGB images. A visual inspection of the experiments (see the supplementary videos for qualitative results) shows that D-VAT is able to react promptly and effectively to the target movements. Specifically, it (i) computes fast maneuvers when the target approaches the boundary of camera FoV to avoid losing it, and (ii) provides a smooth control policy that is close to being optimal (_i.e.,_ the target is almost always maintained at the center of the image plane and at the desired distance).
The learning-based approaches AOT, AD-VAT+ and C-VAT fail to converge to a suitable tracking policy. This could be explained by considering the high complexity of the task. AOT and AD-VAT+ are both strategies that rely on a discrete action space. Thus, they generate non-smooth control policies that struggle to maintain the target visibility and might even result in unstable maneuvers that cause the target to disappear outside the FoV. Even C-VAT, despite being designed to provide continuous commands, fails to learn an efficient tracking policy. To explain this result, it is important to notice that the dimension of the MAV action space is doubled with respect to that of a planar ground robot (which is the platform considered in the original C-VAT work [9]). The increased complexity of the quadrotor dynamics makes the model optimization more challenging and, in the case of C-VAT, this entails a large performance degradation.
The baselines that combine two separate modules, _i.e.,_ an object detector and a controller (LQG or PID), are instead able to achieve better results. Nonetheless, the overall tracking performance is inferior to that of D-VAT. This can be attributed to the modular nature of these baselines. As the two components are designed independently, their coupling turns out to be inefficient and can cause the overall system to fail. In practice, this problem emerges since the controller, which has been designed under the assumption that the relative position is accurately known, is fed with position measurements extracted from the bounding box information provided by the object detector. These measurements, due to non-ideal image conditions or aggressive target maneuvers, might violate the design assumptions. This aspect becomes even more critical in realistic environments that are characterized by a high density of distracting objects in the background (_e.g.,_ the photorealistic scenarios Urban and Office in Fig. 4). In this regard, it should be noted that the PID scheme, thanks to its more adaptable design, is more robust to model mismatch than the LQG counterpart.
On the other hand, thanks to the domain randomization strategy we employ, D-VAT has learned a tracking policy that can deal effectively with a wide range of scenarios while achieving high performance. This holds even when the visual conditions of the environment are very different from those employed in the training phase (see the results obtained for the Urban, Park and Office scenarios in Table II).
To further study the comparison between D-VAT and the modular baselines, we run additional experiments by varying the maximum velocity of the target. We perform these experiments on a simplified scene with a low amount of texture and no objects. In Table III, it can be seen that for low target velocities, the modular baselines and D-VAT achieve similar performance. However, when the target performs faster and more aggressive trajectories, the performance of both modular baselines decreases, while D-VAT's tracking capabilities are almost unaffected. This suggests that the proposed learning-based approach is more robust and responsive in challenging scenarios where the ability of traditional control strategies may be limited.
### _DRL Controller Validation_
To validate the learned controller in a realistic environment, we used a simulation model in which system (1) is augmented with the angular velocity and the thrust dynamics. The former are stabilized by a low-level proportional controller, which is a common setting for embedded MAV autopilots, while the latter are represented by a first order model that is typical for the actuator. Moreover, we included the effect of air drag. The simulation model is then given by
\[\begin{bmatrix}\ddot{p}\\ \dot{R}\\ \dot{\omega}\\ \dot{f}\end{bmatrix}=\begin{bmatrix}\frac{1}{m}(R_{3}f+f_{drag})-g\\ R\left[\omega\right]_{\times}\\ J^{-1}(k_{\omega}(\omega_{cmd}-\omega)-\left[\omega\right]_{\times}J\omega)\\ k_{f}(f_{cmd}-f)\end{bmatrix}, \tag{8}\]
where \(J\) is the inertia matrix of the MAV, \(f_{cmd}\) and \(\omega_{cmd}\) are the commanded total thrust and body rates provided by the DRL controller, \(k_{f}\) and \(k_{\omega}\) are suitable scalar gains, and \(f_{drag}=-K_{v}\dot{p}\) is a linear drag term, where \(K_{v}\) is the drag coefficient matrix. The following parameter values have been employed according to [13]: \(J=\text{diag}(0.0025,0.0021,0.0043)\)\(\text{kg}\,\text{m}^{2}\) and \(K_{v}=\text{diag}(0.3,0.3,0.15)\). Moreover, we set \(k_{f}=30\) and \(k_{\omega}=1\), resulting in a thrust settling time of about \(0.1\) s and in a peak control torque in the order of 1 Nm. These figures are compatible with the MAV actuator specifications. Besides the simulation model (8), the validation scenario includes two moving objects that occasionally appear in the tracker FoV. One of them shares the same shape as the target MAV but has a different color, while the other has a different shape but the same color as the target MAV.
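For reference, model (8) can be integrated numerically as sketched below. The mass value and the first-order attitude update are assumptions of this sketch; the remaining parameters follow the values reported above.

```python
import numpy as np

m = 1.0                                          # MAV mass [kg] (assumed, not reported above)
g = np.array([0.0, 0.0, 9.81])
J = np.diag([0.0025, 0.0021, 0.0043])            # inertia [kg m^2]
K_v = np.diag([0.3, 0.3, 0.15])                  # linear drag coefficients
k_f, k_w = 30.0, 1.0                             # thrust and body-rate loop gains

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def euler_step(p, v, R, w, f, f_cmd, w_cmd, dt=0.002):
    """One forward-Euler step of the augmented MAV model (8)."""
    f_drag = -K_v @ v
    acc = (R[:, 2] * f + f_drag) / m - g                              # translational dynamics
    w_dot = np.linalg.solve(J, k_w * (w_cmd - w) - skew(w) @ (J @ w)) # body-rate loop
    R_next = R @ (np.eye(3) + dt * skew(w))                           # first-order attitude update
    return p + dt * v, v + dt * acc, R_next, w + dt * w_dot, f + dt * k_f * (f_cmd - f)
```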
Table IV compares the results in Table III with those obtained in the validation environment, in the absence (second row) and in the presence (third row) of dynamic distracting objects. It can be seen that the performance drop with respect to the tests with the simplified model (1) is negligible. Moreover, the tracker agent is nearly unaffected by the presence of dynamic distracting objects, which proves that it did not overfit with respect to the object color or its shape individually. This is even more remarkable if we consider that the agent was trained on model (1) and that no moving objects other than the target MAV were included during the training phase. From these results, it can be concluded that our strategy offers good robustness and generalization capabilities against unmodeled dynamics.
Finally, notice that in the definition of the reward function (7) we did not penalize variations of the control command, so as to favor fast maneuvering and to reduce the tracking error as much as possible. As a downside, this choice can lead to nervous tracking behavior. One possibility to mitigate this issue is to low-pass filter the A-DNN output. For instance, we found that using a first-order low-pass filter with a cut-off frequency of 2 Hz does indeed result in smoother trajectories (see the attached videos). However, it entails a performance drop of about \(20\%\).
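The smoothing mentioned above corresponds to a standard discrete first-order low-pass filter; a minimal sketch is given below, where the control rate is an assumed value.

```python
import numpy as np

class FirstOrderLowPass:
    """Discrete first-order low-pass filter applied to the A-DNN action output."""
    def __init__(self, cutoff_hz=2.0, rate_hz=30.0):
        tau = 1.0 / (2 * np.pi * cutoff_hz)
        dt = 1.0 / rate_hz
        self.alpha = dt / (tau + dt)              # smoothing factor
        self.y = None

    def __call__(self, u):
        u = np.asarray(u, dtype=float)
        self.y = u if self.y is None else self.y + self.alpha * (u - self.y)
        return self.y
```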
## VI Conclusions
In this work, we proposed D-VAT, an end-to-end visual active tracking approach for MAV systems. The D-VAT agent is trained by exploiting an asymmetric actor-critic DRL formulation. Once optimized, it is capable of computing thrust and angular velocity commands for the tracker MAV directly from input images. Experiments against different baselines show that our approach exhibits superior tracking capabilities and it is capable of generalizing over scenarios that considerably differ from those used during training.
Currently, D-VAT can track vehicles whose appearance is similar to that of the target MAV used for the optimization. Future work will consider methodologies to make the tracker agent independent from the appearance of the target MAV.
|
2309.16585 | Text-to-3D using Gaussian Splatting | Automatic text-to-3D generation that combines Score Distillation Sampling
(SDS) with the optimization of volume rendering has achieved remarkable
progress in synthesizing realistic 3D objects. Yet most existing text-to-3D
methods by SDS and volume rendering suffer from inaccurate geometry, e.g., the
Janus issue, since it is hard to explicitly integrate 3D priors into implicit
3D representations. Besides, it is usually time-consuming for them to generate
elaborate 3D models with rich colors. In response, this paper proposes GSGEN, a
novel method that adopts Gaussian Splatting, a recent state-of-the-art
representation, to text-to-3D generation. GSGEN aims at generating high-quality
3D objects and addressing existing shortcomings by exploiting the explicit
nature of Gaussian Splatting that enables the incorporation of 3D prior.
Specifically, our method adopts a progressive optimization strategy, which
includes a geometry optimization stage and an appearance refinement stage. In
geometry optimization, a coarse representation is established under 3D point
cloud diffusion prior along with the ordinary 2D SDS optimization, ensuring a
sensible and 3D-consistent rough shape. Subsequently, the obtained Gaussians
undergo an iterative appearance refinement to enrich texture details. In this
stage, we increase the number of Gaussians by compactness-based densification
to enhance continuity and improve fidelity. With these designs, our approach
can generate 3D assets with delicate details and accurate geometry. Extensive
evaluations demonstrate the effectiveness of our method, especially for
capturing high-frequency components. Our code is available at
https://github.com/gsgen3d/gsgen | Zilong Chen, Feng Wang, Yikai Wang, Huaping Liu | 2023-09-28T16:44:31Z | http://arxiv.org/abs/2309.16585v4 | # Text-to-3D using Gaussian Splatting
###### Abstract
In this paper, we present Gaussian Splatting based text-to-3D generation (Gsgen), a novel approach for generating high-quality 3D objects. Previous methods suffer from inaccurate geometry and limited fidelity due to the absence of 3D prior and proper representation. We leverage 3D Gaussian Splatting, a recent state-of-the-art representation, to address existing shortcomings by exploiting the explicit nature that enables the incorporation of 3D prior. Specifically, our method adopts a progressive optimization strategy, which includes a geometry optimization stage and an appearance refinement stage. In geometry optimization, a coarse representation is established under a 3D geometry prior along with the ordinary 2D SDS loss, ensuring a sensible and 3D-consistent rough shape. Subsequently, the obtained Gaussians undergo an iterative refinement to enrich details. In this stage, we increase the number of Gaussians by compactness-based densification to enhance continuity and improve fidelity. With these designs, our approach can generate 3D content with delicate details and more accurate geometry. Extensive evaluations demonstrate the effectiveness of our method, especially for capturing high-frequency components. Our code is available at [https://github.com/gsgen3d/gsgen/](https://github.com/gsgen3d/gsgen/).
## 1 Introduction
Diffusion model based text-to-image generation (Saharia et al., 2022; Rombach et al., 2022; Ramesh et al., 2022; Alex et al., 2023) has achieved remarkable success in synthesizing photo-realistic images from textual prompts. Nevertheless, for high-quality text-to-3D content generation, the advancements lag behind that of image generation due to the inherent complexity of real-world 3D scenes. Recently, DreamFusion (Poole et al., 2023) has made great progress in generating delicate assets by utilizing score distillation sampling with a pre-trained text-to-image diffusion prior. Its follow-up works further improve this paradigm in quality (Wang et al., 2023; Chen et al., 2023), training speed (Lin et al., 2023; Metzer et al., 2022), and generating more reasonable geometry (Armandpour et al., 2023; Zhu
Figure 1: Delicate 3D assets generated using the proposed Gsgen. See our project page sggen3d.github.io for videos of these images.
& Zhuang, 2023; Seo et al., 2023). However, most existing text-to-3D methods still suffer greatly from collapsed geometry and limited fidelity, and are difficult to incorporate 3D priors due to the implicit nature of NeRF (Mildenhall et al., 2020) and DMTet(Shen et al., 2021).
Recently, 3D Gaussian Splatting (Kerbl et al., 2023) has garnered significant attention in the field of 3D reconstruction, primarily due to its remarkable ability to represent intricate scenes and capability of real-time rendering. By modeling a scene using a set of 3D Gaussians, Kerbl et al. (2023) adopt an explicit and object-centric approach that fundamentally diverges from implicit representations like NeRF and DMTet. This distinctive approach paves the way for the integration of explicit 3D priors into text-to-3D generation. Building upon this insight, instead of a straightforward replacement of NeRFs with Gaussians, we propose to guide the generation with an additional 3D point cloud diffusion prior to enhancing geometrical coherence. By adopting this strategy, we can better harness the inherent advantages of 3D Gaussians in the creation of complex and 3D-consistent assets.
Specifically, we propose to represent the generated 3D content with a set of Gaussians and optimize them progressively in two stages, namely geometry optimization and appearance refinement. In the geometry optimization stage, we optimize the Gaussians under the guidance of a 3D point cloud diffusion prior along with the ordinary 2D image prior. The incorporation of this extra 3D SDS loss ensures a 3D-consistent rough geometry. In the subsequent refinement stage, the Gaussians undergo an iterative enhancement to enrich the delicate details. Due to the sub-optimal performance of the
Figure 2: Compared to previous methods, Gsgen alleviates the Janus problem by representing the 3D scene using 3D Gaussian Splatting, which is capable of applying direct 3D geometry guidance and expressing content with delicate details. Note that the results of DreamFusion and Magic3D are obtained using Stable DreamFusion (Tang, 2022) and threestudio (Guo et al., 2023) since the official implementations are not publicly available due to the utilization of private diffusion models. All the results are obtained using StableDiffusion (Rombach et al., 2022) on checkpoint _runwayml/stable-diffusion-v1-5_ for a fair comparison. Videos of these images are provided in the supplemental video.
original adaptive control under SDS loss, we introduce an additional compactness-based densification technique to enhance appearance and fidelity. Besides, to prevent potential degeneration and break the symmetry in the early stage, the Gaussians are initialized with a coarse point cloud generated by a text-to-point-cloud diffusion model. As a result of these techniques, our approach can generate 3D assets with accurate geometry and exceptional fidelity. Fig.2 illustrates a comparison between Gsgen and previous state-of-the-art methods on generating assets with asymmetric geometry.
In summary, our contributions are:
* We propose Gsgen, the first text-to-3D generation method using 3D Gaussians as the representation. By incorporating geometric priors, we highlight the distinctive advantages of Gaussian Splatting in text-to-3D generation.
* We introduce a two-stage optimization strategy that first exploits the joint guidance of 2D and 3D diffusion priors to shape a coherent rough structure in geometry optimization, and then enriches the details with compactness-based densification in appearance refinement.
* We validate Gsgen on various textual prompts. Experiments show that our method can generate 3D assets with more accurate geometry and enhanced fidelity than previous methods. Especially, Gsgen demonstrates superior performance in capturing _high-frequency components_ in objects, such as feathers, surfaces with intricate textures, animal fur, etc.
## 2 Related Work
### 3D Scene Representations
Representing 3D scenes in a differentiable way has achieved remarkable success in recent years. NeRF (Mildenhall et al., 2020) demonstrates outstanding performance in novel view synthesis by representing 3D scenes with a coordinate-based neural network. Follow-up works have emerged to improve NeRF in reconstruction quality (Barron et al., 2021, 2023; Wang et al., 2022c), in handling large-scale (Tancik et al., 2022; Zhang et al., 2020; Martin-Brualla et al., 2021; Chen et al., 2022b) and dynamic scenes (Park et al., 2021; Attal et al., 2023; Wang et al., 2022b; Sara Fridovich-Keil and Giacomo Meanti et al., 2023; Pumarola et al., 2021), and in improving training (Yu et al., 2021a; Chen et al., 2022a; Sun et al., 2022; Muller et al., 2022) and rendering (Reiser et al., 2023; Hedman et al., 2021; Yu et al., 2021b) speed. Although great progress has been made, NeRF-based methods still suffer from low rendering speed and high training-time memory usage due to their implicit nature. To tackle these challenges, Kerbl et al. (2023) propose to represent the 3D scene as a set of anisotropic Gaussians and render the novel views using a GPU-optimized tile-based rasterization technique. 3D Gaussian Splatting achieves comparable reconstruction results while being capable of real-time rendering. Our research highlights the distinctive advantages of Gaussian Splatting within text-to-3D by incorporating an explicit 3D prior, generating 3D-consistent and highly detailed assets.
### Diffusion Models
Diffusion models have arisen as a promising paradigm for learning and sampling from a complex distribution. Inspired by the diffusion process in physics, these models involve a forward process to gradually add noise and an inverse process to denoise a noisy sample with a trained neural network. After DDPM (Ho et al., 2020; Song et al., 2021b) highlighted the effectiveness of diffusion models in capturing real-world image data, a plethora of research has emerged to improve the inherent challenges, including fast sampling (Lu et al., 2022; Bao et al., 2022; Song et al., 2021a) and backbone architectural improvements (Bao et al., 2023; Podell et al., 2023; Liu et al., 2023b; Dhariwal and Nichol, 2021; Hoogeboom et al., 2023; Peebles and Xie, 2022). One of the most successful applications of diffusion models lies in text-to-image generation, where they have shown remarkable progress in generating realistic images from text prompts (Ho and Salimans, 2022; Ramesh et al., 2022; Alex et al., 2023). To generate high-resolution images, current solutions either adopt a cascaded structure that consists of a low-resolution diffusion model and several super-resolution models (Saharia et al., 2022; Balaji et al., 2022; Alex et al., 2023) or trains the diffusion model in latent space with an auto-encoder (Rombach et al., 2022; Gu et al., 2022). Our proposed Gsgen is built upon StableDiffusion (Rombach et al., 2022), an open-source latent diffusion model that provides fine-grained guidance for high-quality 3D content generation.
### Text-to-3D generation
Early efforts in text-to-3D generation, including CLIP-forge (Sanghi et al., 2021), Dream Fields (Jain et al., 2022), Text2Mesh (Michel et al., 2022), TANGO (Chen et al., 2022c), CLIPNeRF (Wang et al., 2022a), and CLIP-Mesh (Khalid et al., 2022), harness CLIP (Radford et al., 2021) guidance to create 3D assets. To leverage the stronger diffusion prior, DreamFusion (Poole et al., 2023) introduces the score distillation sampling loss that optimizes the 3D content by minimizing the difference between rendered images and the diffusion prior. This development sparked a surge of interest in text-to-3D generation through image diffusion prior (Wang et al., 2023a; Raj et al., 2023; Lorraine et al., 2023; Zhu and Zhuang, 2023; Qian et al., 2023). Magic3D (Lin et al., 2023) employs a coarse-to-fine strategy, optimizing a NeRF with a low-resolution diffusion prior and then enhancing texture under latent diffusion prior with a DMTet initialized with the coarse NeRF. Latent-NeRF (Metzer et al., 2022) trains a NeRF within the latent space of StableDiffusion and introduces the Sketch-Shape method to guide the generation process. Fantasia3D (Chen et al., 2023) disentangles the learning of geometry and material, harnessing physics-based rendering techniques to achieve high-fidelity mesh generation. ProlificDreamer (Wang et al., 2023c) introduces variational score distillation to improve SDS and facilitate the generation of high-quality and diverse 3D assets, whose contribution is orthogonal to ours since we focus on incorporating 3D prior with more advanced representation. Another line of work lies in generating 3D assets directly through a 3D diffusion model based on NeRF or other differentiable representations (Wang et al., 2023b; Jun and Nichol, 2023; Liu et al., 2023a; Cheng et al., 2023). Our approach builds upon Point-E (Nichol et al., 2022), a text-to-point-cloud diffusion model trained on millions of 3D models, which offers valuable 3D guidance and coarse initialization.
## 3 Preliminary
### Score Distillation Sampling
Instead of directly generating 3D models, recent studies have achieved notable success by optimizing 3D representation with a 2D pre-trained image diffusion prior based on score distillation sampling, as proposed by Poole et al. (2023). In this paradigm, the scene is represented as a differentiable image parameterization (DIP) denoted as \(\theta\), where the image can be differentially rendered based on the given camera parameters through a transformation function \(g\). The DIP \(\theta\) is iteratively refined to ensure that, for any given camera pose, the rendered image \(\mathbf{x}=g(\theta)\) closely resembles a plausible sample derived from the guidance diffusion model. DreamFusion achieves this by leveraging Imagen (Saharia et al., 2022) to provide a score estimation function denoted as \(\epsilon_{\phi}(x_{t};y,t)\), where \(x_{t}\), \(y\), and \(t\) represent the noisy image, text embedding, and timestep, respectively. This estimated score plays a pivotal role in guiding the gradient update, as expressed by the following equation:
\[\nabla_{\theta}\mathcal{L}_{\text{SDS}}=\mathbb{E}_{\epsilon,t}\left[w(t)( \epsilon_{\phi}(x_{t};y,t)-\epsilon)\frac{\partial\mathbf{x}}{\partial\theta}\right] \tag{1}\]
where \(\epsilon\) is a Gaussian noise and \(w(t)\) is a weighting function. Our approach combines score distillation sampling with 3D Gaussian Splatting at both 2D and 3D levels with different diffusion models to generate 3D assets with both detailed appearance and 3D-consistent geometry.
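In practice, the gradient in Eq. (1) is typically injected by rendering an image, diffusing it to a random timestep, and back-propagating the weighted noise residual through the renderer only. The PyTorch-style sketch below illustrates this; the `unet` call signature, the timestep range, and the weighting choice are assumptions of the sketch rather than the exact DreamFusion implementation.

```python
import torch

def sds_grad_step(render_fn, params, unet, text_emb, alphas_cumprod, optimizer):
    """One score-distillation step: render, diffuse to a random timestep, and
    inject w(t)(eps_hat - eps) as the gradient of the rendered image."""
    x = render_fn(params)                                   # differentiable render, (B, 3, H, W)
    t = torch.randint(20, 980, (1,), device=x.device)       # assumed timestep range
    a_t = alphas_cumprod[t].view(1, 1, 1, 1)
    eps = torch.randn_like(x)
    x_t = a_t.sqrt() * x + (1 - a_t).sqrt() * eps           # forward diffusion of the render
    with torch.no_grad():                                   # no gradient through the diffusion prior
        eps_hat = unet(x_t, t, text_emb)                    # assumed noise-predictor signature
    grad = (1 - a_t) * (eps_hat - eps)                      # common choice for w(t)
    optimizer.zero_grad()
    x.backward(gradient=grad)                               # chain rule gives dL_SDS / d(params)
    optimizer.step()
```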
### 3D Gaussian Splatting
Gaussian Splatting, as introduced in Kerbl et al. (2023), presents a pioneering method for novel view synthesis and 3D reconstruction from multi-view images. Unlike NeRF, 3D Gaussian Splatting adopts a distinctive approach, where the underlying scene is represented through a set of anisotropic 3D Gaussians parameterized by their positions, covariances, colors, and opacities. When rendering, the 3D Gaussians are projected onto the camera's imaging plane (Zwicker et al., 2001). Subsequently, the projected 2D Gaussians are assigned to individual tiles. The color of \(\mathbf{p}\) on the image plane is rendered sequentially with point-based volume rendering technique (Zwicker et al., 2001):
\[C(\mathbf{p})=\sum_{i\in\mathcal{N}}c_{i}\alpha_{i}\prod_{j=1}^{i-1}(1-\alpha _{j})\quad\text{ where, }\alpha_{i}=o_{i}e^{-\frac{1}{2}(\mathbf{p}-\mu_{i})^{T}\Sigma_{i}^{-1}( \mathbf{p}-\mu_{i})}, \tag{2}\]
where \(c_{i}\), \(o_{i}\), \(\mu_{i}\), and \(\Sigma_{i}\) represent the color, opacity, position, and covariance of the \(i\)-th Gaussian respectively, and \(\mathcal{N}\) denotes the Gaussians in this tile. To maximize the utilization of shared memory,
Gaussian Splatting further designs a GPU-friendly rasterization process where each thread block is assigned to render an image tile. These advancements enable Gaussian Splatting to achieve more detailed scene reconstruction, significantly faster rendering speed, and reduction of memory usage during training compared to NeRF-based methods. In this study, we expand the application of Gaussian Splatting into text-to-3D generation and introduce a novel approach that leverages the explicit nature of Gaussian Splatting by integrating 3D diffusion priors, highlighting the potential of 3D Gaussians as a fundamental representation for generative tasks.
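As a plain-Python reference for Eq. (2) (the real implementation is a fused CUDA kernel), the per-pixel compositing over the projected, front-to-back sorted Gaussians of a tile can be sketched as:

```python
import numpy as np

def composite_pixel(p, gaussians):
    """Blend sorted 2D Gaussians at pixel position p following Eq. (2).
    Each entry holds (color c_i, opacity o_i, mean mu_i, 2x2 covariance Sigma_i)."""
    color = np.zeros(3)
    transmittance = 1.0
    for c, o, mu, cov in gaussians:                       # front-to-back order
        d = p - mu
        alpha = o * np.exp(-0.5 * d @ np.linalg.solve(cov, d))
        color += transmittance * alpha * np.asarray(c)
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-4:                          # early termination, as in practice
            break
    return color
```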
## 4 Approach
Our goal is to generate 3D content with accurate geometry and delicate detail. To accomplish this, Gsgen exploits 3D Gaussians as the representation due to their flexibility in incorporating geometry priors and their capability to represent high-frequency details. Based on the observation that a point cloud can be seen as a set of isotropic Gaussians, we propose to integrate a 3D SDS loss with a pre-trained point cloud diffusion model to shape a 3D-consistent geometry. With this additional geometry prior, our approach could mitigate the Janus problem and generate more sensible geometry. Subsequently, in appearance refinement, the Gaussians undergo an iterative optimization to gradually improve fine-grained details with a compactness-based densification strategy, while preserving the fundamental geometric information. The detailed Gsgen methodology is presented as follows.
### Geometry Optimization
Many text-to-3D methods encounter the significant challenge of overfitting to several views, resulting in assets with multiple faces and collapsed geometry (Poole et al., 2023; Lin et al., 2023; Chen et al., 2023). This issue, known as the Janus problem (Armandpour et al., 2023; Seo et al., 2023), has posed a persistent hurdle in the development of such methodologies. In our early experiments, we faced a similar challenge: relying solely on 2D guidance frequently led to collapsed results. However, we noticed that the geometry of 3D Gaussians can be directly rectified with a point cloud prior, which is not feasible for previous text-to-3D methods using NeRFs and DMTet. Recognizing this distinctive advantage, we introduce a geometry optimization process to shape a reasonable structure. Concretely, in addition to the ordinary 2D image diffusion prior, we further optimize the positions of Gaussians using Point-E (Nichol et al., 2022), a pre-trained text-to-point-cloud diffusion model. Instead of directly aligning the Gaussians with a Point-E generated point cloud, we apply a 3D SDS loss to guide the positions, inspired by image-diffusion SDS, which avoids challenges including registration, scaling, and potential degeneration. Notably, we only apply the Point-E SDS gradients to positions, as empirical observations suggest that Point-E may generate relatively simple color patterns. We summarize the loss in the geometry optimization stage as the following equation:
\[\nabla_{\theta}\mathcal{L}_{\text{geometry}}=\mathbb{E}_{\epsilon_{I},t}\left[w _{I}(t)(\epsilon_{\phi}(x_{t};y,t)-\epsilon_{I})\frac{\partial\mathbf{x}}{ \partial\theta}\right]+\lambda_{\text{3D}}\cdot\mathbb{E}_{\epsilon_{P},t} \left[w_{P}(t)(\epsilon_{\psi}(p_{t};y,t)-\epsilon_{P})\right], \tag{3}\]
Figure 3: **Overview of the proposed Gsgen.** Our approach aims at generating 3D assets with accurate geometry and delicate appearance. Gsgen starts by utilizing Point-E to initialize the positions of the Gaussians (Sec 4.3). The optimization is grouped into geometry optimization (Sec 4.1) and appearance refinement (Sec 4.2) to meet a balance between coherent geometry structure and highly detailed texture.
where \(p_{t}\) and \(x_{t}\) represent the noisy Gaussian positions and the rendered image, \(w_{*}\) and \(\epsilon_{*}\) refer to the corresponding weighting function and Gaussian noise.
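A sketch of the 3D term of Eq. (3), applied only to the Gaussian centers, is given below; `point_e_unet` and its call signature stand in for the Point-E noise predictor and are assumptions of this sketch, while \(\lambda_{\text{3D}}=0.01\) follows the value reported later in the experimental setup.

```python
import torch

def geometry_sds_grad(positions, point_e_unet, text_emb, alphas_cumprod, lambda_3d=0.01):
    """3D part of Eq. (3): score distillation on the Gaussian centers, treated as a
    noisy point cloud under a text-to-point-cloud diffusion prior."""
    t = torch.randint(20, 980, (1,), device=positions.device)   # assumed timestep range
    a_t = alphas_cumprod[t]
    eps = torch.randn_like(positions)
    p_t = a_t.sqrt() * positions + (1 - a_t).sqrt() * eps       # diffused point cloud
    with torch.no_grad():
        eps_hat = point_e_unet(p_t, t, text_emb)                # assumed signature
    return lambda_3d * (1 - a_t) * (eps_hat - eps)              # added to positions.grad only
```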
### Appearance Refinement
While the introduction of the 3D prior does help in learning a more reasonable geometry, we experimentally find that it also disturbs the learning of appearance, resulting in insufficiently detailed assets. Based on this observation, Gsgen employs another appearance refinement stage that iteratively refines and densifies the Gaussians utilizing only the 2D image prior. To densify the Gaussians, Kerbl et al. (2023) propose to split Gaussians with a large view-space spatial gradient. However, we encountered challenges in determining the appropriate threshold for this spatial gradient under score distillation sampling. Due to the stochastic nature of the SDS loss, employing a small threshold is prone to being misled by occasional large stochastic gradients, thus generating an excessive number of Gaussians, whereas a large threshold will lead to a blurry appearance, as illustrated in Fig.8. To tackle this, we propose compactness-based densification as a supplement to positional gradient-based splitting with a larger threshold. Specifically, for each Gaussian, we first obtain its K nearest neighbors with a KD-Tree. Then, for each of the neighbors, if the distance between the Gaussian and its neighbor is smaller than the sum of their radii, a Gaussian will be added between them with a radius equal to the residual. As illustrated in Fig.4, compactness-based densification could "fill the holes", resulting in a more complete geometry structure. To prune unnecessary Gaussians, we add an extra loss to regularize opacity with a weight proportional to its distance to the center and remove Gaussians with opacity smaller than a threshold \(\alpha_{min}\) periodically. Furthermore, we recognize the importance of ensuring the geometry consistency of the Gaussians throughout the refinement phase. With this concern, we penalize Gaussians that deviate significantly from their positions obtained during the preceding geometry optimization. The loss function in the appearance refinement stage is summarized as the following:
\[\nabla_{\theta}\mathcal{L}_{\text{refine}}=\lambda_{\text{SDS}}\mathbb{E}_{ \epsilon_{t},t}\left[w_{I}(t)(\epsilon_{\phi}(x_{t};y,t)-\epsilon_{I})\frac{ \partial\mathbf{x}}{\partial\theta}\right]+\lambda_{\text{mean}}\nabla_{ \theta}\sum_{i}||\mathbf{p}_{i}||+\lambda_{\text{opacity}}\nabla_{\theta}\sum _{i}\mathbf{sg}(||\mathbf{p}_{i}||)-o_{i}, \tag{4}\]
where \(\mathbf{sg}(\cdot)\) refers to the stop gradient operation, \(\mathbf{p}_{i}\) and \(o_{i}\) represents the position and opacity of the \(i\)-th Gaussian respectively. \(\lambda_{\text{SDS}}\), \(\lambda_{\text{mean}}\) and \(\lambda_{\text{opacity}}\) are loss weights.
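The compactness-based densification described above admits a compact implementation with a KD-tree. In the sketch below, the number of neighbors and the midpoint placement of the new Gaussian are assumptions of the sketch, while the insertion condition and the residual radius follow the description in the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def compactness_densify(positions, radii, k=3):
    """Return positions and radii of Gaussians to add: for each pair of neighbors whose
    distance is below the sum of their radii, insert a Gaussian between them with a
    radius equal to the residual (sum of radii minus distance)."""
    tree = cKDTree(positions)
    dists, idxs = tree.query(positions, k=k + 1)          # first neighbor is the point itself
    new_pos, new_rad = [], []
    for i in range(len(positions)):
        for d, j in zip(dists[i, 1:], idxs[i, 1:]):
            if i < j and d < radii[i] + radii[j]:         # i < j avoids counting pairs twice
                new_pos.append(0.5 * (positions[i] + positions[j]))   # midpoint (assumed placement)
                new_rad.append(radii[i] + radii[j] - d)               # residual radius
    return np.array(new_pos), np.array(new_rad)
```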
### Initialization with Geometry Prior
Previous studies (Chen et al., 2023; Lin et al., 2023; Metzer et al., 2022) have demonstrated the critical importance of starting with a reasonable geometry initialization. In our early experiments, we also found that initializing with a simple pattern could potentially lead to a degenerated 3D object. To overcome this, we opt for initializing the positions of the Gaussians either with a generated point cloud or with a 3D shape provided by the users (either a mesh or a point cloud). In the context of general text-to-3D generation, we employ a text-to-point-cloud diffusion model, _Point-E_(Nichol et al., 2022), to generate a rough geometry according to the text prompt. While Point-E can produce colored point clouds, we opt for random color initialization
Figure 4: An illustration of the proposed compactness-based densification. For two Gaussians, if the distance between them (\(r_{12}\)) is smaller than the sum of their radii (\(r_{1}+r_{2}\)), a Gaussian is added between them to achieve a more complete geometry.
Figure 6: The impact of adopting Point-E generated color.
based on empirical observations, as direct utilization of the generated colors has been found to have detrimental effects in early experiments (shown in Fig.6). The scales and opacities of the Gaussians are assigned with fixed values, and the rotation matrix is set to the identity matrix. For user-guided generation, we convert the preferred shape to a point cloud. To avoid too many vertices in the provided shape, we use farthest point sampling (Eldar et al., 1997) for point clouds and uniform surface sampling for meshes to extract a subset of the original shape instead of directly using all the vertices or points.
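The farthest point sampling used to sub-sample user-provided point clouds can be sketched as follows (a straightforward greedy implementation, not necessarily the exact one used here):

```python
import numpy as np

def farthest_point_sampling(points, n_samples, seed=0):
    """Greedy farthest point sampling: iteratively pick the point farthest
    from the set already selected."""
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(points)))]
    dist = np.linalg.norm(points - points[selected[0]], axis=1)
    for _ in range(n_samples - 1):
        idx = int(np.argmax(dist))                         # farthest remaining point
        selected.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return points[np.array(selected)]
```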
## 5 Experiments
In this section, we present our experiments on validating the effectiveness of the proposed approach. Specifically, we compare Gsgen with previous state-of-the-art methods in general text-to-3D generation. Additionally, we conduct several ablation studies to evaluate the importance of initialization, 3D guidance, and densification strategy. The detailed results are shown as follows.
Figure 5: Qualitative comparison between the proposed Gsgen and state-of-the-art generation methods, including DreamFusion (Poole et al., 2023), Magic3D (Lin et al., 2023), Fantasia3D (Chen et al., 2023). Our approach achieves better visual quality, especially in high-frequency details, such as the hatched roof and the surface of the strawberry. The prompts are provided under the images. For more qualitative comparison results, please refer to Appendix B.3. Videos of these images are provided in the supplemental video.
### Implementation Details
**Guidance model setup.** We implement the guidance model based on the publicly available diffusion model, StableDiffusion (Rombach et al., 2022; von Platen et al., 2022). For the guidance scale, we adopt 100 for _StableDiffusion_ as suggested in DreamFusion and other works. We also exploit the view-dependent prompt technique proposed by DreamFusion. All the assets demonstrated in this section are obtained with StableDiffusion checkpoint _runwayml/stable-diffusion-v1-5_.
**3D Gaussian Splatting setup.** We implement the 3D Gaussian Splatting with a pytorch CUDA extension, and further add learnable background support to facilitate our application. For densification, we split the Gaussians by view-space position gradient every 500 iterations with a threshold \(T_{pos}=0.02\), as suggested by the original implementation (Kerbl et al., 2023), and perform compactness-based densification every 1000 iterations which we empirically found effective for achieving a complete geometry. For pruning, we remove Gaussians with opacity lower than \(\alpha_{min}=0.05\), and excessively large world-space or view-space radius every 200 iterations.
**Training setup.** We use the same focal length, elevation, and azimuth range as those of DreamFusion (Poole et al., 2023). To sample more uniformly in the camera position, we employ a stratified sampling on azimuth. We choose the loss weight hyperparameters \(\lambda_{\text{SDS}}=0.1\) and \(\lambda_{\text{3D}}=0.01\) in geometry optimization stage, and \(\lambda_{\text{SDS}}=0.1\), \(\lambda_{\text{mean}}=1.0\) and \(\lambda_{\text{opacity}}=100.0\) in appearance refinement.
### Text-to-3D Generation
We evaluate the performance of the proposed Gsgen in the context of general text-to-3D generation and present qualitative comparison results against state-of-the-art methods. As illustrated in Fig.2, our approach produces delicate 3D assets with more accurate geometry and intricate details. In contrast, previous state-of-the-art methods (Tang, 2022; Poole et al., 2023; Lin et al., 2023; Guo et al., 2023; Chen et al., 2023) struggle, producing collapsed geometry under the same guidance and prompt, which underscores the effectiveness of our approach. We present more qualitative comparison results in Fig.5, where we compare the 3D assets generated by Gsgen with those generated by Magic3D (Lin et al., 2023) and Fantasia3D (Chen et al., 2023). Our approach showcases notable enhancements in preserving high-frequency details such as the intricate patterns on sushi, the feathers of the peacock, and the thatched roof. In contrast, Magic3D and Fantasia3D yield over-smooth geometry due to the
Figure 7: Ablation study results on initialization and 3D prior. _Coarse Model_ here refers to the rough assets obtained after geometry optimization. We can observe that the contents generated with random initialization suffer from degeneration with completely inconsistent geometry (in the first column). Although the Point-E initialized assets have a slightly better geometry, they still suffer from the Janus problem. The proposed Gsgen utilizes Point-E initialization and 3D guidance to generate shapes with better multi-view consistency.
limitation of mesh-based methods, making the generated assets less realistic. For more one-to-one qualitative comparisons, please refer to the supplemental material for the video results and appendix B.3 for multi-view image comparison.
### Ablation Study
**Initialization.** To assess the impact of initialization, we introduce a variant that initializes the positions of the Gaussians with an origin-centered Gaussian distribution, which emulates the initialization adopted in DreamFusion (Poole et al., 2023). The qualitative comparisons are shown in Fig.7a. It is evident that assets generated with DreamFusion-like initialization encounter severe degeneration issues, especially for prompts depicting asymmetric scenes, resulting in collapsed geometry. In contrast, Point-E initialization breaks the symmetry by providing an anisotropic geometry prior, leading to the creation of more 3D-consistent objects.
**3D Prior.** We evaluate the necessity of incorporating the 3D prior by generating assets without point cloud guidance during geometry optimization. The qualitative comparisons of multi-view images are visualized in Fig.7b. Although it achieves better geometry consistency than random initialization, relying solely on the image diffusion prior still suffers from the Janus problem, which is particularly evident in cases with asymmetric geometries, such as the dog and the panda. In contrast, our approach effectively addresses this issue with the introduction of the 3D prior, rectifying potentially collapsed structures in the geometry optimization stage and resulting in a 3D-consistent rough shape.
**Densification Strategy.** To validate the effectiveness of the proposed densification strategy, we propose two variants for comparison: (1) the original densification strategy that splits Gaussians with an average view-space gradient larger than \(T_{pos}=0.0002\); (2) the same strategy with a larger threshold \(T_{pos}=0.02\) that avoids adding too many new Gaussians. While effective in 3D reconstruction, the original densification strategy that relies only on the view-space gradient encounters a dilemma in the context of score distillation sampling: within a limited number of densification steps, a large threshold tends to generate an over-smoothed appearance, while a small threshold is easily affected by unstable gradients. As shown in Fig.8, the proposed compactness-based densification is an effective supplement to the original densification strategy under SDS guidance.
## 6 Limitations and Conclusion
**Limitations.** Gsgen tends to generate unsatisfying results when the provided text prompt contains a complex description of the scene or complicated logic, due to the limited language understanding ability of Point-E and the CLIP text encoder used in _StableDiffusion_. Moreover, although incorporating the 3D prior mitigates the Janus problem, it is far from eliminating potential degenerations, especially when the textual prompt is strongly biased in the guidance diffusion models. Concrete failure cases and corresponding analyses are illustrated in Appendix C.
**Conclusion.** In this paper, we propose Gsgen, a novel method for generating highly detailed and 3D consistent assets using Gaussian Splatting. In particular, we adopt a two-stage optimization strategy including geometry optimization and appearance refinement. In the geometry optimization stage, a rough shape is established under the joint guidance of a point cloud diffusion prior along with the common image SDS loss. In the subsequent appearance refinement, the Gaussians are further optimized to enrich details and densified to achieve better continuity and fidelity with compactness-based densification. We conduct comprehensive experiments to validate the effectiveness of the proposed method, demonstrating its ability to generate 3D consistent assets and superior performance in capturing high-frequency components. We hope our method can serve as an efficient and powerful approach
Figure 8: Ablation study on densification strategy. The textual prompt used in this figure is _A mug of hot chocolate with whipped cream and marshmallows_.
for high-quality text-to-3D generation and could pave the way for more extensive applications of Gaussian Splatting and the direct incorporation of 3D priors.
|
2309.08963 | Struc-Bench: Are Large Language Models Really Good at Generating Complex
Structured Data? | Despite the remarkable capabilities of Large Language Models (LLMs) like
GPT-4, producing complex, structured tabular data remains challenging. Our
study assesses LLMs' proficiency in structuring tables and introduces a novel
fine-tuning method, cognizant of data structures, to bolster their performance.
We unveil Struc-Bench, a comprehensive benchmark featuring prominent LLMs
(GPT-NeoX-20B, GPT-3.5, GPT-4, and Vicuna), which spans text tables, HTML, and
LaTeX formats. Our proposed FormatCoT aids in crafting format-specific
instructions from the intended outputs to populate this benchmark. Addressing
the gap in task-centered evaluation, we propose two innovative metrics, P-Score
(Prompting Score) and H-Score (Heuristical Score), to more accurately gauge LLM
performance. Our experiments show that applying our structure-aware fine-tuning
to LLaMA-7B leads to substantial performance gains, outshining its LLM
counterparts across most measures. In-depth error analysis and creating an
ability map across six dimensions -- coverage, formatting, reasoning,
comprehension, pragmatics, and hallucination -- highlight areas for future
enhancements and suggest forthcoming research trajectories. Our code and models
can be found at https://github.com/gersteinlab/Struc-Bench. | Xiangru Tang, Yiming Zong, Jason Phang, Yilun Zhao, Wangchunshu Zhou, Arman Cohan, Mark Gerstein | 2023-09-16T11:31:58Z | http://arxiv.org/abs/2309.08963v3 | # Struc-Bench: Are Large Language Models Really Good at Generating Complex Structured Data?
###### Abstract
Despite the power of Large Language Models (LLMs) like GPT-4, they still struggle with tasks that require generating complex, structured outputs. In this study, we assess the capability of current LLMs in generating complex structured data and propose a structure-aware fine-tuning approach as a solution to improve this ability. To perform a comprehensive evaluation, we propose Struc-Bench, which includes five representative LLMs (i.e., GPT-NeoX-20B, GPT-3.5, GPT-4, and Vicuna) and evaluates them on our carefully constructed datasets spanning raw text, HTML, and LaTeX tables. Based on our analysis of current model performance, we identify specific common formatting errors and areas of potential improvement. To address complex formatting requirements, we utilize a FormatCoT (Chain-of-Thought) to generate format instructions from target outputs. Our experiments show that our structure-aware fine-tuning method, when applied to LLaMA-7B, significantly improves adherence to natural language constraints, outperforming other evaluated LLMs. Based on these results, we present an ability map of model capabilities from six dimensions (i.e., coverage, formatting, reasoning, comprehension, pragmatics, and hallucination). This map highlights the weaknesses of LLMs in handling complex structured outputs and suggests promising directions for future work. Our code and models can be found at [https://github.com/gersteinlab/Struc-Bench](https://github.com/gersteinlab/Struc-Bench).
## 1 Introduction
Significant advancements have been made in various natural language processing tasks by Large Language Models (LLMs) Brown et al. (2020); Scao et al. (2022); Ouyang et al. (2022); Muennighoff et al. (2022); OpenAI (2023); Zhao et al. (2023), especially in text generation tasks Qin et al. (2023). The ability to output structured data, one of the key aspects of generative capability, has also attracted great interest in previous studies Wu et al. (2022); Zhao et al. (2023).
However, LLMs still underperform in generating complex structured outputs, a critical ability for various applications ranging from coding assistance to automated report writing. Furthermore, most evaluation of LLMs has been on natural text or code generation, and relatively less research has been conducted to evaluate LLMs on their ability to generate structured output. This leaves it unclear _whether LLMs can generate complex structured data effectively_. We aim to address these unanswered questions and deliver an in-depth examination in our research.
_First, there is a lack of systematic analysis_ of the ability of LLMs to output complex structured data. Previous efforts on evaluating LLMs Qin et al. (2023); Ma et al. (2023) on structured data primarily centered around simple Information Extraction (IE) tasks: recognizing named entities, extracting relations, and detecting events. Here the goal of IE tasks is to gather the extracted data in a highly structured form Zhong and Chen (2020). Much earlier work was considerably more task-centric as opposed to LLM-centric. The focus was predominantly
Figure 1: A system for describing complex structured formats and learning to follow this format in human language. We use zero-shot for inference.
on generating structured data from text (text-to-data) tasks with pre-trained models He et al. (2023); Rossiello et al. (2022); Whitehouse et al. (2023); Pietruszka et al. (2022) like BART Lewis et al. (2019) and T5 Raffel et al. (2020).
_Second, there is a lack of fine-grained evaluation and comprehensive benchmarks_ of LLMs performance. Existing benchmarks often rely on rudimentary objective metrics such as word overlap to measure the accuracy of the content generated by the model Li et al. (2023); Wu et al. (2022); Pietruszka et al. (2022). This may be insufficient for evaluating whether LLMs can generate structured output, as an ideal evaluation metric ought to also consider the format of generated content.
Third, is there potential for enhancing the performance of current LLMs to better _follow human natural language inputs, thereby generating outputs with the accurate format and error-free content?_
This work aims to fill in these gaps in the literature and expand on both the evaluation metrics and training datasets for LLMs generating structured output. Our contributions are summarized as:
(1) We develop a benchmark, called Struc-Bench, focusing on generating structured texts in raw text, HTML, and LaTeX formats, and we thoroughly examine the capabilities of popular LLMs, uncovering key issues in content accuracy, formatting, numerical reasoning, and handling long tables.
(2) Incorporating prominent datasets and expanding to diverse domains, we conduct empirical evaluations of popular LLMs on our structured text generation benchmark, providing a deeper understanding of the prevalent error types and dimensions of shortcomings. Our findings suggest that both GPT-3.5 and GPT-4 struggle to produce outputs that are exactly correct, with issues primarily stemming from erroneous content, inaccurate formatting, inadequate numerical reasoning abilities, and their inability to handle long tables. (3) To address these issues, we introduce structure-aware instruction tuning, using ChatGPT to generate format instructions and then training the LLaMA model to follow these formats. The promising results on both seen and unseen data indicate that it could greatly enhance the ability of LLMs to generate structured outputs.
## 2 Problem Analysis and Benchmark
### Preliminary
The task of generating complex structured data presents a notable challenge that tests the capabilities of LLMs in producing intricate, format-specific outputs. This task moves beyond conventional text generation. The complexity lies not only in the need to generate accurate and coherent content but also in maintaining a strict and specific data structure or format. For example, text-to-table is a task that aims to convert unstructured textual data into structured tabular data, by extracting necessary contents from text and following the required structure or format.
### Problem Analysis
In our study, we have identified a significant limitation of GPT-3.5 and GPT-4 in handling complex structured output. Despite being state-of-the-art LLMs developed by OpenAI, these models have both demonstrated certain limitations in generating output in more intricate formats; examples can be found in Appendix A.
This shortcoming becomes evident when the model is tasked with producing data that adhere to specific structural formats or templates, such as tables. We find that only 3% of the output of GPT-3.51 is completely correct, while for GPT-4 the figure is only 9%. This could be attributed to the inherent design of the GPT family, which, while excelling at capturing the statistical patterns of human language, does not specifically account for structured outputs that require maintaining a state across a longer span of tokens. Here, we select Rotowire as a case study, as shown in Appendix B. We utilized the crowdsourcing approach on MTurk (see Appendix C) to examine the error types in 100 example instances. Figure 2 presents the proportion of errors and each error type: Element Errors, Element Format Errors, Structure Errors, and Structure Naming Errors.
Footnote 1: In all our scenarios we are using Azure OpenAI Service models. GPT-3.5 means gpt-35-turbo. We noticed that the results of the Azure deployed gpt-35-turbo-v0301 model diverge substantially from OpenAI gpt-3.5-turbo-0301.
### Benchmark
In our investigation, we incorporate four prominent data-to-text datasets: Rotowire (Wiseman et al., 2017),
Figure 2: Error analysis by human annotation. Some error types are explained in Appendix A.
E2E (Novikova et al., 2017), WikiTableText (Bao et al., 2018), and WikiBio (Lebret et al., 2016); we specifically selected tables with dimensions greater than 3x3 to ensure a sufficient level of complexity. Concurrently, we construct more diverse datasets drawn from broader domains, encompassing tables from LaTeX and HTML data sourced from GitHub. Each of these table types comes with its unique nuances, complexities, and levels of structuration, providing extensive coverage for our experiments. Table 1 gives statistics for the Rotowire dataset and our constructed datasets. Through empirical testing, we evaluate the capacity of popular LLMs, including GPT-NeoX-20B (Black et al., 2022), GPT-3.5 (Ouyang et al., 2022), GPT-4 (OpenAI, 2023) and Vicuna-13B (Chiang et al., 2023), on our Struc-Bench (see Section 4.2). For LaTeX and HTML tables without paired text, we use GPT-3.5 to construct synthetic descriptions as input for our benchmark.
Raw text tables are more informal, unstandardized, and often need manual interpretation. In contrast, LaTeX tables are used for scientific documents and demand high precision in their structure and syntax. HTML tables, widely used on the web, carry their own tags and structure, aligning with the rules of HTML language.
## 3 Methodology
### Data Generation
As shown in Figure 1, we propose FormatCoT and self-instruct with GPT-3.5 to generate (data, instruction) pairs. Inspired by Gorilla (Patil et al., 2023), we provide three demos with in-context learning and task the model with generating instructions that describe the format of the given structure. We specifically instruct the model to use natural language. We have structured 6 demos for each of the three data formats, all of which are hand-written or modified data.
### Finetuning LLaMA-7B
Here we propose a structure-aware instruction tuning method to bolster the capability of LLMs in generating structured text. We employ the standard instruction tuning method to fine-tune LLaMA-7B (Touvron et al., 2023). Our ultimate goal is to enable LLaMA to comprehend the task at hand and deliver the output in a conversational mode. This is akin to engaging in a dialogue with the user, culminating in the successful completion of our defined task. The entire pipeline can be found in Figure 1.
### Evaluation Metrics
Evaluating the similarity of generated tables to the ground-truth tables is non-trivial: for instance, the same table can be formatted in many different ways in HTML or LaTeX. Hence, our evaluation metric should ideally capture meaningful differences in the data presented, while being invariant to insignificant differences in formatting.
We propose to break down the similarity of two tables into two coarse components: _content_ and _structure_. In scoring _content_ similarity, we attempt to parse out the data within the table cells and compute the similarity. This similarity is computed between the generated and ground-truth table cells by commonly used similarity metrics. In scoring _structure_ similarity, we place higher emphasis on components such as the number of columns and rows, cell alignment, and the table caption. Both similarity scores do overlap (e.g. a table with the wrong number of rows/columns would likely score poorly on content), but we find that these two scoring categories allow us to perform more involved analysis on where predicted and ground-truth tables differ.
#### 3.3.1 GPTscore
We further take two approaches to score each metric. First, we perform model-based evaluation, querying GPT-3.5 with both tables and having it score the similarity of content and structure separately. Following Wang et al. (2023), we prompt the model to perform Chain-of-Thought Wei et al. (2023) reasoning before outputting its scores, and we query the model with the predicted and ground-truth tables in both orders and average the scores. We report these as the _GPTscore_. The prompt of GPTscore can be found in Appendix D.
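Operationally, the symmetric querying could be organized as in the sketch below, where `query_llm`, `parse_scores`, and `prompt_template` are hypothetical placeholders for the chat-completion call, the score extraction from the Chain-of-Thought answer, and the prompt of Appendix D.

```python
def gpt_score(pred_table, gold_table, query_llm, parse_scores, prompt_template):
    """Query the judge model with both table orders and average the content and
    structure scores (the CoT answer is parsed by parse_scores)."""
    scores = []
    for a, b in [(pred_table, gold_table), (gold_table, pred_table)]:
        answer = query_llm(prompt_template.format(table_a=a, table_b=b))
        scores.append(parse_scores(answer))          # -> (content_score, structure_score)
    content = sum(s[0] for s in scores) / 2
    structure = sum(s[1] for s in scores) / 2
    return content, structure
```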
#### 3.3.2 H-Score
In addition to model-based evaluation, we also implement hand-crafted scoring functions to score the similarity of the tables. Because of the many ways the tables can be presented in the different data formats, we implement several heuristics to normalize the tables and to compute their similarity.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Dataset** & **\# Train** & **\# Test** & **Format** & **Rows \& Columns** \\ \hline Rotowire (Wiseman et al., 2017) & 3.4k & 728 & Raw text & 7.26 \& 8.75 \\ Struc-Bench LaTeX & 5.3k & 500 & LaTeX & 2.75 \& 4.47 \\ Struc-Bench HTML & 5.4k & 499 & HTML & 5.0 \& 3.54 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Struc-Bench data statistics. The number of Rows & Columns has been averaged.
The specific implementation of the scoring functions for different formats can be found in Appendix D. Where similarities between strings or data structures are computed, we use an average of the Levenshtein distance and the Ratcliff/Obershelp similarity metric. We report these heuristically normalized metrics as the _H-Score_.
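The string-level similarity underlying the H-Score can be reproduced with standard tools: Python's `difflib.SequenceMatcher` provides a Ratcliff/Obershelp-style ratio, and a small dynamic program gives the normalized Levenshtein similarity. Averaging the two as below follows the scheme described above, though the exact normalization used in Appendix D may differ.

```python
from difflib import SequenceMatcher

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,            # deletion
                            curr[j - 1] + 1,        # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cell_similarity(a: str, b: str) -> float:
    """Average of normalized Levenshtein similarity and Ratcliff/Obershelp ratio."""
    lev = 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)
    ro = SequenceMatcher(None, a, b).ratio()
    return 0.5 * (lev + ro)
```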
## 4 Experiments
### Basic Settings
For metrics, we use SacreBLEU, ROUGE-L, BERTScore, BARTScore, and BLEURT, as they are all classical metrics for evaluating text similarity, which is also useful in this task. In addition, we use our two proposed metrics: GPTscore and H-Score. We evaluate the following models: GPT-NeoX-20B, GPT-3.5, GPT-4, Vicuna-13B, our structure-aware fine-tuned LLaMA-7B, and the original LLaMA-7B. GPT-NeoX-20B, GPT-3.5, and GPT-4 represent the state-of-the-art performance of current LLMs, and Vicuna-13B is a fine-tuned version of LLaMA that can reach 90% of the capacity of ChatGPT. We think these models are strong enough to be persuasive. For the first four models, we simply call their APIs from OpenAI or HuggingFace to generate results without further fine-tuning. In our dataset, each item consists of three parts: instruction, input, and output. When generating results, we put each item's instruction and input together as the final input to the models.
During the inference process, we will provide the model with a natural language prompt to describe the form and content of our task, as well as the expected response (e.g., "please generate a table given by the following information and format").
### Results
Table 2 provides a comparative analysis of different language models based on several performance metrics. For 'Tables from Raw Text', Ours-7B outperforms the other models in every metric. Interestingly, without fine-tuning, the performance drops significantly, particularly in SacreBLEU, ROUGE-L, and BERTScore. The results for 'LaTeX' reveal a similar trend, where we again achieve the best results across all metrics, except for the BLEURT metric, where GPT-4 takes the lead. In the 'HTML' category, GPT-4 scores the highest in SacreBLEU and BERTScore. However, our model comes out on top for the rest of the metrics.
Considering the inconsistency observed across different metrics, we also carried out a human evaluation on 100 examples using MTurk. Evaluators rated each example on a scale from 0 to 10, assessing both format consistency and content consistency. Although we cannot enumerate the details due to space constraints, we discovered that the Content GPTscore and Content H-Score are more closely aligned with existing metrics. However, our proposed Format GPTscore and Format H-Score significantly surpass other metrics, particularly in terms of instance-level Spearman correlation for format accuracy. These human evaluations underscore the efficacy
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Model & SacreBLEU & ROUGE-L & BERTScore & BARTScore & BLEURT & Content GPTscore & Format GPTscore & Content H-Score & Format H-Score \\ \hline \multicolumn{10}{c}{_Tables from Raw Text_} \\ GPT-NeoX-20B & 35.24 & 55.78 & 68.91 & -2.34 & 33.51 & 3.86 & 6.10 & 0.50 & -1.32 \\ GPT-3.5 & 56.92 & 70.97 & 91.35 & -1.68 & 36.85 & 6.19 & 8.16 & 0.52 & -1.27 \\ GPT-4 & 68.13 & 75.44 & 94.89 & -0.99 & 55.24 & 6.88 & 8.30 & 0.85 & 0.53 \\ Vicuna-13B & 40.12 & 50.77 & 75.21 & -2.05 & 40.02 & 4.07 & 6.33 & 0.55 & -1.38 \\ Ours-7B & **90.6** & **89.85** & **98.54** & **-0.69** & **66.07** & **7.69** & **8.60** & **1.65** & **3.61** \\ \(w.o.finetune\) & 9.9 & 36.56 & 81.63 & -2.50 & 70.24 & 4.58 & 6.00 & 0.51 & -1.01 \\ \hline \multicolumn{10}{c}{_LaTeX_} \\ \hline GPT-NeoX-20B & 45.92 & 65.10 & 76.09 & -2.05 & 40.87 & 7.23 & 7.02 & 0.56 & 0.72 \\ GPT-3.5 & 56.94 & 75.99 & 86.25 & -1.30 & 42.89 & 8.22 & 8.41 & 0.99 & 1.27 \\ GPT-4 & 78.15 & 85.34 & 88.07 & -1.09 & **67.11** & 8.78 & 8.81 & 1.10 & 1.35 \\ Vicuna-13B & 50.80 & 69.48 & 80.44 & -1.07 & 36.74 & 7.70 & 8.10 & 0.78 & 1.06 \\ Ours-7B & **89.13** & **88.99** & **98.55** & **-0.69** & 66.07 & **8.94** & **9.45** & **1.14** & **1.52** \\ \(w.o.finetune\) & 47.24 & 70.89 & 73.27 & -2.13 & 38.13 & 7.10 & 6.98 & 0.51 & 0.69 \\ \hline \multicolumn{10}{c}{_HTML_} \\ \hline GPT-NeoX-20B & 60.36 & 72.13 & 86.88 & -1.59 & 30.06 & 8.42 & 8.94 & 0.81 & 0.92 \\ GPT-3.5 & 73.80 & 85.19 & 96.76 & -1.46 & 34.81 & 9.11 & 9.35 & 1.10 & 2.15 \\ GPT-4 & **79.25** & 85.95 & **97.22** & -1.31 & 41.59 & 9.17 & 9.62 & 1.15 & 2.29 \\ Vicuna-13B & 58.75 & 70.37 & 88.65 & -1.58 & 31.11 & 8.55 & 8.88 & 0.79 & 0.93 \\ Ours-7B & 77.50 & **86.08** & 96.25 & **-1.30** & **42.89** & **9.20** & **9.70** & **1.18** & **2.49** \\ \(w.o.finetune\) & 65.30 & 78.24 & 88.12 & -1.57 & 32.78 & 8.22 & 8.81 & 0.92 & 0.96 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Automated evaluation results on the test set, involving five types of previous metrics and four proposed ones. \(w.o.finetune\) means that we also compared the performance of our model without structure-aware finetuning as an ablation study.
of our proposed metrics. However, larger-scale human evaluations are needed to further explore and substantiate these findings.
Moreover, we conduct an in-depth analysis, attributing the observed shortcomings to several error types spanning two key dimensions, content selection and format planning, as well as the reasoning process (see details in Appendix G). Based on this analysis, we present an ability map of model capabilities across six dimensions.
## 5 Conclusion
In conclusion, this research offers a comprehensive exploration of the structured text generation limitations inherent in Large Language Models (LLMs) like ChatGPT and GPT-4. Through developing a benchmark specifically designed for structured text generation and integrating a wide range of datasets, we have been able to thoroughly assess the capabilities of prevalent LLMs. Our analysis has identified several areas of concern, particularly in regard to content accuracy, formatting, numerical reasoning, and the handling of long tables.
## 6 Limitations
Although we present an in-depth and comprehensive analysis, the exploration of LLMs in structured text generation presented in this paper has several limitations:
**Domain-Specific Benchmark Development.** While we've made strides in constructing benchmarks for structured text generation, it may be beneficial to develop benchmarks that cater to specific domains. Different fields might have unique structural requirements and understanding these nuances can significantly improve the models' applicability across diverse contexts.
**Expand the Range of Datasets.** There are endless data types and sources that can be explored. Incorporating a broader variety of datasets could expose the models to an even wider range of structural formats, ultimately enhancing their overall performance.
**Enhancing Numerical Reasoning Capabilities.** Our study identified inadequate numerical reasoning as one of the challenges faced by LLMs. Investigating techniques to bolster numerical reasoning in these models could lead to significant improvements in their performance.
**Developing Advanced Methods.** While our structure-aware instruction tuning method showed promising results, more sophisticated techniques could be developed. For instance, future work could explore ways of incorporating more explicit structural information into the model or developing methods that allow the model to learn structural patterns more effectively.
**Exploring Multimodal LLMs.** As LLMs continue to evolve, there are opportunities to explore multimodal models that can process and generate both text and other forms of data, such as sound or images (Kamigaito et al., 2023), in a structured manner.
|
2309.08793 | Fin-Fact: A Benchmark Dataset for Multimodal Financial Fact Checking and
Explanation Generation | Fact-checking in financial domain is under explored, and there is a shortage
of quality dataset in this domain. In this paper, we propose Fin-Fact, a
benchmark dataset for multimodal fact-checking within the financial domain.
Notably, it includes professional fact-checker annotations and justifications,
providing expertise and credibility. With its multimodal nature encompassing
both textual and visual content, Fin-Fact provides complementary information
sources to enhance factuality analysis. Its primary objective is combating
misinformation in finance, fostering transparency, and building trust in
financial reporting and news dissemination. By offering insightful
explanations, Fin-Fact empowers users, including domain experts and end-users,
to understand the reasoning behind fact-checking decisions, validating claim
credibility, and fostering trust in the fact-checking process. The Fin-Fact
dataset, along with our experimental codes is available at
https://github.com/IIT-DM/Fin-Fact/. | Aman Rangapur, Haoran Wang, Ling Jian, Kai Shu | 2023-09-15T22:24:00Z | http://arxiv.org/abs/2309.08793v2 | # Fin-Fact: A Benchmark Dataset for Multimodal Financial Fact Checking and Explanation Generation
###### Abstract
Fact-checking in the financial domain is underexplored, and there is a shortage of quality datasets in this domain. In this paper, we propose Fin-Fact, a benchmark dataset for multimodal fact-checking within the financial domain. Notably, it includes professional fact-checker annotations and justifications, providing expertise and credibility. With its multimodal nature encompassing both textual and visual content, Fin-Fact provides complementary information sources to enhance factuality analysis. Its primary objective is combating misinformation in finance, fostering transparency, and building trust in financial reporting and news dissemination. By offering insightful explanations, Fin-Fact empowers users, including domain experts and end-users, to understand the reasoning behind fact-checking decisions, validating claim credibility, and fostering trust in the fact-checking process. The Fin-Fact dataset, along with our experimental codes, is available at [https://github.com/IIT-DM/Fin-Fact/](https://github.com/IIT-DM/Fin-Fact/).
## 1 Introduction
In an era characterized by the rapid spread of misinformation and the proliferation of fake news, fact-checking has emerged as a critical tool for ensuring the accuracy and reliability of information (Saakyan et al., 2021; Wadden et al., 2020; Sarrouti et al., 2021). The emergence of social media platforms and the wide accessibility of multimodal content have intensified the complexities linked with verifying the accuracy of assertions (Mishra et al., 2022). Notably, the financial sector introduces its distinctive array of difficulties, given that precise and timely data plays a pivotal role in enabling well-informed investment choices and upholding market stability. Additionally, financial fact-checking encounters specific hurdles, such as the need for customized data to address unique requirements and nuances. Furthermore, the manipulation of images to exploit visualization bias presents another significant challenge in the verification process (Mansoor and Harrison, 2018).
The rise of misinformation in the financial domain has become a pressing concern, with potential impacts on public trust, investor decisions, and overall market stability (Kogan et al., 2019; Clarke et al., Forthcoming; Zhi et al., 2021; Liu and Moss, 2022). To counter the spread of misleading information, fact-checking methods have gained importance in financial reporting and analysis (Zhi et al., 2021; Mohankumar et al., 2023). However, the development of effective and dependable models in this domain has been hindered by the lack of suitable benchmark datasets that accurately represent the intricacies of financial information and context.
In recent years, there has been notable progress in creating various datasets for fact-checking (Wadden et al., 2020; Wadden and Lo, 2021; Wadden et al., 2022; Sarrouti et al., 2021; Saakyan et al., 2021). However, there is a noticeable gap in addressing the unique demands of fact-checking within the financial domain. Financial fact-checking faces several significant challenges.
Figure 1: A demonstration of comprehensive multimodal fact-checking and the creation of explanations.
**Firstly**, it requires meticulously curated data that can encompass the intricate nuances of financial discourse. Financial documents and journalistic pieces often employ specialized language that differs from conventional structures. However, existing datasets frequently lack comprehensive coverage of financial news articles, and the absence of expert annotations diminishes the reliability of the data. **Secondly**, financial data is highly context-sensitive and constantly evolving, emphasizing the need for a dataset that can accurately capture the dynamic nature of financial markets. **Lastly**, the landscape of financial fact-checking introduces the challenge of visualization bias, where deliberate manipulation of visual content can shape perception and distort the accuracy of claims.
In this paper, we tackle the challenge of compiling, annotating, and refining a comprehensive corpus of financial texts that faithfully represents financial reporting, accounting methodologies, and market fluctuations. The realms of financial fact-checking and explanation generation present distinct obstacles that require specialized approaches. The necessity for tailored data capable of navigating financial terminology and intricate visual elements underscores the interdisciplinary nature inherent in this research endeavor. Figure 1 illustrate a comprehensive multimodal fact-checking and the creation of explanations while Table 1 displays an example instance from the corpus.
Presenting Fin-Fact, a novel benchmark dataset specifically designed for multimodal financial fact-checking and explanation generation. Our key contributions and discoveries are as follows:
* We introduce Fin-Fact, the inaugural benchmark dataset designed for verifying claims within the financial domain. This dataset encompasses 3,562 claims, each accompanied by expert fact-checkers' justifications.
* Fin-Fact holds the potential for explanation generation, facilitated by the inclusion of authoritative ruling comments from skilled fact-checking professionals.
* Our investigation reveals performance challenges in applying the state-of-the-art models to Fin-Fact in open-domain contexts, underlining the need for improved generalization.
## 2 Related Work
**Fact Checking.** Significant efforts have been dedicated to creating fact-checking datasets for automated fact-checking systems (Wadden et al., 2020, 2022; Thorne et al., 2018; Saakyan et al., 2021). Previous studies have predominantly focused on predicting the accuracy of claims from diverse sources. While large-scale datasets from various domains have been utilized (Gupta and Srikumar, 2021), they might not be suitable for identifying misinformation related to financial matters due to domain-specific disparities.
Although general-content misinformation datasets are readily accessible, only a limited number of datasets pertain to online financial misinformation (Clarke et al., 2020; Kogan et al., 2019; Zhi et al., 2021; Liu and Moss, 2022; Zhang et al., 2022; Boehm and Kroner, 2023). Current financial misinformation datasets lack clear labeling and justifications, raising concerns about result reliability. In contrast, the Fin-Fact dataset is distinct with genuine data and a multimodal structure, combining text and images to encompass a wide range of financial information. Additionally, it includes expert fact-checker comments, enabling comprehensive explanations by models.
**Explanation Generation.** Explanation generation plays a pivotal role in facilitating human comprehension of claim credibility. It involves leveraging external knowledge graphs to create semantic traces originating from the claim itself (Gad-Elrab et al., 2019; Li et al., 2020; Sarrouti et al., 2021). These semantic traces function as explanations that substantiate the veracity of claims. This approach offers valuable insights into the rationale behind the model's decision-making, thereby fostering trust. Moreover, the process of explanation generation relies on drawing evidence from diverse sources (Thorne et al., 2018; Hanselowski et al., 2019; Fan et al., 2020) to validate claims. However,
\begin{table}
\begin{tabular}{l l} \hline \hline
**Claim** & Student debt relief is taxable by the Internal Revenue Service. \\
Author & Jeff Cercone \\
Posted & 08/30/2022 \\
**Justification** & The up to $20,000 of federal student loan forgiveness announced by Joe Biden won't... \\
**Evidence** & The post shows a screenshot of an IRS web page about canceled debt, with this sentence... \\
**Image** & [https://politifact.com/photos/2a603743e.jpg](https://politifact.com/photos/2a603743e.jpg) \\
Caption & President Joe Biden speaks about student loan debt forgiveness... \\
Visual Bias & 0 \\
Issues & Debt, Taxes \\
**Label** & True \\ \hline \hline \end{tabular}
\end{table}
Table 1: An example from Fin-Fact
this evidence is frequently comprised of isolated sentences extracted from extensive collections of documents, which can make it challenging for humans to interpret the broader context. To effectively generate explanations, a high-quality dataset annotated by humans is essential. This paper addresses the need for such a dataset.
## 3 The Fin-Fact Dataset
The Fin-Fact dataset presents a diverse array of labels that enhance the depth of analysis when evaluating financial claims. These labels contribute a multifaceted perspective to the fact-checking process, augmenting its analytical capabilities.
At the core of the dataset are the 'Claim' and 'Author' labels, which respectively denote the primary assertion and its originating source. The inclusion of the 'Posted Date' attribute introduces temporal context, while the 'Sci-digest' label provides the summary of the claim. Further contextualization is achieved through the 'Justification' label, elucidating the accuracy of the claim, and the 'Evidence' label, which presents corroborative information connected through 'Evidence link.' The dataset also acknowledges the visual dimension through 'Image link' and 'Image Caption.' Critically, the 'Visualisation Bias Label' evaluates potential biases linked to images. Complexities inherent in the claims are highlighted by the 'Issues' label, while the ultimate assessment is provided by the "Claim Label", offering a definitive classification of "True", "False", or "NEI (Not Enough Information)".
By amalgamating these labels, the dataset establishes a comprehensive and multidimensional resource. This resource accommodates textual, temporal, evidentiary, and visual components, all of which are imperative for a thorough evaluation of claims during the fact-checking process.
### Data Collection
PolitiFact1 and FactCheck2 are prominent online platforms dedicated to countering the spread of false information. These platforms engage skilled fact-checkers to meticulously analyze and verify individual claims, subsequently producing articles that offer their conclusions supported by relevant evidence. In our study, we leveraged these platforms as our primary sources of data. Specifically, we devised a comprehensive process to extract essential information from these platforms.
Footnote 1: [http://politifact.com/](http://politifact.com/)
Footnote 2: [http://factcheck.org](http://factcheck.org)
To elaborate, we devised a systematic process to gather essential information from PolitiFact and FactCheck. This encompassed the extraction of text-based claims and the assignment of corresponding truthfulness labels. Moreover, we retrieved both textual and visual evidence, along with their associated links, which contributed substantively to the assessment of claim accuracy.
It's noteworthy that the initial claims were collected by journalists affiliated with these platforms. These claims originated from diverse sources, including online speeches, public statements, news articles, and social media posts. Importantly, the fact-checkers from these platforms played a pivotal role by providing truthfulness labels, pertinent evidence, references to corroborating sources, and the articles delivering their final verdict. This comprehensive approach ensured the thorough and reliable collection of data, reinforcing the credibility of our assessment of claim accuracy.
### Dataset Statistics
The Fin-Fact dataset is an encompassing compilation of claims within the financial domain, spanning diverse sectors such as Income, Finance, Economy, Budget, Taxes, and Debt, as visualized in Figure 2. This dataset is meticulously crafted, comprising a total of 3,562 claims, curated to encapsulate the intricacies inherent in financial discourse.
In the Fin-Fact dataset, claims are categorized into three labels: "True", "False", and "NEI (Not Enough Information)", representing the veracity of each claim in the financial domain. The dataset contains 1,807 'True' claims that are verified as accurate, 1,315 'False' claims that have been proven inaccurate through fact-checking procedures, and 440 'NEI' instances where there is insufficient
Figure 2: Diverse sectors within the Fin-Fact dataset.
evidence to make a determination. With its comprehensive span across a variety of claims, diverse sectors, and an equitable distribution of labels, the Fin-Fact dataset stands as a robust cornerstone for the development, assessment, and progression of fact-checking models in the domain of finance.
## 4 Experimental Results
In this series of experiments, the primary focus revolved around evaluating the accuracy of Natural Language Inference (NLI) models for fact-checking tasks. The assessment encompassed a range of prominent models, including ELECTRA Clark et al. (2020), BART Lewis et al. (2020), RoBERTa Liu et al. (2019), and GPT-2 Radford et al. (2019). Each model underwent scrutiny using the Fin-Fact dataset, enabling an assessment of their effectiveness in distinguishing financial claims. The outcomes of these fact-checking experiments provided thought-provoking insights: ELECTRA reached 29% accuracy, BART-Large achieved 33%, RoBERTa-Large 32%, and GPT-2 emerged as the leader with 43%, as shown in Table 2. These findings underscore the intricate challenges posed by financial fact-checking, with models displaying varying degrees of performance within this domain. Figure 3 compares the performance of the ANLI models on Fin-Fact.
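For readers who want to reproduce this style of evaluation, the sketch below runs an off-the-shelf NLI checkpoint over an evidence-claim pair; `roberta-large-mnli` is used here as a stand-in for the ANLI-trained variants evaluated in the paper, and the mapping from entailment/contradiction/neutral to the dataset's True/False/NEI labels is an assumption of this sketch.

```python
# Sketch: claim verification as premise/hypothesis NLI with a stand-in checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

evidence = "The IRS will not tax the announced federal student loan forgiveness."
claim = "Student debt relief is taxable by the Internal Revenue Service."

inputs = tok(evidence, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]

pred = model.config.id2label[int(probs.argmax())]
label_map = {"ENTAILMENT": "True", "CONTRADICTION": "False", "NEUTRAL": "NEI"}
print(pred, "->", label_map.get(pred, "NEI"))
```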
The final phase of experimentation delved into generating explanations for the claims. For each claim in the dataset, we employed the BART model to generate explanations, extracting insights that highlight the key factors contributing to the determination of claim accuracy. These explanations were generated from the justifications provided for each claim. To quantitatively evaluate the quality of these explanations, we leveraged the GLUE and ROUGE metrics, as shown in Table 3. The Evidence label in the dataset served as the ground truth, enabling us to assess the alignment between the generated explanations and the human-provided justifications.
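A minimal sketch of this generate-then-score setup is shown below, assuming an off-the-shelf BART summarization checkpoint (`facebook/bart-large-cnn`) as a stand-in for the model used on Fin-Fact justifications; the input strings are abbreviated from the example in Table 1.

```python
# Sketch: generate an explanation from a justification, then score it against
# the Evidence field with ROUGE.
from transformers import pipeline
from rouge_score import rouge_scorer

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

justification = ("The up to $20,000 of federal student loan forgiveness "
                 "announced by Joe Biden won't be taxed by the IRS ...")
evidence = "The post shows a screenshot of an IRS web page about canceled debt ..."

explanation = summarizer(justification, max_length=60, min_length=10)[0]["summary_text"]

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"])
print(explanation)
print(scorer.score(evidence, explanation))
```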
While our primary focus was on evaluating NLI models for fact-checking and explanation generation, the Fin-Fact dataset offers versatility for various applications. Researchers in multimodal machine learning can leverage it as a valuable benchmark. With its unique combination of textual and visual financial data, Fin-Fact provides an ideal testbed for experimenting with state-of-the-art multimodal models. Users are encouraged to assess and enhance these models, contributing to advancements in this critical area.
## 5 Conclusion and Future Work
The emergence of Fin-Fact signifies a pivotal advancement in the quest to counter misinformation within the financial domain. By providing expert annotations, comprehensive claims, and the potential for explanatory insights, Fin-Fact empowers fact-checking systems to achieve heightened levels of precision and transparency. Its interdisciplinary design tackles financial language intricacies and evolving contextual complexities. This construct serves as a robust cornerstone for more effective and reliable fact-checking processes.
In the realm of finance, Fin-Fact fosters trust, bolsters credibility, and aids fact-checking. We're committed to addressing visualization bias, a prominent challenge, and exploring the impact of manipulated images on claim interpretation. We plan to extract quantitative insights from media, enhancing the dataset with tables and charts. Our goal is to release an improved dataset, offering a more comprehensive resource for researchers, fact-checkers, and decision-makers.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline
**Model** & **Precision** & **Recall** & **F1 Score** & **Accuracy** \\ \hline BART & 0.369 & 0.344 & 0.300 & 0.331 \\ RoBERTa & 0.393 & 0.346 & 0.285 & 0.318 \\ ELECTRA & 0.351 & 0.330 & 0.276 & 0.287 \\ GPT-2 & 0.347 & 0.337 & 0.312 & 0.430 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of ANLI models on Fin-Fact.
Figure 3: Comparison of Scores for ANLI Models.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline Model & ROUGE-1 & ROUGE-2 & ROUGE-3 & GLUE \\ \hline BART & 0.84 & 0.63 & 0.46 & 0.062 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance scores of the BART model on Fin-Fact for explanation generation. |
2307.16773 | AsdKB: A Chinese Knowledge Base for the Early Screening and Diagnosis of
Autism Spectrum Disorder | To easily obtain the knowledge about autism spectrum disorder and help its
early screening and diagnosis, we create AsdKB, a Chinese knowledge base on
autism spectrum disorder. The knowledge base is built on top of various
sources, including 1) the disease knowledge from SNOMED CT and ICD-10 clinical
descriptions on mental and behavioural disorders, 2) the diagnostic knowledge
from DSM-5 and different screening tools recommended by social organizations
and medical institutes, and 3) the expert knowledge on professional physicians
and hospitals from the Web. AsdKB contains both ontological and factual
knowledge, and is accessible as Linked Data at https://w3id.org/asdkb/. The
potential applications of AsdKB are question answering, auxiliary diagnosis,
and expert recommendation, and we illustrate them with a prototype which can be
accessed at http://asdkb.org.cn/. | Tianxing Wu, Xudong Cao, Yipeng Zhu, Feiyue Wu, Tianling Gong, Yuxiang Wang, Shenqi Jing | 2023-07-31T15:40:45Z | http://arxiv.org/abs/2307.16773v2 | # AsdKB: A Chinese Knowledge Base for the Early Screening and Diagnosis of Autism Spectrum Disorder
###### Abstract
To easily obtain the knowledge about autism spectrum disorder and help its early screening and diagnosis, we create AsdKB, a Chinese knowledge base on autism spectrum disorder. The knowledge base is built on top of various sources, including 1) the disease knowledge from SNOMED CT and ICD-10 clinical descriptions on mental and behavioural disorders, 2) the diagnostic knowledge from DSM-5 and different screening tools recommended by social organizations and medical institutes, and 3) the expert knowledge on professional physicians and hospitals from the Web. AsdKB contains both ontological and factual knowledge, and is accessible as Linked Data at [https://w3id.org/asdkb/](https://w3id.org/asdkb/). The potential applications of AsdKB are question answering, auxiliary diagnosis, and expert recommendation, and we illustrate them with a prototype which can be accessed at [http://asdkb.org.cn/](http://asdkb.org.cn/).
Keywords:Autism Spectrum Disorder Knowledge Base Ontology.
## 1 Introduction
Autism spectrum disorder (ASD) is a kind of neurodevelopmental disability which begins before the age of 3 years and can last throughout a person's whole life. People with ASD have problems in social communication and interaction, and may have stereotypic or repetitive behaviors (or interests). According to the most recent statistics [17] published by the Centers for Disease Control and Prevention (CDC), about 1 in 36 children aged 8 years has been identified with ASD, and this proportion is quite high. However, there is no quantitative medical test to diagnose such a disorder, and professional physicians only use screening tools and look at the behaviors for some time to make a diagnosis. In
this way, many children cannot receive a final diagnosis until much older, which causes the children with ASD might not get the early help they need. In China, the situation on screening and diagnosing the children with ASD maybe much worse compared with western developed countries. The 2020 China rehabilitation report of children developmental disorder6 points out that the ASD incidence in China is around 1% and the number of ASD children is more than three million, but the number of professional physicians who can diagnose ASD is only about 500, let alone the number of board certified behavior analysts. This does hinder the timely diagnosis on ASD, which inspires us to think about if we can apply artificial intelligence techniques to solve the early screening and diagnosis of ASD. The key problem is how to extract and integrate ASD relevant knowledge from heterogeneous sources to support upper-level intelligent applications.
Footnote 6: [http://pkucarenjk.com/news-family/2303.html](http://pkucarenjk.com/news-family/2303.html)
To solve this problem, we build AsdKB, a Chinese knowledge base for the early screening and diagnosis of ASD, from various sources (see Figure 1), such as SNOMED CT [5] (a large collection of medical terms), ICD-107 (the 10th revision of the classification system of diseases published by WHO) clinical descriptions on mental and behavioural disorders [21], DSM-5 [1] (the 5th edition of diagnostic and statistical manual of mental disorders), the screening tools recommended by CDC and so on. Specifically, we first build an ontology covering important concepts about the screening and diagnosis of ASD from DSM-5, ICD-10 clinical descriptions on mental and behavioural disorders, SNOMED CT, CDC materials, and other Web sources. Using this ontology as the schema, we then extract and integrate factual knowledge on diseases, diagnosis, experts, and
Figure 1: The data sources for building AsdKB.
others. Besides, we use and develop Web crawler and natural language processing (NLP) tools for data extraction, keyword extraction, knowledge extraction, machine translation, and etc., over various formats of data, including text, tables, and structured knowledge. All classes, properties, and instances in AsdKB are identified by permanent dereferenceable URIs in w3id8. All data are available as RDF dump files on Zenodo9, and the basic information of the AsdKB project can be accessed at Github10. All the resources are published under CC BY-SA 4.0. The main contributions of this paper are summarized as follows:
Footnote 8: [https://w3id.org/asdkb/](https://w3id.org/asdkb/)
Footnote 9: [https://zenodo.org/record/8199698](https://zenodo.org/record/8199698)
Footnote 10: [https://github.com/SilenceSnake/ASDKB](https://github.com/SilenceSnake/ASDKB)
* We first build a Chinese knowledge base for the early screening and diagnosis of ASD, i.e., AsdKB, which contains both ontological and factual knowledge, and publish it following Linked Data best practices.
* We present a prototype system on question answering, auxiliary diagnosis, and expert recommendation with AsdKB, and discuss how to support the early screening and diagnosis of ASD with this system.
The rest of this paper is organized as follows. Section 2 introduces the process of ontology building. Section 3 describes the extraction of factual knowledge. Section 4 presents the potential applications of AsdKB. Section 5 outlines related work, and we conclude in the last section.
## 2 Ontology Building
This section introduces the process of building the AsdKB ontology as the schema which is used to guide extracting and integrating factual knowledge from various sources. We follow Ontology Development 101 [20] to build the ontology (Figure 2 shows a part of it) as follows.
Step 1: Determine the domain and scope of the ontology.AsdKB is expected to cover the ASD relevant knowledge on the early screening and diagnosis, so the ontology needs to cover important concepts in widely recognized materials about the screening and diagnosis of ASD. Here, we select relevant materials from CDC, DSM-5, ICD-10, SNOMED-CT, and other Web sources.
Step 2: Consider reusing existing ontologies.In this part, we reuse the standard RDF, RDFS, and OWL vocabularies, including rdf:type linking from instances to classes, rdfs:label recording the Chinese (or English) labels of classes and properties, rdfs:comment providing textual descriptions to clarify meanings of classes, rdfs:subClassOf describing the class hierarchy, equivalent classes are linked by owl:equivalentClass from the AsdKB ontology to other ontologies, and rdfs:domain and rdfs:range specifying the resources and values of a property are instances of one or more classes, respectively.
**Step 3: Enumerate important terms in the ontology.** We read the ASD materials from CDC, DSM-5, ICD-10, SNOMED CT and other Web sources mentioned in the first step, to manually identify a list of important concept-level terms. For example, important symptom concepts in disease knowledge include "Impairments in Social Interaction" and " Restrictive, Repetitive and Stereotyped Behaviors". Important concepts in expert knowledge include "Physician" and "Hospital". Besides, "Screening Tool" and "Diagnostic Standard" are related to screening and diagnosis.
**Step 4: Define the classes and the class hierarchy.** Based on the previous identified important terms, we start to create disease classes (e.g., "Autism Spectrum Disorder" and "Asperger's Syndrome"), diagnosis classes (e.g., "Screening Tool" and "Screening Question"), expert classes (e.g., "Physician" and "Hospital"), and others. For the class hierarchy, we consider the hierarchies within disease classes, symptom classes, and diagnosis classes, respectively. For example, as shown in Figure 2, we have " Asperger's Syndrome rdfs:subClassOf Autism Spectrum Disorder" and "Standard of Social Interaction rdfs:subClassOf Diagnostic Standard". Specifically, we have created a class "Screening Question" in the diagnosis classes to facilitate the exploration of the association between instances of "Screening Question" and "Diagnostic Standard".
Figure 2: A part of the AsdKB ontology.
**Step 5: Define the properties of classes.** After selecting classes from the list of terms, we start to attach properties to classes using rdfs:domain. We distinguish datatype properties and object properties. For example, for the class "Physician", we have the object property workAt and datatype properties Name, Title, Specialty, Hospital Department, and etc.
**Step 6: Define the facets of the properties.** We specify the value type of each property by defining rdfs:range. The range of a datatype property is an XML Schema datatype. For example, the ranges of properties Address (attached to the class "Hospital") and Score (attached to the class "Option") are xsd:string and xsd:float, respectively. Besides, the range of an object property is a class. For example, the range of hasSymptom is the "Symptom".
**Step 7: Create instances.** We do not define instances in the ontology but only use it as the schema of AsdKB. The creation of instances belongs to factual knowledge extraction, and it will be described in Section 3.
**Statistics about the AsdKB ontology.** The built ontology is currently online: [https://w3id.org/asdkb/ontology/](https://w3id.org/asdkb/ontology/). It contains 32 classes, 25 datatype properties, and 16 object properties. The maximum depth of a class in the class hierarchy is 4. Note that we apply google translate11 to translate the English labels of all elements in the ontology into Chinese ones, and also perform careful manual proofreading and correction.
Footnote 11: [https://translate.google.com/](https://translate.google.com/)
**Mapping to other ontologies.** To facilitate schema knowledge sharing across different ontologies, We map our AsdKB ontology to the Unified Medical Language System [2] (UMLS) and the Autism DSM-ADI-R (ADAR) ontology [19]. UMLS is the largest integrated biomedical ontology covering the vocabularies from around 200 sources including SNOMED CT, DSM-5, FMA [23], and etc. ADAR is built from Autism Diagnostic Interview-Revised [16] (ADI-R), which is a structured interview used for autism diagnosis. ADAR focuses on autism symptom classification, and it constructs many fine-grained symptom classes, e.g., "First walked unaided" and "Daily spontaneous and meaningful speech", which are taken as instances in AsdKB. This is why we do not directly re-use ADAR in AsdKB.
Since the disease classes in the AsdKB ontology are extracted from SNOMED CT, which is also a part of UMLS, such classes are naturally linked to UMLS (see the Example 1 in Figure 3). For the rest eighteen classes in AsdKB, we submit each of their labels to UMLS Metathesaurus Browser12 and manually make comparisons between the returned classes and submitted ones to decide whether there exist owl:equivalentClass or rdfs:subClassOf relations (the Example 2 in Figure 3 gives a mapping result). Besides, we apply Agreement-MakerLight [6] to mapping the AsdKB ontology to ADAR, and the Example 3 in Figure 3 also shows a mapping result.
## 3 Factual Knowledge Extraction
This section presents the extraction of factual knowledge of ASD. Due to limited spaces, we do not explain every detail but focus on the main content.
### Disease Knowledge
For disease knowledge, we need to extract the factual knowledge about disease and symptom instances according to the AsdKB ontology. Disease instances (e.g., "Atypical Rett syndrome") are derived from SNOMED CT, and they are actually the leaf nodes in the disease taxonomy in SNOMED CT. For each disease instance, we extract the values of the properties: Label (instance name), SCTID (the term ID in SNOMED CT), ICD-10 code (the corresponding ICD-10 category), and Synonym from SNOMED CT, respectively. We also manually design templates (i.e., regular expressions) to extract the values of properties Introduction (a brief description of the given disease instance), Patient Groups (e.g., "children" or "female children"), and Pathogen (e.g., "genetic and environmental factors") from ICD-10 clinical descriptions on mental and behavioural disorders [21], respectively. Besides, for the values of properties Label, Synonym, Introduction, Patient Groups, and Pathogen, we obtain the corresponding Chinese versions by Google Translate and manual proofreading. We collect 49 disease instances relevant to ASD in total, and their corresponding property information.
Symptom instances are also extracted from ICD-10 clinical descriptions on mental and behavioural disorders. We model the symptom instance extraction as the task of sequence labeling. We first take each paragraph as a document, and apply Term Frequency-Inverse Document Frequency [15] (TF-IDF) to identify keywords. Based on this, we then label a small amount of symptom instances in the corpus to train an extraction model. Here, we use BioBERT [14], a pre-trained biomedical language representation model for biomedical text mining, to encode each word as an embedding. Afterwards, we utilize BiLSTM [7] to capture textual context features for each word. Finally, we apply conditional
Figure 3: Examples of mapping the AsdKB ontology to UMLS and ADAR.
random fields [13] to finish sequence labeling, which naturally classifies symptom instances into the pre-defined symptom classes, i.e., "Impairments in Social Interaction", "Restrictive, Repetitive and Stereotyped Behaviors", and "Other Symptoms". High-quality results of sequence labeling obtained by the trained model are added to the labeled data to train a new model. We repeat this process until the maximum number of iterations is reached. Google Translate is also used to get the Chinese description of each symptom instance. Figure 4 shows the triples of the symptom <[https://w3id.org/asdkb/instance/symptom64](https://w3id.org/asdkb/instance/symptom64)>. Finally, we collect 65 symptom instances in total.
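The tagging architecture described above (BioBERT embeddings, a BiLSTM encoder, and a CRF output layer) can be sketched as follows; this is only an illustration assuming the `transformers` and `pytorch-crf` packages and a BIO tag set over the three symptom classes, and it omits the iterative self-training loop.

```python
# Sketch: BioBERT -> BiLSTM -> CRF tagger for symptom-instance extraction.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer
from torchcrf import CRF

class SymptomTagger(nn.Module):
    def __init__(self, num_tags: int, hidden: int = 256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        x = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        emissions = self.emit(self.lstm(x)[0])
        mask = attention_mask.bool()
        if tags is not None:   # training: negative log-likelihood from the CRF
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        return self.crf.decode(emissions, mask=mask)   # inference: best tag paths

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
# 7 tags: O plus B/I tags for the three symptom classes.
model = SymptomTagger(num_tags=7)
batch = tokenizer(["Marked impairment in the use of nonverbal behaviors."],
                  return_tensors="pt")
print(model(batch["input_ids"], batch["attention_mask"]))
```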
### Diagnostic Knowledge
For diagnostic knowledge, we extract the factual knowledge on the instances of diagnostic standards, screening tools, screening questions, and the corresponding options. Instances of diagnostic standards are acquired from the Chinese edition13 of DSM-5 [1], so we only have Chinese descriptions for the instances of diagnostic standards. We follow a similar process to the one used for extracting symptom instances, and only replace the pre-trained model BioBERT with the more general model BERT [11] because BioBERT does not support Chinese but BERT does. Different from symptom instances, instances of diagnostic standards do not refer to specific behaviors or activities of the people with ASD; they present textual summarizations for specific classes of diagnostic standards (i.e., "Standard of Repetitive Behavior" and "Standard of Impairments in Social Interaction").

Instances of screening tools are collected from the recommendations of social
organizations and medical institutes, including CDC14, ALSOLIFE15 (China ASD Evaluation and Intervention Platform), Autism Canada16, and OCALI17 (The Ohio Center for Autism and Low Incidence). Instances of screening tools in AsdKB are actually screening scales, which have the properties Introduction (basic information and instructions), Author, User (the one filling in the scale, e.g., parents or teacher), Age (applicable ages of screening targets), Time (the time it takes to fill in the scale), Rule (screening principles and details), and Screening Boundary (the score of screening boundary after finishing the scale). After careful selection, we extract twenty instances of screening tools, containing fifteen English screening scales and five Chinese ones, such as ABC [12], CARS2 [24], and M-CHAT [27]. Google Translate is used here to translate English scales into Chinese ones, and manual proofreading is also conducted.
Footnote 14: [https://www.cdc.gov/ncbdd/autism/hcp-screening.html#Tools](https://www.cdc.gov/ncbdd/autism/hcp-screening.html#Tools)
Footnote 15: [https://www.alsolife.com/autism/screen/](https://www.alsolife.com/autism/screen/)
Footnote 16: [https://autismcanada.org/autism-explained/screening-tools/](https://autismcanada.org/autism-explained/screening-tools/)
Footnote 17: [https://www.ocali.org/project/assessment_measures](https://www.ocali.org/project/assessment_measures)
Footnote 18: [https://www.haodf.com/](https://www.haodf.com/)
Footnote 19: [https://www.familydoctor.com.cn/](https://www.familydoctor.com.cn/)
For instances of screening questions and options, they can be directly obtained from screening scales through table extraction. Besides keeping their textual content as the property, we also establish correspondingSymptom relationships between instances of screening questions and symptom instances, and matchStandard relationships between option instances and instances of diagnostic standards. These two kinds of relationships (i.e., object properties) benefit to the interpretability of screening results. For example, as shown in Figure 5, AsdKB can tell users that the current question investigates what specific symptoms are and whether the current option matches some diagnostic standard or not, in order to help users better understand screening questions and provide explanations to screening results. To identify correspondingSymptom relationships, we first use FNLP [22] to perform Chinese word segmentation on the instances of screening questions and symptom instances. After removing stopwords, we then compare string similarities between two word sequences to decide whether the correspondingSymptom relationship exists. The method of extracting matchStandard relationships is similar to that of correspondingSymptom relationships, and the only difference is to additionally consider the property Score of each option. If an option has the highest score or lowest score, it means the result of the current screening question is abnormal or normal (it ups to the design of screening scales), and abnormal results could help identify matchStandard relationships.
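A hedged sketch of the correspondingSymptom matching step is given below; `jieba` is used here as a stand-in for FNLP (a Java toolkit), and the tiny stopword list and the similarity threshold are illustrative assumptions rather than the values used for AsdKB.

```python
# Sketch: link a screening question to a symptom instance by segmenting both
# texts, removing stopwords, and comparing the remaining word sequences.
from difflib import SequenceMatcher
import jieba

STOPWORDS = {"的", "了", "和", "是", "在"}   # illustrative Chinese stopwords

def content_words(text: str):
    return [w for w in jieba.lcut(text) if w.strip() and w not in STOPWORDS]

def corresponding_symptom(question: str, symptom: str, threshold: float = 0.5) -> bool:
    q, s = content_words(question), content_words(symptom)
    return SequenceMatcher(None, q, s).ratio() >= threshold
```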
### Expert Knowledge
For expert knowledge, we extract factual knowledge of the instances of professional physicians, and the hospitals they work at, from the Web. We select two famous Chinese healthcare websites The Good Doctor18 and Family Doctor19
as the data sources for extraction, so the string values of some datatype properties are only presented in Chinese. We first submit the Chinese keywords for "ASD", "pervasive developmental disorders", "childhood autism", and "Asperger's syndrome" to the search engines of the selected websites, which locates the Web pages of professional physicians on ASD. Faced with structures like the infobox tables in Wikipedia, we then extract physician instances and the values of the properties Name, Title (e.g., "chief physician" and "attending physician"), Specialty (e.g., "various types of mental disorders in childhood"), Hospital Department (e.g., "child healthcare department" and "psychiatry department"), and workAt (i.e., hospital instances). We collect 499 physician instances in total.
According to the values of the property workAt, we locate the Web pages of hospital instances. Similar to the extraction of physician instances and the corresponding property information, we extract the hospital instances and the values of the properties Name, Address, Contact Details, and Hospital Level (e.g., "Grade-A tertiary hospital"). We collect 270 hospital instances in total.
Since physician and hospital instances are extracted from different sources, we perform instance matching using heuristics. Given two hospital instances, if their values for at least one of the properties Address and Contact Details are the same, they are treated as equivalent. Given two physician instances, if their values for the property workAt are equivalent, and the values for the properties Name and Title are the same respectively, these two instances are determined as equivalent. Equivalent instances are fused as one instance in AsdKB.
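The fusion heuristics above amount to two simple predicates, sketched below; the dictionary fields are illustrative names, not the exact property identifiers stored in AsdKB.

```python
# Sketch: equivalence tests used before fusing duplicate instances.
def same_hospital(h1: dict, h2: dict) -> bool:
    return (h1["address"] == h2["address"]) or (h1["contact"] == h2["contact"])

def same_physician(p1: dict, p2: dict) -> bool:
    return (same_hospital(p1["work_at"], p2["work_at"])
            and p1["name"] == p2["name"]
            and p1["title"] == p2["title"])
```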
Figure 5: An example to show the benefits of correspondingSymptom and matchStandard relationships.
### Other Knowledge
In this part, we extract factual knowledge on the instances of intervention methods, and the information of China administrative divisions. Instances of intervention methods are obtained from The National Clearinghouse on Autism Evidence and Practice20 (NCAEP), and such instances are all evidence-based practices, including "Discrete Trial Training", "Social Skills Training", "Peer-Based Instruction and Intervention", and etc. For each instance of intervention methods, we extract the values of properties Label (instance name) and Introduction (a brief description on the instance information). English string values are translated to Chinese by Google Translate, and we also conduct careful proofreading.
Footnote 20: [https://ncaep.fpg.unc.edu/](https://ncaep.fpg.unc.edu/)
With expert knowledge introduced in Section 3.3, a potential application is to seek expertise help from physicians to diagnosis. In order to find professional physicians in the target districts, cities, and provinces, we extract instances of China administrative divisions from National Bureau of Statistics21. The extracted instances are specific districts, cities, and provinces, and we also build locateAt relationships among them. To link each hospital to the corresponding administrative divisions, we first use Amap (a leading provider of digital map in China) API22 to get the latitude and longitude of each hospital by inputting the value of property address. With the information of latitudes and longitudes, Amap API can return the corresponding districts, cities, and provinces of hospitals. Besides, we record the Population of each instance of China administrative divisions, which could help regional analysis on ASD.
Footnote 21: [http://www.stats.gov.cn/](http://www.stats.gov.cn/)
Footnote 22: [https://github.com/amapapi](https://github.com/amapapi)
### Quality of AsdKB
AsdKB contains 6,166 entities (including conceptual entities, i.e., classes, and individual entities, i.e., instances) and 69,290 triples in total. All class URIs in the namespace [http://w3id.org/asdkb/ontology/class/](http://w3id.org/asdkb/ontology/class/) and instance URIs in the namespace [http://w3id.org/asdkb/instance/](http://w3id.org/asdkb/instance/) are dereferenceable. To evaluate the quality of AsdKB, we design two evaluation methods: accuracy evaluation, and task evaluation.
**Accuracy Evaluation.** There is no ground truth available, and it is impossible to evaluate all triples manually. Therefore, we apply a random evaluation strategy. We first randomly select 100 entities distributed across classes and instances, and obtain 732 triples. These samples can reflect the distribution of triples in the entire knowledge base. We then conduct manual labeling to evaluate the accuracy of the samples. The accuracy of the entire AsdKB is estimated by evaluating the accuracy of the samples.
Five graduate students participate in the labeling process. We provide three choices, which are _correct_, _incorrect_, and _unknown_ to label each sample. After each student label all the samples, we calculate the average accuracy. Finally,
similar to YAGO [10], Zhishi.me [28], and Linked Open Schema [29], we use the Wilson interval [3] when \(\alpha=5\%\) to extend our findings on the subset to the entire knowledge base. The Wilson interval is a binomial proportion confidence interval calculated from the results of a series of Bernoulli trials, and \(\alpha\) is the significance level. For the randomly selected 732 triples, the average _correct_ votes is 712, so the accuracy is 97.02% \(\pm\) 1.21%, and it demonstrates the high quality of AsdKB.
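For reference, the Wilson score interval used above can be computed as follows (z = 1.96 for \(\alpha=5\%\)); with 712 correct votes out of 732 sampled triples it reproduces the reported estimate of roughly 97.02% ± 1.2%.

```python
# Sketch: Wilson score interval for a binomial proportion.
from math import sqrt

def wilson_interval(correct: int, total: int, z: float = 1.96):
    p = correct / total
    denom = 1 + z * z / total
    center = (p + z * z / (2 * total)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    return center, half

center, half = wilson_interval(712, 732)
print(f"{100 * center:.2f}% +/- {100 * half:.2f}%")   # ~97.02% +/- 1.2%
```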
**Task Evaluation.** Besides the accuracy of the triples in AsdKB, we try to evaluate the effectiveness of AsdKB in answering real-world ASD relevant questions. Thus, we collect 100 frequently asked questions (e.g., "What are the clinical symptoms of autism?" and "Which interventions are effective?") on ASD from the Chinese healthcare websites The Good Doctor and Family Doctor (introduced in Section 3.3), which are also the data sources of the expert knowledge in AsdKB. We store AsdKB in the graph database Neo4j [26], and also invite five graduate students to manually write Cypher (Neo4j's graph query language) queries for the collected questions so as to check whether the returned query results can answer the questions. According to the above evaluation, AsdKB can answer 81 questions, i.e., the coverage reaches 81%, which reflects the practicality of AsdKB.
## 4 Application of AsdKB
To illustrate the potential application of AsdKB, this section describes the implementation of a prototype system23 for the early screening and diagnosis of ASD based on AsdKB. This system has three main applications, including question answering, auxiliary diagnosis, and expert recommendation. Users of this system are parents, teachers, and caregivers.
Footnote 23: [http://asdkb.org.cn/](http://asdkb.org.cn/)
### Question Answering
We implement a natural language question answering (QA) system based on AsdKB, and expect that the QA system can answer various common-sense and factual questions on ASD. As mentioned in Section 3.5, AsdKB is stored in Neo4j, so we aim to translate each natural language question to a Cypher query, in order to query the graph database and return the answer. We use two strategies to design the QA system. The first one is to manually write common ASD-relevant natural language query patterns (i.e., regular expressions) according to the AsdKB ontology and the corresponding Cypher query templates. If a user query matches one of our patterns, then we construct and execute the corresponding Cypher query based on the pre-defined Cypher query template to get the answer. If the user query does not match our patterns, we use the second strategy, which applies the idea of the method for translating natural language questions to formal queries with semantic query graph modeling [31] to generate the Cypher query.
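A hedged sketch of the first, template-based strategy is shown below; the regular expression, the Cypher property and relationship names, and the Neo4j connection details are illustrative assumptions (the deployed system matches Chinese query patterns against the labels actually stored in AsdKB).

```python
# Sketch: match a hand-written question pattern and instantiate a Cypher template.
import re
from neo4j import GraphDatabase

PATTERNS = [
    (re.compile(r"What are the (?:clinical )?symptoms of (?P<disease>.+?)\??$"),
     "MATCH (d:Disease {label: $disease})-[:hasSymptom]->(s:Symptom) RETURN s.label"),
]

def answer(question: str, driver) -> list:
    for pattern, cypher in PATTERNS:
        m = pattern.match(question)
        if m:
            with driver.session() as session:
                return [r[0] for r in session.run(cypher, disease=m.group("disease"))]
    return []  # fall back to the semantic-query-graph translation strategy

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
print(answer("What are the clinical symptoms of autism?", driver))
```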
Figure 6 shows the interface of our QA system. We also detect the intention of each question to check whether the user would like to further fill in screening scales. If so, the system will directly give the link to auxiliary diagnosis to help choose screening scales (see Figure 6). The intention identification is modeled as a task of binary classification, where we use BERT to encode questions in the labeled data, and then train an SVM [9] classifier to predict whether users are willing to conduct screening or not.
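The intention detector can be sketched as below, assuming a `bert-base-chinese` encoder and an SVM over the [CLS] vectors; the checkpoint name and the toy training examples (shown in English for readability, whereas the deployed system handles Chinese questions) are assumptions of this sketch.

```python
# Sketch: BERT sentence embeddings + SVM for binary screening-intention detection.
import torch
from sklearn.svm import SVC
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
encoder = AutoModel.from_pretrained("bert-base-chinese")

def embed(questions):
    batch = tokenizer(questions, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return encoder(**batch).last_hidden_state[:, 0].numpy()   # [CLS] vectors

train_q = ["I want to screen my child for autism", "What causes autism?"]
train_y = [1, 0]   # 1 = screening intention, 0 = other
clf = SVC(kernel="rbf").fit(embed(train_q), train_y)
print(clf.predict(embed(["Can I take a screening test?"])))
```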
### Auxiliary Diagnosis
We have developed an auxiliary diagnosis system based on AsdKB. This system provides users with screening scales to assess the risk of being ASD. As long as the screening result of a screening scale shows a risk, the system will prompt the user to seek professional medical evaluation and recommend experts using our expert recommendation system (will be introduced in Section 4.3).
As shown in Figure 7(a), before filling the screening scales, users can select appropriate screening conditions based on their situations, such as the child's age and existing symptoms, and the system will return the corresponding screening scales with a brief introduction (see Figure 7(b)). Figure 7(c) shows the questions and options when filling in the ABC screening scale. After completing a screening scale, the system will give the screening result (i.e., risky or not) based on the total score of all options and the screening boundary.
When users are filling in screening scales, they can check what specific symptoms the current question investigates to better understand the question, so as to help make a choice more precisely. Besides, after completing screening scales, this system can also analyze which option matches some diagnostic standard, to provide explanations of the screening results. More details have already been introduced in Section 3.2 and Figure 5.
Figure 6: The interface of the QA system.
### Expert Recommendation
If our auxiliary diagnosis system reports the risk of being ASD, users may have requirements to find experts on diagnosing ASD in the target administrative divisions. Thus, we design an expert recommendation system with facet search on AsdKB. Users can choose the target province, city and district by selecting a checkbox or directly clicking their locations on the map (see Figure 8). The recommendation result is a list of professional physicians with their names, titles, hospital departments, hospitals, hospital addresses, and specialties.
The recommendation has two steps: candidate physician generation and candidate physician ranking. In candidate physician generation, we use the location information of hospitals in AsdKB to match user selected administrative divisions, and the physicians in AsdKB working at such hospitals are candidates. Note that if no candidate physician returns, we will consider more hospitals in surrounding administrative divisions by distance calculation with latitudes and longitudes. In the candidate physician ranking, three aspects are taken into consideration. Firstly, the higher the title, the higher the ranking. Secondly, the higher the hospital level, the higher the ranking. Finally, the higher the number of thumbs up minus the number of thumbs down (Figure 8 gives an example), the higher the ranking.
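The two recommendation steps can be sketched as follows: a haversine distance is used to widen the search to hospitals in surrounding divisions when the selected division has no candidates, and the remaining physicians are sorted by title, hospital level, and net thumbs; the numeric rank maps and field names are illustrative assumptions, not the values stored in AsdKB.

```python
# Sketch: great-circle distance for widening the search, plus three-key ranking.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

TITLE_RANK = {"chief physician": 3, "associate chief physician": 2, "attending physician": 1}
LEVEL_RANK = {"Grade-A tertiary": 3, "Grade-B tertiary": 2, "secondary": 1}

def rank_physicians(candidates):
    return sorted(candidates,
                  key=lambda p: (TITLE_RANK.get(p["title"], 0),
                                 LEVEL_RANK.get(p["hospital_level"], 0),
                                 p["thumbs_up"] - p["thumbs_down"]),
                  reverse=True)
```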
Figure 7: An illustration of the auxiliary diagnosis system.
## 5 Related Work
Tu et al. [25] first proposed an autism ontology with domain terms and relationships relevant to autism phenotypes. The main target is to enable user queries and inferences about such phenotypes using data in the NDAR repository, but it does not include DSM criteria, so it does not support diagnosis of ASD. McCray et al. [18] also developed an ASD-phenotype ontology assessing and comparing different ASD diagnostic instruments, but it also does not include DSM-IV or DSM-5 criteria phenotypes. ADAR [19] extends an ontology proposed by Tu et al [25]. with additional SWRL rules to infer phenotypes from ADI-R [16] items, and it covers various symptoms and features of DSM IV and DSM-5 diagnostic criteria, such as difficulties with social interaction, language and communication issues, and stereotyped and repetitive behaviors. However, many fine-grained classes are actually instances in the generic sense.
The most recent work is AutismOnt [8], an ontology for autism diagnosis and treatment, which covers various autism research directions. AutismOnt includes the classes: Diagnosis, Risk Factors, Treatments, Strength and Weakness, Services, Lifespan Issues, Profile, and Family Relationships. However, although the authors claim that AutismOnt is available in the NCBO BioPortal, it cannot be found in the repository.
Some large-scale medical knowledge bases also contain ASD knowledge. For example, SNOMED CT [5] contains a large-scale number of medical terms, and the disease classes in AsdKB also comes from SNOMED CT, but it does not cover other kinds of knowledge, such as diagnostic knowledge and expert knowledge. Yuan et al. [30] proposed a method for constructing knowledge graphs with minimal supervision based on unstructured biomedical domain-specific contexts.
Figure 8: An illustration of the expert recommendation system.
They collected 24,687 abstracts of articles related to ASD from PubMed24 and constructed a knowledge graph on ASD. However, they did not design an ontology, and the knowledge graph is not publicly available. CMeKG [4] is a Chinese medical knowledge graph developed using natural language processing and text mining techniques on a large amount of medical text data. CMeKG mistakenly lists drugs as the treatment for ASD, whereas in fact drugs are only used to alleviate the complications of ASD.
Footnote 24: [https://pubmed.ncbi.nlm.nih.gov/](https://pubmed.ncbi.nlm.nih.gov/)
Compared with all existing works, AsdKB is the first publicly available Chinese knowledge base on ASD, and it contains both ontological and factual knowledge about diseases, diagnosis, experts, and others. AsdKB has been applied in developing applications for the early screening and diagnosis of ASD.
## 6 Conclusions and Future Work
We develop and publish a Chinese knowledge base on ASD called AsdKB by extracting and integrating knowledge from various data sources with different formats. To the best of our knowledge, AsdKB is the most comprehensive ASD knowledge base on the Web, and it supports different applications for the early screening and diagnosis of ASD, such as question answering, auxiliary diagnosis, and expert recommendation. However, there are still some limitations to our work that we plan to address in the future.
#### 6.0.1 Quality of AsdKB.
During our preliminary evaluations of AsdKB, we discovered that the entities contained within the knowledge base are of high quality. However, errors do exist during the automatic extraction process. These errors stem from a variety of factors such as the quality of the original data sources, differences in data formats, and our integration methods. To address this issue, we plan to introduce crowd-sourcing techniques to fix the existing errors in AsdKB and study automatic error detection methods to ensure the accuracy of knowledge in the process of knowledge update.
#### 6.0.2 Applications of AsdKB.
We have explored various applications for AsdKB, including QA, auxiliary diagnosis, and expert recommendation. The integrated prototype system has demonstrated the potential for AsdKB to play a critical role in early ASD screening and diagnosis. To further improve the accuracy of QA and auxiliary diagnosis, we will incorporate data-driven machine learning models trained on more user log data in our prototype system. In addition, we plan to analyze electronic medical records, if possible, using AsdKB to assist physicians in ASD diagnosis. By analyzing medical histories, symptoms, and other relevant information with AsdKB, physicians can make more accurate diagnoses and give appropriate, personalised treatment suggestions to people with ASD.
#### Acknowledgements
This work is supported by the NSFC (Grant No. 62006040, 62072149), the Project for the Doctor of Entrepreneurship and Innovation in Jiangsu Province (Grant No. JSSCBS20210126), the Fundamental Research Funds for the Central Universities, and ZhiShan Young Scholar Program of Southeast University.
|
2306.17744 | Zespol: A Lightweight Environment for Training Swarming Agents | Agent-based modeling (ABM) and simulation have emerged as important tools for
studying emergent behaviors, especially in the context of swarming algorithms
for robotic systems. Despite significant research in this area, there is a lack
of standardized simulation environments, which hinders the development and
deployment of real-world robotic swarms. To address this issue, we present
Zespol, a modular, Python-based simulation environment that enables the
development and testing of multi-agent control algorithms. Zespol provides a
flexible and extensible sandbox for initial research, with the potential for
scaling to real-world applications. We provide a topological overview of the
system and detailed descriptions of its plug-and-play elements. We demonstrate
the fidelity of Zespol in simulated and real-world robotics by replicating
existing works highlighting the simulation to real gap with the milling
behavior. We plan to leverage Zespol's plug-and-play feature for neuromorphic
computing in swarming scenarios, which involves using the modules in Zespol to
simulate the behavior of neurons and their connections as synapses. This will
enable optimizing and studying the emergent behavior of swarm systems in
complex environments. Our goal is to gain a better understanding of the
interplay between environmental factors and neural-like computations in
swarming systems. | Shay Snyder, Kevin Zhu, Ricardo Vega, Cameron Nowzari, Maryam Parsa | 2023-06-30T15:52:18Z | http://arxiv.org/abs/2306.17744v1 | # Zespol: A Lightweight Environment for Training Swarming Agents
###### Abstract.
Agent-based modeling (ABM) and simulation have emerged as important tools for studying emergent behaviors, especially in the context of swarming algorithms for robotic systems. Despite significant research in this area, there is a lack of standardized simulation environments, which hinders the development and deployment of real-world robotic swarms. To address this issue, we present Zespol, a modular, Python-based simulation environment that enables the development and testing of multi-agent control algorithms. Zespol provides a flexible and extensible sandbox for initial research, with the potential for scaling to real-world applications. We provide a topological overview of the system and detailed descriptions of its plug-and-play elements. We demonstrate the fidelity of Zespol in simulated and real-world robotics by replicating existing works highlighting the simulation to real gap with the milling behavior. We plan to leverage Zespol's plug-and-play feature for neuromorphic computing in swarming scenarios, which involves using the modules in Zespol to simulate the behavior of neurons and their connections as synapses. This will enable optimizing and studying the emergent behavior of swarm systems in complex environments. Our goal is to gain a better understanding of the interplay between environmental factors and neural-like computations in swarming systems.
multi-agent systems, swarm intelligence, modeling and simulation, applied neuromorphic computing +
Footnote †: journal: Computer Vision and Pattern Recognition
+
frameworks presents additional difficulties. There are also notable performance issues associated with this framework that make running large-scale simulations a computationally expensive task.
MASON (Zespol et al., 2016) is an agent-based simulation library designed from the ground up to support custom Java-based (Bahdan et al., 2017) simulations. There are many similarities between MASON and Zespol, such as the inherent separation between the environment and visualization systems along with the compartmentalized nature of individual simulations. Both MASON and Zespol allow agents to be given arbitrary dynamics. The major limitation of MASON is its reliance on fairly advanced Java, which raises the barrier to entry for new users. This issue is only compounded when we consider the lack of a mature and low-barrier system for distributing these simulations among heterogeneous computing systems. Addressing these issues is one of the major goals of Zespol.
OpenAI Gym, introduced in 2016, was a pioneering platform in the field of single-agent reinforcement learning (Beng et al., 2017). Out of the box, it supports a wide variety of applications for classic control problems such as Box2D (Brock et al., 2018) and Atari (Auri et al., 2018). Compared to Zespol, Gym has two major limitations: it is primarily designed for reinforcement learning, and the programmatic architecture around Gym is focused purely on single-agent simulations, which severely limits its applicability to multi-agent robotics (Zespol et al., 2016; Snojkovic et al., 2017).
NetLogo (Zespol et al., 2016) is another multi-agent simulation environment. It is primarily designed to be used in educational environments, as evidenced by its integrated IDE with a drag-and-drop GUI. This makes programming behaviors easy, but the NetLogo language is limited. It is possible to run Python and R code from within NetLogo, as well as invoke a NetLogo simulation from a Java environment, but the interfaces are clunky and limited; thus NetLogo is largely incompatible with current means of distributing computation and simulation environments among heterogeneous computing systems and modern learning frameworks. NetLogo's simulation speed is, at best, equal to that of MASON (Zespol et al., 2016), but it struggles with anything higher than two-dimensional environments.
In (Snojkovic et al., 2017), the authors conducted an interactive simulation in the design loop where simulated experiments were tightly coupled with real-world experiments. This study was broken up into four distinct portions to minimize the simulation to reality gap: 1) characterizing the salient capabilities of the real robot, 2) building a minimally viable simulation environment that captures the measured capabilities of physical robots, 3) developing and exploring potential emergent behaviors in simulation, and 4) deploying real robots based on simulation-driven hypotheses and evaluating the performance penalties associated with the domain shift. They used a binary controller (Beng et al., 2017) for the salient capabilities of real robots and created stable milling behaviors in NetLogo that also produced the same behavior on physical robots. Despite their ability to minimize the simulation to reality gap, we are interested in deploying low-power and scalable neuromorphic computing platforms and in exploring novel methods of arriving at emergent behaviors. Zespol is designed as a simulation framework compatible with existing neuromorphic frameworks (Zespol et al., 2016; Snojkovic et al., 2017; Snojkovic et al., 2017) and hardware (Zespol et al., 2016).
Some of the key differences between prior works and Zespol are summarized in Table 1. Zespol is the only simulator that is written in user-friendly and well documented Python code, provides native capability for distributed (dist) simulation environments, and allows for arbitrary agent states and dynamics.
## 3. Programmatic Architecture
Zespol's underlying architecture is designed with modularity in mind where each fundamental building block has a plug and play interface. This design philosophy allows users to develop their own blocks such as sensor modules, controllers, and physical dynamics. All simulations are designed to minimize inter-object dependencies to reduce the chance of segmentation faults and minimize communication latency by only passing critical information between blocks. Each interface is thoroughly documented with the provided examples showing how users can extend the framework to support their needs. More formally, each building block is represented by two data structures that form an **object-state** relationship. We have provided three fundamental object-state pairs, Agent-AgentState, Swarm-SwarmState, and World-WorldState. A more detailed description of these pairs is given in the following.
Zespol objects are data structures responsible for containing all elements required for the object to function. For example, a robot object would contain the robot's current location, all of its sensor objects, controller objects, and control the interaction between these elements at every simulation time step.
The "_Agent_" object base class should be extended to support the specific requirements of a user's application. For example, The base class defines position and orientation vectors along three dimensions, a unique identifier, and the simulation time step fidelity. However, the _tick_ method must be updated based on user requirements to control the interactions between sensors, controllers, and physical dynamics.
| **Simulator** | **Lang** | **Dist** | **State** | **Dynamics** |
| --- | --- | --- | --- | --- |
| VMAS (Beng et al., 2017) | Python | No | Arbitrary | Holonomic |
| Swarm-Sim (Snojkovic et al., 2017) | Python | No | Discrete | Holonomic |
| SwarmLab (Snojkovic et al., 2017) | MATLAB | No | Continuous | Drone |
| MASON (Zespol et al., 2016) | Java | No | Arbitrary | Arbitrary |
| NetLogo (Zespol et al., 2016) | NetLogo | No | Arbitrary | Arbitrary |
| Gym (Beng et al., 2017) | Python | No | Arbitrary | Arbitrary |
| **Zespol** | **Python** | **Yes** | **Arbitrary** | **Arbitrary** |

Table 1. Comparison of multi-agent simulators
Figure 1. A flowchart presenting the critical programmatic flow within and between Zespol’s main components.
The _"Swarm"_ class contains references to all agents within the swarm and controls the interactions between agents at every simulation time step. This is where the distributed nature of Zespol is highlighted because the memory and process spaces for all agents are separated, the processing of individual agent updates at every time step can be distributed across heterogeneous compute clusters with tools such as Dask (Dask, 2015).
Bringing everything together, we have the _"World"_ class that contains every object and actionable element within the simulation environment. Therefore, this object maintains references to all swarms, visualization systems, and environmental objects such as world boundaries and obstacles. The last major responsibility of World objects is to manage the interactions between all swarms and environmental objects to manage the programmatic flow at every simulation time step.
For every agent, swarm, and world object there are associated states that contain a holistic view of the object, with the central idea being the establishment of a shareable data structure that only contains fundamental information. This avoids repeatedly passing redundant information between objects. For example, AgentStates contain an Agent's location and orientation but shouldn't contain a copy of the Agent's sensor or controller.
_"AgentsStates"_ are defined by a snapshot of the given Agent's current location and heading, the change in these values from the previous step, along with their unique identifier. _"SwarmStates"_ are represented by a collection of states from all member agents along with a variety of metrics such as angular momentum, center of mass, scatter, and radial variance. Lastly, the _"WorldState"_ encompasses the states of all swarms along with all polygons that define the boundaries of the environment.
Besides the three predefined object-state pairs, there are three other notable objects within the system: _Sensors_, _Controllers_, and _Visualizers_. Each _"Sensor"_ is representative of a real-world sensor such as an RGB camera or LIDAR scanner that uses information within the WorldState to recreate a synthetic version of the perspective an Agent would see from their location in the world. _"Controllers"_ accept input from Sensors and modify the location, orientation, and heading of an agent based on their physical dynamics. These dynamics are arbitrary so they can be modified to fit a user's specific application. _"Visualizers"_ in _Zespol_ are separable, optional components of the simulator. They take a WorldState at every time step and generate visual output. We include a visualization system based around Matplotlib(Matplotlib, 2017) to provide users with an example to follow when extending these utilities to support their specific application.
The overall algorithmic flow starts at (1) initializing all Swarms and Agents within the World. (2) The WorldState object is constructed by querying all Swarms and Agents for their SwarmStates and AgentStates, respectively. (3) For every Agent within every Swarm, an artificial sensory perception is calculated in the Agent's Sensor based on its location relative to all other elements in the environment. (4) This perception is then passed to the associated Controller where the AgentState is modified. (5) Once every Agent in every Swarm has calculated their new states, any visualizations and logs can be created. (6) Lastly, the newly accumulated WorldState is used to progress through the next simulation time-step. Figure 1 provides a visual representation for the algorithmic flow between the fundamental Zespol elements.
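Putting steps (2)-(6) together, the per-tick flow could be sketched roughly as follows, again with method names assumed for illustration:

```python
def run(world, num_ticks):
    for _ in range(num_ticks):
        world_state = world.get_state()                      # step 2: gather Swarm/Agent states
        for swarm in world.swarms:
            for agent in swarm.agents:
                perception = agent.sensor.sense(world_state, agent.state)      # step 3
                agent.state = agent.controller.step(agent.state, perception)   # step 4
        for visualizer in world.visualizers:
            visualizer.render(world_state)                   # step 5: optional output
        # step 6: the updated states feed the WorldState built on the next iteration
```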
## 4. Initial Results & Discussion
Our initial use case for Zespol was recreating the circular milling behavior from (Zegol, 2017; Zegol, 2017) where agents move in a uniform circle. Using knowledge gained from (Zegol, 2017) and Zespol's modular framework, we set up a simulation environment consisting of 9 Flockbots (Zegol, 2017) with each being equipped with a front-facing infrared proximity sensor and a differential drive system. An image of a real-world Flockbot can be seen in Figure 2.
To fully implement this environment in Zespol, we extended the Agent class with a FlockbotAgent class, a BinarySensor class, and a DifferentialDriveController class. As shown in Figure 3, the entire process starts with the WorldState going into the BinarySensor, where a synthetic binary output is calculated based on the Agent's current location and orientation with respect to the rest of the world. Next, the binary sense is transferred to the DifferentialDriveController, where the agent turns left if it senses something or turns right if it senses nothing.
There are numerous parameters for the Flockbot milling behavior that we selected based on the results of (Zegol, 2017): the World ticks at 30 ticks per second, the Swarm contains 9 agents, and Sensors have a view distance of 3 meters and the same asymmetric field-of-view found in (Zegol, 2017), with a left bound of 11.5 degrees left of center and a right bound of 4 degrees left of center.
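A compact sketch of this sense-and-turn loop with the quoted parameters (3 meter view distance, field of view from 11.5 to 4 degrees left of center) is shown below; the turn rate, forward speed, and the sign convention for "left" are assumptions:

```python
import math

VIEW_DIST = 3.0          # metres
FOV_LEFT = 11.5          # degrees left of centre (outer bound)
FOV_RIGHT = 4.0          # degrees left of centre (inner bound)

def binary_sense(me, others):
    """True if any other agent lies inside the asymmetric forward wedge."""
    for other in others:
        dx, dy = other.x - me.x, other.y - me.y
        dist = math.hypot(dx, dy)
        if dist == 0.0 or dist > VIEW_DIST:
            continue
        rel = math.degrees(math.atan2(dy, dx)) - me.heading_deg
        rel = (rel + 180.0) % 360.0 - 180.0        # wrap to (-180, 180]
        if FOV_RIGHT <= rel <= FOV_LEFT:           # positive angles taken as 'left'
            return True
    return False

def differential_drive_step(me, sensed, speed=0.15, turn_deg=2.0):
    # Binary controller: turn left when something is seen, right otherwise,
    # then move forward at constant speed (per-tick values are illustrative).
    me.heading_deg += turn_deg if sensed else -turn_deg
    me.x += speed * math.cos(math.radians(me.heading_deg))
    me.y += speed * math.sin(math.radians(me.heading_deg))
```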
Zespol successfully models the complex coordination between multiple agents that results in a stable milling behavior. A visualization of the resulting formation is shown in Figure 4. This highlights the ability of Zespol to recreate emergent behaviors from other simulated environments and experimental results that have been validated on real-world robotic systems.
Figure 3. A flowchart presenting a detailed view of the interprocess communication within our example Zespol application using a Flockbot robot with a binary sensor
Figure 2. Image of a real-world Flockbot (Zegol, 2017)
## 5. Conclusion and Future Work
In conclusion, the field of agent-based modeling and simulation for studying emergent behaviors has witnessed substantial growth in parallel with the demand for robotic systems that can perform collective tasks. However, the lack of standardization in simulation environments makes it challenging to compare and contrast new research ideas with existing methods. The Zespol environment is introduced to serve as a lightweight, modular, Python-based simulation environment for developing multi-agent control algorithms. It offers ample opportunities for adoption and expansion by the broader research community. Moreover, the fidelity of Zespol is evaluated against previously published results in simulated and real-world robotics, demonstrating its ability to replicate existing swarming algorithms through a comparison between Zespol, NetLogo, and real Flockbot robots performing the milling behavior. With Zespol, users can develop and standardize swarming algorithms before transitioning to real-world experiments or higher-fidelity simulations. Zespol also provides native support for distributed parallelization across compute clusters and is compatible with neuromorphic computing platforms. As a result, it is a promising solution to issues slowing the advancement of emergent behaviors in robotic swarms of low-powered and individually incapable robotic systems.
Although Zespol is already demonstrating promising results, there is still room for improvement to make it a solid foundation for research on the application of neuromorphic computing in swarming robotics. Our plans include developing formal interfaces for common neuromorphic computing frameworks such as Lava (Lava, 2018) and Nengo (Nengo, 2020). We will also incorporate formal support for evolutionary algorithms (Nengo, 2020) and Bayesian optimization learning schemes (Zespol, 2020). To simplify the distributed nature of Zespol, we will create a user-friendly interface that minimizes the hassle of dealing with Dask (Dask, 2020) and multiprocessing (Zespol, 2020). Additionally, we will incorporate a vectorized simulation module to run simulations on multiple GPUs across heterogeneous systems. Finally, we will leverage spiking controllers to discover novel swarming behaviors.
## Acknowledgement
This work was supported in part by the Department of the Navy, Office of Naval Research (ONR), under federal grant N00014-22-1-2207.
|
2309.06972 | Gravitational bremsstrahlung in plasmas and clusters | We study the gravitational bremsstrahlung owing to collisions mediated by a
$1/r$ potential. We combine classical and first order Born approximation
results in order to construct an approximate gravitational `Gaunt factor' for
the total emitted energy. We also obtain the cross-section with an angular
momentum cut-off, and hence the cross-section for emission via close hyperbolic
encounters in a gravitating cluster. These effects are the dominant source of
very high frequency gravitational noise in the solar system. The total
gravitational wave power of the Sun is $76\pm 20\,$MW. | A. M. Steane | 2023-09-13T14:05:48Z | http://arxiv.org/abs/2309.06972v2 | # Gravitational bremsstrahlung in plasmas and clusters
###### Abstract
We study the gravitational bremsstrahlung owing to collisions mediated by a \(1/r\) potential. We combine classical and first order Born approximation results in order to construct an approximate gravitational 'Gaunt factor' for the total emitted energy. We also obtain the cross-section with an angular momentum cut-off, and hence the cross-section for emission via close hyperbolic encounters in a gravitating cluster. These effects are the dominant source of very high frequency gravitational noise in the solar system. The total gravitational wave power of the Sun is \(76\pm 20\,\)MW.
The aim of this paper is to review and extend the understanding of gravitational bremsstrahlung during collisions in a \(1/r\) potential. In practice this is Coulomb collisions and gravitational collisions where the potential is well-approximated as \(1/r\). Such processes take place in plasmas such as stellar interiors, and in gravitating clusters such as those of black holes believed to be present in many galactic nuclei, or in the early universe. However the motivation to study these processes is mainly their innate interest. They involve a combination of quantum theory and dynamic gravitation. For Coulomb collisions in the Sun the resulting gravitational wave amplitude is small and undetectable on Earth using any technology liable to be realised in the near future, but in principle it contributes to the limits on coherence of matter-wave interferometry owing to gravitational field noise.[1; 21; 8; 22]
Introductory material is set out in the first two sections below. Section I provides a brief historical survey of work related to gravitational wave (GW) emission during collisions in a \(1/r\) potential at low (non-relativistic) speeds. Section II introduces notation and methods of the present work. Section III obtains the total cross-section for the GW energy emission after integrating over impact parameter. This consists in first reporting existing work treating classical and quantum (first order Born approximation) limits of the motion, and then providing approximate formulae for the intermediate regime. Section IV considers the power and energy emission during a single hyperbolic encounter. Section V presents the cross-section obtained if one imposes a cut-off on the angular momentum. This is useful for treating emission in the case of attractive forces, where it makes sense to separate the collisions into those leading to capture and those where the bodies escape to infinity. Section VI applies the results of the previous sections so as to obtain the GW energy emission cross-section for close hyperbolic encounters in a gravitating cluster. Section VII uses the formulae of the paper to estimate the total GW power of the Sun. Section VIII concludes.
## I Historical survey
Early work on graviton emission during scattering of fundamental particles was carried out by Ivanenko and Sokolov (1947, 1952). [18; 19]. In 1965 Weinberg published an account of gravitational bremsstrahlung during Coulomb collisions, using quantum field theory in the limit where the gravitons are'soft', meaning they have negligible impact on the energy-momentum in external legs of the relevant Feynman diagrams.[32] The following year Carmeli confirmed this and also provided a classical calculation, for the case of a repulsive potential, which gives the total emitted energy after integration over impact parameters.[5] His clever method of calculation did not require an expression for the emitted energy in each hyperbolic encounter. Boccaletti (1972) extended this method to the Yukawa potential, and estimated emission from neutron stars.[3] Meanwhile Barker _et al._ 1969 gave the Born approximation calculation for graviton emission during collisions in a \(1/r\) potential, among other results.[2]. Emission from binary stars on Keplerian orbits had also been calculated, pioneered by Peters and Matthews (1963). [25; 27].
The above all concern low velocities and Euclidean geometry. Pioneering calculations for the case of a Schwarzschild-Droste metric and arbitrary velocity were provided by Peters (1970).[26] In the present survey we will not pursue the high-velocity or non-Newtonian cases. We are interested in cases where the velocity of the participating masses are small compared to \(c\) and the quadrupole approximation applies.
Gal'Tsov and Grats 1974 carried out Born approximation calculations, giving some further information not included in Barker _et al._.[12] They subsequently (1983) extended their study towards a more complete kinetic theory of a plasma such as that of the Sun.[13]
The first person to have correctly reported the total GW energy emitted during a hyperbolic encounter in a \(1/r\) potential, according to classical (not quantum) physics, appears to be Turner (1977), correcting a minor error in a
previous calculation by Hansen.[17; 29] This work was duly noted in a comprehensive review by Kovacs and Thorne in 1978, who comment: "Such computations are straightforward and simple," but in view of the fact that errors exist in the literature (we will point out some more in the next paragraphs) such computations are clearly not straightforward for ordinary mortals.[20]
Dehnen and Ghaboussi 1985 treated a general central potential and report some useful results for that case.[10; 11] They apply their methods to the \(1/r\) potential as an example and obtain the total scattered energy. Their formula agrees with that of Turner. They did not cite Turner, presumably an indication that they were not aware of his work. (Different authors report the formula in terms of different parameters so the agreement is not self-evident; we shall display both versions in section IV.)
Further reviews of astrophysical sources of gravitational waves are provided by Papini and Valluri 1977, Cutler and Thorne 2002 and Aggarwal _et al._ 2021.[1; 8; 24] Whereas Papini and Valluri discuss bremsstrahlung inside stars along with other processes, Cutler and Thorne do not because their review is focussed on signals that may be detectable now or in the near future.
Recently a further case has gained interest: the emission from clusters of black holes which may have been produced in the early universe or in the centres of galaxies. [4; 9; 14; 15; 23] The emission is partly from pairs (or larger numbers) of masses in bound orbits, and partly from a background of close hyperbolic encounters. Capozziello _et al._ (2008) calculated the emitted power and total emitted energy per encounter. Their results reproduce those of Turner and of Dehnen and Ghaboussi though they cite neither; they cite the review by Kovacs and Thorne which includes Turner but they do not make the comparison. De Vittori _et al._ (2012) follow the method of Capozziello explicitly but their eqn (6) has a sign error in the last term and their eqn (8) has the total power too large by a factor 4. Garcia-Bellido and Nesseris, and also Grobner _et al._, point out further mistakes. In view of these discrepancies a new calculation may be useful and we provide one.
The spectrum of the emitted radiation was treated by various authors, with noteworthy contributions from Turner, O'Leary _et al._, De Vittori _et al._, Garcia-Bellido and Nesseris, and Grobner _et al._. (Grobner _et al._'s opening statement that De Vittori _et al._ constitutes 'the first calculation of the frequency spectrum' understates the contribution of Turner who gave explicit formulae for the cases of eccentricity \(e=1\) and \(e\gg 1\) and much of the analysis for general \(e\); subsequent authors completed the Fourier integrals for all \(e\)). Some mistakes in [9] are corrected in [14; 16].
The overall picture of work to date is one in which the calculations presented for electrical plasmas and those presented for gravitating clusters appear to be unaware of one another although they are often calculating the same things (i.e. emission during scattering in a \(1/r\) potential). The present work makes the following contributions. (i) bring together the two communities or histories just outlined; (ii) present the work of Gal'tsov and Grats afresh; (iii) estimate the case, intermediate between classical and quantum, which is amenable to neither classical nor Born approximations, obtaining an approximate 'Gaunt factor' for the total emitted power; (iv) obtain an emission cross section by using an angular momentum cut-off; (v) show how the above can be applied to calculate the emission from gravitating clusters and from a stellar plasma.
## II Notation and general approach
For two colliding partners of masses \(m_{1}\), \(m_{2}\) we define the total mass \(M=m_{1}+m_{2}\) and the reduced mass \(\mu=m_{1}m_{2}/M\). We shall also occasionally use the unadorned \(m\) (with no subscript) as an alternative notation for reduced mass; thus \(m\equiv\mu\). A given binary collision is described in the COM frame, such that it consists in a particle of mass \(\mu\) moving in a fixed central potential of the form either \(V(r)=Z_{1}Z_{2}e^{2}/r\) or \(V(r)=-Gm_{1}m_{2}/r\). It is only necessary to treat one of these two cases since the other is then given by making the replacement \(Z_{1}Z_{2}e^{2}\leftrightarrow-Gm_{1}m_{2}\). In the following we mostly present the former (Coulomb scattering) case since it includes both attractive and repulsive collisions, and also preserves in the notation the distinction between the role of the potential and the role of \(G\) in the emission of gravitational waves. For a slightly more succinct notation we define \(e_{1}e_{2}\equiv Z_{1}Z_{2}e^{2}\). We adopt electromagnetic units such that the Coulomb force between electrons is \(e^{2}/r^{2}\). In order to convert expressions involving \(e^{2}\) into SI units it suffices to replace \(e^{2}\) by \(e^{2}/(4\pi\epsilon_{0})\).
For a collision with the masses initially far apart, \(v_{0}\) is the initial velocity and \(b\) is the impact parameter. The collision energy is \(E=(1/2)\mu v_{0}^{2}\) and angular momentum \(L=\mu bv_{0}\).
If a flux \(n_{2}v\) is incident on a single collision centre, then the rate of collisions is \(n_{2}v\sigma\) where \(\sigma\) is the cross section (this defines \(\sigma\)). If there is a density \(n_{1}\) of collision centres, then the collision rate per unit volume is \(n_{1}n_{2}v\sigma\) if the particle types 1 and 2 are distinct, and it is \((1/2)n_{1}^{2}v\sigma\) if the particle types are not distinct. In this paper we shall write \(n_{1}n_{2}v\sigma\) and expect the reader to understand that in the case of identical particles the factor half must be introduced.
Our discussion is entirely non-relativistic. This is a good approximation for conditions in the core of the Sun, where \(\gamma-1\) (the difference of the Lorentz factor from 1) is 0.004 for electrons at the r.m.s velocity.
The gravitational bremsstrahlung process has some features in common with electromagnetic bremsstrahlung, which
has been studied extensively owing to its importance in astrophysics. For the electromagnetic case, for an otherwise free electron moving in the Coulomb field of an atomic ion of charge \(Z\), the emitted power per photon solid angle and per photon frequency range at frequency \(\nu\) from an electron of impact velocity \(v\) is written
\[j(\nu,v)=\frac{8\pi Z^{2}e^{6}n}{3\sqrt{3}c^{3}m_{e}^{2}v}g_{\rm ff}(\nu,v)\]
where the first part of the expression is the result of approximate classical electrodynamics and the factor \(g_{\rm ff}\) is called the "free-free _Gaunt factor_" which incorporates quantum and other corrections. Complicated expressions exist for \(g_{\rm ff}\) but for many purposes it is useful to have a simpler formula of reasonable accuracy. For the electromagnetic case this has recently been provided by Weinberg [33].
For an approximate classical calculation, one way to proceed is to integrate the emitted power at each moment (obtained from the acceleration) for an electron moving on the trajectory it would follow if no radiation were emitted. For the electromagnetic case this approximation is not always good, but for the GW emission it holds very well for particle collisions and we shall adopt it.
Whether in the electromagnetic or GW case, there are two significant energy scales in the collision dynamics: the collision kinetic energy and the potential energy at a distance of order a de-Broglie wavelength. The former is \((1/2)mv^{2}\) where \(v\) can be taken as the speed at infinity for a repulsive potential, or as the speed at the distance of closest approach for an attractive potential. It is important to note that for low angular momentum the speed and acceleration have very different behaviours for attractive and repulsive cases, leading to different formulae for GW emission even though the differential cross section of the collision may be independent of the sign of the potential.
For Coulomb collisions between particles of charges \(Z_{1}e\), \(Z_{2}e\) we define the dimensionless parameter \(n_{\rm B}\) called the _Born parameter_ by Galt'sov and Grats (and called \(\xi\) by Weinberg [33]):
\[n_{\rm B}\equiv\frac{|Z_{1}Z_{2}e^{2}|}{\hbar v}=|Z_{1}Z_{2}|\alpha\frac{c}{v} \tag{1}\]
where \(\alpha\) is the fine structure constant. The Born parameter can be read as a statement either about energy or about angular momentum. It is the ratio of the Coulomb energy at a distance \(2\hbar/\mu v\) (twice the reduced de Broglie wavelength) to the collision energy. It is also approximately equal to the angular momentum in units of \(\hbar\). For a repulsive potential the distance at closest approach is \(2n_{\rm B}\hbar/\mu v\) according to classical mechanics. The case \(n_{\rm B}\lesssim 1\) is the quantum limit; the Born approximation for the scattering holds when \(n_{\rm B}\ll 1\). The case \(n_{\rm B}\gg 1\) is the classical limit. Thus low temperatures give classical trajectories. The ground state of hydrogen has \(n_{\rm B}\approx 1\).
A further relevant energy is that of the emitted photons or gravitons, \(h\nu\). We say the photons or gravitons are'soft' when \(h\nu\ll(1/2)mv^{2}\) and 'hard' otherwise. The maximum possible emitted photon or graviton energy is equal to the entire kinetic energy of the incident particle, \((1/2)mv^{2}\). More generally if a single photon or graviton is emitted then the initial and final momenta of the scattered particle (e.g. electron) in the COM frame are related by
\[\frac{p_{i}^{2}}{2m}-\frac{p_{f}^{2}}{2m}=h\nu. \tag{2}\]
In bremsstrahlung the collision process itself has a timescale \(\tau\approx r_{0}/v\) where \(r_{0}\) is the distance of closest approach. Classical mechanics predicts that the emitted spectral power extends up to the angular frequency range near \(1/\tau\), but quantum mechanics insists there is the hard cut-off at \(\omega=(1/2)mv^{2}/\hbar\). The question arises, then, whether the classically 'preferred' frequency is available, or whether it is not because it is beyond the cut-off. The condition that \(1/\tau\) is less than the cut-off is \(2\hbar<mvr_{0}\), i.e. \(n_{\rm B}>1\).
### Methods of calculation
In the compact source approximation in linearized gravity, the luminosity (i.e. the emitted power) of a source is given by
\[L_{\rm GW}=\frac{G}{5c^{5}}\left<\dddot{Q}_{ij}\,\dddot{Q}^{ij}\right> \tag{3}\]
where
\[Q^{ij}=\frac{1}{c^{2}}\int T^{00}(x^{i}x^{j}-\frac{1}{3}\delta_{ij}x^{k}x_{k}) \,{\rm d}^{3}{\bf x} \tag{4}\]
is the quadrupole moment of the mass distribution and the angle bracket indicates an average over a small region of spacetime.
For given collision partners, a collision is parametrised by two quantities: the initial velocity \(v\) and the impact parameter \(b\). We can express the total power generated in some small volume \(V\) of a plasma, as a result of collisions between particles of types 1 and 2, as
\[P=Vn_{1}n_{2}\left\langle v\Sigma\right\rangle \tag{5}\]
where \(n_{1}\) and \(n_{2}\) are number densities of two species (\(n_{1}n_{2}\) should be replaced by \((1/2)n_{1}^{2}\) if the species are identical, as already remarked) and \(\Sigma\) is a cross section (to be calculated) with the physical dimensions of energy times area.
We shall obtain \(\Sigma\) by calculating the total GW energy emitted during a single collision, integrated over impact parameter \(b\). We adopt and compare four methods of calculation, as follows.
1. **Purely classical**. A good classical approximation is to take it that the emission does not significantly change the trajectory of the colliding partners. We calculate that trajectory in the COM frame and then the total emitted energy is \(\int L_{\rm GW}\,{\rm d}t\) per collision, with \(\dddot{Q}_{ij}\) obtained from the trajectory. The GW emission cross section is \[\Sigma=\int_{-\infty}^{\infty}{\rm d}t\int_{0}^{\infty}2\pi b\,{\rm d}b\,L_{\rm GW}\] (6) The integral over time is conveniently done by using the particle separation \(r\) as a parameter, and exploiting the symmetry of the inward and outward motion. Thus one finds \[\Sigma=2\,\int_{r_{0}}^{\infty}\frac{{\rm d}r}{|\dot{r}|}\int_{0}^{b_{\rm max}}2\pi b\,{\rm d}b\,L_{\rm GW}\] (7) where \(r_{0}\) is the smallest distance of closest approach and \(b_{\rm max}\) is the largest impact parameter whose associated trajectory can reach a given \(r\); see Fig. 1 for an elucidation of this.
2. **Born approximation**. For a calculation of GW scattering in first order Born approximation in the non-relativistic limit we adopt results obtained by Barker _et al._; and by Gal'tsov and Grats (GG).[2; 12]
3. **Soft photon theorem**. Weinberg has obtained a very general expression for the emission of soft massless particles in any collision. In the non-relativistic limit his'soft photon theorem' applied to gravitons yields an expression for the power in the radiated spectrum up to frequency \(\Lambda/\hbar\): \[P_{<\Lambda}\simeq V\frac{8G}{5\pi c^{5}}m^{2}v^{5}n_{1}n_{2}\frac{\Lambda}{ \hbar}\int\frac{{\rm d}\sigma}{{\rm d}\Omega}\sin^{2}\theta\,{\rm d}\Omega\] (8) where \(\Lambda\) is an energy cut-off which has to be taken low enough so that it is small compared to relevant kinetic energies in the problem, and \({\rm d}\sigma/{\rm d}\Omega\) is the differential cross section for the collision in the absence of radiant emission. The term'soft' here means the graviton's energy-momentum is small compared to that of the particle emitting it.
Figure 1: The region of integration of (7) and (55). \(b\) is the impact parameter, \(r\) is the distance from the origin in the COM frame. At any given impact parameter \(b\), the trajectory does not reach values of \(r\) below \(r_{\rm min}\) given by (7) and therefore at any given \(r\) it does not reach values of \(b\) above \(b_{\rm max}\).
Note that Weinberg's result does not give the whole emitted power, only the part owing to soft gravitons, and only that part up the frequency cut-off \(\Lambda/\hbar\). Therefore we should not expect it to reproduce in full the result of a calculation of the whole power. Nonetheless it offers a useful consistency check on other calculations. The presence of the fifth power (\(v^{5}\)) in this result can be recognised as one from the particle flux and 4 from \(Q_{ij}^{2}\). Expressed as a cross-section we have \[\Sigma_{<\Lambda}\simeq\frac{8G}{5\pi c^{5}}m^{2}v^{4}\frac{\Lambda}{\hbar} \int\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\sin^{2}\theta\,\mathrm{d}\Omega\,.\] (9) The soft photon (or graviton) theorem concerns gravitons attached to external legs of a Feynman diagram and which do not significantly change the momentum. Here 'external' means lines for which the 4-momentum is near the mass shell. This is a useful method for repulsive potentials where the particles have their highest momentum in the initial and final states. For an attractive potential, however, Eqn (8) is less useful in the classical limit, as we shall see. The above formula implies that the emitted spectrum is uniform over frequency, and this is indeed the prediction for soft gravitons at low particle energies. For general particle energies the theorem gives the low-frequency part of the spectrum as \(\omega^{B}\) where \(B\) is a function of velocity which is of order \(G\bar{Q}_{ij}^{2}/\hbar c^{5}\); this is very small (\(<10^{-38}\)) for collisions of fundamental particles.
4. **Modified classical**. With a view to gaining intuition about the quantum limit, and to obtain formulae which are approximately valid for any initial velocity, we explore the effect of modifying the classical formula (7). This is not a modification to the equation of motion; it is merely a rough method to gain reasonable insight and approximate formulae. The idea is that the quantum behaviour can be modelled roughly by using a classical mass distribution with mass density equal to \(m\left|\psi\right|^{2}\) where \(\psi\) is a wavefunction in position space, and we suppose this distribution has a peaked (e.g. Gaussian) form with a standard deviation to be discovered and a mean which follows the classical trajectory. We then suppose that, to sufficient approximation, the result of such a model can be estimated by some simple adjustment to the integrand in (7). One idea, for example, is to replace \(r\) in the integrand of (7) with some simple function such as \((r^{2}+\Delta^{2})^{1/2}\) where \(\Delta\) is a parameter to be set so as to reproduce the known behaviour in the limits of small and large Born parameter. One would expect this \(\Delta\) to be of the order of the de Broglie wavelength. This was explored, and so were other possibilities. In particular, one might leave the integrand unchanged and simply adjust the lower limit of the integral, whether over \(b\) or \(r\) or both. It was found that this simpler approach gives a good approximation. This is presented in sections III.5, V.
## III Total emission cross section
### Order-of-magnitude estimate
In order to get some general insight into the results to be discussed, we first present a simple order-of-magnitude estimate of GW radiation during repulsive Coulomb collisions.
From (3) we have
\[L_{\mathrm{GW}}\approx\frac{G}{5c^{5}}\left(\frac{\overline{Mx^{2}}}{\tau^{3 }}\right)^{2}\approx\frac{4G}{5c^{5}}\left(\frac{E_{Q}}{\tau}\right)^{2} \tag{10}\]
where \(\tau\) is the timescale and \(E_{Q}\) is the part of the kinetic energy associated with non-spherical (i.e. quadrupolar) movements. The timescale of the changing quadrupole moment is \(\tau\simeq 0.5\,b_{E}/v\) where \(b_{E}\) is a characteristic distance scale for a collision at energy \(E\) and \(v\) is the relative speed of the colliding partners. In the case of Coulomb collisions of particles, the timescale \(\tau\) is very much smaller for electrons than protons so it is the electron collisions which dominate \(L_{\mathrm{GW}}\). We take as characteristic distance
\[b_{E}=2e_{1}e_{2}/E \tag{11}\]
where \(E\) is the collision energy in the COM frame. This \(b_{E}\) is equal to the impact parameter for Rutherford scattering through 90 degrees (and this is twice the distance of closest approach of a head-on collision.)
The duration of each collision is about \(2\tau\) so the emitted energy per collision is \((8G/5c^{5})E^{2}/\tau\). Multiplying this by the collision rate \(n_{2}\sigma v\) and the number density \(n_{1}\), and using \(\sigma=\pi b_{E}^{2}\), we obtain the power per unit volume of the gravitational wave production:
\[\frac{P}{V}\approx n_{1}n_{2}e_{1}e_{2}\frac{64\pi G}{5c^{5}}\frac{E^{2}}{\mu} \tag{12}\]
where \(\mu\) is the reduced mass of the colliding partners and \(E=(1/2)\mu v^{2}\).
Eqn (12) is compared with the result of a precise calculation in the next section. We there find that it captures correctly the scaling with parameters of the classical result for a repulsive potential, and gets the numerical factor about right.
### Classical treatment
We treat the two-body dynamics as a single-body motion of a particle of mass \(\mu\) moving in a static potential centred on the origin. Let \(D_{ij}\equiv 3Q_{ij}\), then \(D_{ik}=\mu(3x_{i}x_{k}-x^{j}x_{j}\delta_{ik})\) and
\[\ddot{D}_{ik}=6\mu v_{i}v_{k}-6\frac{\mathrm{d}V}{\mathrm{d}r}\frac{1}{r}x_{i}x_{k}+\left[-2\mu v_{j}v^{j}+2\frac{\mathrm{d}V}{\mathrm{d}r}\frac{1}{r}x^{j}x_{j}\right]\delta_{ik}. \tag{13}\]
The calculation of \(\dddot{D}_{ik}\,\dddot{D}^{ik}\) is straightforward and the result is given by Boccaletti.[3] For Coulomb collisions one finds
\[L_{\mathrm{GW}}=\frac{8G}{15c^{5}}\frac{(e_{1}e_{2})^{2}}{r^{4}}\left(v^{2}+11 v_{\perp}^{2}\right) \tag{14}\]
where \(v_{\perp}^{2}=v^{2}-\dot{r}^{2}\) and in this expression \(v\), \(v_{\perp}\) and \(r\) are all functions of time.
The case of gravitational scattering can be treated by the replacement \(e_{1}e_{2}\rightarrow-Gm_{1}m_{2}\).
The potential is
\[V(r)=e_{1}e_{2}/r\,, \tag{15}\]
which may be positive or negative, depending on the signs of the charges. Let
\[r_{0}\equiv\frac{2e_{1}e_{2}}{mv_{0}^{2}} \tag{16}\]
where \(v_{0}\) is the initial velocity. In the case of a repulsive force (potential hill) \(r_{0}\) is a positive number equal to the minimum distance attained in a head-on collision. In the case of an attractive force (potential well) \(r_{0}\) has no such interpretation but we retain the formula as a definition, and then \(r_{0}<0\).
From conservation of energy and angular momentum we have
\[v^{2} = v_{0}^{2}(1-r_{0}/r) \tag{17}\] \[v_{\perp} = v_{0}b/r \tag{18}\]
where \(v_{0}\) is the initial velocity and \(b\) is the impact parameter. Hence
\[\dot{r}=v_{0}\sqrt{1-r_{0}/r-b^{2}/r^{2}}. \tag{19}\]
Using (7) and the above definitions, we have
\[\Sigma=\frac{32\pi G}{15c^{5}}(e_{1}e_{2})^{2}v_{0}\int_{r_{\mathrm{min}}}^{ \infty}\int_{0}^{\sqrt{r^{2}-rr_{0}}}\frac{(1-r_{0}/r)+11b^{2}/r^{2}}{r^{4} \sqrt{(1-r_{0}/r)-b^{2}/r^{2}}}\,b\mathrm{d}r\mathrm{d}b. \tag{20}\]
Taking the integration with respect to \(b\) first, we note that, for constants \(A,B,C,D\),
\[\int\frac{C+Db^{2}}{\sqrt{A-Bb^{2}}}b\mathrm{d}b=-\frac{1}{3B^{2}}\sqrt{A-Bb ^{2}}\left(3BC+2AD+BDb^{2}\right). \tag{21}\]
Therefore
\[\Sigma=\frac{64\pi G}{9c^{5}}\frac{(e_{1}e_{2})^{2}v_{0}}{|r_{0}|}\chi \tag{22}\]
where
\[\chi=\frac{5|r_{0}|}{2}\int_{r_{\rm min}}^{\infty}\frac{1}{r^{2}}\left(1-\frac{r _{0}}{r}\right)^{3/2}\,{\rm d}r\;=\;\frac{5}{2}\int_{x_{\rm min}}^{\infty} \frac{1}{x^{2}}\left(1\pm\frac{1}{x}\right)^{3/2}\,{\rm d}x \tag{23}\]
where the plus(minus) sign corresponds to an attractive(repulsive) potential. The lower limit on the integral with respect to \(r\) is the smallest \(r\) attained in the motion. This is zero for an attractive collision and \(r_{0}\) for a repulsive one. It follows that \(x_{\rm min}=0\) for an attractive collision and \(x_{\rm min}=1\) for a repulsive one. Consequently \(\chi\) diverges for an attractive collision and one obtains \(\chi=1\) for a repulsive collision. Hence the classical calculation (with no adjustment for quantum effects) yields a divergent result for an attractive collision (owing to infinite acceleration in a head-on collision), and for a repulsive collision yields
\[\Sigma_{\rm r}=\frac{32\pi G}{9c^{5}}Z_{1}Z_{2}e^{2}mv^{3} \tag{24}\]
where we now use \(v\) to indicate \(v_{0}\) in order to make the comparison with other results more transparent. This is the equation first obtained by Carmeli ([5], eqn (4.4)). We observe that when substituted into (5) it confirms our rough estimate (12).
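As a quick numerical check of (23)-(24), the repulsive integral indeed evaluates to unity; a short sketch using SciPy (with \(x=r/|r_{0}|\) as above):

```python
from scipy.integrate import quad

# Repulsive case: chi = (5/2) * int_1^inf x^-2 (1 - 1/x)^(3/2) dx = 1,
# so Sigma reduces to Carmeli's result, Eq. (24).
chi_rep, _ = quad(lambda x: 2.5 * x**-2 * (1.0 - 1.0/x)**1.5, 1.0, float("inf"))
print(chi_rep)   # ~1.0

# Attractive case: the integrand grows as (5/2) x^(-7/2) for x -> 0, so the
# integral from x_min = 0 diverges -- the classical head-on collision radiates
# an unbounded amount of energy, which is why a cut-off or quantum treatment is needed.
```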
### Quantum treatment
We now review results of quantum scattering theory for this problem, obtained by previous authors. Both Barker _et al._ and GG treat the Born approximation and also give some higher-order results; they differ in their choices of which further results to consider. We shall present the results for the Born approximation, and some further observations by GG.
Equation (8) of GG is the same as eqn (10) of Barker _et al._ after the replacement \((GMm/\hbar c)\rightarrow(e^{2}/\hbar c)\). (This replacement is the one Barker _et al._ point out after their eqn (15), except that they adopt rationalised electromagnetic units.) In our units, Barker _et al._, and also GG, find that the contribution to \(\Sigma\) of the graviton frequency range \({\rm d}\omega\), in the case of Coulomb scattering, is:
\[{\rm d}\Sigma=\frac{64G\hbar}{15c^{3}}\left(\frac{e_{1}e_{2}}{\hbar c}\right) ^{2}\left(5x+\frac{3}{2}(1+x^{2})\ln\frac{1+x}{1-x}\right)\hbar{\rm d}\omega \tag{25}\]
where \(x=p^{\prime}/p\) is the ratio of final to initial momentum of a particle scattering off a fixed potential. For single graviton emission (i.e. Born approximation) we have, by conservation of energy, \(\hbar\omega=(p^{2}-p^{\prime 2})/2m=(1-x^{2})p^{2}/2m\), so \(\hbar\,{\rm d}\omega=-xp^{2}\,{\rm d}x/m\). When \(\omega\) ranges from 0 to the hard cut-off, \(x\) ranges from 1 to 0, so
\[\Sigma_{\rm B} = \frac{64G}{15c^{5}\hbar}(e_{1}e_{2})^{2}\frac{p^{2}}{m}\int_{0}^ {1}\left(5x^{2}+\frac{3}{2}x(1+x^{2})\ln\frac{1+x}{1-x}\right){\rm d}x \tag{26}\] \[= (160G/9\hbar c^{5})(e_{1}e_{2})^{2}mv^{2}. \tag{27}\]
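The integral in (26) equals \(25/6\), so that the prefactor works out to \(160/9\) as quoted in (27); this is easily checked numerically (a sketch using SciPy):

```python
import numpy as np
from scipy.integrate import quad

integrand = lambda x: 5*x**2 + 1.5*x*(1 + x**2)*np.log((1 + x)/(1 - x))
I, _ = quad(integrand, 0.0, 1.0)
print(I, 25/6)             # both ~4.1667
print(64/15 * I, 160/9)    # prefactor of Eq. (27), both ~17.78
```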
However one should keep in mind that the Born approximation is only valid when \(n_{\rm B}\ll 1\) for both the initial and final momenta. At the hard end of the spectrum \(p^{\prime}\to 0\) so \(n_{\rm B}\rightarrow\infty\). Therefore the above formula has to be corrected at the hard end. This is the region where \(x\to 0\). GG obtain
\[{\rm d}\Sigma\rightarrow\pm\frac{1024\pi G}{15c^{5}}(e_{1}e_{2})^{2}\frac{ \tilde{\alpha}c}{v}\frac{{\rm d}\omega}{(e^{\pm 2\pi\tilde{\alpha}c/xv}-1)} \tag{28}\]
where the \(+\) sign is for repulsion and the \(-\) sign is for attraction, and \(\tilde{\alpha}\equiv Z_{1}Z_{2}\alpha\). Since \(xv\) is the final speed, the corrected formula should match the uncorrected one when the final Born parameter \(\alpha c/xv\ll 1\), as indeed it does. But at the hard end, \(x\to 0\), the spectrum is different in the two cases:
\[{\rm d}\Sigma\rightarrow\frac{1024\pi G}{15c^{5}}(e_{1}e_{2})^{2}\frac{\tilde {\alpha}c}{v}{\rm d}\omega\left\{\begin{array}{cc}e^{-2\pi\tilde{\alpha}c/xv }&\mbox{repulsion}\\ 1&\mbox{attraction}\end{array}\right. \tag{29}\]
It follows that (27) overestimates the power in the repulsive case, and underestimates it in the attractive case; c.f. Fig. 2. Note also that \(\mathrm{d}\Sigma\) scales as \((Z_{1}Z_{2})^{3}\).
The above Born approximation results apply when \(n_{\mathrm{{B}}}\ll 1\). Closed formulae are also available in the other limit, \(n_{\mathrm{{B}}}\gg 1\). For repulsion one then regains the classical result (24). For attraction the classical result (with no angular momentum cut-off) diverges; the quantum treatment derived by GG (their eqn (17)) gives
\[\Sigma_{\mathrm{a}}=\frac{8G}{5c^{5}}12^{1/3}\Gamma^{2}(2/3)\,Z_{1}Z_{2}e^{2} mv^{4/3}(\tilde{\alpha}c)^{5/3}. \tag{30}\]
where the subscript 'a' stands for 'attractive'.
In order to compare the various results, let us define in each case
\[\chi\equiv\Sigma/\Sigma_{\mathrm{r}} \tag{31}\]
where \(\Sigma_{\mathrm{r}}\) is given by (24). From (27) one obtains
\[\chi_{\mathrm{B}}\equiv\frac{\Sigma_{\mathrm{B}}}{\Sigma_{\mathrm{r}}}=\frac{ 9}{2\pi}n_{\mathrm{{B}}}. \tag{32}\]
Thus quantum effects here act to suppress the power by a factor \(9n_{\mathrm{{B}}}/2\pi\) compared to what would be expected classically. (Roughly speaking, the spread-out nature of the wavefunction results in a less-rapidly-changing quadrupole moment.)
Comparing now attraction and repulsion in the low-velocity limit, we have
\[\chi_{\mathrm{a}}\equiv\frac{\Sigma_{\mathrm{a}}}{\Sigma_{\mathrm{r}}}\simeq 0.6013(\tilde{\alpha}c/v)^{5/3}=0.6013\,n_{\mathrm{{B}}}^{5/3}. \tag{33}\]
The power in the attractive case greatly exceeds that in the repulsive case for low \(v\). This is because the relevant speed for the attractive case is not the incident speed but the speed at closest approach. For a classical trajectory at angular momentum \(L\), the speed at closest approach is approximately \(n_{\mathrm{{B}}}v\hbar/L=\tilde{\alpha}c\hbar/L\) in the limit \(n_{\mathrm{{B}}}\gg L/\hbar\). The scaling \(v^{4/3}\) exhibited in (30) can be interpreted as the cube of a velocity which makes a compromise (roughly a geometric mean) between \(v\) and \(n_{\mathrm{{B}}}v\).
The predictions of (24), (32) and (33) are plotted as dashed lines on figure 3.
Figure 2: Spectrum of GW emission in a Coulomb collision in the first order Born approximation for the collision (\(n_{\mathrm{{B}}}\ll 1\)), as given by (25). The dashed lines show the corrected spectrum near the hard end, eqn (28). Blue dashed: \(v=0.1c\), red dash-dot: \(v=0.3c\).
### Soft photon theorem
The soft photon theorem has to be applied with caution in the case of Coulomb collisions owing to the divergence of the collision cross-section term in (9). That is, the quantity
\[\tilde{\sigma}\equiv\int\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\sin^{2}\theta \mathrm{d}\Omega \tag{34}\]
diverges. Therefore the approximations invoked in the theorem do not hold in the case of the Coulomb potential. The problem is connected to the long-range nature of \(1/r\); similar problems arise in other scattering problems associated with this potential. In practice in a plasma there will be Debye screening, so the potential is not well modelled by a \(1/r\) form at large \(r\) and is better modelled by a Yukawa potential. For the Yukawa potential one finds that \(\tilde{\sigma}\sim v^{-4}\ln v\) in the limit where the exponential term in the potential has a large length scale.
The soft photon/graviton theorem does not give the whole emitted power and one only expects order-of-magnitude agreement with the full \(\Sigma\) in general. However by judicious choice of the cut-off \(\Lambda\) one may expect to reproduce the full \(\Sigma\) to within a factor 2 in cases where the emission is mostly soft.
For Coulomb collisions there are two relevant frequency scales: the inverse of the collision time, and the hard cut-off at \(K/\hbar\) where \(K=(1/2)mv^{2}\) is the collision energy. The collision time is of order \(|r_{0}|/v\) where \(v\) is the maximum speed; this is \(v_{0}\) for repulsive collisions, while for attractive collisions a suitable value is given by the case \(r=\hbar/mv\) (the reduced de Broglie wavelength) with \(v^{2}=v_{0}^{2}+2|e_{1}e_{2}|/mr\) from energy conservation. One finds \(v=|\tilde{\alpha}|c+\sqrt{\tilde{\alpha}^{2}c^{2}+v_{0}^{2}}\). Hence the characteristic frequency is
\[\omega\simeq\frac{v}{r}=\left\{\begin{array}{ll}\mu v_{0}^{3}/2e_{1}e_{2}& \mbox{repulsion}\\ (\mu c^{2}/\hbar)\left(|\tilde{\alpha}|+\sqrt{\tilde{\alpha}^{2}+v_{0}^{2}/c^ {2}}\right)^{2}&\mbox{attraction}\end{array}\right. \tag{35}\]
For the attractive case this \(\omega\) is above the hard cut-off so to good approximation one has simply \(\Lambda\simeq K\) for that case. If we take \(\tilde{\sigma}\propto v^{-4}\) and use as \(\Lambda\) the smaller of \(\hbar\omega\) and \(K\) then the behaviour shown in figure 3 for repulsive collisions is reproduced by the formula (9) in both limits of low and high \(v_{0}\). To be specific, this is the case for
\[\tilde{\sigma}\simeq 32\pi\alpha^{2}\left(\hbar/\mu c\right)^{2}(c/v)^{4}. \tag{36}\]
The quantity in the squared bracket here is the Compton wavelength of the reduced particle.
For attractive collisions the soft photon theorem is less successful, but gives a good estimate at high \(v_{0}\) (low Born parameter).
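For orientation, the scales entering this estimate are easily evaluated. The sketch below is ours (not part of the calculations reviewed here); it computes the characteristic frequency (35), the resulting cut-off \(\Lambda=\min(\hbar\omega,K)\) and \(\tilde{\sigma}\) from (36) for electron-electron and electron-proton collisions, assuming \(\tilde{\alpha}=Z_{1}Z_{2}\alpha\) and \(|e_{1}e_{2}|=|Z_{1}Z_{2}|\alpha\hbar c\) as in the notation used earlier.

```python
import numpy as np

# Constants (SI)
hbar, c, alpha = 1.055e-34, 2.998e8, 1/137.036
m_e, m_p = 9.109e-31, 1.673e-27

def soft_scales(mu, Z1Z2, v0):
    """Characteristic frequency (35), cut-off Lambda and sigma-tilde (36)."""
    a_t = Z1Z2 * alpha                    # tilde-alpha (assumed = Z1 Z2 alpha)
    e1e2 = abs(Z1Z2) * alpha * hbar * c   # |e1 e2|
    K = 0.5 * mu * v0**2                  # collision energy
    if Z1Z2 > 0:                          # repulsive branch of eqn (35)
        omega = mu * v0**3 / (2 * e1e2)
    else:                                 # attractive branch of eqn (35)
        omega = (mu * c**2 / hbar) * (abs(a_t) + np.sqrt(a_t**2 + (v0/c)**2))**2
    Lam = min(hbar * omega, K)
    sigma_t = 32 * np.pi * alpha**2 * (hbar / (mu*c))**2 * (c/v0)**4   # eqn (36)
    return omega, Lam, sigma_t

v0 = 0.1 * c
for label, mu, Z1Z2 in [("e-e (repulsive)", m_e/2, +1),
                        ("e-p (attractive)", m_e*m_p/(m_e + m_p), -1)]:
    w, Lam, s = soft_scales(mu, Z1Z2, v0)
    print(f"{label}: omega={w:.2e} rad/s, Lambda={Lam:.2e} J, sigma~={s:.2e} m^2")
```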
Figure 3: Predictions for GW radiation in Coulomb collisions. The dashed lines show the limiting cases as described by (24) and (32) (low \(v\)) and (33) (high \(v\)). The full (dotted) line shows the predictions of the modified classical method described in section III.5 (eqns (37), (38)). The horizontal axis is \(\tilde{\alpha}/n_{\rm B}\); this is equal to \(v/c\) in the case of electron collisions.
### Modified classical
As noted in section II, our proposed modified classical method of calculation is merely an adjustment of the classical integrals so as to yield a reasonable approximation. In this section we consider the effect of adjusting the lower limit \(x_{\rm min}\) of the integral in (23). We have
\[\chi_{r}(\lambda) = \frac{5}{2}\int_{1+\lambda}^{\infty}\frac{1}{x^{2}}\left(1-\frac{ 1}{x}\right)^{3/2}\,{\rm d}x=1-\left(1+1/\lambda\right)^{-5/2}, \tag{37}\] \[\chi_{a}(\lambda) = \frac{5}{2}\int_{\lambda}^{\infty}\frac{1}{x^{2}}\left(1+\frac{ 1}{x}\right)^{3/2}\,{\rm d}x=-1+\left(1+1/\lambda\right)^{5/2} \tag{38}\]
where \(\lambda\) is a parameter which one would expect to be of the order of the de Broglie wavelength divided by a relevant distance scale such as \(|r_{0}|\).
Defining \(\lambda_{\rm dB}\equiv 2\pi\hbar/\mu v_{0}\), one finds \(\lambda_{\rm dB}/|r_{0}|=\pi/n_{\rm B}.\) By setting the parameter value
\[\lambda=0.5515\pi/n_{\rm B} \tag{39}\]
one finds that (37) reproduces the known results in both classical and quantum limits, and gives reasonable behaviour at all \(v<c\), see Fig. 3.
For attractive collisions the distance scale where quantum effects must be allowed-for is not simply \(|r_{0}|\) but may be considerably smaller. By solving for \(r\) the equation \(r=h/\mu v\) with \(v=v_{0}(1+|r_{0}|/r)^{1/2}\) one finds \(r\simeq\pi\lambda_{\rm C}/\alpha\) where \(\lambda_{\rm C}=h/\mu c\) is the Compton wavelength. We mention this merely to indicate that the attractive case is less straightforward. We shall choose the parameter \(\lambda\) so as to match (33) in the low-velocity limit and (32) in the high-velocity limit. We also have a further piece of information provided by (29), namely that \(\chi\) approaches the asymptote from above at small Born parameter in the attractive case. These constraints are achieved by adopting (for example)
\[\lambda=\left(5.20+1.84\,n_{\rm B}\right)^{1/3}/n_{\rm B}. \tag{40}\]
The result is shown in Fig. 3.
Eqns (37)-(40) together provide a formula for \(\chi\) which is approximately valid at all collision speeds \(v\). This \(\chi\) is the "Gaunt factor" for the total (i.e. integrated over frequency) emission. It allows one to obtain \(\Sigma\) by taking \(\Sigma_{\rm r}\) given by (24) and multiplying by a correction factor.
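The prescription of (37)-(40) is simple to code. The following sketch is ours; it returns \(\chi\) as a function of the Born parameter \(n_{\rm B}\) for repulsive and attractive collisions, and multiplying \(\Sigma_{\rm r}\) of (24) by this factor gives the curve plotted in figure 3.

```python
import numpy as np

def chi_repulsive(n_B):
    """Modified classical correction factor, eqns (37) and (39)."""
    lam = 0.5515 * np.pi / n_B
    return 1.0 - (1.0 + 1.0/lam)**(-2.5)

def chi_attractive(n_B):
    """Modified classical correction factor, eqns (38) and (40)."""
    lam = (5.20 + 1.84 * n_B)**(1.0/3.0) / n_B
    return -1.0 + (1.0 + 1.0/lam)**2.5

for n_B in [0.01, 0.1, 1.0, 10.0, 100.0]:
    print(f"n_B={n_B:7.2f}  chi_r={chi_repulsive(n_B):8.4f}  "
          f"chi_a={chi_attractive(n_B):10.4f}")
```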
## IV Power and energy for a given scattering event
So far we have not treated the motion during individual scattering events, because it was convenient to integrate over impact parameter. We now treat individual events of given \(b\), \(v_{0}\). We shall present the gravitational (Keplerian), i.e. attractive case.
The orbit can be described by the parameters \(b\), \(v_{0}\) or by a number of other pairs, including \(E,L\) (energy and angular momentum, both conserved) and \(a,e\) where \(a\equiv GM/v_{0}^{2}=-r_{0}/2\) and \(e\) is the eccentricity defined by
\[e=\sqrt{1+b^{2}/a^{2}}\,. \tag{41}\]
For a hyperbolic orbit one then finds that the distance of closest approach is
\[r_{\rm min}=a(e-1)=b\sqrt{\frac{e-1}{e+1}} \tag{42}\]
and
\[e=-1/\cos\phi_{0} \tag{43}\]
where \(\phi_{0}\) is half the total change in azimuthal angle during the encounter (the deflection angle is \(2\phi_{0}-\pi\)).
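For later reference, the conversion from \((b,v_{0})\) to these orbital elements is collected in the following small sketch (ours; SI units, gravitational case).

```python
import numpy as np

G = 6.674e-11

def orbit_params(b, v0, M):
    """Hyperbolic-orbit parameters from impact parameter and asymptotic speed,
    with a = GM/v0^2 and eqns (41)-(43)."""
    a = G * M / v0**2
    e = np.sqrt(1.0 + (b/a)**2)        # eqn (41)
    r_min = a * (e - 1.0)              # eqn (42)
    phi0 = np.arccos(-1.0/e)           # eqn (43); phi0 lies in (pi/2, pi)
    deflection = 2.0*phi0 - np.pi
    return a, e, r_min, phi0, deflection

# Example: a pair of 10 solar-mass black holes, b = 1 AU, v0 = 1000 km/s
Msun, AU = 1.989e30, 1.496e11
a, e, r_min, phi0, defl = orbit_params(1.0*AU, 1.0e6, 20*Msun)
print(f"e = {e:.2f}, r_min = {r_min:.3e} m, deflection = {np.degrees(defl):.2f} deg")
```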
On a classical model under the adopted assumptions (i.e. motion in a \(1/r\) potential), the GW power during the scattering process is given by (14), which, after using the conservation laws (18), gives an expression in terms of \(r\) and constants.
Turner gives the following formula (eqn (24) of [29]):
\[P=\frac{8G^{4}}{15c^{5}}\frac{Mm_{1}^{2}m_{2}^{2}}{[(1+e)r_{\rm min}]^{5}}(1+e \cos\phi)^{4}[e^{2}\sin^{2}\phi+12(1+e\cos\phi)^{2}] \tag{44}\]
where \(\phi\) is the azimuthal angle taken from \(\phi=0\) at periastron (the point where \(r=r_{\rm min}\)). Thus \(\phi\) goes from \(-\phi_{0}\) initially to \(\phi_{0}\) finally.
Capozziello _et al._ give (eqn (21) of [4]):
\[P=\frac{32GL^{6}\mu^{2}}{45c^{5}b^{8}}f(\phi_{0},\psi) \tag{45}\]
where
\[f(\phi_{0},\psi)=\frac{\sin^{4}\left(\phi_{0}-\psi/2\right)\sin^{4}\left(\psi/2\right)}{\tan^{2}\phi_{0}\sin^{6}\phi_{0}}\left[150+72\cos 2\phi_{0}+66\cos 2(\phi_{0}-\psi)-144\cos(2\phi_{0}-\psi)-144\cos\psi\right]. \tag{46}\]
(This formula is quoted incorrectly in [9] where there is a sign error in the last term). Here \(\psi\equiv\phi+\phi_{0}\) (thus \(\psi\) goes from \(0\) initially to \(2\phi_{0}\) finally). If we express \(f\) in terms of \(\phi\) rather than \(\psi\), it simplifies a little:
\[f=\frac{3}{8}\frac{\left(\cos\phi_{0}-\cos\phi\right)^{4}}{\tan^{2}\phi_{0} \sin^{6}\phi_{0}}\left[25+12\cos 2\phi_{0}-48\cos\phi_{0}\cos\phi+11\cos 2\phi \right]. \tag{47}\]
Equations (14), (44) and (45) give three ways of expressing the same result. They are all equivalent, which one may confirm by employing
\[r=\frac{b\sin\phi_{0}}{\cos\phi-\cos\phi_{0}} \tag{48}\]
(a standard result of orbital mechanics).
The integral of \(P\) over time is conveniently done by converting to an integral over \(\phi\). The result was first obtained by Turner:
\[\Delta E=\frac{8G^{7/2}}{15c^{5}}\frac{M^{1/2}m_{1}^{2}m_{2}^{2}}{r_{\rm min}^ {7/2}}g(e) \tag{49}\]
with
\[g(e)=\frac{\phi_{0}(24+73e^{2}+37e^{4}/4)+\sqrt{e^{2}-1}(602+673e^{2})/12}{(1 +e)^{7/2}} \tag{50}\]
(correcting an earlier calculation of Hansen). In order to bring out the comparison with (24), note that
\[\frac{8G^{7/2}}{15c^{5}}\frac{M^{1/2}m_{1}^{2}m_{2}^{2}}{((e+1)r_{\rm min})^{ 7/2}}=\frac{8G}{15c^{5}}\frac{GM\mu^{2}v_{0}^{3}}{b^{2}(e^{2}-1)^{5/2}}. \tag{51}\]
Dehnen and Ghaboussi's result (eqn (7) of [10]) is
\[\Delta E=\frac{8G(e_{1}e_{2})^{2}}{15c^{5}}\frac{\mu E^{2}}{L^{3}}\left[(37+366z^{2}+425z^{4})\phi_{0}+(673/3+425z^{2})z\right] \tag{52}\]
where
\[z\equiv-\cot\phi_{0}=\frac{1}{\sqrt{e^{2}-1}}. \tag{53}\]
This agrees with Turner after one makes the substitution \(e_{1}e_{2}\rightarrow-Gm_{1}m_{2}\).
The total scattered energy was also obtained by Capozziello _et al._ Their expression is consistent with Turner's if one handles the term \(\sqrt{e^{2}-1}\) correctly. It must be taken positive, which means it is equal to \(-\tan\phi_{0}\) not \(\tan\phi_{0}\) when \(e>1\). Also, [9] give a result a factor 4 larger than that of [4]. In view of these issues a further check is useful. We completed the calculation independently and agree with Turner (and therefore also Dehnen and Ghaboussi) and with Capozziello _et al._ as long as the correct sign is taken, as just noted. Our result (equivalent to [4] eqns (27), (28)) is
\[\Delta E=\frac{G^{2}Mm^{2}v_{0}^{3}}{90c^{5}b^{2}}\frac{(\phi_{0}[2628+2328 \cos 2\phi_{0}+144\cos 4\phi_{0}]-1948\sin 2\phi_{0}-301\sin 4\phi_{0})}{| \tan\phi_{0}|\sin^{4}\phi_{0}}. \tag{54}\]
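Given the factor and sign discrepancies in the literature noted above, a direct numerical comparison is a useful safeguard. The sketch below is ours; it evaluates (49)-(50) and (54) for the same encounter, reading \(m\) in (54) as the reduced mass \(\mu\) and taking \(\phi_{0}=\arccos(-1/e)\). The two expressions agree to round-off.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8

def orbit(m1, m2, b, v0):
    """(a, e, r_min, phi0) for a hyperbolic orbit, eqns (41)-(43)."""
    M = m1 + m2
    a = G*M/v0**2
    e = np.sqrt(1.0 + (b/a)**2)
    return a, e, a*(e - 1.0), np.arccos(-1.0/e)

def dE_turner(m1, m2, b, v0):
    """Radiated energy, eqns (49)-(50)."""
    M = m1 + m2
    _, e, r_min, phi0 = orbit(m1, m2, b, v0)
    g = (phi0*(24 + 73*e**2 + 37*e**4/4)
         + np.sqrt(e**2 - 1)*(602 + 673*e**2)/12) / (1 + e)**3.5
    return 8*G**3.5*np.sqrt(M)*m1**2*m2**2*g / (15*c**5*r_min**3.5)

def dE_eqn54(m1, m2, b, v0):
    """Radiated energy, eqn (54), reading m as the reduced mass mu."""
    M, mu = m1 + m2, m1*m2/(m1 + m2)
    _, _, _, p0 = orbit(m1, m2, b, v0)
    num = (p0*(2628 + 2328*np.cos(2*p0) + 144*np.cos(4*p0))
           - 1948*np.sin(2*p0) - 301*np.sin(4*p0))
    return G**2*M*mu**2*v0**3*num / (90*c**5*b**2*abs(np.tan(p0))*np.sin(p0)**4)

Msun, AU = 1.989e30, 1.496e11
args = (10*Msun, 10*Msun, 0.1*AU, 1.0e6)
print(dE_turner(*args), dE_eqn54(*args))   # the two values agree to round-off
```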
## V Classical collisions with angular momentum cut-off
So far we have surveyed or confirmed existing work, and contributed a small extension in the modified classical method. The remainder of our discussion is mostly new.
Rather than taking the integral (7) over all impact parameters, we now place a lower limit on \(b\). This will be useful for two purposes. First, the influence of quantum mechanics on collision cross-sections can sometimes be estimated by imposing a low angular momentum cut-off, at a value of order \(\hbar\), on a classical collision integral. Secondly, for attractive collisions the low angular momentum limit has to be considered separately in any case. This is because the approximation that the orbit is almost unaffected by the radiation breaks down.
In place of eqn (7) we introduce
\[\Sigma(L,v_{0})\equiv 2\int_{r_{\rm min}}^{\infty}\frac{{\rm d}r}{|\dot{r}|} \int_{L/mv}^{b_{\rm max}}2\pi b\,{\rm d}b\,L_{\rm GW} \tag{55}\]
where \(L\) is the cut-off and the notation on the left hand side is to indicate explicitly that the result is a function of the cut-off angular momentum \(L\) as well as \(v_{0}\). Then in place of (20) we have
\[\Sigma(L,v_{0})=\frac{32\pi G}{15c^{5}}(e_{1}e_{2})^{2}v_{0}\int_{r_{\rm min} }^{\infty}\int_{b_{0}}^{\sqrt{r^{2}-rr_{0}}}\frac{(1-r_{0}/r)+11b^{2}/r^{2}}{ r^{4}\sqrt{(1-r_{0}/r)-b^{2}/r^{2}}}\,b\,{\rm d}r{\rm d}b \tag{56}\]
where \(b_{0}=L/mv_{0}\), and \(r_{\rm min}\) is given by (42) (and by (59)). After using (21) we obtain
\[\Sigma(L,v_{0})=\frac{64\pi G}{9c^{5}}\frac{(e_{1}e_{2})^{2}v_{0}}{|r_{0}|} \chi(L,v_{0}) \tag{57}\]
where
\[\chi(L,v_{0})=\frac{|r_{0}|}{10}\int_{r_{\rm min}}^{\infty}\frac{1}{r^{2}} \left(25\left(1-\frac{r_{0}}{r}\right)+11\frac{b_{0}^{2}}{r^{2}}\right)\left(1 -\frac{r_{0}}{r}-\frac{b_{0}^{2}}{r^{2}}\right)^{1/2}\,{\rm d}r. \tag{58}\]
The lower limit on this integral is the smallest \(r\) attained in the motion when the impact parameter is \(b_{0}\). This is
\[r_{\rm min}=\frac{1}{2}\left(r_{0}+\sqrt{r_{0}^{2}+4b_{0}^{2}}\right) \tag{59}\]
where the positive square root should be taken. (For \(L=0\) this gives \(r_{\rm min}=r_{0}\) for a repulsive collision and \(r_{\rm min}=0\) for an attractive collision.) The integral is doable; one finds
\[\chi(L,v_{0})=\frac{1}{80|y^{5}|}\left[6(1+y^{2})(85+37y^{2})\left(\frac{\pi} {2}-\cot^{-1}y\right)-510y-562y^{3}\right] \tag{60}\]
Figure 4: \(\chi(L,v_{0})\) given by (60) for attractive (upper line, dashed) and repulsive (lower line, full) collisions.
where
\[y\equiv\frac{Lv_{0}}{e_{1}e_{2}}=\pm\sqrt{e^{2}-1},\qquad|y|=\frac{L}{\hbar}\frac{ v}{c}\frac{1}{|Z_{1}Z_{2}|\alpha}=\frac{L/\hbar}{n_{\mbox{\tiny B}}} \tag{61}\]
where the negative square root is taken for the attractive case. \(\chi(L,v_{0})\) is plotted as a function of \(y\) in figure 4. It is remarkable that this \(\chi\) is a function of eccentricity alone.
One finds
\[\chi(L,v_{0})\rightarrow\left\{\begin{array}{cc}1&y\ll 1,\;y>0\\ (51\pi/8)|y|^{-5}&|y|\ll 1,\;y<0\\ (111\pi/80)|y|^{-1}&|y|\gg 1\end{array}\right. \tag{62}\]
Positive \(y\) means the potential is repulsive. At small \(y\) the result is then independent of \(L\) and reproduces the classical calculation without any angular momentum cut-off. This is because at small initial velocities the particles do not approach closely in a repulsive potential. At large \(|y|\) the result exactly reproduces the first order Born approximation (27) in that limit if we take
\[L=\frac{37\pi^{2}}{120}\hbar\simeq 3.043\,\hbar. \tag{63}\]
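The branch of \(\cot^{-1}\) in (60) matters in the attractive case (\(y<0\)): taking \(\cot^{-1}y=\arctan(1/y)\), the odd branch, reproduces the limits (62). A short numerical sketch (ours) of (60)-(63):

```python
import numpy as np

def chi_cutoff(y):
    """Eqn (60); y = L v0/(e1 e2), negative for an attractive potential.
    cot^{-1}(y) is taken as arctan(1/y) (odd branch)."""
    acot = np.arctan(1.0/y)
    return (6*(1 + y**2)*(85 + 37*y**2)*(np.pi/2 - acot)
            - 510*y - 562*y**3) / (80*abs(y)**5)

# Limiting behaviour, eqn (62)
print(chi_cutoff(1e-2))                          # -> 1        (repulsive, small y)
print(chi_cutoff(-1e-2) * (1e-2)**5 / np.pi)     # -> 51/8     (attractive, small |y|)
print(chi_cutoff(500.0) * 500.0 / np.pi)         # -> 111/80   (large |y|)

# Angular momentum cut-off that matches the Born limit, eqn (63)
print(37*np.pi**2/120)                           # -> 3.043 (in units of hbar)
```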
It follows that \(\Sigma(3.04\hbar,v_{0})\) can be taken as a reasonable approximation to the exact result (i.e. a quantum scattering calculation to all orders) for GW scattering during Coulomb collisions on a repulsive potential, for any collision energy in the non-relativistic limit. In other words, _for repulsive Coulomb collisions the complete quantum scattering prediction (summed over all orders or Feynman diagrams) closely matches a classical prediction in which low angular momentum states do not contribute at all_. The phrase 'closely matches' here signifies exact agreement in the limits of large or small \(n_{\mbox{\tiny B}}\), and agreement at some unknown accuracy of order 10% in the case \(n_{\mbox{\tiny B}}\sim 1\).
For an attractive potential the situation is less simple. In this case \(\Sigma(3.04\hbar,v_{0})\) produces the correct cross-section at high \(|y|\) but not at low \(|y|\). In other words, for an attractive Coulomb collision it is not sufficient merely to place a lower bound on the angular momentum in order to approximate the quantum physics of a collision at low energy.
## VI Gravitating clusters
In section III we discussed the total emission cross section, integrating over all impact parameters. For emission from a plasma this is a useful quantity, but for gravitational scattering in general it is not. This is because for an attractive potential the approximations break down at low angular momentum. Various situations can arise. Astrophysical bodies are generally not point-like and can crash into each other or otherwise merge. Also, even on a point-like model there can be radiative capture. This happens when
\[\Delta E>\frac{1}{2}\mu v_{0}^{2}. \tag{64}\]
That is, the emitted energy is larger than the initial energy in the binary system, with the result that an initially unbound pair finishes in a bound state. In a bound state the pair subsequently follows an almost periodic, almost elliptical orbit, gradually losing more energy until the bodies coalesce.
In order to treat a gravitating cluster, one way to proceed is to separate the scattering events into those where the bodies emerge to infinity, and those where there is gravitational capture owing to the gravitational radiation. We will employ the condition (64) to separate the two cases, which is valid at a low density of pairs but not at higher density where three-body effects tend to reduce the capture rate. [30]
Using (49) on the left hand side of (64) we find that the limiting case (where \(\Delta E=E\)) is given by
\[e-1=\left(\frac{16}{15}\frac{\mu}{M}\frac{v_{0}^{5}}{c^{5}}g(e)\right)^{2/7}. \tag{65}\]
This method of calculation is approximate since for such collisions the outgoing value of \(e\) will not be equal to the initial value, but it gives a reasonable estimate. Eqn (65) has \(g(e)\) on the right hand side so it is an implicit equation for \(e\) with no analytical solution. But we observe that for \(v_{0}\ll c\) one has \(e-1\ll 1\) as one would expect: \(e=1\) is the parabolic orbit where \(E=0\). In this case we can use \(g(1)\) on the right hand side, obtaining
\[e-1\simeq\left(\frac{85\pi}{6\sqrt{2}}\frac{\mu}{M}\frac{v_{0}^{5}}{c^{5}} \right)^{2/7}. \tag{66}\]
This agrees with eqn (17) of [23]. Non-captured orbits have \(e-1\) larger than this. We should now note two consistency checks. For the Newtonian potential to be valid we require \(r_{\rm min}\gg R_{s}=2GM/c^{2}\) (the Schwarzschild radius). This yields the condition
\[e-1\gg 2v_{0}^{2}/c^{2}. \tag{67}\]
This is comfortably satisfied by (66) for \(v_{0}\ll c\). Also for non-relativistic mechanics we require \(v_{\rm max}\ll c\). Conservation of angular momentum gives \(r_{\rm min}v_{\rm max}=bv_{0}\) and one obtains
\[\frac{e-1}{e+1}\gg\frac{v_{0}^{2}}{c^{2}}. \tag{68}\]
Since \(e+1>2\) this is a stronger condition than the previous one, but still comfortably satisfied.
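In passing we note that (65) is easily solved numerically by fixed-point iteration, with (66) as the starting guess; a sketch (ours, with \(g(e)\) from (50) and \(\phi_{0}=\arccos(-1/e)\)) follows.

```python
import numpy as np

def g_of_e(e):
    """Eqn (50)."""
    phi0 = np.arccos(-1.0/e)
    return (phi0*(24 + 73*e**2 + 37*e**4/4)
            + np.sqrt(e**2 - 1)*(602 + 673*e**2)/12) / (1 + e)**3.5

def e_capture(mu_over_M, v0_over_c, n_iter=50):
    """Minimum eccentricity of a non-captured orbit: solve eqn (65)."""
    # starting guess: parabolic-limit approximation, eqn (66)
    e = 1.0 + (85*np.pi/(6*np.sqrt(2)) * mu_over_M * v0_over_c**5)**(2.0/7.0)
    for _ in range(n_iter):
        e = 1.0 + (16.0/15.0 * mu_over_M * v0_over_c**5 * g_of_e(e))**(2.0/7.0)
    return e

# Equal masses (mu/M = 1/4), v0 = 1000 km/s: e - 1 is of order 5e-4
print(e_capture(0.25, 1.0e6/2.998e8) - 1.0)
```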
We have in (66) an expression for the minimum eccentricity, at any given \(v_{0}\), for non-captured orbits. Since \(e-1\ll 1\) we can use \(y\equiv-\sqrt{e^{2}-1}\simeq-\sqrt{2}(e-1)^{1/2}\), and since this is small we can use the small \(|y|\) limit of eqn (60), giving
\[\chi(y)\simeq\frac{51\pi}{32\sqrt{2}}\left(\frac{6\sqrt{2}}{85\pi}\frac{M}{ \mu}\frac{c^{5}}{v_{0}^{5}}\right)^{5/7}. \tag{69}\]
Hence the total cross-section for emission of gravitational wave energy during hyperbolic (i.e. non-captured) encounters, in a low-density, low-velocity gravitating cluster is
\[\Sigma=\frac{\pi}{5}\left(\frac{340\pi}{3\sqrt{2}}\right)^{2/7}\frac{GM}{c^{2} }Gm_{1}m_{2}\left(\frac{\mu}{M}\right)^{2/7}\left(\frac{c}{v}\right)^{4/7}\,. \tag{70}\]
As an example, consider information furnished by O'Leary _et al._. They remark, "20,000 BHs are expected to have segregated into the inner \(\sim\)1 pc of the Milky Way".[23] The number density distributions in their figure 1 give \(n\simeq n_{0}(r_{0}/r)^{2}\) for \(r_{0}<r<0.3\,\)pc, where \(r\) is the distance from the centre of the galaxy, \(n_{0}\simeq 10^{10}\,\)pc\({}^{-3}\) and \(r_{0}=3\times 10^{-4}\,\)pc. They propose black holes in the mass range 5 to 15 \(M_{\odot}\) and encounters with initial relative speeds of order \(v\sim 1000\,\)km/s. Putting these values into (70) and (5) we obtain a total power from close hyperbolic encounters of black holes in the galactic centre of order \(10^{25}\,\)watt after averaging over times long enough for many encounters.
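For a feel for the numbers, the sketch below (ours) evaluates the energy cross-section (70) for a single pair of \(10\,M_{\odot}\) black holes at \(v=1000\) km/s. Converting such cross-sections into the total power quoted above also requires the density profile and the rate integral of eqn (5), which is not reproduced here, so the snippet is illustrative only.

```python
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def sigma_hyperbolic(m1, m2, v):
    """Energy cross-section for GW emission in non-captured encounters, eqn (70).
    Returns a value in J m^2."""
    M, mu = m1 + m2, m1*m2/(m1 + m2)
    pref = (np.pi/5.0) * (340*np.pi/(3*np.sqrt(2)))**(2.0/7.0)
    return pref * (G*M/c**2) * G*m1*m2 * (mu/M)**(2.0/7.0) * (c/v)**(4.0/7.0)

print(sigma_hyperbolic(10*Msun, 10*Msun, 1.0e6))
```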
## VII The gravitational radiation of the Sun
Consider now a plasma in thermal equilibrium at the density and temperature of the core of the Sun--c.f. table 1. The thermal energy \(k_{\rm B}T_{\rm core}\simeq 1.35\,\)keV is about twice the Fermi energy of the electrons, and therefore the electron gas is non-degenerate to reasonable approximation. Each electron or proton has a kinetic energy of order \(k_{\rm B}T\) and the r.m.s. energy is approximately \(E_{Q}\simeq 2k_{\rm B}T\).
Gravitational bremsstrahlung in the Sun arises mainly from collisions among electrons, protons and \({}^{4}\)He nuclei. We shall present the result of integrating the emission over the Sun, treating the collisions as Coulomb collisions. This ignores the effect of Debye screening and therefore cannot be taken as an accurate value for the actual situation. But the Debye screening is not expected to change the overall result by as much as an order of magnitude. Therefore a calculation using the unscreened potential is a useful indicator, and also serves to establish which regime of behaviour (low or high Born parameter, attractive or repulsive collisions) dominates.
\begin{table}
\begin{tabular}{l l l}
\(T_{\rm core}\) & & \(1.57\times 10^{7}\) K \\
\((3/2)k_{\rm B}T_{\rm core}\) & & \(2.03\) keV \\
Coulomb distance \(b_{E}\) & & \(1.4\) pm \\
plasma wavelength \(\lambda\) & & \(640\) pm \\
Debye (screening) length \(\lambda_{D}\) & & \(12\) pm \\
 & electrons & protons \\
mean separation & \(25\) pm & \(32\) pm \\
\(\lambda_{\rm th}=\hbar\sqrt{2\pi/mk_{\rm B}T}\) & \(18.8\) pm & \(0.43\) pm \\
\(\bar{\lambda}_{\rm dB}=\hbar/\sqrt{2mE}\) & \(4.3\) pm & \(0.10\) pm \\
\end{tabular}
\end{table}
Table 1: Some properties of the solar core. pm = picometre. \(\lambda_{\rm th}\) is defined such that \(n\lambda_{\rm th}^{3}\) is the onset of degeneracy. \(\bar{\lambda}_{\rm dB}\) is the distance over which a de Broglie wave acquires a phase of one radian, for a particle of energy \(E=(3/2)k_{\rm B}T_{\rm core}\).
In the solar core we have \(|n_{\rm B}|\simeq 0.06\) for collisions involving electrons. It was remarked by GG that the emission is therefore substantially reduced below the value predicted by the classical calculation (24) (we find one order of magnitude below, not two as suggested by GG). We observe also that it is important to include the attractive (ep and eHe) collisions as well as the repulsive ones.
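As a rough check of this number, the following sketch (ours) estimates \(n_{\rm B}=|Z_{1}Z_{2}|\alpha c/v\) for electron-electron collisions, assuming the relevant speed is the thermal relative speed corresponding to a relative kinetic energy of \((3/2)k_{\rm B}T_{\rm core}\).

```python
import numpy as np

alpha, c, kB = 1/137.036, 2.998e8, 1.381e-23
m_e = 9.109e-31
T_core = 1.57e7                        # K, from table 1

mu_ee = m_e / 2.0                      # reduced mass of an electron pair
v_rel = np.sqrt(3*kB*T_core/mu_ee)     # from (1/2) mu v^2 = (3/2) kB T
print(v_rel/c, alpha*c/v_rel)          # ~0.13 and n_B ~ 0.06
```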
The total power is obtained by adding the contributions from the various types of collision, integrated over the temperature and density distribution of the Sun. In order to perform such an integral, we adopted the distributions given by the Standard Solar Model.[28; 31] The result of the numerical integration is indicated in table 2. We find that the total power is 76 MW (in the absence of Debye screening). This is the first time this power has been calculated with better than order of magnitude accuracy. (The previous best estimate was that of GG who estimated the order of magnitude as 10 MW). It follows that the GW power of the Sun is \(76\pm 20\,\)MW, where the uncertainty is mostly owing to the as-yet-uncalculated impact of Debye screening.
It is noteworthy that ee, ep and eHe collisions make almost equal contributions. If it were not for the quantum effects, it would not be so. For if we simply set \(\chi=1\) for all the processes, then one finds the ee collisions dominate owing to their smaller reduced mass, leading to higher velocities. The value \(\chi=1\) also leads to a total power 10 times larger, indicating that the quantum effects are important for the conditions of the Sun. Note also that the increased emission for attractive, as compared with repulsive, collisions also raises the contribution of ep and eHe collisions a little, compared with ee.
From the above one may deduce that there is gravitational noise in the Sun with an rms strain amplitude of order \(10^{-41}\) at \(10^{18}\,\)Hz owing to Coulomb collisions. This is the dominant source of gravitational noise in the solar system at this frequency. The energy density of this radiation arriving at Earth is of order \(10^{-24}\,\)Wm\({}^{-3}\). This is similar to the energy density of relic gravitational waves in the frequency band up to GHz thought to be present owing to early-universe processes.[6; 7; 22] Owing to their lower frequency, the latter will have larger observable effects.
## VIII Conclusion
In conclusion, we have achieved the five aims set out at the end of section I. We have reviewed studies of gravitational bremsstrahlung during Coulomb collisions and presented a formula, based on semi-classical physical reasoning, which is able to reproduce, approximately, the predictions of a full (i.e. quantum) treatment of the total emitted power at any value of the Born parameter, in the non-relativistic limit. Equations (37)-(40) allow one to calculate the energy cross-section with high accuracy in certain limits and with \(\sim\!10\%\) accuracy in general. One can thus obtain the power averaged over many collisions in a homogeneous fluid. As an example, we have applied these equations to a treatment of the Sun, obtaining the total emitted power in the approximation where Debye screening is neglected.
Eqn (60) (combined with (24)) gives the energy cross-section in the classical (high Born parameter) limit for collisions at a given initial velocity after integrating over impact parameters above a lower limit set by a given angular momentum. This has not previously been calculated. We have used it to obtain, in eqn (70), the total cross section for emission of GW energy during close hyperbolic encounters where capture does not occur. This can be used to calculate, for example, the time-averaged emission from galactic nuclei by this process.
It has recently been suggested that black hole collisions in the early universe made a non-negligible contribution to the stochastic gravitational background in the present. One may ask whether Coulomb collisions in the very early universe made a further non-negligible contribution. We have attempted an estimate of this (unpublished); the estimate suggests that the contribution is negligible but it would be interesting nonetheless to look into this more fully.
|
2309.08787 | Beyond Labels: Leveraging Deep Learning and LLMs for Content Metadata | Content metadata plays a very important role in movie recommender systems as
it provides valuable information about various aspects of a movie such as
genre, cast, plot synopsis, box office summary, etc. Analyzing the metadata can
help understand the user preferences to generate personalized recommendations
and item cold starting. In this talk, we will focus on one particular type of
metadata - \textit{genre} labels. Genre labels associated with a movie or a TV
series help categorize a collection of titles into different themes and
correspondingly set up the audience expectation. We present some of the
challenges associated with using genre label information and propose a new way
of examining the genre information that we call the \textit{Genre Spectrum}.
The Genre Spectrum helps capture the various nuanced genres in a title and our
offline and online experiments corroborate the effectiveness of the approach.
Furthermore, we also talk about applications of LLMs in augmenting content
metadata which could eventually be used to achieve effective organization of
recommendations in user's 2-D home-grid. | Saurabh Agrawal, John Trenkle, Jaya Kawale | 2023-09-15T22:11:29Z | http://arxiv.org/abs/2309.08787v1 | # Beyond Labels: Leveraging Deep Learning and LLMs for Content Metadata
###### Abstract.
Content metadata plays a very important role in movie recommender systems as it provides valuable information about various aspects of a movie such as genre, cast, plot synopsis, box office summary, etc. Analyzing the metadata can help in understanding user preferences to generate personalized recommendations and for item cold starting. In this talk, we will focus on one particular type of metadata - _genre_ labels. Genre labels associated with a movie or a TV series help categorize a collection of titles into different themes and correspondingly set up the audience expectation. We present some of the challenges associated with using genre label information and propose a new way of examining the genre information that we call the _Genre Spectrum_. The Genre Spectrum helps capture the various nuanced genres in a title, and our offline and online experiments corroborate the effectiveness of the approach. Furthermore, we also talk about applications of LLMs in augmenting content metadata, which could eventually be used to achieve effective organization of recommendations in the user's 2-D home-grid.
adds an additional layer of complexity in accurately labeling movies within specific genres. Furthermore, genre labels do not capture the degree or intensity of a genre within a video. For instance, a movie like Jurassic Park can be classified as a science fiction and adventure film, but it also contains elements of horror and thriller. The genre labels alone fail to convey the nuanced blend of genres present in the movie. Moreover, movies within the same genre can still exhibit substantial differences. For example, consider two movies namely 'Gladiator' and 'Die Hard', both categorized as action films. However, the flavor of action in these movies diverges significantly due to distinct contextual factors. Gladiator is an epic historical action film set in ancient Rome, showcasing thrilling battles and action sequences within the Colosseum. On the other hand, Die Hard centers around intense action scenes taking place in a modern skyscraper during a terrorist siege.
## 2. Genre Spectrum
We propose an alternative approach to examining the genres, which we refer to as the _Genre Spectrum_. Our hypothesis is that every title consists of a spectrum of genres, and we transform the discrete genre-label data into a latent space in which each dimension can be considered an abstract concept or characteristic of a movie. Every genre then manifests itself in this latent space as a subspace defined by a range of combinations of all the latent dimensions. We hypothesize that the continuum nature of genre spectrum embeddings enhances their expressive power in comparison to the discrete genre labels.
### Methodology
We use neural-network-based supervised machine learning to learn genre-spectrum embeddings. The underlying intuition is that the textual metadata of movies (e.g. genre, language, year of release, plot synopsis and summary, ratings, Rotten Tomatoes scores, user reviews, box office information, etc.) contains rich information with which to classify a movie into one or more genre labels. We collect textual metadata of about 1.1M movies from various sources and apply language modeling techniques to learn a textual embedding of every movie in a text-embedding space. We then formulate a multi-label classification problem that aims to predict the genre labels using the learned textual embeddings as input features. In particular, we train a multi-layer feedforward dense neural network that ingests textual embeddings as inputs and emits the probabilities of every genre class as the output. The model is trained using a cross-entropy loss averaged over all the genre classes. Both components of the neural net, the textual-to-genre-spectrum transformer and the genre classifier, are trained jointly on the multi-label cross-entropy loss function. Thus, once the model is trained, we obtain genre-spectrum embeddings simply by doing a forward pass through the transformer component of the neural net, i.e. collecting the output from the penultimate layer of the neural net (as shown in Figure 1).
Figure 1. Neural net architecture for learning Genre Spectrum Embeddings
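A minimal PyTorch-style sketch of this setup is shown below. It is illustrative only: the layer widths, the input dimension and the number of genre classes are placeholder values rather than our production configuration, and the multi-label cross-entropy is realized with the standard per-class binary cross-entropy loss.

```python
import torch
import torch.nn as nn

class GenreSpectrumNet(nn.Module):
    """Textual embedding -> genre-spectrum embedding -> per-genre logits."""
    def __init__(self, text_dim=96, spectrum_dim=32, n_genres=25):
        super().__init__()
        # textual-to-genre-spectrum transformer; its output is the embedding
        self.to_spectrum = nn.Sequential(
            nn.Linear(text_dim, 128), nn.ReLU(),
            nn.Linear(128, spectrum_dim), nn.ReLU(),
        )
        # genre classifier head
        self.classifier = nn.Linear(spectrum_dim, n_genres)

    def forward(self, x):
        z = self.to_spectrum(x)              # genre-spectrum embedding
        return self.classifier(z), z

model = GenreSpectrumNet()
loss_fn = nn.BCEWithLogitsLoss()             # cross-entropy averaged over genre classes

x = torch.randn(8, 96)                       # a batch of textual embeddings
y = torch.randint(0, 2, (8, 25)).float()     # multi-hot genre labels
logits, embeddings = model(x)
loss = loss_fn(logits, y)
loss.backward()
# After training, the penultimate-layer output `embeddings` is the Genre Spectrum.
```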
**Data Augmentation:** To further improve the quality of embeddings, particularly on less-popular movies that have poor-quality metadata, we applied the data augmentation technique proposed in (Dosov et al., 2017) to the training data. This technique randomly samples two training examples and takes their random convex combination (of both features and labels) to generate a new synthetic data sample. We applied this technique (with small modifications to increase the representation of rarer classes) and increased the training data by a factor of 10.
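A sketch of the augmentation step is shown below (illustrative; the convex-combination sampling follows the cited technique, while the modifications we use to boost rarer classes are omitted for brevity).

```python
import numpy as np

def augment(X, Y, n_new, seed=0):
    """Create n_new synthetic samples as random convex combinations of
    randomly chosen pairs of (feature, multi-hot label) training examples."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X), size=n_new)
    j = rng.integers(0, len(X), size=n_new)
    lam = rng.uniform(0.0, 1.0, size=(n_new, 1))   # mixing weights
    X_new = lam * X[i] + (1.0 - lam) * X[j]
    Y_new = lam * Y[i] + (1.0 - lam) * Y[j]        # soft multi-label targets
    return X_new, Y_new

# Example: grow a toy training set roughly 10x
X = np.random.randn(1000, 96)
Y = (np.random.rand(1000, 25) < 0.1).astype(float)
X_aug, Y_aug = augment(X, Y, n_new=9000)
X_train = np.concatenate([X, X_aug])
Y_train = np.concatenate([Y, Y_aug])
```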
## 3. Experiments & Results
### Data and Setup
We collected textual metadata and genre labels of about 1.1M movies and TV series from three sources of movie metadata, namely IMDb, Rotten Tomatoes, and Gracenote. We used a 60-20-20 split to generate the training, validation, and test sets.
### Offline evaluation
For qualitative evaluation, we generated a \(2\)-D plot of the genre spectrum embeddings using the UMAP (Uniform Manifold Approximation and Projection) technique. As can be seen, the genres appear as cohesive colored clusters in
\begin{table}
\begin{tabular}{l|l|l|l|l|l|l|l} \hline \hline
**Embeddings** & \multicolumn{7}{c}{**Top-100 neighborhood genre-similarity score (\%) by IMDb-votes popularity group**} \\ \hline
 & \([0,10]\) & \([10,10^{2}]\) & \([10^{2},10^{3}]\) & \([10^{3},10^{4}]\) & \([10^{4},10^{5}]\) & \([10^{5},10^{6}]\) & \([10^{6},10^{7}]\) \\ \hline
Doc2Vec (textual) & 65.60 & 68.96 & 76.66 & 84.07 & 88.08 & 88.54 & 92.30 \\
BERT (textual) & 40.27 & 44.64 & 53.07 & 60.89 & 65.26 & 64.74 & 72.44 \\
GPT-4 (textual) & 56.46 & 63.03 & 67.23 & 74.28 & 77.68 & 77.36 & 85.33 \\
GS on Doc2Vec & 78.50 & 83.28 & 89.84 & 94.43 & 96.48 & 96.45 & 97.89 \\
GS on BERT & 60 & 63.03 & 67.23 & 74.28 & 77.68 & 77.36 & 85.33 \\
GS on GPT-4 & 78.80 & 80.60 & 83.61 & 87.55 & 90.32 & 92.29 & 96.11 \\
GS-Augmented on Doc2Vec & 80.94 & 84.82 & 91.34 & 95.51 & 97.34 & 97.3 & 98.17 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Comparison of latent feature spaces on genre similarity in top-100 neighborhoods
Figure 2. A 2-D plot of genre spectrum embeddings generated using the Uniform Manifold Approximation and Projection (UMAP) technique
the latent space. Next, we evaluate different variants of Genre Spectrum (GS) embeddings based on the genre similarity in the neighborhood of every movie. Specifically, for each movie, we compute the genre similarity in its top-\(k\) neighborhood as the fraction of the top \(k\) nearest neighbors in genre-spectrum space that share one or more primary genres with the given movie. A primary genre is defined as one that is assigned to a movie by a majority of the labeling sources. To gain deeper insights into the relationship between the metric and popularity, the genre similarity score is calculated as an average across different subsets of movies, which are grouped based on their IMDb votes as shown in Table 1. The first six rows in the table correspond to six variants of embeddings: the first three are textual embeddings generated using a variety of NLP models, namely Doc2Vec (96 dimensions) (Dong et al., 2019), a pretrained BERT model trained on a web corpus (768 dimensions) (Cheng et al., 2019), and OpenAI GPT-4 (1536 dimensions) (Dong et al., 2019), the latest LLM released by OpenAI. The next three rows correspond to Genre Spectrum embeddings learnt using genre label supervision on each of the aforementioned textual embeddings. The last row corresponds to another variant of _GS on Doc2Vec_ where we applied the data augmentation step described in Section 2.1. We make several insightful observations from the table: i) All the variants of Genre Spectrum embeddings perform better than their corresponding textual embedding variants in all the popularity buckets, validating the effectiveness of our approach. In particular, the effectiveness of our proposed methodology also applies in the context of LLMs. ii) Further, it can be seen that the improvement in genre similarity is higher on the lower popularity buckets. This could potentially be attributed to the fact that the quality of metadata (e.g. terse synopses, fewer tags) degrades on non-popular movies. Consequently, the textual embeddings tend to be more unreliable in classifying genres for such movies. However, the noise is considerably reduced in Genre Spectrum embeddings as they are trained using genre labels. iii) _GS-Augmented on Doc2Vec_ beats _GS on Doc2Vec_ consistently in genre similarity scores for all the popularity segments, justifying the utility of the data augmentation step. Further, in Figure 3 we present an anecdotal example of a popular movie, _Life of Pi_, to compare the top-10 neighbors in the textual and genre spectrum embedding spaces. In comparison to the textual embedding space, neighbors in the genre-spectrum latent space are much better aligned with the query movie on genre similarity.
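For reference, the neighborhood genre-similarity score reported in Table 1 can be computed along the following lines (an illustrative sketch; the primary-genre assignment and tie-breaking details are simplified here).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def topk_genre_similarity(embeddings, primary_genres, k=100):
    """Fraction of the k nearest neighbors (cosine distance) sharing at least
    one primary genre with the query title, averaged over all titles."""
    nn = NearestNeighbors(n_neighbors=k + 1, metric="cosine").fit(embeddings)
    _, idx = nn.kneighbors(embeddings)           # idx[:, 0] is the title itself
    scores = []
    for i, nbrs in enumerate(idx[:, 1:]):
        shared = [len(primary_genres[i] & primary_genres[j]) > 0 for j in nbrs]
        scores.append(np.mean(shared))
    return float(np.mean(scores))

# Toy example: 500 titles, 32-d embeddings, two primary genres per title
emb = np.random.randn(500, 32)
genres = [set(np.random.choice(20, size=2, replace=False)) for _ in range(500)]
print(topk_genre_similarity(emb, genres, k=100))
```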
### Online evaluation
To evaluate genre-spectrum embeddings in our online Tubi recommender system, we introduced a retrieval model in our production system. This model retrieves nearest neighbors of movies the user previously watched. Through an A/B test, we compared it to the control variant, which utilized binary genre labels. The test resulted in a statistically significant 0.6 % improvement in our primary user-engagement metric, 'tvt-capped' (total view time capped at 4 hours per day). This improvement validates the effectiveness of genre spectrum embeddings in enhancing personalization and user engagement.
## 4. Conclusion & Future work
We presented a case study on the challenges of incorporating genre label information in movie recommendation systems, and showed how to address those challenges by learning meaningful embeddings that capture this information in a video-recommendation setting. An evident expansion of our work involves broadening the scope of content metadata to encompass other manually annotated movie datasets that offer a more extensive range of tags. Nevertheless, a common hurdle with such datasets lies in their limited coverage. Given the powerful capabilities of LLMs, one of the potential future directions could be to apply LLMs on textual metadata and generate more specific annotations for every movie in the form of _micro-genres_. Such micro-genres could then be used along with genre labels to learn more precise representation vectors of movies. Additionally, micro-genres could also be very useful in the optimal organization of movie
recommendations on user's home screen. In particular, movie recommendations on prominent Video on Demand (VOD) platforms such as Tubi, Netflix, and Amazon Prime are typically presented in a 2-D grid layout using a set of 'carousels.' Each carousel groups together movies with a common theme, such as genre, language, or year, as reflected in its title. Conventional methods often use limited themes (e.g., standard genres or 90's classics) for carousel generation, which might result in sub-optimal personalization of the home-grid. By incorporating LLM-generated micro-genres, we can enrich the pool of carousel themes, leading to more effective personalization. During the presentation, we will also share preliminary results from our explorations in this direction.
## 5. Biographies
**Saurabh Agrawal** is a Senior Machine Learning Engineer at Tubi since August 2022 where he leads deep learning projects for Search and Recommendation Systems at Tubi. Prior to Tubi, he completed his PhD in Computer Science from University of Minnesota before he worked at Amazon for more than three years as an Applied Scientist.
**John Trenkle** is an experienced professional in AI/ML. John's work at Tubi includes significant contributions in Recommendation Systems, AdTech, Natural Language Processing (NLP), and Big Data management, showcasing his adaptable approach to the evolving field of machine learning.
**Jaya Kawale** is the VP of Engineering at Tubi leading all the machine learning efforts at Tubi. She did her PhD in Computer Science from the University of Minnesota and has published 15+ papers at top-tier machine learning conferences. Prior to Tubi, she has worked at Netflix, Adobe Research, Yahoo Research and Microsoft Research.
Figure 3. Comparison of top-10 neighbors of movie _Life of Pi_ in textual embeddings (Doc2Vec) space and the genre spectrum embeddings space (trained on Doc2Vec) in the top and bottom panel respectively. |